When analyzing algorithms, O(log n) is actually the same as O(1), because log n ≤ 64. Don't believe me? Try materializing 2^64 things on your computer. I dare you.
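For concreteness, a one-line sketch of the bound (Rust, purely illustrative):

```rust
fn main() {
    // A 64-bit machine can address at most 2^64 things, so for any n you can
    // actually materialize, log2(n) can't exceed 64.
    println!("log2 of the largest representable n: {}", (u64::MAX as f64).log2()); // ~64
}
```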
https://pmc.ncbi.nlm.nih.gov/articles/PMC10827157/
What other things are hiding in underanalyzed sequence data?
This paper is kind of hilarious: https://www.nber.org/papers/w31047
Apparently "hyperbolic discounting" - the phenomenon where humans incorrectly weight future rewards ("incorrectly" in that if you use any curve which isn't exponential you will regret it at some point) - isn't necessarily some kind of issue of "self-control", or due to uncertain future gains. It results from humans being really bad at calculating exponentials.
It's always "exciting" when you have a problem and it turns out that your problem is addressed by some research from the last year.
The posthuman technocapital singularity is reaching backward in time to give itself a good soundtrack: https://www.youtube.com/watch?v=86fZ50TysOg
(thanks to Dmytro and MusicPerson and I guess Udio's engineers.)
It begins.
Real computers pull several kilowatts and can be heard from several rooms away. Real computers need GPU power viruses to even out variations in power draw in order to not take down the grid. Real computers have to have staggered boot sequences to avoid destabilizing the radiation pressure/gravity equilibrium in the Sun.
Apparently the CalDAV server I use, Radicale, can in some circumstances permanently lock up and begin rejecting all requests to add or edit events with a 400 error, which it doesn't explain due to poorly configured logging, and whose cause turns out to be buried three layers deep in libraries. In other news, I'm wiping that install and switching to an alternative, ideally one not written in Python.
Georgism is not going far enough. We need to apply Georgism to the akashic records and all mathematical abstractions in order to land-value-tax domain names, copyright, etc.
This is a very clean explanation of much of the modern media ecosystem: https://cameronharwick.com/writing/high-culture-and-hyperstimulus/. My read is basically that hard-to-replicate entertainment is higher-status because enjoying easy-to-produce things leaves you open to exploitation (being induced to spend too many resources on them).
I love how science fiction authors who are explicitly and intentionally writing an optimistic future apparently cannot imagine a world with reliable, stable, secure software. It's easier to imagine the end of ~~the world~~ humanity as a single-planet species than it is to imagine the end of ~~capitalism~~ broken software.
I like Rust most of the time, but borrow checking really does not lend itself well to game development.
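A contrived sketch (not from any actual project of mine) of the sort of pattern the borrow checker objects to in game code, plus the usual index-based workaround:

```rust
struct Entity { x: f32, vx: f32 }

fn update(entities: &mut Vec<Entity>) {
    // The natural formulation is rejected, because `entities` would be borrowed
    // mutably by the iterator and again (immutably) inside the loop body:
    //
    // for e in entities.iter_mut() {
    //     let target = &entities[0]; // error[E0502]: cannot borrow `*entities` as
    //     e.vx += target.x - e.x;    // immutable because it is also borrowed as mutable
    // }
    //
    // So you fall back to indices (or split borrows, interior mutability, an ECS...),
    // which works but stops feeling elegant when the whole game looks like this.
    for i in 1..entities.len() {
        let target_x = entities[0].x;      // copy the needed data out first
        let dv = target_x - entities[i].x; // immutable use ends here
        entities[i].vx += dv;              // then take the mutable borrow
    }
}

fn main() {
    let mut world = vec![Entity { x: 0.0, vx: 0.0 }, Entity { x: 3.0, vx: 0.0 }];
    update(&mut world);
    println!("entity 1 velocity: {}", world[1].vx);
}
```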
I bought a used datacentre SSD for purposes, and apparently the last owner both did not wipe it and ran it in a gaming desktop (based on the unwiped files and SMART data reporting lots of unsafe shutdowns). How odd.
Every argument about intelligence is about souls, except arguments about souls, which are about social status.
"It's better to be happy than right" is a great belief if you do not intend to take any action which has any effect on anyone else ever.
Sometimes I daydream about what the world would be like if Intel had fewer skill issues. So much wasted potential: Optane, Tofino, 10nm, AVX-512, Xeon Phi (kind of), OmniPath, AADG (maybe).
https://ericneyman.wordpress.com/2020/11/29/an-elegant-proof-of-laplaces-rule-of-succession/
This is a cool proof, though I'm not sure why they discarded the idea of [SPOILER]
using an interval and counting segments of that. It is somewhat less symmetric but seems cleaner.
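For reference, the statement being proven: with a uniform prior on the success probability, seeing s successes in n trials gives probability (s+1)/(n+2) that the next trial succeeds. A quick Monte Carlo sanity check of that (hand-rolled RNG to avoid dependencies; nothing to do with either proof):

```rust
// Draw p uniformly, run n Bernoulli(p) trials, and among the runs that produced
// exactly s successes, count how often trial n+1 also succeeds.
// The fraction should approach Laplace's (s + 1) / (n + 2).
struct Rng(u64);
impl Rng {
    fn next_f64(&mut self) -> f64 {
        // xorshift64: crude, but fine for a sanity check
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

fn main() {
    let (n, s) = (10, 7);
    let mut rng = Rng(0x123456789abcdef);
    let (mut matching, mut next_success) = (0u64, 0u64);
    for _ in 0..5_000_000 {
        let p = rng.next_f64();
        let successes = (0..n).filter(|_| rng.next_f64() < p).count();
        if successes == s {
            matching += 1;
            if rng.next_f64() < p {
                next_success += 1;
            }
        }
    }
    println!(
        "simulated: {:.4}, rule of succession: {:.4}",
        next_success as f64 / matching as f64,
        (s + 1) as f64 / (n + 2) as f64
    );
}
```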
I just added a search mechanism to the website which also searches docs.osmarks.net and b.osmarks.net. The search overlay also brings a slightly new UI style (thick colored borders). I might roll this out to more things if I determine that it looks nice and doesn't compromise usability.
I have looked at some of the literature on WiFi sensing ((ab)using consumer WiFi equipment as janky radar). It has some weird quirks. Almost everyone seems to use hopelessly outdated hardware for no obvious reason, even though ESP32s (there was a paper on this) and Intel AX200s can dump CSI information. Also, while plenty of work uses deep learning, and some of it even uses modern architectures, nobody seems to have caught on to the Bitter Lesson. I saw one paper doing useful transfer learning - and even then their "pretraining" was 20 minutes on a consumer GPU. Everyone uses tiny (<100k samples) datasets. One organization willing to deploy a lot of devices for data collection and spend more than 20 minutes of compute could probably beat all previous work and make something people actually want.
It's clear that the Apple Vision Pro EyeSight display was a mistake both for cost and uncanny-valley reasons. They should have used the elegant, expedient solution of sticking some googly eyes on.