Science must be simple, yet the human brain has a structure that gives it the capacity to relate to the world in its undivided complexity in ways that are not logical, though they are effective. Aesthetic interest aroused by observation and half-formed perception seems usually, perhaps always, to precede exact analysis.
Conventional tech-industry product practice will not produce subject-matter insights deep enough to create transformative tools for thought.
...The aspiration, for any team serious about making transformative tools for thought, is to create a culture that combines the best parts of modern product practice with the best parts of the (very different) modern research culture. You need the insight-through-making loop to operate, whereby deep, original insights about the subject feed back to change and improve the system, and changes to the system yield further deep, original insights about the subject.
The details are fascinating, but the central argument — that the birth of modernity can be traced to a meta-crisis spawned by the 0.1s problem — is worth understanding and appreciating whether or not you’re a time nerd like me.
There is no convenient leitmotif, comparable to the 0.1s problem, for our contemporary version of the rhyming conditions, but something very similar to the “tenth of a second crisis” is going on today. I suspect our Great Weirding, too, involves some limiting factor on human cognition that we haven’t yet properly wrapped our minds around. It isn’t reaction time, but something analogous.
Sometimes there’s a Heuristic That Almost Always Works, like “this technology won’t change everything” or “there won’t be a hurricane tomorrow”.
And sometimes the rare exceptions are so important to spot that we charge experts with the task. But the heuristics are so hard to beat that the experts themselves might be tempted to secretly rely on them, while publicly pretending to use more subtle forms of expertise.
…Maybe this is because the experts are stupid and lazy. Or maybe it’s social pressure: failure because you didn’t follow a well-known heuristic that even a rock can get right is more humiliating than failure because you didn’t predict a subtle phenomenon that nobody else predicted either. Or maybe it’s because false positives are more common (albeit less important) than false negatives, and so over any “reasonable” timescale the people who never give false positives look more accurate and get selected for.
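That last selection effect is easy to see with a little arithmetic. Here is a minimal simulation sketch, with illustrative numbers of my own rather than anything from the quoted essay: a rare event with a 1-in-1,000 daily base rate, a “rock” that always predicts no event, and an expert who catches every real event but raises a false alarm on 2% of quiet days.

```python
import random

# A minimal sketch (assumed, illustrative rates -- not from the quoted essay):
# compare the measured accuracy of a "rock" that always predicts "no event"
# against an expert who genuinely tries to detect rare events.

random.seed(0)

DAYS = 3_650           # a "reasonable" evaluation window: ten years of daily calls
P_EVENT = 1 / 1000     # rare-event base rate (assumption)
P_FALSE_ALARM = 0.02   # expert's false-alarm rate on quiet days (assumption)

rock_correct = expert_correct = 0
for _ in range(DAYS):
    event = random.random() < P_EVENT
    rock_says = False                                   # the heuristic: never predict the event
    expert_says = event or (random.random() < P_FALSE_ALARM)  # catches every real event, sometimes cries wolf
    rock_correct += rock_says == event
    expert_correct += expert_says == event

print(f"rock accuracy:   {rock_correct / DAYS:.4f}")
print(f"expert accuracy: {expert_correct / DAYS:.4f}")
```

Under these assumptions the rock scores around 99.9% while the alarm-prone expert scores roughly 98%, even though only the expert ever catches the rare, important event. That gap is exactly the sorting pressure the quote describes: over any “reasonable” timescale, the people who never give false positives look more accurate and get selected for.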