So when you have a bad take machine, you get the following process:
1. They make a bad take.
2. People are outraged and talk about it.
3. The bad take machine likes it and does more of that behaviour in future.
If, on the other hand, they make a take and nobody cares, they get no reward and the behaviour is selected against.
The behaviours drive the spread of the outrage replicator, and the outrage replicator provides the selection mechanism for the behaviours. Thus, via the spread of our outrage on Twitter, we have operantly conditioned the bad take machine into producing worse takes.
Which is to say, it's bad on purpose to make you replicate it.
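Here's a toy sketch of that loop (every style name and payoff number is invented for illustration, not data): take styles that reliably harvest outrage get reinforced, and after enough rounds the machine is producing almost nothing else.

```python
import random

random.seed(0)  # reproducible toy run

# Invented take styles with equal initial propensities.
styles = {"measured": 1.0, "spicy": 1.0, "enraging": 1.0}
# Invented average outrage (replies, quote-tweets) each style harvests.
outrage_payoff = {"measured": 0.1, "spicy": 1.0, "enraging": 5.0}

for _ in range(1000):
    # The machine produces takes in proportion to past reinforcement...
    style = random.choices(list(styles), weights=list(styles.values()))[0]
    # ...outrage is the reward signal...
    reward = outrage_payoff[style]
    # ...and reward strengthens the behaviour that produced it.
    styles[style] += reward

total = sum(styles.values())
for style, weight in styles.items():
    print(f"{style:9s} {weight / total:.0%}")
```

The exact percentages depend on the random seed, but the rich-get-richer dynamic means the most enraging style almost always ends up dominant: every time we rage-quote it, we hand it another reward.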
Sometimes there’s a Heuristic That Almost Always Works, like “this technology won’t change everything” or “there won’t be a hurricane tomorrow”.
And sometimes the rare exceptions are so important to spot that we charge experts with the task. But the heuristics are so hard to beat that the experts themselves might be tempted to secretly rely on them, while publicly pretending to use more subtle forms of expertise.
…Maybe this is because the experts are stupid and lazy. Or maybe it’s social pressure: failure because you didn’t follow a well-known heuristic that even a rock can get right is more humiliating than failure because you didn’t predict a subtle phenomenon that nobody else predicted either. Or maybe it’s because false positives are more common (albeit less important) than false negatives, and so over any “reasonable” timescale the people who never give false positives look more accurate and get selected for.
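That last mechanism is just arithmetic. A back-of-the-envelope sketch (the base rate and the expert's hit and false-alarm rates are made-up numbers) shows how the rock-with-"NO HURRICANE"-painted-on-it beats a genuinely skilled expert on raw accuracy:

```python
# Made-up numbers for illustration only.
base_rate = 0.001    # hurricanes on 1 in 1,000 days
hit_rate = 0.90      # expert catches 90% of real hurricanes
false_alarm = 0.02   # expert false-alarms on 2% of quiet days

# The rock always says "no hurricane": right every quiet day,
# wrong every hurricane day.
rock_accuracy = 1 - base_rate

# The expert is right on quiet days without a false alarm,
# plus the hurricane days they actually catch.
expert_accuracy = (1 - base_rate) * (1 - false_alarm) + base_rate * hit_rate

print(f"rock:   {rock_accuracy:.4f}")    # 0.9990
print(f"expert: {expert_accuracy:.4f}")  # 0.9799
```

Over any window too short to contain many hurricanes, raw accuracy rewards never crying wolf, even though the rock contributes exactly zero information on the one day that matters.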