So when you have a bad take machine, you get the following process:
They make a bad take.
People are outraged and talk about it.
The bad take machine likes it and does more of that behaviour in future.
If, on the other hand, they make a take and nobody cares, they get no reward, and the behaviour is selected against.
The behaviours drive the spread of the outrage replicator, and the outrage replicator provides the selection mechanism for the behaviours. Thus, via the spread of our outrage on Twitter, we have operantly conditioned the bad take machine into producing worse takes.
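The selection loop above can be sketched as a toy simulation. Everything here is hypothetical and illustrative (the function names, the "badness" scale, and the numbers are mine, not the author's): takes that provoke more outrage get more engagement, and the machine keeps whatever behaviour got engaged with.

```python
import random

def engagement(badness):
    """Outrage-driven engagement: worse takes get talked about more (noisily)."""
    return badness + random.gauss(0, 0.2)

def run(rounds=1000, step=0.1, seed=0):
    random.seed(seed)
    badness = 1.0  # how bad the machine's takes currently are
    for _ in range(rounds):
        # The machine tries a slightly different take...
        candidate = badness + random.choice([-step, step])
        # ...and keeps the new behaviour only if it got more engagement.
        # Takes nobody cares about earn no reward and are selected against.
        if engagement(candidate) > engagement(badness):
            badness = candidate
    return badness

print(run())  # badness tends to drift upward over time
```

Nothing in the loop "wants" to be bad; the drift toward worse takes falls out purely of outrage being the reward signal.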
Which is to say, it's bad on purpose to make you replicate it.
Now, I understand deadlines. I understand that the plane will take off whether or not I’m on it, the importance of beating the holiday retail rush, and that "the show must go on". It is perfectly clear to me how people use timekeeping technology to coordinate social activity; it’s actually quite remarkable when you step back and look at it. But over the years I have observed a difference between those examples and the deadlines around the delivery of Things, which tend to be completely arbitrary. When you wrap an arbitrarily complex endeavor up in a neat launch date, the goal seems to be less about coordination and more about coercing the people beneath you into absorbing the overhead of all the details you left out, or sweating it yourself. As a tool for coordinating human activity, I have come to believe that the Thing-deadline calculus is unnecessarily crude compared with more sophisticated alternatives.