So when you have a bad take machine, the following process plays out:
It makes a bad take.
People are outraged and talk about it.
The bad take machine likes that and does more of that behaviour in future.
If, on the other hand, it makes a take and nobody cares, it gets no reward and the behaviour is selected against.
The behaviours drove the spread of the outrage replicator, and the outrage replicator provides the selection mechanism for the behaviours. Thus, via the spread of our outrage on Twitter, we have operant conditioned the bad take machine into producing worse takes.
Which is to say, it's bad on purpose to make you replicate it.
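The feedback loop above can be sketched as a toy reinforcement model. Everything here is my illustration, not anything the essay measured: "badness" is a made-up scalar, and the update rule just encodes "rewarded behaviour is amplified, ignored behaviour decays".

```python
def simulate(steps=200, reinforce=0.05, decay=0.002):
    """Toy operant-conditioning loop for a 'bad take machine' (illustrative only)."""
    badness = 0.1  # how bad its takes are, on a made-up 0..1 scale
    for _ in range(steps):
        # Outrage (the reward signal) grows with badness; bland takes get ignored.
        outrage = badness
        # Reinforced behaviour is repeated; unrewarded behaviour is selected against.
        badness += reinforce * outrage - decay * (1 - outrage)
        badness = min(1.0, max(0.0, badness))
    return badness

print(simulate())  # the loop ratchets badness upward toward its ceiling
```

The point of the sketch is that nobody sets out to make takes worse: the selection pressure alone, applied repeatedly, drives the drift.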
I once read a good definition of aptitude. Aptitude is how long it takes you to learn something. The idea is that everybody can learn anything, but if it takes you 200 years, you essentially have no aptitude for it. Useful aptitudes are in the <10 years range.
Your first short story takes 10 days to write. The next one 5 days, the next one 2.5 days, the next one 1.25 days. Then 0.625 days, at which point you’re probably hitting raw typing speed limits. In practice, improvement curves have more of a staircase quality to them. Rather than fix the obvious next bottleneck of typing speed (who cares if it took you 3 hours instead of 6 to write a story; the marginal value of more speed is low at that point), you might level up and decide to (say) write stories with better developed characters. Or illustrations. So you’re back at 10 days, but on a new level.
This kind of improvement replaces quantitative improvement (optimization) with qualitative leveling up, or dimensionality increase. Each time you hit diminishing returns, you open up a new front. You’re never on the slow endzone of a learning curve. You self-disrupt before you get stuck.
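The staircase pattern above can be written out directly, using the essay's own numbers (the halving rate and the one-day cutoff are illustrative, not data): each repetition halves the time a task takes, and once the marginal gain is small you level up to a harder version of the task, which resets the clock.

```python
def staircase(levels=3, start_days=10.0, floor_days=1.0):
    """Return (level, attempt, days) tuples until each level hits diminishing returns."""
    history = []
    for level in range(levels):
        days = start_days            # leveling up resets you to the slow start
        attempt = 0
        while days > floor_days:     # optimize until returns diminish...
            history.append((level, attempt, days))
            days /= 2                # 10 -> 5 -> 2.5 -> 1.25 ...
            attempt += 1
    return history                   # ...then open a new front

for level, attempt, days in staircase():
    print(f"level {level}, attempt {attempt}: {days} days")
```

Plotted, this gives repeated steep drops rather than one long flattening curve, which is the whole trick: you spend your time on the fast part of some learning curve, never the slow tail.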
The interesting thing is, this is not purely a function of raw prowess or innate talent, but of imagination and taste.