Hearing that AI companies like OpenAI can raise billions while projecting losses in the tens of billions by 2029 is almost comical, even if it isn't meant to be. The supposed backbone of productivity shouldn't be a money sink, just as a revolution isn't much of a revolution when you're losing ground instead of gaining it.
What do these companies have to say for themselves? They tell us their models are getting smarter and more human-like all the time. But when they're asked to explain how those models actually think or reason, they don't inspire much confidence.
Nor do they inspire confidence when they talk about model collapse, a phrase that's been thrown around a lot since the summer.
When fresh, human-written text runs out and models start training on their own output, the nonsense compounds with every generation. You'd think, with all that funding, they would have solved this by now. But the problem runs deeper. AI, for all its power, is stuck in a loop: it feeds off the data we create.
Take that data away, and what do you have? A self-running engine with no new fuel. This is where we humans still matter. We supply what the AI lacks: the unpredictability of the human touch, the creativity of a mind that isn't just an old record player, the good old flaws and emotions a sentient being is supposed to offer.
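To make that loop concrete, here's a minimal toy sketch, purely my own illustration and not anyone's actual training pipeline: each "generation" of a model learns only the word frequencies of the previous generation's output, then produces the corpus its successor trains on. The vocabulary size, sample size, and word names are invented for the example. Because a generation can only reproduce words it actually saw, anything rare enough to be missed once is gone for good.

```python
import random
from collections import Counter

random.seed(42)

VOCAB = [f"word_{i}" for i in range(200)]   # stand-in for human-written vocabulary
SAMPLE_SIZE = 500                           # size of each generation's synthetic corpus

# Generation 0: a corpus drawn from real human text, every word still reachable.
corpus = random.choices(VOCAB, k=SAMPLE_SIZE)

for generation in range(1, 31):
    counts = Counter(corpus)
    words = list(counts)
    weights = [counts[w] for w in words]
    # Each new model "learns" only the word frequencies of the previous
    # model's output, then generates the corpus its successor trains on.
    corpus = random.choices(words, weights=weights, k=SAMPLE_SIZE)
    if generation % 5 == 0:
        print(f"generation {generation:2d}: {len(set(corpus))} distinct words survive")
```

Run it and the count of distinct words only ever goes down. That, in miniature, is the self-running engine with no new fuel.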
If you ask me, the fundamental question is still unresolved: can an AI really innovate, or is it doomed to merely reflect what we give it?