https://img.inleo.io/DQmYZhDKxuKfT9yeD9x6UPDSTssrd3tQ6SkVZYggRZJ2Jb3/training-3185170_1280.webp
Reality is hitting a lot of AI companies, including OpenAI. The problem now is that the companies building AI models aren't getting as much out of them as they expected, given the resources they keep pouring in.
In the early stages you feed the model data and see significant improvements, but now they're putting in even more effort and not seeing as much advancement.
There's a term for this: "diminishing returns". No matter how much extra effort you put in, the output just doesn't grow as much anymore. And that's a really big-ass problem for an industry that's banking on constant growth to keep investors enthusiastic and customers hopeful.
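To make the idea concrete, here's a toy Python sketch. The power-law curve and its constants are my own illustrative assumptions, not anything OpenAI has published; it just shows how each doubling of resources buys a smaller improvement than the last:

```python
# Toy illustration of diminishing returns in model training.
# Assumes a hypothetical power-law curve: loss = a * resources^(-b).
# The constants below are made up for illustration only.

a, b = 10.0, 0.3  # made-up scaling constants

def loss(resources: float) -> float:
    """Hypothetical training loss as a function of resources spent."""
    return a * resources ** (-b)

resources = 1.0
for step in range(6):
    current = loss(resources)
    doubled = loss(resources * 2)
    gain = current - doubled  # improvement bought by doubling resources
    print(f"resources={resources:>5.0f}x  loss={current:.3f}  "
          f"gain from doubling={gain:.3f}")
    resources *= 2
```

Run it and the "gain from doubling" column keeps shrinking. The curve never stops improving, it just stops improving fast, and that's the pattern the labs are reportedly running into.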
Ilya Sutskever, a cofounder of OpenAI, made a comment regarding this issue. He said we're not in the age of rapid AI growth anymore; instead, we're entering a new age of wonder.
I can't tell for sure whether the problem is the limited power of the supercomputers they're using or limited data. If the supercomputers are the issue, then more money to build something more powerful is the solution. But if limited data is the problem, then companies like xAI and Meta, which own social media platforms with people pouring in data every single second, will have the upper hand in the future.
But what does this mean for the AI companies? Maybe it means we're actually done with the big-leap advancements. Maybe we'll get smaller and smaller breakthroughs in AI until we reach AGI. Maybe the focus will shift to the quality of data rather than the volume of data.
The challenges around the data needed to train AI go even deeper than what I just described. At the moment, some AI models are being trained on AI-created data. That means AI is learning from itself because companies are running out of human data to train with. It's like looking at a copy of a copy, hahaha. What happens if the quality of the models fades because there's no new data?
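To see why the "copy of a copy" worry is real, here's a tiny simulation. It's my own simplification (a word-frequency table standing in for a model, with a made-up vocabulary), not how real LLM training works, but it shows the mechanism: rare words that miss one generation's sample vanish from every generation after it:

```python
# Toy "copy of a copy" simulation of training on AI-generated data.
# The "model" here is just a word-frequency table fitted to samples
# drawn from the previous model. Rare words that don't appear in one
# generation's sample are gone for good, so diversity only shrinks.
import random
from collections import Counter

# Generation 0: "human data" with common words and a rare tail.
vocab = ["the", "cat", "sat", "on", "a", "quixotic", "zephyr", "obelisk"]
weights = [30, 20, 20, 15, 10, 2, 2, 1]  # rare words at the end

for generation in range(1, 11):
    # Sample a finite training set from the current model...
    sample = random.choices(vocab, weights=weights, k=100)
    # ...and fit the next generation to those samples only.
    counts = Counter(sample)
    vocab = list(counts.keys())
    weights = [counts[w] for w in vocab]
    print(f"gen {generation}: {len(vocab)} distinct words -> {sorted(vocab)}")
```

A word that drops out can never come back, because no later generation ever sees it. That's the quality fade in miniature: the common stuff survives, and the rare, interesting tail dies first.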
It's a confusing topic for me, but after spending some time thinking about it, I believe what we need is a new focus on ethical use and better data practices: a focus on quality data and a change in the way we're building the models. Otherwise we might never reach AGI.
But I'm not an expert working at OpenAI, so all I can say is that only time will tell. Something very different could come up, and things could move forward faster than we ever thought.
Posted Using InLeo Alpha