AI at risk of collapse due to… inbreeding?!

Artificial Intelligence is revolutionizing the world of technology, but not without raising concerns about its long-term sustainability. A recent study published in Nature highlights the risk of so-called “model collapse” in generative AI systems.

Tech giants like Microsoft, Google and Meta are investing heavily in the development of large language models (LLMs) and generative AI tools such as ChatGPT, Microsoft Copilot, and Google Gemini. These systems promise to radically transform our relationship with technology.

However, the high operating costs are putting a strain on even leading companies like OpenAI, which will face another loss-making year despite having significantly increased its revenue.

At the same time, tech giants struggle to monetize these technologies effectively, since the general public does not yet seem willing to pay for many of the tools currently available.

The phenomenon of “model collapse,” however, is an entirely different concern: as the amount of AI-generated content on the web increases, AI systems may begin to “feed” predominantly on data they themselves have created during training, in an effect similar to inbreeding.

The study published in Nature confirms that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, with parts of the original content distribution disappearing.
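The mechanism can be illustrated with a toy simulation (a minimal sketch, not the study's actual methodology): repeatedly fit a simple model to data, then generate the next generation's training data from that model. Because the fitted spread tends to narrow slightly with each pass on finite samples, rare tail events gradually vanish first. All names and parameters below are illustrative.

```python
import numpy as np

def simulate_collapse(n_samples=100, generations=200, seed=0):
    """Toy 'model collapse' loop: each generation fits a Gaussian to the
    previous generation's output and samples new 'training data' from it."""
    rng = np.random.default_rng(seed)
    # Generation 0: "human" data drawn from the true distribution N(0, 1)
    data = rng.normal(0.0, 1.0, n_samples)
    stds = [data.std()]
    for _ in range(generations):
        mu, sigma = data.mean(), data.std()      # "train" a model on current data
        data = rng.normal(mu, sigma, n_samples)  # next model trains on model output
        stds.append(data.std())
    return stds

stds = simulate_collapse()
```

Tracking `stds` over generations typically shows the fitted distribution drifting and narrowing, which is the simplest analogue of the “disappearance of parts of the original distribution” the study describes.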

The authors of the research underline that this phenomenon must be taken seriously if the benefits of training on large-scale web data are to be retained. In the future, data from genuine human interactions with these systems will become increasingly valuable compared to LLM-generated content.

The frantic competition to capitalize on this supposed computational revolution is leading tech giants to act irresponsibly.

Google prematurely released its AI-based search, with sometimes ridiculous results. Microsoft had to backtrack on Copilot’s “Recall” feature due to serious problems.

Furthermore, the environmental impact of the data centers needed for AI is putting these companies’ climate commitments at risk. Microsoft even fired its AI ethics team, signaling a lack of attention to the ethical implications of these technologies.

The actions of these companies appear to be driven primarily by greed and irresponsibility. They are unlikely to take warnings about “model collapse” seriously, as this is a problem that will only manifest itself in the future.

Microsoft and Google are aggressively looking for ways to steal revenue from content creators by incorporating their information directly into search results. This risks making content creation financially unsustainable for many, further degrading the quality of online information and accentuating the potential for “model collapse.”

At the same time, a dangerous centralization of information in the hands of a few powerful entities is taking place. Tech giants are unlikely to take these risks seriously or offer compensation for the content used to train their AI systems.

If this trend continues, the future of the Internet could be characterized by a progressive degradation of content quality and an ever-increasing concentration of information power in the hands of a few dominant players.

Source: www.tomshw.it