Language models

The year 2024 marked a turning point for generative artificial intelligence, with an unprecedented surge in investment. Companies worldwide poured capital into the field, bringing total investment to $13.8 billion, a 500 per cent increase over the previous year. The boom is fuelled by growing awareness of AI's transformative potential across a range of areas, from code generation to data analysis to the automation of complex processes. Although OpenAI, the creator of ChatGPT, remains the dominant player in the market, competition is becoming increasingly fierce: with its Claude 3.5 model, Anthropic is rapidly gaining ground, doubling its market share. This trend reflects the growing diversification of the AI landscape, with companies adopting several models to meet their specific needs. The focus, however, remains on advanced language models such as ChatGPT, Google's Gemini and Claude, which have attracted nearly half of the total investment.
AI is raising the stakes
Code generation ranks first among the most promising applications of generative AI, followed by chatbots for customer support, enterprise search, and data analytics. But innovation does not stop there. A new generation of tools is emerging: "AI agents." These agents, more sophisticated than traditional chatbots, can carry out complex tasks autonomously, without constant user supervision. Tech giants such as Google, Microsoft and Amazon are investing heavily in this area, which promises to significantly increase corporate productivity and generate new business opportunities. However, the rapid development of generative AI also raises important ethical and legal issues. The use of dialogue from films and TV series to train AI has sparked a heated debate about copyright protection and the role of creators. In addition, although Nvidia CEO Jensen Huang's predictions point to an exponential increase in computing power in the coming years, some experts question the long-term scalability of AI models.
Jensen Huang, CEO of Nvidia, predicts an exponential increase in the computing power underlying generative artificial intelligence over the next decade. During a conference in Atlanta, Huang said that computing power is growing roughly fourfold each year, a rate that compounds to a million-fold increase within a decade. This growth is critical for AI development, as it underpins the training of large language models (LLMs) and drives improvements in their performance.
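A quick check of the arithmetic behind that claim (the fourfold annual figure is Huang's; the compounding calculation is our illustration): 4^10 = 1,048,576, so ten consecutive years of fourfold growth does indeed work out to roughly a million-fold increase in computing power.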
Not just investments
Nvidia, the leading AI chip manufacturer, has seen significant growth driven by demand for GPUs, which are essential for training increasingly sophisticated AI models. Huang stressed the importance of "scaling laws," which show that the performance of AI models improves as computing power and data increase. However, recent studies have questioned the long-term effectiveness of these laws, and some major AI labs, including OpenAI, are finding it difficult to significantly improve the performance of their next-generation models.
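For readers unfamiliar with the term, scaling laws are empirical power-law relationships between a model's quality and the resources used to train it. As an illustration (this particular form comes from DeepMind's Chinchilla work, not from Huang's remarks), the training loss L of a language model is often modelled as L(N, D) ≈ E + A/N^α + B/D^β, where N is the number of model parameters, D is the number of training tokens, and E, A, B, α and β are fitted constants: loss falls in a predictable way as the compute budget, which buys larger N and D, increases.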
Despite these uncertainties, Huang reiterated the validity of the scaling laws, pointing out that they apply both to model training and to inference, the process of responding to user queries. Nvidia intends to keep pace with growing demand for AI computing power by accelerating the development of new technologies and aiming for new heights in artificial intelligence. Generative AI is thus in a phase of great ferment, characterised by rapid innovation, record investment and complex challenges ahead. The future of AI promises to be exciting, but it requires careful thought about the ethical and social implications of this revolutionary technology.