Meta plans massive AI expansion with 1.3 million GPUs and a record-breaking data center
The race for AI supremacy is intensifying, and Meta is making bold moves to stay ahead. CEO Mark Zuckerberg announced that the company plans to end 2025 with over 1.3 million GPUs, more than double its current stock of enterprise GPUs. The expansion is meant to accelerate the development of Meta's AI capabilities, including the upcoming Llama 4 model, which is positioned to rival OpenAI's ChatGPT and Google's Gemini.
Image credit: David Paul Morris | Bloomberg via Getty Images
Meta's ambitious goals include powering AI assistants for over a billion users and creating an "AI engineer" capable of contributing directly to research and development efforts. To support these initiatives, Zuckerberg revealed plans for a massive data center expected to bring roughly 1 gigawatt of compute capacity online by the end of the year, with a final capacity exceeding 2 gigawatts, an unprecedented scale for AI computing. The facility is expected to cover an area comparable to a significant portion of Manhattan and will be part of a capital investment of as much as $65 billion in 2025.
This move comes amid fierce competition. Elon Musk's xAI is targeting 1 million GPUs for its Memphis-based Colossus supercomputer, while the $500 billion Stargate Project, announced by Sam Altman's OpenAI and its partners, aims to build AI-dedicated data centers in the U.S. Microsoft has also committed $80 billion to AI-enabled infrastructure, including a power deal to restart a nuclear plant to meet energy demands.
As the power requirements for AI data centers skyrocket, with some projections reaching 5 gigawatts, the AI computing race shows no signs of slowing down. Meta's investments highlight its commitment to leading the next generation of technological innovation.
Llama, or Large Language Model Meta AI, is a series of autoregressive language models developed by Meta AI, first released in February 2023. The initial Llama model was available only as a foundation model, with weights restricted to academic and research organizations under a non-commercial license. Despite these restrictions, the model's weights leaked online shortly after release, sparking debates about accessibility and misuse in AI development.
Llama 2, introduced in July 2023, marked a turning point by including fine-tuned models for chat and expanding its licensing to allow certain commercial uses. It also improved performance by training on 40% more data than its predecessor. In August 2023, Meta released Code Llama, a variant specifically designed for coding tasks, demonstrating the platform’s versatility.
Llama 3, unveiled in April 2024, pushed the boundaries further with stronger multilingual support, enhanced reasoning, and extended context windows, with multimodal variants following later in the series. Its largest model at launch, with 70 billion parameters, was pre-trained on over 15 trillion tokens and outperformed competitors such as Gemini Pro 1.5 on key benchmarks. In July 2024, Llama 3.1 introduced a 405-billion-parameter version, further establishing Meta's position in the AI landscape.
Throughout its iterations, Llama has powered advancements in Meta’s virtual assistants and coding tools while driving innovation in multilingual and multimodal AI systems.
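For readers curious what working with these models looks like in practice, here is a minimal sketch of autoregressive text generation with a Llama-family checkpoint using the Hugging Face transformers library. The model ID and generation settings are illustrative assumptions rather than details from the article, and downloading Llama weights requires accepting Meta's license terms.

```python
# Minimal sketch: autoregressive generation with a Llama-family model via
# Hugging Face transformers. Model ID and settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed checkpoint; gated behind Meta's license

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Tokenize a prompt, then let the model predict tokens one at a time
# (autoregressive decoding) until it reaches the new-token limit.
prompt = "Summarize what an autoregressive language model does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```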