AI: Meta launches Llama 3 (4.18.24)

Friends, Llama 3's release by Meta is far and away the top story today. Llama has been a trailblazer (Llama 2 launched last July) but was surpassed by other models, open and closed. Now Meta says it's retaking technical leadership with a focus on openness, and it's going to put Llama in front of billions more people via its various consumer apps.

These are the stories the AI community is talking about.

Here are today's headlines,

Marshall Kirkpatrick, Editor

Top Stories

First impacted: AI developers, Cloud platform users
Time to impact: Immediate

Meta has launched Llama 3, an updated version of its open-source LLM, which is now available in Instagram, WhatsApp, at Meta.ai, on AWS, Google Cloud, Microsoft Azure, and elsewhere. Meeting or surpassing state-of-the-art performance at the 8B and 70B model sizes, it's text-only for now but is slated to take multimodal input soon. The research paper is forthcoming. How good is it? Ethan Mollick said it well: “Based on benchmarks, the current model is not quite GPT-4 class, but their larger one (still training) will reach GPT-4 level… We went from only one GPT-4 class model to four this year. Next move is OpenAI’s, which will tell us what the future looks like: plateauing technology or continued rapid development.” [fb.me]

First impacted: Educators, Customer Service Representatives
Time to impact: TBD

Microsoft has published research on a new AI model, VASA-1, which can generate realistic talking faces in real time from a single photo and an audio clip. Microsoft says: "We are exploring visual affective skill generation for virtual, interactive characters, NOT impersonating any person in the real world. This is only a research demonstration and there's no product or API release plan." Ok. [VASA-1 - Microsoft Research]
