AI: Big gains in foundation models (2.7.24)

Google SELF-DISCOVER, Ecosystem Graphs

Friends, today's edition is all about the power of big AI model development. The main story demonstrates just how quickly the frontier continues to advance; rarely do you see 20-30% capability gains alongside a 10-40x improvement in efficiency. (And I love the idea of selecting and combining task-specific reasoning modules!) Our second story illustrates how quickly a robust community has formed to advance state-of-the-art models. Take a look below!

Marshall Kirkpatrick, Editor

First impacted: AI developers, AI researchers, Knowledge workers
Time to impact: Short

A new research paper co-authored by researchers at the University of Southern California and Google presents a reasoning framework called 'SELF-DISCOVER'. It reportedly improves LLM performance on complex reasoning benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH by as much as 32% over Chain of Thought (CoT), and by more than 20% over inference-heavy methods like CoT-Self-Consistency, while requiring 10-40x less inference compute. According to the authors, the framework works by having the LLM select and combine atomic, task-specific reasoning modules into an explicit reasoning structure, which boosts the performance of models like GPT-4 and PaLM 2. The discovered structures transfer across different models and, the authors note, resemble patterns in human reasoning. [Self-Discover: Large Language Models Self-Compose Reasoning Structures] Explore more of our coverage of: Google SELF-DISCOVER, LLM Performance, Reasoning Tasks. Share this story by email
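For readers curious what "select and combine reasoning modules" might look like in practice, here is a minimal sketch of a three-stage pipeline in the spirit of the paper. Everything here is an assumption for illustration: `call_llm` is a hypothetical stand-in for a real model API (it returns a canned string so the flow runs), and the module list and prompts are invented, not the paper's actual prompts.

```python
# Illustrative sketch of a select / adapt / implement pipeline.
# `call_llm` is a placeholder; a real version would query GPT-4, PaLM 2, etc.

REASONING_MODULES = [
    "Break the problem into smaller sub-problems.",
    "Use critical thinking to question assumptions.",
    "Think step by step and verify each step.",
    "Look for patterns or analogies to known problems.",
]

def call_llm(prompt: str) -> str:
    # Placeholder response so the pipeline is runnable end to end.
    return f"[model response to a {len(prompt)}-character prompt]"

def self_discover(task_description: str) -> dict:
    # Stage 1 (select): ask the model which modules suit this task.
    select_prompt = (
        "Select the reasoning modules most useful for this task:\n"
        + "\n".join(f"- {m}" for m in REASONING_MODULES)
        + f"\nTask: {task_description}"
    )
    selected = call_llm(select_prompt)

    # Stage 2 (adapt): rephrase the selected modules for the specific task.
    adapted = call_llm(
        f"Adapt these modules to the task:\n{selected}\nTask: {task_description}"
    )

    # Stage 3 (implement): compose an explicit step-by-step reasoning
    # structure the model then follows when actually solving the task.
    structure = call_llm(
        f"Turn the adapted modules into a step-by-step reasoning plan:\n{adapted}"
    )
    return {"selected": selected, "adapted": adapted, "structure": structure}

result = self_discover("A train travels 60 miles in 1.5 hours; find its speed.")
print(sorted(result.keys()))
```

The appeal of the approach, as described, is that the reasoning structure is composed once per task rather than re-derived per query, which is where the inference savings come from.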

First impacted: AI researchers, AI model developers
Time to impact: Short

Percy Liang, Associate Professor of computer science at Stanford and co-founder of Together Compute, has shared the latest update to 'Ecosystem Graphs', a framework that documents the foundation model ecosystem: both its assets (datasets, models, and applications) and the relationships between them. It is a fantastic tool for keeping tabs on macro-level developments in the ecosystem: counting the foundation models released each year shows the number has roughly doubled in each of the last three years, from 27 in 2021 to 146 new foundation models launched last year. [Ecosystem Graphs for Foundation Models] Explore more of our coverage of: Beijing Academy AI, AI Model Parameters, Dense Model Development. Share this story by email
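To make the "assets and relationships" idea concrete, here is a small sketch of the kind of dependency graph such a framework tracks. The class, method names, and sample assets below are illustrative inventions, not the project's actual schema.

```python
# Toy asset graph: nodes are assets (datasets, models, applications),
# edges record which assets an asset was built on.

from collections import defaultdict

class EcosystemGraph:
    def __init__(self):
        self.assets = {}              # asset name -> kind
        self.deps = defaultdict(set)  # asset name -> assets it builds on

    def add_asset(self, name, kind):
        # kind: "dataset" | "model" | "application"
        self.assets[name] = kind

    def add_dependency(self, child, parent):
        self.deps[child].add(parent)

    def upstream(self, name):
        # All assets a given asset transitively depends on.
        seen, stack = set(), [name]
        while stack:
            for parent in self.deps[stack.pop()]:
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

g = EcosystemGraph()
g.add_asset("The Pile", "dataset")
g.add_asset("GPT-NeoX-20B", "model")
g.add_asset("Demo App", "application")
g.add_dependency("GPT-NeoX-20B", "The Pile")
g.add_dependency("Demo App", "GPT-NeoX-20B")
print(g.upstream("Demo App"))  # the dataset and model it transitively rests on
```

Queries like `upstream` are what make a graph (rather than a flat list) useful: you can ask which datasets ultimately sit beneath a deployed application.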

First impacted: RAG beginners, RAG Application Developers, LLM Technology Specialists
Time to impact: Short

We love sharing learning and development resources here, and Anyscale is launching a retrieval-augmented generation (RAG) Developer Bootcamp in March. Experts from Anyscale and Pinecone will lead hands-on training sessions covering the role of the RAG pattern in reducing hallucinations in LLMs, approaches to scalability, and key evaluation techniques for improving RAG applications. Tickets are $500. [RAG Developer Bootcamp] Explore more of our coverage of: RAG Development, LLM Applications, AI Training. Share this story by email
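For readers new to the pattern, here is a minimal sketch of the core RAG idea: retrieve relevant documents, then ground the model's answer in them rather than letting it answer from memory alone. The documents, retrieval method, and prompt wording are illustrative assumptions; retrieval here is naive keyword overlap, whereas production systems (e.g. those using a vector database like Pinecone) retrieve by embedding similarity.

```python
# Toy RAG retrieval step: score documents by word overlap with the query,
# then inject the best match into the prompt as grounding context.

DOCUMENTS = [
    "Anyscale maintains Ray, a framework for scaling Python workloads.",
    "Pinecone is a managed vector database used for similarity search.",
    "RAG injects retrieved context into the prompt before generation.",
]

def retrieve(query, docs, k=1):
    # Rank documents by how many lowercase words they share with the query.
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query):
    context = "\n".join(retrieve(query, DOCUMENTS))
    # The retrieved context constrains the (hypothetical) downstream LLM call,
    # which is what reduces hallucinations.
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is Pinecone used for?"))
```

Swapping the keyword scorer for embedding similarity, and the document list for a vector index, turns this sketch into the architecture the bootcamp presumably covers.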

That’s it! More AI news tomorrow!