Your personal AI assistant, OpenAI's 1st acquisition, and more

6 remarkable stories from the AI industry today

6 remarkable AI stories today. Can you see the narrative arc here? Wow.

Thanks for your time,

- Marshall, Editor

OpenAI Acquires Global Illumination

OpenAI has made its first acquisition: Global Illumination, a team of experts in AI-driven creative tools that includes early employees of Instagram, Facebook, YouTube, Google, and Riot Games. The company was reportedly working on an open-source version of Minecraft. The acquisition is expected to greatly improve ChatGPT's interactivity and features, and has led to speculation about AI agents and metaverse applications. [OpenAI]

Karpathy Says LLM Nodes Will Someday Be Like Personal Computers

OpenAI's Andrej Karpathy drew a historical parallel between early confusion about why anyone would need a computer of their own and today's widespread confusion about why a person would need a local, personalized large language model. Karpathy, of course, is building an Iron Man-style JARVIS personal assistant project at OpenAI.

How is LLaMa.cpp possible?

"Recently, a project rewrote the LLaMa inference code in raw C++. With some optimizations and quantizing the weights, this allows running a LLM locally on a wild variety of hardware: On a Pixel5, you can run the 7B parameter model at 1 tokens/s.

On a M2 Macbook Pro, you can get ~16 tokens/s with the 7B parameter model. You can even run the 7B model on a 4GB RAM Raspberry Pi, albeit at 0.1 tokens/s. If you are like me, you saw this and thought: What? How is this possible? Don’t large models require expensive GPUs? I took my confusion and dove into the math surrounding inference requirements to understand the constraints we’re dealing with." [How is LLaMa.cpp possible?]
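The math the author alludes to boils down to memory bandwidth: autoregressive decoding must read every weight once per generated token, so bandwidth divided by model size gives a rough speed ceiling. Here's a minimal back-of-envelope sketch; the bandwidth figure is an illustrative assumption, not a measurement from the article:

```python
# Back-of-envelope: decoding one token requires streaming all weights
# through memory, so token rate <= memory bandwidth / model size in bytes.

def tokens_per_second(n_params: float, bits_per_weight: int, bandwidth_gb_s: float) -> float:
    """Rough upper bound on decode speed for a memory-bandwidth-bound LLM."""
    model_bytes = n_params * bits_per_weight / 8
    return bandwidth_gb_s * 1e9 / model_bytes

# Illustrative assumptions: 7B parameters quantized to 4 bits (~3.5 GB of
# weights) on a machine with ~100 GB/s of memory bandwidth.
print(round(tokens_per_second(7e9, 4, 100.0), 1))  # prints 28.6
```

A ceiling of roughly 29 tokens/s under these assumed numbers is consistent with the ~16 tokens/s the article reports on an M2 MacBook Pro, and halving the bits per weight doubles the ceiling, which is why quantization matters so much for local inference.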

A Survey on Model Compression for Large Language Models

New research surveys model compression methods for large language models (LLMs), such as quantization, pruning, and knowledge distillation. It addresses the difficulties of deploying LLMs in limited-resource settings and serves as a valuable guide for field experts. The paper also explores the potential need for model compression in adapting LLMs to smaller devices. [A Survey on Model Compression for Large Language Models]
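To make one of the surveyed techniques concrete, here is a minimal sketch of symmetric post-training quantization, the idea behind shrinking float32 weights to int8. This is a generic illustration, not code from the paper:

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric post-training quantization: map float weights to int8."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# int8 storage is 4x smaller than float32, and rounding error is
# bounded by half the scale step:
print(bool(np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6))  # prints True
```

Real LLM quantization schemes are more sophisticated (per-channel scales, 4-bit formats, outlier handling), but the storage-versus-precision trade-off is the same one this sketch shows.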

Open challenges in LLM research

Chip Huyen highlights ten research areas in large language models (LLMs), such as reducing hallucinations, optimizing context length, and finding GPU alternatives. Huyen notes that hallucination, where models generate false information, is a major obstacle for companies building on LLMs, such as Dropbox, LangChain, and Anthropic. She also emphasizes the need for new model architectures and for alternatives to GPUs, which have been deep learning's backbone since 2012. Huyen further discusses multi-modality (e.g., vision plus text), improving learning from human preferences, chat interface efficiency, and developing non-English LLMs. [Open challenges in LLM research]

The AI Power Paradox: Can States Learn to Govern Artificial Intelligence—Before It’s Too Late?

Foreign Affairs ran a piece today by the political scientist Ian Bremmer and Mustafa Suleyman, co-founder of DeepMind and Inflection AI, about AI governance and misuse. The authors note that global policymakers are already addressing AI governance through initiatives like the G-7's "Hiroshima AI process" and the EU's AI Act. They argue for AI governance that is precautionary, agile, inclusive, impermeable, and targeted, proposing three overlapping regulatory regimes, and warn that AI's rapid advancement could disrupt traditional governance models, underscoring the need for a new framework.

Here's how the piece opens: "It’s 2035, and artificial intelligence is everywhere. AI systems run hospitals, operate airlines, and battle each other in the courtroom. Productivity has spiked to unprecedented levels, and countless previously unimaginable businesses have scaled at blistering speed, generating immense advances in well-being. New products, cures, and innovations hit the market daily, as science and technology kick into overdrive. And yet the world is growing both more unpredictable and more fragile, as terrorists find new ways to menace societies with intelligent, evolving cyberweapons and white-collar workers lose their jobs en masse.

"Just a year ago, that scenario would have seemed purely fictional; today, it seems nearly inevitable." [The AI Power Paradox Can States Learn to Govern Artificial Intelligence—Before It’s Too Late?]

That's it! More stories tomorrow.