AI this weekend and Monday

Top stories plus a new map of the AI news

Happy Monday, everyone. Below you’ll find summaries of the top news stories in AI from the weekend through earlier today.

I also want to share with you a new experiment: an AI industry news map. So much happens in this field every day that it’s overwhelming. So I used a hybrid of automation and my own editorial judgment to make a categorized map of the key stories from the past two weeks, the second half of July. I hope you’ll check it out, zoom around to get an overview of what’s happening, and share it with a friend or co-worker.

And now today’s news…

Why Meta is giving away its extremely powerful AI model: The AI debate splitting the tech world, explained
Vox: "Last week, Meta made a game-changing move in the world of AI. At a time when other leading AI companies like Google and OpenAI are closely guarding their secret sauce, Meta decided to give away, for free, the code that powers its innovative new AI large language model, Llama 2. That means other companies can now use Meta’s Llama 2 model, which some technologists say is comparable to ChatGPT in its capabilities, to build their own customized chatbots." [Why Meta is giving away its extremely powerful AI model: The AI debate splitting the tech world, explained]

AI's Hidden Thirst: Microsoft's Servers Guzzle Water
Microsoft's servers consume roughly half a liter (0.5 L) of water for cooling per 20 interactions with ChatGPT, or about 25 ml per exchange. Large language models require water to cool servers during both training and inference. Tech companies don't disclose exactly how much water their computation uses, but veteran tech journalist Clive Thompson says it's a lot. [AI Is Thirsty: Each chat with a large language-model is like dumping a bottle of water on the ground]

AI Firms' Voluntary Commitments Criticized
Computational linguist Emily Bender criticized the voluntary pledges AI firms made at the White House, faulting them for lacking data and model documentation. She called the commitments weak because they address only future models, excluding current ones, and leave out synthetic text. Bender also dismissed "AI for social good" initiatives as marketing ploys to dodge regulation. She supports regulation requiring transparency, accountability, privacy, and data rights, rejecting the idea that technology moves too fast to regulate. [“Ensuring Safe, Secure, and Trustworthy AI”: What those seven companies avoided committing to]

Comparing AI Regulation to Nuclear Control? Watch Out, Argues Antonio García Martínez
This weekend, author Antonio García Martínez criticized those using "Oppenheimer" as a cautionary tale for AI regulation, arguing that nuclear regulation obstructed access to clean, nearly limitless energy. Erik Brynjolfsson countered by pointing out that we have averted a large-scale nuclear war. [via @antoniogm]

MIT CSAIL Finds ChatGPT Struggles with Uncommon Tasks
MIT CSAIL researchers found that despite their versatility, ChatGPT-like models struggle with uncommon variants of familiar tasks (for example, arithmetic in an unusual number base), suggesting they lack general, transferable abilities. "This suggests that while current LMs may possess abstract task-solving skills to a degree, they often also rely on narrow, non-transferable procedures for task-solving." [Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks]

"Skeleton-of-Thought" is a Prompting Strategy That Can Enhance Large Language Models
This automated prompting strategy resembles one of my favorite manual techniques: ask an LLM to list the ways it could help with a type of problem, then follow up with your specific problem. "One of the major causes of the high generation latency [for LLMs] is the sequential decoding approach adopted by almost all state-of-the-art LLMs. In this work, motivated by the thinking and writing process of humans, we propose "Skeleton-of-Thought" (SoT), which guides LLMs to first generate the skeleton of the answer, and then conducts parallel API calls or batched decoding to complete the contents of each skeleton point in parallel. Not only does SoT provide considerable speed-up (up to 2.39x across 11 different LLMs), but it can also potentially improve the answer quality on several question categories in terms of diversity and relevance." [Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding]
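
To make the two-stage flow concrete, here's a minimal sketch in Python. The ask_llm function is a placeholder for whatever chat-completion API you use, and the prompts are my own illustration rather than the paper's:

```python
from concurrent.futures import ThreadPoolExecutor

def ask_llm(prompt: str) -> str:
    """Placeholder: swap in your actual LLM API call."""
    raise NotImplementedError

def skeleton_of_thought(question: str, max_points: int = 5) -> str:
    # Stage 1: request a terse skeleton -- short numbered points, no detail.
    skeleton = ask_llm(
        f"Answer the question below with a skeleton of at most {max_points} "
        "numbered points, 3-5 words each, no elaboration.\n\n"
        f"Question: {question}"
    )
    points = [line.strip() for line in skeleton.splitlines() if line.strip()]

    # Stage 2: expand each point in its own call. The calls are independent,
    # so they run concurrently instead of as one long sequential decode.
    def expand(point: str) -> str:
        return ask_llm(
            f"Question: {question}\nSkeleton:\n{skeleton}\n\n"
            f"Expand only this point into 1-2 sentences: {point}"
        )

    with ThreadPoolExecutor() as pool:
        expansions = list(pool.map(expand, points))

    return "\n".join(expansions)
```

The speed-up the authors report comes from that second stage: expanding skeleton points in parallel amortizes the latency of sequential, token-by-token decoding.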

Hugging Face Changes Open Source Licensing on Popular Text Generation Inference Server
Hugging Face has changed the license for its TGI project: "we added a restriction: to sell a hosted or managed service built on top of TGI, we now require a separate agreement." Critics say the change will introduce confusion and uncertainty. See, for example, @HamelHusain's thoughts.

Reinforcement Learning from Human Feedback: A Flawed System
"Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems."
[Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback]
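
The paper assumes the reader knows how RLHF is wired together, so as a quick refresher, here is a minimal sketch of its two standard ingredients, assuming PyTorch. This is textbook RLHF for orientation, not code from the paper:

```python
import torch
import torch.nn.functional as F

def reward_model_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry preference loss: trains a reward model to score the
    human-preferred response above the rejected one."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

def shaped_reward(r: torch.Tensor, logp_policy: torch.Tensor,
                  logp_ref: torch.Tensor, beta: float = 0.1) -> torch.Tensor:
    """Reward used during RL finetuning: the learned reward minus a KL-style
    penalty that keeps the policy close to the pretrained reference model."""
    return r - beta * (logp_policy - logp_ref)
```

The well-known failure modes cluster around exactly these pieces: reward models that imperfectly capture human preferences, and policies that learn to exploit those imperfections.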

Large Language Models and Nearest Neighbors
Lightning AI's Sebastian Raschka dug into "Nearest Neighbour (NN) + Gzip vs Large Language Models (LLMs)" and found that NN methods, while not universally applicable, offer many opportunities for innovation. His analysis shows the gzip method, which pairs gzip compression with kNN for text classification, is competitive, nearly matching BERT on multiple datasets. However, gzip's predictive performance might be overestimated, and kNN may struggle with larger datasets. [Large Language Models and Nearest Neighbors]
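
For the curious, the gzip method is simple enough to sketch in a few lines of Python. This is a minimal illustration of the compression-distance idea; the toy dataset is mine, and details such as how strings are concatenated may differ from the original paper's code:

```python
import gzip
from collections import Counter

def clen(s: str) -> int:
    """Compressed length of a string: a cheap proxy for information content."""
    return len(gzip.compress(s.encode("utf-8")))

def ncd(x: str, y: str) -> float:
    """Normalized Compression Distance: similar texts compress well together,
    so their joint compressed length is barely more than the larger alone."""
    cx, cy = clen(x), clen(y)
    cxy = clen(x + " " + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def gzip_knn_predict(text: str, train: list[tuple[str, str]], k: int = 3) -> str:
    """Label `text` by majority vote among its k nearest training examples."""
    nearest = sorted(train, key=lambda pair: ncd(text, pair[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy usage with made-up examples:
train = [
    ("the team won the match in extra time", "sports"),
    ("the striker scored twice on saturday", "sports"),
    ("stocks fell sharply after the earnings report", "finance"),
    ("the central bank raised interest rates again", "finance"),
]
print(gzip_knn_predict("the goalkeeper saved a late penalty", train))  # likely "sports"
```

No training and no parameters: the only machinery is a compressor and a vote, which is what makes its showing against BERT so striking.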

That’s it! Thanks for reading. More news tomorrow - if you know anyone who would appreciate reading this, please send it their way!