AI: Asking Machines to Do The Thinking (1.11.24)

Friends, today's top stories in AI are all about the supposed superiority of machines over human thinking, even if you sometimes have to ask them more than once. If you write prompts a lot, check out the first story especially.

On the flip side of that perspective, I want to highlight a new interview that friend of the newsletter and AI artist community leader Kris Krug did this week (via his newsletter). The whole thing is so good it flipped my usual diamonds-in-the-rough ratio for interviews completely on its head, but I'm going to link directly to the part where he talks about a consulting gig he recently did for Wikipedia. It's so cool! The opening 60 seconds are also worth watching.

Thanks for joining us today. If you want some more context on what you’re reading, I just put up a new About page on our website.

Now let’s get into the top stories in AI today, according to our weighted analysis of AI community engagement. (Video walk-through.)

First impacted: AI developers, users
Time to impact: Short to Medium

A new study finds that increasing the number of reasoning steps in a prompt can improve the reasoning performance of LLMs. In one experiment, the researchers padded their prompts with extra "Chain of Thought"-style "think about..." instructions. Like: think about the experiments...think about the researchers...think about "Chain of Thought." In a little bit of my own testing, this is returning better, more thoughtful answers to my questions (quick sketch below). [The Impact of Reasoning Step Length on Large Language Models] Explore more of our coverage of: Prompting, Reasoning, Chain of Thought. Share this story by email
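To make the pattern concrete, here's a minimal sketch in Python. This is not the paper's exact prompt set; the topics and wording are purely illustrative. The point is just that you can mechanically pad a prompt with extra "think about" steps before the question.

```python
# A toy illustration of padding a prompt with extra reasoning steps,
# in the spirit of the paper; the topics and wording are illustrative,
# not the authors' actual prompts.
def build_prompt(question: str, topics: list[str]) -> str:
    steps = "\n".join(f"Think about {topic}." for topic in topics)
    return (
        "Let's think step by step.\n"
        f"{steps}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

if __name__ == "__main__":
    print(build_prompt(
        "Does adding more reasoning steps change the answer?",
        ["the experiments", "the researchers", "Chain of Thought"],
    ))
```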

First impacted: AI researchers, computational linguists
Time to impact: Short

Stanford University researchers have developed a new technique, Direct Preference Optimization (DPO), which they say simplifies LLM training by replacing the reinforcement learning step of RLHF (reinforcement learning from human feedback) with direct optimization on human preference data, no separate reward model required. It was reportedly used in Mixtral. In this article, Andrew Ng applauds the development and explains its broader significance: this is the kind of groundbreaking work that can be done with algorithms, math, and deep thinking by independent and university researchers outside the big tech companies. [AI Discovers New Antibiotics, OpenAI Revamps Safety, Researchers Define AGI, and more] Explore more of our coverage of: Stanford, RLHF, DPO. Share this story by email
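If you want the one-equation version of why there's no reward model: the loss from the DPO paper directly raises the likelihood of the preferred response y_w over the dispreferred y_l, relative to a frozen reference model, with beta controlling how far the trained model can drift from that reference.

```latex
% The DPO objective as stated in the paper (Rafailov et al., 2023):
% x is a prompt, y_w / y_l the preferred / dispreferred responses,
% \pi_\theta the model being trained, \pi_{ref} a frozen reference model.
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}
  \left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      \;-\;
      \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
  \right]
```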

First impacted: App builders
Time to impact: Short

Together AI has launched the Together Embeddings endpoint for finding related documents. According to a company blog post, it includes eight leading embedding models that outperform OpenAI's ada-002 and Cohere's Embed-v3 on the MTEB and LoCo benchmarks, is compatible with MongoDB, LangChain, and LlamaIndex, and can embed long documents without splitting them; Together also claims it is up to four times more cost-effective than other popular platforms. [Building your own RAG application using Together AI and MongoDB Atlas] Explore more of our coverage of: AI Embedding Models, Document Similarity, Cost-Effective AI Platforms. Share this story by email
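For the app builders, here's a minimal sketch of what calling an embeddings endpoint like this looks like. I'm assuming Together exposes an OpenAI-compatible API; the base URL and model name below are my assumptions, so check Together's blog post and docs for the real values before copying this.

```python
# A minimal sketch of embedding documents for similarity search.
# Assumptions (not from the story): Together exposes an OpenAI-compatible
# API at api.together.xyz, and the model name below is illustrative;
# check Together's docs for the actual endpoint and model list.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_TOGETHER_API_KEY",          # use an env var in real code
    base_url="https://api.together.xyz/v1",   # assumed OpenAI-compatible base URL
)

docs = [
    "Retrieval-augmented generation pairs an LLM with a document store.",
    "Embeddings map text to vectors so related documents end up close together.",
]

# Embed a batch of documents in one call; a RAG app would store these
# vectors (e.g. in MongoDB Atlas) and rank results by cosine similarity.
resp = client.embeddings.create(
    model="togethercomputer/m2-bert-80M-8k-retrieval",  # illustrative model name
    input=docs,
)
vectors = [item.embedding for item in resp.data]
print(f"{len(vectors)} vectors of dimension {len(vectors[0])}")
```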

First impacted: Startups
Time to impact: Medium

Hugging Face acquired Gradio, the tool for building and demoing machine learning models, in 2021, and today Gradio co-founder Abubakar Abid told the story of the acquisition in a long tweet. It's a story of a 1X exit being blocked by one member of a party round, until it was unblocked. Hugging Face's CEO has cited the acquisition as a good example of the deals they'd like to do in 2024. [via @abidlabs] Explore more of our coverage of: AI Industry Acquisition, Machine Learning Tools, Startup Growth Strategies. Share this story by email

Alright, that’s it for today! More AI stories tomorrow! Thanks for sharing with friends and co-workers!