AI: NVIDIA's 2nd Brain, fine-tuning for factuality, and more (11.15.23)

Friends, here's what the AI community is talking about today. The NVIDIA internal LLM and factual fine-tuning stories are especially remarkable.

Those of you most into ChatGPT may also appreciate Dan Shipper's new interview show on X, How Do You Use ChatGPT? His first interviewee is Sahil Lavingia, founder of Gumroad.

As always, these are the stories that the AI community is talking about most today, based on a weighted analysis, then analyzed by an ensemble of LLMs, and edited by me, Marshall Kirkpatrick.

Special thanks to those of you sharing these with your co-workers when you see a story you think is relevant to their work. Double thanks to those of you who do that and have a weekly AM jogging routine with me. :)

Cheers,

Marshall Kirkpatrick, Editor

And now here’s today’s news…

OpenAI Pauses ChatGPT Plus Sign-ups

First impacted: Aspiring ChatGPT Plus customers
Time to impact: Short

Sam Altman of OpenAI has tweeted that they're temporarily not accepting new sign-ups for ChatGPT Plus due to a usage spike that's beyond their current capacity. That's pretty remarkable: they say they have so much customer activity that they won't take anyone else's money right now. [via @sama]

NVIDIA Condenses Corporate Memory into "ChipNeMo" LLM

First impacted: Chip Design Engineers, 2nd Brain Dreamers
Time to impact: Medium

NVIDIA says it's compressed 30 years of company data, such as chip designs and engineering logs, into a 13 billion parameter internal model named "ChipNeMo". It's pretty inspiring, especially given the incredible complexity of the work involved in chip design; the model creates electronic design automation scripts, functions as an engineering assistant chatbot, offers bug analysis, and much more. This is one of the most enjoyable corporate blog posts I've read in a while. [Silicon Volley: Designers Tap Generative AI for a Chip Assist]
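If you're wondering what the general recipe looks like, here's a minimal sketch of domain-adaptive continued pretraining: take an open base model and keep training it on your internal documents. To be clear, this is my illustration, not NVIDIA's pipeline (ChipNeMo is built on their own NeMo tooling and layers more on top); every model name and file path below is a placeholder.

```python
# Minimal sketch of domain-adaptive continued pretraining (illustrative only;
# not NVIDIA's actual ChipNeMo pipeline). All names and paths are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-2-13b-hf"  # stand-in for a 13B base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical corpus of internal design docs and engineering logs, one per line.
corpus = load_dataset("text", data_files={"train": "internal_docs.txt"})["train"]
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="chip-assistant",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=1e-5,
        bf16=True),
    train_dataset=tokenized,
    # Causal LM objective: predict the next token over the domain corpus.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Continued pretraining like this is what lets a general model start speaking a company's internal dialect; behaviors like the assistant chatbot and script generation come from further fine-tuning on top.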

Fine-Tuning LLMs to Minimize Factual Errors

First impacted: AI researchers & developers
Time to impact: Short to medium

Stanford researchers say they have improved LLMs to reduce factual errors and hallucinations, without human labeling. The team used two strategies (reference-based, measuring consistency with an external knowledge base, and reference-free, using the model's own confidence scores) and both worked well. The results from a 7B parameter model, compared to Llama-2-chat: a 58% decrease in factual errors when writing biographies and a 40% decrease when answering medical questions. [Fine-tuning Language Models for Factuality]
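To make the reference-free strategy concrete, here's a minimal sketch, mine rather than the researchers' code, of one crude confidence proxy: score each of the model's candidate answers by its average token log-probability, then treat higher-scoring answers as the preferred side of pairs for preference-based fine-tuning. The model name, prompt, and candidates are all illustrative.

```python
# Illustrative sketch of a reference-free confidence score: rank a model's own
# candidate answers by average token log-probability. Not the paper's code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def confidence_score(prompt: str, answer: str) -> float:
    """Average log-probability of the answer's tokens, given the prompt.

    The prompt/answer token boundary is approximate (BPE can merge across it),
    which is fine for a rough ranking sketch.
    """
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-prob of each token, predicted from the preceding position.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    scores = [log_probs[pos, full_ids[0, pos + 1]].item()
              for pos in range(prompt_len - 1, full_ids.shape[1] - 1)]
    return sum(scores) / len(scores)

# Higher-confidence answers become the "preferred" examples for
# preference-based fine-tuning; no human labels needed to build the pairs.
prompt = "Write a short biography of Marie Curie."
candidates = ["Draft A ...", "Draft B ..."]
ranked = sorted(candidates, key=lambda a: confidence_score(prompt, a),
                reverse=True)
```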

Google's Mirasol3B Outperforms Larger Models in Multimodal Q&A

First impacted: AI researchers
Time to impact: Medium to long

Google says its new multimodal Mirasol3B has outdone bigger models in video and audio-video question answering. The team says the model specializes in answering questions about "longer videos," but in this context, long still means under 3 minutes. [Scaling multimodal understanding to long videos]

Open Catalyst Project Launches Climate Change Video Series

First impacted: AI researchers, Chemists
Time to impact: Long

The Open Catalyst Project, a collaboration between Meta AI and Carnegie Mellon, has launched an educational video series that aims to help AI researchers use AI and chemistry to fight climate change. For example, Yann LeCun suggests using AI to find new chemical compounds that solve energy storage problems. [Open Catalyst Intro Series]

That’s it! More AI news tomorrow.