
AI: GPT's Supremacy Isn't Secure, RAG and Robotics (3.11.24)

OpenAI, Cohere, Covariant, and more

Your daily summary of the top 5 stories in AI.

Friends, if you've got documents you want your AI to focus on, or robots you'd like to have learn quickly, today's your lucky day. Plus Sam Altman's back on the OpenAI board, and more.

As always, these are the five stories the AI community has been talking about most, today and over the weekend.

Thanks for sharing with friends and colleagues!

Here’s today’s news…

-Marshall Kirkpatrick, Editor

First impacted: Enterprise-scale production managers, AI technology developers
Time to impact: Short

Cohere has launched Command-R, a new LLM designed for large-scale production workloads, with a focus on long-context tasks such as retrieval-augmented generation (RAG) and the use of external APIs and tools. According to Cohere, Command-R offers high accuracy on RAG and tool use, quick response times, high throughput, and strong performance in 10 key languages, and it is the first in a series of model releases aimed at enhancing enterprise-scale capabilities. [Command-R: Retrieval Augmented Generation at Production Scale] Explore more of our coverage of: Cohere, Retrieval Augmented Generation. Share this story by email
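For a concrete sense of what RAG looks like from the developer's side, here is a minimal sketch using Cohere's Python SDK and the Chat API's documents parameter, which grounds answers in caller-supplied snippets. The API key, query, and documents below are illustrative placeholders, not anything from Cohere's announcement.

```python
# A minimal RAG sketch using Cohere's Python SDK (pip install cohere).
# Assumes the Chat API's `documents` parameter; the key, query, and
# documents are placeholders.
import cohere

co = cohere.Client("YOUR_API_KEY")

response = co.chat(
    model="command-r",
    message="What did the Q4 report say about hardware revenue?",
    # Grounding documents: the model answers from these snippets and
    # cites them, rather than relying only on its training data.
    documents=[
        {"title": "Q4 Report", "snippet": "Hardware revenue grew 12% year over year."},
        {"title": "Earnings call", "snippet": "Growth was driven by the new sensor line."},
    ],
)

print(response.text)       # the grounded answer
print(response.citations)  # spans linking the answer back to the documents
```

The documents parameter is what separates this from a plain chat call: the long context window is spent on retrieved snippets, and the citations let you check each claim against its source.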

First impacted: Robotics Engineers, AI Developers
Time to impact: Medium

Covariant, a seven-year-old company that has raised over $220M in funding, has launched RFM-1, a Robotics Foundation Model that merges online data with real-world physical interactions. The company says RFM-1 is more than a robotics model: it's a multimodal sequence model that converts all modalities into a shared token space and performs autoregressive next-token prediction. The model is said to comprehend physics by learning world models, simulating how the world changes from one fraction of a second to the next. It can also predict the high-level outcomes of a robot's actions, which can be used for real-time decision-making and for offline training of other models and policies. [Covariant | Powering the Future of Automation, Today] Explore more of our coverage of: Robotics, Predictive Models. Share this story by email
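Covariant hasn't published RFM-1's internals, so the following is a purely illustrative toy, not the company's architecture: it shows the general pattern the paragraph describes, with tokens from several modalities mapped into one shared vocabulary and a small causal transformer trained on the standard next-token objective. All sizes, offsets, and the random "data" are hypothetical.

```python
# Toy multimodal next-token model (PyTorch). Purely illustrative of the
# shared-token-space idea; NOT Covariant's RFM-1.
import torch
import torch.nn as nn

TEXT, IMAGE, ACTION = 1000, 512, 64   # per-modality codebook sizes (made up)
VOCAB = TEXT + IMAGE + ACTION         # one shared token space

class ToySequenceModel(nn.Module):
    def __init__(self, d_model=128, n_heads=4, n_layers=2, max_len=256):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)  # next-token logits, all modalities

    def forward(self, ids):
        seq = ids.shape[1]
        x = self.tok(ids) + self.pos(torch.arange(seq, device=ids.device))
        causal = nn.Transformer.generate_square_subsequent_mask(seq).to(ids.device)
        return self.head(self.encoder(x, mask=causal))

# Offsets keep each modality's IDs disjoint inside the shared vocabulary.
text   = torch.randint(0, TEXT,   (1, 8))
image  = torch.randint(0, IMAGE,  (1, 16)) + TEXT
action = torch.randint(0, ACTION, (1, 4))  + TEXT + IMAGE
stream = torch.cat([text, image, action], dim=1)   # one mixed sequence

model  = ToySequenceModel()
logits = model(stream)
loss = nn.functional.cross_entropy(                # standard next-token loss
    logits[:, :-1].reshape(-1, VOCAB), stream[:, 1:].reshape(-1)
)
```

The offsets are the load-bearing trick: because every modality's IDs are disjoint, one autoregressive head can predict text, perception, and action tokens alike, which is what lets a single model both describe the world and propose what the robot should do next.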

First impacted: AI researchers, AI developers
Time to impact: Medium

In a blog post, developer Simon Willison notes that recently released LLMs, including Google's Gemini 1.5, Mistral Large, Claude 3 Opus, and Inflection-2.5, have all begun to challenge GPT-4, which had been the clear leader for more than a year. These models post benchmark scores near or above GPT-4's in categories such as reasoning and coding.

  • Despite the impressive performance of these new models, Willison expresses disappointment that none of them is openly licensed or has openly available weights. He also questions the lack of transparency around their training data.

First impacted: NIST staff members, AISI staff members
Time to impact: Medium

The US AI Safety Institute (AISI), established in November 2023 with a $10 million grant from the National Institute of Standards and Technology (NIST), is said to be in turmoil over the potential appointment of Paul Christiano. Staff members at NIST are reportedly considering resignation due to Christiano's ties to the effective altruism movement and concerns that the AISI is prioritizing existential risk over immediate, measurable AI risks.

  • Paul Christiano's appointment has sparked controversy amid claims that his hiring process was expedited, leaving many NIST employees in the dark until the day of the announcement.

  • Despite internal objections, Divyansh Kaushik, an associate director at the Federation of American Scientists, maintains that Christiano is "extremely qualified" for the responsibilities outlined in President Biden’s AI Executive Order, which include managing risks from chemical, biological, radiological, and nuclear materials.

First impacted: OpenAI Board members, OpenAI leadership team
Time to impact: Medium

Following a review by the Special Committee, OpenAI's Board has expressed full confidence in the leadership of Sam Altman and Greg Brockman. The Board has also added three new members (Dr. Sue Desmond-Hellmann, Nicole Seligman, and Fidji Simo) and implemented updates to OpenAI's governance, such as new corporate governance guidelines, a strengthened Conflict of Interest Policy, a whistleblower hotline, and additional Board committees.

  • According to the WilmerHale review, which involved numerous interviews and the analysis of more than 30,000 documents, the departure of Sam Altman and Greg Brockman from the OpenAI Board was attributed to a loss of trust, rather than to issues related to product safety, development speed, financial matters, or investor communications.

That’s it! More AI news tomorrow.