AI: Top 5 Links of the Week & Today's AI News

Plus: Why Meta AI Is Open Source, HF Challenges GPTs

Friends, we’ve got a big day of AI news here on a Friday AND I want to share something I think you’ll enjoy: the 5 links readers clicked on the most this week. Did you see them all? I bet you missed one or two!

Thanks for joining us! As always, these are the stories the AI community is engaging with most each day, briefly summarized and sent to your email inbox. Every time you introduce someone new to this newsletter, it makes my day!

Ok, here are the Top 5 Links for This Week. Fascinating!

  1. Jeremy Howard’s YouTube tutorial: Getting Started With CUDA for Python Programmers

  2. VC Andreessen Horowitz’s American Dynamism 50: The AI Edition

  3. MoleculeSTM: Anima Anandkumar’s LinkedIn post about her team’s tool for “zero-shot text-guided molecule editing”

  4. Consensus GPT, which summarizes academic research. We mentioned it as an example of a good GPT to @-mention in your conversations on ChatGPT.

  5. AbacusAI’s new open-source Smaug 34B model, which prompted Abacus CEO Bindu Reddy to say “GPT-4 is in sight!”

Very cool to see what folks are finding interesting to read!

Now on to today’s top AI news.

Marshall Kirkpatrick, Editor

First impacted: AI Researchers, Software Developers
Time to impact: Medium

On an earnings call, Mark Zuckerberg explained in detail why Meta plans to open source its AI research and resources, arguing that this strategy will enhance safety, increase security, improve efficiency, and attract developers. He also maintained that while the company's AI research will be public, Meta's specific product features and data will remain confidential, and that he expects Meta's longer-term AI strategy to stay consistent with this approach. [via @soumithchintala] Explore more of our coverage of: Meta Open Source, AI Research, AI Security. Share this story by email

First impacted: AI developers, AI researchers
Time to impact: Short

Hugging Face has introduced HuggingChat Assistants, a competitive alternative to OpenAI's custom GPTs, powered by the high-performance MistralAI Mixtral 8x7B model. The UI is just like that of GPTs; in my brief testing it's faster (great!) but the answers aren't as good. [HuggingChat - Assistants] Explore more of our coverage of: Hugging Face, AI Assistants, MistralAI Mixtral 8x7B. Share this story by email
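
If you'd rather poke at the same underlying model from code than through the chat UI, here's a rough sketch using Hugging Face's hosted inference client. The model id and its availability on the Inference API are my assumptions, not something from the announcement, so check the docs before relying on this.

    # Rough sketch: query the Mixtral 8x7B model that powers HuggingChat
    # Assistants via Hugging Face's hosted inference client. Assumes the
    # model is served on the Inference API and that you have an HF token.
    from huggingface_hub import InferenceClient

    client = InferenceClient(
        model="mistralai/Mixtral-8x7B-Instruct-v0.1",  # assumed model id
        token="hf_...",  # your Hugging Face access token
    )
    response = client.text_generation(
        "[INST] Summarize this week's AI news in one sentence. [/INST]",
        max_new_tokens=100,
    )
    print(response)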

First impacted: AI researchers, AI experts
Time to impact: Medium

Infini-gram, an advanced n-gram language model developed by researchers at the University of Washington and the Allen Institute for AI, has been trained on a massive dataset of approximately 1.4 trillion tokens. That lets an n-gram model operate at a scale comparable to today's Large Language Models, enhancing its predictive capabilities. Unlike traditional n-gram models, which are limited to short, fixed context windows, Infini-gram can condition on a context of arbitrary length, potentially making it a valuable complement to neural LMs in a hybrid approach that gets the best of both worlds. [infini-gram - a Hugging Face Space by liujch1998] Explore more of our coverage of: Infini-gram, N-gram Models, AI Industry News. Share this story by email
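
For intuition, here's a toy sketch of the "infinite-gram" backoff idea: find the longest suffix of your context that appears anywhere in the corpus, then predict from whatever tokens follow it. This linear-scan version is purely illustrative; the real system uses a suffix-array index to pull this off over trillions of tokens.

    # Toy sketch of arbitrary-length n-gram backoff. Illustrative only --
    # Infini-gram itself uses a suffix-array index, not a linear scan.
    from collections import Counter

    def infini_gram_next(corpus, context):
        """corpus and context are lists of tokens."""
        # Back off from the full context to shorter suffixes, one token at a time.
        for start in range(len(context)):
            suffix = context[start:]
            n = len(suffix)
            counts = Counter()
            # Count every token that follows an occurrence of this suffix.
            for i in range(len(corpus) - n):
                if corpus[i:i + n] == suffix:
                    counts[corpus[i + n]] += 1
            if counts:  # longest matching suffix wins; normalize to probabilities
                total = sum(counts.values())
                return {tok: c / total for tok, c in counts.items()}
        return {}  # not even the final context token appears in the corpus

    corpus = "the cat sat on the mat and the cat sat on the rug".split()
    print(infini_gram_next(corpus, "the cat sat on the".split()))
    # -> {'mat': 0.5, 'rug': 0.5}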

First impacted: AI developers, NLP researchers, Edge Developers
Time to impact: Short

OpenBMB, the nonprofit group responsible for the UltraFeedback dataset, has "quietly" launched a selection of advanced edge AI models, including a 2.4B base model and a 3B dual-language model. The models are said to be highly capable for their size, making them suitable for running at the edge of a network, such as on a mobile device. [openbmb (OpenBMB)] Explore more of our coverage of: Edge AI Models, OpenBMB Launch, Hugging Face Platform. Share this story by email
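
Running a model this size locally follows the standard transformers pattern. A minimal sketch below; the model id is my assumption (the announcement doesn't name checkpoints here), so substitute whichever OpenBMB model you find on the Hub.

    # Sketch of running a small edge-sized model locally with transformers.
    # The model id is an assumption -- check huggingface.co/openbmb for the
    # actual checkpoints; some OpenBMB models need trust_remote_code=True.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "openbmb/MiniCPM-2B-sft-bf16"  # assumed id; verify on the Hub
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # ~2.4B params in bf16 fits in a few GB
        trust_remote_code=True,
    )

    inputs = tokenizer("Small on-device models are useful because", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))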

First impacted: Machine learning researchers, AI technology developers
Time to impact: Medium

Program committee member Zico Kolter announced that the 2024 International Conference on Machine Learning (ICML) has received 9,653 submissions, including 220 position papers, up from 6,538 submissions last year, roughly a 48% year-over-year jump. In case you thought AI research might be slowing down. [via @zicokolter] Explore more of our coverage of: ICML 2024, Machine Learning, Submission Increase. Share this story by email

First impacted: Software Engineers, Knowledge Researchers
Time to impact: Medium

According to an Australian study published last month but getting more discussion this week, software engineers are increasingly using RAG systems to add semantic search to applications, but they run into a common set of problems. The study, which analyzed use cases across education, research, and biomedicine, identified seven areas of weakness in these systems. The authors write that "the two key takeaways arising from our work are: 1) validation of a RAG system is only feasible during operation, and 2) the robustness of a RAG system evolves rather than designed in at the start". In short, you'll have to play around to see what the end result looks like and improve iteratively based on the outputs. [Seven Failure Points When Engineering a Retrieval Augmented Generation System] Explore more of our coverage of: Semantic Search, RAG Systems, Software Engineering. Share this story by email
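
For anyone new to the pattern, here's a minimal sketch of the retrieve-then-generate loop the paper is stress-testing. The toy hashing embedder and the generate() stub are my stand-ins for a real embedding model and LLM call; the paper's point is that each stage (chunking, retrieval, prompting) only reveals its failure modes once real queries flow through it.

    # Minimal RAG sketch: embed documents, retrieve the top-k by cosine
    # similarity, then stuff the hits into a prompt for the LLM. The toy
    # embedder and generate() stub below are placeholders for real models.
    import numpy as np

    def embed(text, dim=256):
        # Toy hashing bag-of-words embedding; swap in a real model in practice.
        v = np.zeros(dim)
        for word in text.lower().split():
            v[hash(word) % dim] += 1.0
        return v

    def generate(prompt):
        # Stub for a real LLM call.
        return f"[LLM would answer here, given:]\n{prompt}"

    def rag_answer(query, docs, k=2):
        doc_vecs = np.array([embed(d) for d in docs])
        q = embed(query)
        # Cosine similarity between the query and every document vector.
        sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
        top = np.argsort(sims)[::-1][:k]  # indices of the k best matches
        context = "\n\n".join(docs[i] for i in top)
        return generate(f"Answer using only this context:\n{context}\n\nQuestion: {query}")

    docs = ["RAG systems retrieve documents to ground LLM answers.",
            "Validation of retrieval quality requires real user queries.",
            "Chunk size affects both recall and prompt length."]
    print(rag_answer("How do you validate a RAG system?", docs))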

First impacted: Governments, Japan, Trade Commissions
Time to impact: Short

Sakana AI has announced receipt of a supercomputing grant from Japan's New Energy and Industrial Technology Development Organization (NEDO), under the Ministry of Economy, Trade, and Industry. The grant, part of the Generative AI Accelerator Challenge (GENIAC), will be used to enhance Sakana AI's nature-inspired AI models and bolster Japan's technical infrastructure. [Sakana AI] Explore more of our coverage of: Supercomputing Grants, AI Development, Post-5G Infrastructure. Share this story by email

Ok, that’s a lot of AI news! More coming end of day Monday. Thanks for reading!