
AI: Beff Jezos raises money for AI supercomputer, Mamba challenges transformers, and more (12.4.23)

Friends, thanks for joining me today. I've narrowed things down to the top 7 stories being discussed across the AI community today, according to our weighted analysis of community engagement. And they are remarkable stories.

-Marshall Kirkpatrick, Editor

First impacted: AI researchers, Quantum computing engineers
Time to impact:

AI firm Extropic has raised $14.1M in seed funding from an all-star lineup of AI industry leaders. The company was established by Guillaume Verdon, a former Google leader in quantum deep learning who was (unhappily) unveiled last week as the human behind the leading AI accelerationist persona @BasedBeffJezos. Extropic plans to use the funding to speed up development of its physics-based computational model: "Building an AI supercomputer by harnessing the first principles of thermodynamics and information, like an alien would." [Extropic assembles itself from the future]

First impacted: OpenAI researchers, Rain AI investors
Time to impact:

WIRED writes up a controversial confluence of deals in which OpenAI, led by CEO Sam Altman, is said to have signed a letter of intent to buy $51 million in AI chips from Rain AI, an Nvidia-challenging startup in which Altman made a $1M seed investment. The letter was reportedly shown to investors in subsequent rounds. The US government has also required Rain AI to reverse sales of shares to the Saudi-affiliated investors who led its latest round, and the company's CEO has quietly left his position. [OpenAI Agreed to Buy $51 Million of AI Chips From a Startup Backed by CEO Sam Altman]

First impacted: Machine Learning Engineers, Data Scientists
Time to impact:

The CEO of Hugging Face, Clement Delangue, tweeted that their Transformers library has been adopted by over 100,000 GitHub repositories: "Truly powering all open-source AI out there." [GitHub - huggingface/transformers: 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.]

First impacted: AI researchers, Data scientists
Time to impact:

Albert Gu and Tri Dao have developed a new neural network architecture called Mamba. They say it outperforms Transformers in language modeling, with linear scaling in sequence length and 5x higher inference throughput. Mamba, which incorporates selective structured state space models (SSMs), is designed to tackle the computational inefficiency of Transformers on long sequences. It reportedly improves performance on real data at sequences up to a million tokens long and achieves top results across language, audio, and genomics modalities. [Mamba: Linear-Time Sequence Modeling with Selective State Spaces]

First impacted: Business owners, Content creators
Time to impact:

Runway is partnering with Getty Images to launch a new video model, the Runway & Getty Images Model (RGM). This model is designed to help businesses create custom video content using their own data. The RGM is expected to be available for commercial use in the near future and is anticipated to improve creative capabilities across various sectors. [Runway partners with Getty Images to build enterprise ready AI tools | Runway Blog]

First impacted: Python Developers, Animation Artists
Time to impact:

A team at ByteDance has introduced MagicAnimate, a tool they claim can animate human images based on a specific motion sequence. The demo is too slammed for me to test, but the idea is that you upload a picture of the Mona Lisa or your uncle, select a dance sequence, and the app animates your still photo to do the dance. [GitHub - magic-research/magic-animate: MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model]

First impacted: AI scientists, Nvidia's computer chip users
Time to impact:

Meta's top AI scientist, Yann LeCun, disagrees with Nvidia CEO Jensen Huang's 5-year forecast, saying that AI systems are still years away from achieving human-level intelligence, according to a write-up on CNBC. On Twitter, though, he clarified that he means "clearly not in the next 5 years." He also believes that the current emphasis on language models and text data is not enough, citing multimodal input and output as essential steps in the future of AI. [Meta's AI chief doesn't think AI super intelligence is coming anytime soon, and is skeptical on quantum computing]

That’s it! More AI news tomorrow. Please share this newsletter with friends - the continued expansion of this newsletter’s readership makes it possible for me to put in the time to write it!

-Marshall