• AI Time to Impact

AI for Lucid Dreaming, Google Cloud + Hugging Face (1.25.24)

Friends, here are 6 headlines and summaries from yesterday. If you want to see something far-out, check out the video demo of this LLM-style AI for lucid dreaming in the first story!

Consider this your AM edition. We'll be sending out a PM edition later today as well, due to logistical constraints.

Hope these are useful to you!

Marshall Kirkpatrick, Editor

First impacted: Researchers, Neuroscientists, Lucid dream enthusiasts
Time to impact: Medium

Texas startup Prophetic is inviting beta testers for its Halo prototype, the Morpheus-1, a first in dream-control technology. Scheduled for testing in spring 2024, the device diverges from conventional language models by interpreting brain states rather than words: instead of generating text, Morpheus-1 generates ultrasonic holograms, using neurostimulation to guide users into a lucid dreaming state. [via @PropheticAI] Explore more of our coverage of: AI Technology, Lucid Dreaming, Beta Testing.

First impacted: AI software developers, Google Cloud users
Time to impact: Short

Google Cloud has launched a partnership with AI platform Hugging Face to streamline AI software development on the Google platform. Reuters reports: "The companies reached a revenue-sharing agreement, but did not disclose the terms. The arrangement will allow developers using Google Cloud's technical infrastructure to access and deploy Hugging Face's repository of open source AI software, such as models and data sets."

  • Google Cloud's CEO, Thomas Kurian, said he anticipates demand for AI computing on the cloud eventually exceeding the traditional cloud software market, signaling a potential shift in the cloud business.

  • Google also announced that they are in the process of integrating Nvidia's H100s, and expect to finish within the upcoming "weeks" rather than "months".

First impacted: AI developers, AI researchers
Time to impact: Short

OpenAI has launched new embedding models and improved versions of GPT-4 Turbo and GPT-3.5 Turbo, and introduced new tools for managing API usage. Among the improvements, the company highlights the reduced input and output costs of GPT-3.5 Turbo, the enhanced features of GPT-4 Turbo, and a text-moderation model they say is their most effective yet, text-moderation-007. They also plan to launch GPT-4 Turbo with vision in the coming months. [New embedding models and API updates] Explore more of our coverage of: OpenAI, AI Models, API Tools.
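For readers new to embedding models: the vectors they return are typically compared with cosine similarity to measure how related two texts are (OpenAI documents its embeddings as normalized to unit length, so a plain dot product gives the same ranking). A minimal, self-contained sketch of that comparison, using toy vectors rather than real model output:

```python
import math

# Cosine similarity between two embedding vectors. The toy 2-D vectors
# below stand in for real embeddings, which have hundreds to thousands
# of dimensions.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # same direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```

In practice you would substitute vectors returned by an embeddings endpoint for the toy lists above; the similarity math stays the same.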

First impacted: AI developers, AI researchers
Time to impact: Short

WIRED has written a profile of Beijing-based startup 01.AI, which has launched Yi-34B, a model it claims surpasses Meta's LLaMA 2 and has quickly climbed the AI language model rankings on the Hugging Face platform. WIRED notes that the model's open-source release contrasts with the closed approach most larger players have taken so far, and has drawn praise from the open-source community.

  • Despite its recent launch, 01.AI has garnered considerable investment, having raised $200 million from entities including Chinese ecommerce titan Alibaba, and currently holds a valuation over $1 billion.

First impacted: Python developers, JavaScript developers
Time to impact: Short

Ollama has released its initial Python and JavaScript libraries, aiming to streamline app integration with support for multimodal input. These libraries allow for quick incorporation of Ollama's features into both new and existing applications, mirroring the capabilities of the Ollama REST API. [Python & JavaScript Libraries · Ollama Blog] Explore more of our coverage of: Python Libraries, JavaScript Libraries, Ollama Integration.
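Since the libraries mirror the REST API, a chat call (e.g. `ollama.chat(...)` in Python) is essentially a thin wrapper around a JSON request to the local Ollama server's `/api/chat` endpoint. A rough sketch of that payload shape, with the multimodal case included (model name and structure here are illustrative of the REST API, not the library's internals):

```python
import json

# Sketch of the JSON payload a chat request sends to a local Ollama
# server's /api/chat endpoint. Images for multimodal models ride along
# as base64-encoded strings on the message.
def build_chat_request(model, prompt, image_b64=None):
    message = {"role": "user", "content": prompt}
    if image_b64 is not None:
        message["images"] = [image_b64]
    return {"model": model, "messages": [message], "stream": False}

payload = build_chat_request("llava", "What is in this picture?")
print(json.dumps(payload, indent=2))
```

The client libraries hide this plumbing behind a one-line call, which is the convenience the release is aiming at.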

First impacted: AI researchers, NLP engineers
Time to impact: Medium

Researchers have developed MambaByte, a token-free adaptation of the Mamba state space model that operates on byte sequences rather than subwords, potentially enabling faster language modeling free of subword tokenization biases. According to their experiments, MambaByte demonstrates computational efficiency and competitive performance against state-of-the-art subword Transformers, and because it scales linearly in sequence length, it offers faster inference than Transformers. [MambaByte: Token-free Selective State Space Model] Explore more of our coverage of: MambaByte Development, Language Modeling, Computational Efficiency.

Ok, that’s it for now - more AI stories coming up later today!