AI Time to Impact

. . AI: News for Google users, AI founders, Data scientists (2.1.24)

New image generation, AI grants, big open data sets

Friends, Meta AI leader Yann LeCun says “machine learning sucks” in the lecture published in today’s fifth story. That may be the case, but it sure is useful already anyway.

Today’s stories include news about Google Gemini, AI Grant for startups, and a whole lot of openly released data.

As always, these are the stories the AI community is engaged with most today. I hope at least some of it will prove useful to you.

Marshall Kirkpatrick, Editor

First impacted: Consumers, Multilingual users, Google customers
Time to impact: Short

Google's Bard now supports Gemini Pro in over 40 languages and is available in more than 230 countries, according to a blog post by Bard's Product Lead, Jack Krawczyk. The tool now also includes a free image generation capability (faster but not as good as DALL-E in my initial tests) and a feature to double-check responses in multiple languages. [Bard’s latest updates: Access Gemini Pro globally and generate images] Explore more of our coverage of: Google AI, Conversational AI, Multilingual Support. Share this story by email

First impacted: AI startup founders, Technical founders focused on AI products
Time to impact: Short

AI Grant, an accelerator co-founded by Nat Friedman and Daniel Gross, is accepting applications for its third batch of startups. The accelerator offers a choice of two funding options: $250k on an uncapped SAFE or $2.5m on a $25m cap SAFE, in addition to benefits like cloud and compute credits; it is seeking technical founders who aim to create AI products. The application deadline is in two weeks, on February 16. [AI Grant] Explore more of our coverage of: AI Startup Funding, AI Grant Accelerator, Artificial Intelligence Products. Share this story by email

First impacted: AI sector professionals, Data scientists
Time to impact: Short

AI2 (the non-profit Allen Institute for AI) has launched its first open language model, OLMo, providing the AI community with access to its data, training and evaluation code, and models, all under a permissive Apache 2.0 license. "OLMo 7b edges ahead of Llama 2 on many tasks for generative tasks or reading comprehension (such as truthfulQA), but is slightly behind on popular question-answering tasks such as MMLU or Big-bench Hard." [OLMo: Open Language Model - AI2 Blog] Explore more of our coverage of: AI2, Open Language Model, AI Community. Share this story by email

First impacted: AI developers, Data scientists
Time to impact: Short

Startup Nous Research has made public the dataset used to create Open Hermes 2.5 and Nous-Hermes 2. Nous co-founder Teknium said on X, "This dataset was the culmination of all my work on curating, filtering, and generating datasets, with over 1M Examples from datasets from across the open source ecosystem and some that I generated as well." [huggingface.co] Explore more of our coverage of: Teknium, Open Hermes 2.5, Dataset Publication. Share this story by email

First impacted: AI researchers, AI developers
Time to impact: Short

Yann LeCun, VP and Chief AI Scientist at Meta, discussed the limitations of current AI systems and proposed a new cognitive architecture in a just-published lecture that also provides a good overview of the state of AI. According to LeCun, the proposed structure employs a predictive world model and a Hierarchical Joint Embedding Predictive Architecture (H-JEPA), enabling the system to predict the consequences of its actions and optimize for specific objectives. Ultimately, the goal is "AI systems that can learn, remember, reason, plan, have common sense, yet are steerable and safe." [Lytle Lecture Series | UW Department of Electrical & Computer Engineering] Explore more of our coverage of: AI Architecture, Predictive Modeling, Yann LeCun. Share this story by email