
AI: Open Source Gets Some Mojo, Grok-1.5 and LLM Guide from HuggingFace's Co-founder (3.29.24)

Grok, Google, Modular, Lilac, HuggingFace

Friends, learning together is a huge part of the contemporary AI industry, and today's top stories are all about people sharing their work so everyone can go faster.

The relatively closed OpenAI may still be at the top of nearly all the leaderboards and dominate in market share, but it's clear that open source is a very capable competitor, superior in some cases, and is only getting stronger.

We hope today’s top stories serve your ongoing learning. And check out the very end for a bonus prompt we’ve been having great success with!

-Marshall Kirkpatrick, Editor

First impacted: Software Developers, Open-Source Community Members
Time to impact:

Modular has released the core modules of the standard library for Mojo, its AI-focused programming language, under the Apache 2 license. "At Modular, our team has either built, contributed to, or driven incredible open source projects like LLVM, Swift, TensorFlow and PyTorch, and we want to make sure we do things right. This means not just 'making source code available', but truly fostering and cultivating a vibrant and engaged open development community around Mojo🔥." [Modular: The Next Big Step in Mojo🔥 Open Source] Explore more of our coverage of: Open-Source Development, Apache 2 License, Mojo Standard Library. Share this story by email

First impacted: Software Developers, Data Scientists
Time to impact:

xAI says it plans to launch Grok-1.5, a new AI model on the 𝕏 platform, which it claims has enhanced reasoning and problem-solving capabilities, particularly in coding and math tasks, and can process long contexts of up to 128K tokens. The company says Grok-1.5 competes with the state-of-the-art models (Mistral, Claude, Gemini, GPT-4) and will roll out to early testers over the coming days. [Announcing Grok-1.5] Explore more of our coverage of: AI Model Development, Enhanced Reasoning, Benchmark Performance. Share this story by email

First impacted: AI researchers, Data scientists
Time to impact:

Thomas Wolf, co-founder and CSO of HuggingFace, shared a detailed guide on training LLMs, focusing on data preparation, modeling techniques, fine-tuning, and rapid inference methods. The guide introduces multiple libraries and discusses parallelism, synchronization challenges, new architectures, RLHF, quantization, speculative decoding, and other concepts. [A little guide to building Large Language Models in 2024] Explore more of our coverage of: LLM Training, AI Modeling Techniques, Rapid Inference Methods. Share this story by email

First impacted: Startup founders, Venture capitalists
Time to impact:

Lilac AI's co-founder, Nikhil Thorat, who sold the startup to Databricks earlier this month, has shared insights from his first year of running a startup. He emphasizes the value of a reliable co-founder, the necessity of risk-taking even when timing seems imperfect, and the importance of adhering to your vision without over-relying on venture capitalist opinions. Thorat also advises pitching frequently, keeping your next two milestones in mind when seeking funding, and not overvaluing your startup's worth; he notes that Lilac AI's ability to attract serious enterprise users with only 600 GitHub stars shows that popularity metrics don't necessarily reflect business potential. [Lessons of a first time founder] Explore more of our coverage of: AI Startups, Venture Capital, Entrepreneurial Advice. Share this story by email

First impacted: Software Developers, AI Researchers
Time to impact:

Andrew Ng recently commented on areas he sees as ripe for innovation with respect to AI-powered agents. He has now released a deeper summary, this time covering reflection. He says that reflection enables a large language model (LLM) to self-evaluate and refine its output, leading to improved performance in tasks such as code creation, text composition, and question answering. Check out his post for a deeper dive. [One Agent For Many Worlds, Cross-Species Cell Embeddings, and more] Explore more of our coverage of: AI Workflows, Reflection Design Pattern, Large Language Models. Share this story by email

Bonus: If you'd like to experiment with simple reflection by the AI you use, try this series of prompts; it's been serving us well for weeks:

1. Make your prompt as you normally would

2. Follow up with: "Please reflect on the question I just asked you: Describe the problem of answering the above question, in bullet points, while addressing the problem goal, inputs, outputs, rules, constraints, and other relevant details. I am looking for a sophisticated reflection and answer."

3. Once it's responded, ask it to answer again: "Given those reflections, answer the question again in this new light."
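If you'd rather script this than paste prompts by hand, the three steps above can be sketched as a small Python helper. This is a minimal sketch, not an official API: the `role`/`content` message dicts follow the convention most chat-completion APIs share, and `ask` is a hypothetical stand-in for whatever model call you actually use.

```python
# The exact reflection and re-answer prompts from the steps above.
REFLECTION_PROMPT = (
    "Please reflect on the question I just asked you: Describe the problem "
    "of answering the above question, in bullet points, while addressing "
    "the problem goal, inputs, outputs, rules, constraints, and other "
    "relevant details. I am looking for a sophisticated reflection and answer."
)
FINAL_PROMPT = "Given those reflections, answer the question again in this new light."

def reflection_conversation(question, ask):
    """Run the three-step reflection sequence, threading prior turns into each call.

    `ask` is any callable that takes a list of {"role", "content"} messages
    and returns the assistant's reply as a string (e.g., a thin wrapper
    around your chat API of choice).
    """
    # Step 1: make your prompt as you normally would.
    messages = [{"role": "user", "content": question}]
    first_answer = ask(messages)

    # Step 2: follow up with the reflection prompt.
    messages += [
        {"role": "assistant", "content": first_answer},
        {"role": "user", "content": REFLECTION_PROMPT},
    ]
    reflection = ask(messages)

    # Step 3: ask it to answer again in light of its reflections.
    messages += [
        {"role": "assistant", "content": reflection},
        {"role": "user", "content": FINAL_PROMPT},
    ]
    return ask(messages)
```

Each call sees the full conversation so far, which is what makes the final answer benefit from the model's own reflection.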

I’ve got that in a Google doc bookmarked in my browser and I regularly add it to anything I’m doing!

Have a good weekend!