
AI: From Music to Video Generation, AI is in Our DNA Now! (2.27.24)

AI Music Generation, Text-to-Video, Genomic Sequencing

Friends, today’s spotlight is on the tools of the future. From generating music to generating videos, it is remarkable to think how easy production will soon be.

4 of the following 5 stories have not yet been covered by any other leading AI newsletter or by Techmeme. As always, these are the AI stories the AI community is talking about, and we aim to make them easy for you to keep up with, or at least as easy as possible in this fast-moving industry.

If you find this newsletter valuable, we hope you’ll share it with someone you think would like it.

Now here are today’s top stories.

-Marshall Kirkpatrick, Editor

First impacted: Music enthusiasts, DJs
Time to impact: Short

Google's MusicFX has launched a feature called "DJ mode", an AI-powered tool that lets users blend their chosen instruments and genres into a continuous AI jam. The tool, developed through a collaboration between Google DeepMind GenMedia and several Google Labs partners, restricts sessions to one hour, encouraging users to manipulate the music in real time. I played around with it and I really liked it! All you have to do is enter a prompt and use the sliders; it felt like the no-code equivalent of music generation! [aitestkitchen.withgoogle.com] Explore more of our coverage of: AI Music Generation, Google DeepMind, Interactive Features. Share this story by email

First impacted: Geneticists, Bioinformatics Researchers
Time to impact: Medium

**This story has not been covered yet by any other leading AI newsletter or Techmeme.**

Together AI, in collaboration with the Arc Institute, has developed Evo, a biological prediction model that they say can generate novel CRISPR gene-editing systems and handle sequences of over 650k tokens on a single GPU, something that has been difficult in the past because of the extremely long context lengths genomic data requires. In the future, Together AI plans to train larger models and extend Evo's pretraining to human genomes. This is a huge step forward in genomic research and potentially unlocks advances like hyper-personalized treatments, DNA editing, and new cures! [Evo: Long-context modeling from molecular to genome scale] Explore more of our coverage of: AI Genomics, Predictive Modeling, GPU Efficiency. Share this story by email
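If Evo is published as a standard causal language model on Hugging Face (which appears to be the plan, though treat the details below as assumptions rather than Together AI's official instructions), probing it on a long DNA string might look something like this minimal sketch. The checkpoint ID `togethercomputer/evo-1-131k-base` and the `trust_remote_code` flag are assumptions on my part; check the Evo release page for the real model names and loading recipe.

```python
# Hypothetical sketch: running a long DNA sequence through a long-context
# genomic language model via Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/evo-1-131k-base"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # reduced precision helps fit long contexts on one GPU
    trust_remote_code=True,        # Evo's architecture ships as custom model code
    device_map="auto",
)

# Genomic models treat DNA as plain text, roughly one token per nucleotide,
# which is why long regions quickly reach hundreds of thousands of tokens.
dna = "ATGCGTACGTTAGC" * 100
inputs = tokenizer(dna, return_tensors="pt").to(model.device)

with torch.no_grad():
    logits = model(**inputs).logits

print(f"sequence length: {inputs['input_ids'].shape[1]} tokens")
print(f"logits shape:    {tuple(logits.shape)}")  # (batch, tokens, vocab)
```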

First impacted: Filmmakers, Visual Effects Artists
Time to impact: Short

**This story has not been covered yet by any other leading AI newsletter or Techmeme.**

Pika Labs says it has launched its text-to-video platform, with an additional Lip Sync feature, powered by ElevenLabs, offered to Pro users. They have an awesome sizzle reel, but I played around with the freemium version and I suspect your results may vary, as the product is still young! [HOME | Pika Labs] Explore more of our coverage of: Pika Labs, Text-to-Video Platform, Lip Sync Feature. Share this story by email

First impacted: Machine Learning Researchers, Fine Tuners, Model Trainers
Time to impact: Short

**This story has not been covered yet by any other leading AI newsletter or Techmeme.**

Argilla and Hugging Face have launched a dataset named 10k_prompts_ranked as part of the Data-is-Better-Together project, according to a blog post about the collaborative initiative. The dataset, which includes 10K prompts and more than 14K human ratings from over 300 contributors, was compiled in three weeks and is intended to aid in the training and evaluation of language models and the study of annotator behavior. It's also really interesting just to click around and see some of the very sophisticated prompts in the dataset! [DIBT/10k_prompts_ranked · Datasets at Hugging Face] Explore more of our coverage of: Open-Source ML, Hugging Face, Dataset Development. Share this story by email
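Since the dataset lives on the Hugging Face Hub, here is a minimal sketch of pulling it down for a look with the `datasets` library. The repo ID comes straight from the story above; the column names (`prompt`, `avg_rating`) are assumptions about the schema, so inspect `ds.column_names` before relying on them.

```python
# Sketch: download and browse the 10k_prompts_ranked dataset locally.
from datasets import load_dataset

ds = load_dataset("DIBT/10k_prompts_ranked", split="train")

print(ds)                # row count and features
print(ds.column_names)   # confirm the actual schema first

# Example exploration (assumed columns): surface the highest-rated prompts.
if "avg_rating" in ds.column_names:
    top = ds.sort("avg_rating", reverse=True).select(range(5))
    for row in top:
        print(round(row["avg_rating"], 2), row["prompt"][:120])
```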

First impacted: Graphic Designers, Content Creators
Time to impact: Short

**This story has not been covered yet by any other leading AI newsletter or Techmeme.**

Playground AI has launched Playground v2.5, a new text-to-image generative model that it claims outperforms previous models like SDXL, PixArt-α, DALL-E 3, and Midjourney 5.2 in visual appeal. Now available through the Hugging Face Diffusers library, the model is designed to generate images in multiple aspect ratios. You can test it here: https://huggingface.co/spaces/AP123/Playground-v2.5 [playgroundai/playground-v2.5-1024px-aesthetic · Hugging Face] Explore more of our coverage of: AI Generative Models, Text-to-Image Technology, Hugging Face Platform. Share this story by email
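Because the weights are distributed through Diffusers, a quick local test might look like the sketch below. The repo ID is taken from the story above; the fp16 variant flag and the default scheduler and guidance settings are assumptions, so defer to the model card for the recommended recipe.

```python
# Sketch: generate one image with Playground v2.5 via diffusers on a CUDA GPU.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "playgroundai/playground-v2.5-1024px-aesthetic",
    torch_dtype=torch.float16,  # half precision to fit consumer GPUs
    variant="fp16",             # assumes fp16 weight files are published in the repo
).to("cuda")

prompt = "a DJ mixing records in a neon-lit studio, cinematic lighting"
image = pipe(prompt=prompt, num_inference_steps=50).images[0]
image.save("playground_v2_5_sample.png")
```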

That’s it! More AI news tomorrow!