AI: Claude can use tools (4.5.24)

Friends, in today's edition, we delve into the buzz surrounding 'Tool Use', with Anthropic announcing their latest offering. We also bring you updates from Alibaba and Google. Happy reading!

-Marshall Kirkpatrick, Editor

It's Friday: want to know which three links people clicked on most in this newsletter this week? StableAudio music generation, OpenUI: describe a UI and see it generated, and Anthropic's Many-Shot Jailbreaking.

First impacted: AI developers, AI researchers
Time to impact: Short

Anthropic has launched a public beta that lets Claude interact with external tools through the Claude API using structured outputs, enabling it to perform tasks that require real-time data or complex computations, and even to orchestrate Claude subagents for more granular tasks. Tool use is a step toward emulating complex, intelligent behavior, as it unlocks basic levels of planning, goal setting, and collaboration in pursuit of a desired outcome. Tool use is not yet available on third-party platforms like Vertex AI or Amazon Bedrock, but Anthropic says support is coming soon. [Tool use (function calling)] Explore more of our coverage of: Anthropic AI, External Tool Interaction, AI Model Development. Share this story by email
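
To make the flow concrete, here's a minimal sketch using the Anthropic Python SDK: the caller declares a tool with a JSON schema, Claude replies with a structured tool_use request, and the caller returns the result so Claude can finish its answer. The get_weather tool, the look_up_weather stub, and the model choice are illustrative assumptions on our part, not details from the announcement.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def look_up_weather(city: str) -> str:
    # Hypothetical stand-in for a real weather data source.
    return f"Sunny and 18°C in {city}"

tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a given city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

messages = [{"role": "user", "content": "What's the weather in Tokyo?"}]
response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    tools=tools,
    messages=messages,
)

# When Claude decides a tool is needed, it stops with a structured request
# instead of a final answer.
if response.stop_reason == "tool_use":
    call = next(block for block in response.content if block.type == "tool_use")
    result = look_up_weather(call.input["city"])
    # Feed the tool result back so Claude can compose its final reply.
    messages += [
        {"role": "assistant", "content": response.content},
        {"role": "user", "content": [{
            "type": "tool_result",
            "tool_use_id": call.id,
            "content": result,
        }]},
    ]
    final = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1024,
        tools=tools,
        messages=messages,
    )
    print(final.content[0].text)
```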

First impacted: Software developers, AI chatbot developers
Time to impact: Short

Alibaba Cloud has launched two new models, Qwen1.5-32B and Qwen1.5-32B-Chat, as part of the Qwen1.5 language model series, aiming to balance performance, efficiency, and memory usage. The models incorporate Grouped Query Attention (GQA), which lets groups of query heads share key/value heads, improving inference speed and lowering deployment costs with little loss in quality. The company says Qwen1.5-32B performed well on benchmarks such as MMLU, GSM8K, HumanEval, and BBH, outperforming other 30B-scale models on most tasks and competing with models twice its size. [Qwen1.5-32B: Fitting the Capstone of the Qwen1.5 Language Model Series] Explore more of our coverage of: GitHub Updates, Language Models, Chat Applications. Share this story by email
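
For readers curious about the mechanism, here's a minimal PyTorch sketch of GQA: several query heads share each key/value head, which shrinks the KV cache relative to full multi-head attention. The dimensions are illustrative, not Qwen's actual configuration.

```python
import torch
import torch.nn.functional as F

batch, seq_len = 2, 16
n_q_heads, n_kv_heads, head_dim = 8, 2, 64   # 4 query heads per KV head

q = torch.randn(batch, n_q_heads, seq_len, head_dim)
k = torch.randn(batch, n_kv_heads, seq_len, head_dim)  # far fewer KV heads
v = torch.randn(batch, n_kv_heads, seq_len, head_dim)  # -> smaller KV cache

# Repeat each key/value head across its group of query heads.
group = n_q_heads // n_kv_heads
k = k.repeat_interleave(group, dim=1)
v = v.repeat_interleave(group, dim=1)

out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # (batch, n_q_heads, seq_len, head_dim)
```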

First impacted: AI developers, Machine learning engineers
Time to impact: Long

Google has developed a method for training LLMs on neurally compressed text: raw text is compressed into a much shorter sequence by a learned model, much as an image codec shrinks a high-definition photo without sacrificing its content, and the LLM is then trained over that compressed stream. The researchers say the technique outperforms byte-level baselines on perplexity and inference-speed benchmarks but falls short of subword tokenizers, leading them to conclude the approach needs further work before it is ready for broader adoption. [Training LLMs over Neurally Compressed Text] Explore more of our coverage of: Google AI, Text Compression, Language Models. Share this story by email
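
A toy illustration of the core idea, with zlib standing in for the paper's learned neural compressor: compress the raw bytes, then treat the compressed stream as the model's token sequence, so each training step covers far more underlying text.

```python
import zlib

text = ("An example document about AI news. " * 50).encode("utf-8")
compressed = zlib.compress(text)

byte_tokens = list(text)              # byte-level baseline: one token per byte
compressed_tokens = list(compressed)  # tokens drawn from the compressed stream

print(f"byte-level tokens: {len(byte_tokens)}")
print(f"compressed tokens: {len(compressed_tokens)}")
print(f"compression ratio: {len(byte_tokens) / len(compressed_tokens):.1f}x")
```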

That’s it! More AI news on Monday!