AI: If AI code is inferior, should we worry?

AI code generation, US national interests, and more

Friends, want to read short profiles of 50 companies A16Z says are using AI to advance American geopolitical interests? Or evidence that AI-powered code generation lowers code quality? It’s an interesting day of AI news!

As always, these are the stories being discussed most around the AI community. Along with some brief thoughts on who will be impacted first and when. Thanks for reading and sharing this collection.

Marshall Kirkpatrick, Editor

First impacted: Software Developers
Time to impact: Short

GitClear, a company that provides analytics for evaluating codebase quality, released a white paper suggesting that AI-driven code generation tools may lead to lower code quality, pointing to increases in "churn" lines, "added code", and "copy/pasted code". "Code churn -- the percentage of lines that are reverted or updated less than two weeks after being authored -- is projected to double in 2024 compared to its 2021, pre-AI baseline." Even though GitHub reported in 2022 that Copilot helped developers complete tasks 55% faster, GitClear argues the tool may promote code duplication rather than reuse and refactoring. [New GitHub Copilot Research Finds 'Downward Pressure on Code Quality' -- Visual Studio Magazine] Explore more of our coverage of: AI-Assisted Coding, GitHub Copilot, Code Quality. Share this story by email
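For readers curious how a "churn" number like that could be measured, here is a rough, back-of-the-envelope sketch of the idea: for each commit, check how many of its added lines still survive unchanged roughly two weeks later, using `git blame`. This is an approximation for illustration only, not GitClear's actual methodology; the repository path, the 14-day window, and the survival-via-blame heuristic are all assumptions.

```python
"""Approximate a GitClear-style "code churn" rate: the share of newly added
lines that no longer survive unchanged ~two weeks after being authored.
Illustrative sketch only; not GitClear's methodology."""
import subprocess
from datetime import datetime, timedelta

REPO = "."                     # repository to analyse (assumption)
WINDOW = timedelta(days=14)    # two-week churn window

def git(*args):
    """Run a git command in REPO and return its stdout."""
    return subprocess.run(["git", "-C", REPO, *args],
                          capture_output=True, text=True, check=True).stdout

def added_files(sha):
    """Map each file touched by commit `sha` to the number of lines it added."""
    out = git("show", "--numstat", "--pretty=format:", sha)
    files = {}
    for line in out.splitlines():
        if not line.strip():
            continue
        added, _deleted, path = line.split("\t")
        if added.isdigit() and int(added) > 0:
            files[path] = int(added)
    return files

def surviving_lines(sha, later_ref, path):
    """Count lines in `path` at `later_ref` still attributed to commit `sha`."""
    try:
        blame = git("blame", "--line-porcelain", later_ref, "--", path)
    except subprocess.CalledProcessError:
        return 0  # file deleted or renamed: treat its lines as churned
    return sum(1 for l in blame.splitlines() if l.startswith(sha))

def churn_for_commit(sha, authored_at):
    """Return (lines_added, lines_reworked_within_window) for one commit."""
    cutoff = (authored_at + WINDOW).isoformat()
    later = git("rev-list", "-1", f"--before={cutoff}", "HEAD").strip()
    if not later:
        return 0, 0
    added = added_files(sha)
    churned = sum(n - surviving_lines(sha, later, p) for p, n in added.items())
    return sum(added.values()), churned

if __name__ == "__main__":
    total_added = total_churned = 0
    for line in git("log", "--pretty=%H %aI").splitlines():
        sha, iso = line.split()
        a, c = churn_for_commit(sha, datetime.fromisoformat(iso))
        total_added, total_churned = total_added + a, total_churned + c
    if total_added:
        print(f"churn: {100 * total_churned / total_added:.1f}% "
              f"of {total_added} added lines reworked within two weeks")
```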

First impacted: Software Developers, AI Researchers
Time to impact: Short

Meta's AI division has released a new version of its code generation model, Code Llama 70B, which it claims is more capable than earlier versions. It also launched two specialized variants, CodeLlama-70B-Python and CodeLlama-70B-Instruct; the latter scored a notable 67.8 on HumanEval, ranking it among the highest-performing open models available today. [Request access to the next version of Llama] Explore more of our coverage of: Meta AI, Code Generation, Open Models. Share this story by email
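If you want to try the instruct variant yourself, a minimal sketch of prompting it through Hugging Face transformers is below. The model id and the availability of a built-in chat template are assumptions based on how earlier Code Llama releases were published; access is gated (hence the "request access" link above), and a 70B model realistically needs multiple GPUs or aggressive quantization.

```python
"""Minimal sketch of prompting CodeLlama-70B-Instruct via transformers.
Model id and chat template are assumptions; weights are gated by Meta."""
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "codellama/CodeLlama-70b-Instruct-hf"  # assumed repo name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,   # half precision to reduce memory
    device_map="auto",           # shard across available GPUs
)

# Ask for a small, self-contained function, HumanEval-style.
messages = [
    {"role": "user",
     "content": "Write a Python function that checks whether a string is a palindrome."},
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```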

First impacted: AI model creators, AI researchers, AI evaluators
Time to impact: Short

Nous Research announced the launch of a new evaluation system for open-source models built on Bittensor, a decentralized AI network. The system tests models against fresh data from the Cortex subnet, a source of synthetic data generated by GPT-4, aiming to address the manipulation and contamination concerns that come with traditional benchmarks built on public datasets. [via @NousResearch] Explore more of our coverage of: Open-Source Models, Decentralized AI, Benchmarking Analysis. Share this story by email
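The core idea, scoring models on prompts synthesized freshly at evaluation time so the answers cannot already sit in anyone's training data, can be sketched generically. The snippet below is not the Nous or Bittensor API; the scoring rule (mean negative log-likelihood of a reference continuation) and the tiny stand-in model are illustrative assumptions.

```python
"""Generic sketch of contamination-resistant evaluation: score candidates on
freshly generated (prompt, reference) pairs instead of a fixed public test set.
Not the Nous/Bittensor API; scoring rule and models are illustrative."""
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def nll_of_continuation(model, tokenizer, prompt, reference):
    """Mean negative log-likelihood the model assigns to `reference` after `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + reference, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100     # only score the continuation
    with torch.no_grad():
        loss = model(full_ids, labels=labels).loss
    return loss.item()

def evaluate(candidate_name, fresh_pairs):
    """Average a candidate model's NLL over freshly generated eval pairs."""
    tok = AutoTokenizer.from_pretrained(candidate_name)
    model = AutoModelForCausalLM.from_pretrained(candidate_name)
    scores = [nll_of_continuation(model, tok, p, r) for p, r in fresh_pairs]
    return sum(scores) / len(scores)

# In the real system these pairs would be synthesized on the fly by a strong
# generator model (GPT-4 plays that role in the Cortex subnet); hard-coded here.
fresh_pairs = [
    ("Q: What does the CAP theorem trade off?\nA:",
     " Consistency, availability, and partition tolerance."),
]
print(evaluate("gpt2", fresh_pairs))   # tiny stand-in model so the sketch runs
```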

First impacted: AI researchers, AI technology developers
Time to impact: Short

MoE-LLaVA, a new large vision-language model with only 3 billion selectively activated parameters, has been released, with a Gradio demo hosted on Hugging Face. MoE stands for "Mixture of Experts," a machine learning paradigm in which a collection of models (the "experts") each specializes in different parts of the input space. The model is claimed to perform on par with LLaVA-1.5-7B and even surpass LLaVA-1.5-13B in an object hallucination test. You can test it for yourself with any image you upload. [MoE LLaVA - a Hugging Face Space by LanguageBind] Explore more of our coverage of: Gradio, MoE-LLaVA, Vision-Language Model. Share this story by email
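To make the "selectively activated parameters" idea concrete, here is a toy PyTorch sketch of a sparse Mixture of Experts layer: a router picks a small top-k subset of expert MLPs per token, so only a fraction of the layer's parameters run on any given input. The sizes and top_k below are illustrative choices, not MoE-LLaVA's actual configuration.

```python
"""Toy sparse Mixture of Experts layer: a router activates only top-k experts
per token, so most parameters stay idle. Sizes are illustrative."""
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)       # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                                    # x: (tokens, d_model)
        gates = F.softmax(self.router(x), dim=-1)            # routing probabilities
        weights, chosen = gates.topk(self.top_k, dim=-1)     # keep only top-k experts
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e                  # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

tokens = torch.randn(16, 512)          # 16 tokens of a toy sequence
layer = SparseMoELayer()
print(layer(tokens).shape)             # torch.Size([16, 512])
```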

First impacted: Emerging Industries, Consumers, Businesses, Government
Time to impact: Short

Venture capital firm Andreessen Horowitz (a16z) has released its annual American Dynamism 50 list; this year's edition highlights companies using AI. It's an interesting set of short profiles of companies across aerospace, defense, energy, transportation, manufacturing, and more. [The American Dynamism 50: AI Edition] Explore more of our coverage of: AI Startups, Emergency Response Technology, Manufacturing Automation. Share this story by email

That’s it! More AI news tomorrow!