These are today's top stories in AI

2 politics/business stories, 6 technical stories

Good evening, friends! Welcome to the inaugural edition of the aiTTENTION email newsletter. Thanks for subscribing.

Each day we’re going to send a collection of top stories like this, selected by a weighted analysis of AI community engagement online. Our system read thousands of news stories and conversations today, and these are the ones people are talking about. We’re working to summarize them as succinctly as possible, but not so succinctly that they become inaccessible.

I hope these prove useful to you, in some cases directly and in others as inspiration for your own work. If you know someone else who would be inspired by these stories, please do share this email with them.

Let me know if you have any questions, thoughts, feedback, or just want to say hi.

Thanks again,

Marshall

Business & Political Stories

The Workers Behind AI Rarely See Its Rewards. This Indian Startup Wants to Fix That
Indian non-profit Karya pays rural workers to read text in their native languages for AI data. Its app offers an hourly wage of $5, nearly 20 times the Indian minimum wage, plus a 50% bonus for validated voice clips. Workers also own the data they generate, earning from future resales. Harsh working conditions and low pay for AI data labelers have been in the news a lot lately, so this is a notable contrast. [Time.com: The Workers Behind AI Rarely See Its Rewards. This Indian Startup Wants to Fix That]

Tech Whistleblowers Discuss AI Concerns with Emily Chang
Bloomberg's marquee interviewer Emily Chang sits down with tech whistleblowers Safiya Umoja Noble, Ifeoma Ozoma, and Timnit Gebru. In contrast with AI doomerism, they discuss the harms of current AI systems, arguing that these problems are already evident. The article is behind a paywall, but a 90-second video preview is available, including discussion of the dangers of centralization. [Bloomberg: The Consequences for Tech Whistleblowers: ‘People Come After You’]

Technical Stories

Automatic Waypoint Extraction Enhances Robotic Learning
An Automatic Waypoint Extraction (AWE) system has been created to improve robotic learning. Instead of requiring a robot to imitate a complex action in its entirety, it simplifies demonstrations into waypoints, like checkpoints along the way. The system boosts the success rate of behavioral cloning algorithms by up to 25% in simulation and improves real-world bimanual manipulation tasks by 4-28%. Let that be a lesson to us all! [Video demo via @chelseabfinn]
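For intuition, waypoint extraction can be thought of as trajectory compression: keep only the states needed so that interpolating between consecutive waypoints stays close to the original demonstration. Here is a toy Python sketch of that idea; the function name, the greedy selection, and the tolerance parameter are my own illustration, not the actual AWE method (which uses an optimization-based formulation):

```python
import numpy as np

def extract_waypoints(traj, tol=0.05):
    """Greedily pick indices into `traj` (an N x D array of states) so
    that linearly interpolating between consecutive waypoints stays
    within `tol` of every intermediate state. Illustrative only."""
    waypoints = [0]
    start = 0
    for end in range(2, len(traj)):
        seg = traj[start:end + 1]
        # linear interpolation between the segment's two endpoints
        alphas = np.linspace(0.0, 1.0, len(seg))[:, None]
        interp = (1 - alphas) * seg[0] + alphas * seg[-1]
        err = np.abs(seg - interp).max()
        if err > tol:
            # previous index was the last one that interpolated cleanly
            waypoints.append(end - 1)
            start = end - 1
    waypoints.append(len(traj) - 1)
    return waypoints

# A trajectory that rises then flattens compresses to three waypoints:
# the start, the corner, and the end.
traj = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [4.0], [4.0], [4.0]])
corners = extract_waypoints(traj, tol=0.05)  # [0, 4, 7]
```

A behavioral cloning policy then only needs to reach each waypoint in order, rather than reproduce every intermediate state of the demonstration.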

Stack Overflow Announces Its AI Roadmap: OverflowAI
After much public discussion of Stack Overflow's traffic plummeting as developers turned to Copilot and ChatGPT for programming questions, Stack Overflow has announced plans to integrate generative AI into its own platform, Stack Overflow for Teams, and other products. The company says it will introduce semantic search for instant, accurate, and conversational answers. Stack Overflow for Teams will gain enterprise knowledge ingestion for quick knowledge-base curation, and there will be an IDE extension for Visual Studio Code. None of it appears to be available yet, but you can sign up to be an alpha or beta tester. [Announcing OverflowAI]
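Semantic search, the capability Stack Overflow is promising, retrieves results by comparing meaning-bearing embedding vectors rather than matching keywords. A minimal Python sketch of the retrieval step, with made-up toy vectors standing in for a real embedding model (this illustrates the general technique only, not OverflowAI's actual implementation):

```python
import numpy as np

# Toy document "embeddings" (hand-picked for illustration; a real
# system would get these from a trained embedding model).
docs = {
    "how to reverse a list in python": np.array([0.9, 0.1, 0.0]),
    "undo a git commit":               np.array([0.1, 0.9, 0.2]),
    "center a div with css":           np.array([0.0, 0.2, 0.9]),
}

def search(query_vec, docs, k=1):
    """Return the k documents whose embeddings have the highest
    cosine similarity to the query embedding."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(docs, key=lambda d: cos(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# A query embedded near the "git" document retrieves it first, even
# though it shares no keywords with the query text itself.
top = search(np.array([0.2, 0.8, 0.1]), docs)  # ["undo a git commit"]
```

The appeal over keyword search is exactly this: a question phrased differently from any stored answer can still land on the right one, because similarity is computed in embedding space.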

SDXL DreamBooth Optimized in Hugging Face Spaces
SDXL DreamBooth (LoRA), a method for fine-tuning Stable Diffusion image-generation models on a small number of images to produce high-fidelity images of a specific subject, can now be run in Hugging Face Spaces without your own dedicated GPU. Users can duplicate a space and start training, no coding required. [Dreambooth - a Hugging Face Space by autotrain-projects]

GitHub Copilot Builds Markdown Editor Rapidly
A blog post shows how to use GitHub Copilot to build a markdown editor in under two minutes using Next.js, GitHub Codespaces, and the React Markdown npm package. The author also uses Copilot Chat to improve the editor's UI and to create unit tests. The post highlights the non-deterministic nature of GitHub Copilot and similar AI tools, which can yield different results from the same prompt. [How to build a markdown editor in two minutes (with GitHub Copilot)]

Anthropic's Interpretability Team Is Hiring to Reverse-Engineer AI Models
"When you see what modern language models are capable of, do you wonder, 'How do these things work? How can we trust them?' The Interpretability team at Anthropic is working to reverse-engineer how trained models work because we believe that a mechanistic understanding is the most robust way to make advanced systems safe." [Anthropic: Research Scientist, Interpretability]

Fusion: Reinforcement Learning Enhances Plasma Control
DeepMind says: "Last year, we worked with the Swiss Plasma Center to successfully control the nuclear fusion plasma inside a tokamak using AI. Since then, our experiments have improved plasma shape accuracy in simulation by up to 65%." They made algorithmic improvements that increased shape accuracy, reduced steady-state error, and decreased training time. [Arxiv: Towards practical reinforcement learning for tokamak magnetic control]

That's it! Thanks so much for subscribing. If you know someone who’s excited to learn about the newest and most important developments in AI each day, we would be very grateful if you forwarded this email to them. Let us know if we can answer any questions or if you have any feedback!

Thanks,

Marshall Kirkpatrick