Exploring & Debating AI Fundamentals: 7 Key Articles Today

Today's stories cover the wide-ranging debate over AI. Is it a multi-trillion-dollar opportunity? Is it hopelessly unfixable? Will it hurt society? Will we grow dependent on it either way? One of the stories just calls it "weird." That's almost the only thing we can know for sure.

May these summaries help you do your work more thoughtfully and effectively.

Marshall

1. Zoom can now train its A.I. using some customer data, according to updated terms

"The latest update to the video platform’s terms of service establishes Zoom’s right to utilize some aspects of customer data for training and tuning its AI." Big controversy, which Zoom responded to with a blog post. I won't try to summarize this complex matter (metadata, opt-out, etc) in a few sentences, but it may be something to consider if you use Zoom. [CNBC: Zoom can now train its A.I. using some customer data, according to updated terms]

2. Catching up on the weird world of LLMs

Developer and explorer Simon Willison gave a 40-minute presentation bringing together a discussion of basic principles and his extensive experimentation with LLMs. He's posted it as an annotated blog post, and it's one of the best overviews you'll find online right now. [SimonWillison.net]

3. Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’

Whether it's called hallucination, confabulation or just plain making things up, generative AI's tendency to fabricate information is now a problem for every business, organization and high school student trying to get one of these systems to compose documents and get work done. Some are using it on tasks with the potential for high-stakes consequences, from psychotherapy to researching and writing legal briefs. [Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’]

4. Capturing the Value of Generative AI | VMware News & Stories

VMware has written a big, solid overview of AI in the enterprise: "Generative AI could transform business operations such as software development and customer service, adding $7.9tn annually to the global economy, according to McKinsey. However, high costs, talent shortage, and risks like privacy and intellectual property leakage hinder its adoption." Even for startups, this articulates some of the challenges and opportunities pretty well. [Capturing the Value of Generative AI]

5. “AI” Hurts Consumers and Workers -- and Isn’t Intelligent

Op-ed by Alex Hanna, Director of Research at the Distributed AI Research (DAIR) Institute, and Emily M. Bender, professor of linguistics at the University of Washington: "The gold rush around so-called 'generative artificial intelligence' tools like ChatGPT and Stable Diffusion has been characterized by breathless predictions that these technologies will be the harbingers of death for the traditional search engine or the end of drudgery for paralegals because they seem to 'understand' as well as humans. In reality, these systems do not understand anything. Rather, they turn technology meant for classification inside out: instead of indicating whether an image contains a face or accurately transcribing speech, generative AI tools use these models to generate media. They may create text which appears to human eyes like the result of thinking, reasoning, or understanding, but it is in fact anything but." [“AI” Hurts Consumers and Workers -- and Isn’t Intelligent]

6. Eight Months Pregnant and Arrested After False Facial Recognition Match

Porcha Woodruff thought the police who showed up at her door to arrest her for carjacking were joking. She is the first woman known to be wrongfully accused as a result of facial recognition technology. [NYT]

7. Large Language Models Predicted to Surpass Human Expertise

Mustafa Suleyman said this weekend that while LLMs are still unreliable, in 3 to 5 years we will rely on them "more than even the best trained, most experienced human experts." Eric Horvitz, Microsoft's Chief Scientific Officer, agrees and shares a presentation he gave last week to the Stanford Department of Medicine on the topic of overcoming reliability challenges (AI Aspirations Healthcare Futures). [via @erichorvitz]