AI: Open source rivals ChatGPT, emotionally manipulating your LLM, and more

Friends, here's what the AI community is talking about today.  

For those of you just joining us: these are short summaries of the day's most-discussed AI news, selected via a weighted analysis of AI community engagement, then analyzed by an ensemble of LLMs and edited by me.

We're going to have some cool changes over here tomorrow, coinciding with a milestone I'm excited to have just crossed.

But for now, here’s today’s AI news!

Cheers,

Marshall

Alignment Lab AI Launches OpenChat 3.5, Claims to Rival ChatGPT

First impacted: AI researchers, Open-source community developers
Time to impact: Short

The open-source organization Alignment Lab AI has launched a new high-performance model, OpenChat 3.5, which it says delivers results comparable to ChatGPT despite being only a third of its rumored size. You can test it at openchat.team. The organization says the launch underscores the open-source community's leadership in alignment and explainability research, contrasting with organizations that withhold such advancements. [via @alignment_lab]

OpenChat’s UI
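
If you'd rather run the model locally than use the web demo, here's a minimal sketch using Hugging Face transformers. Note the repo ID openchat/openchat_3.5 is my assumption about where the weights are published (check the organization's Hugging Face page before running this), and you'll need a GPU with enough memory to hold the model.

```python
# Minimal sketch (not from the announcement): querying OpenChat 3.5 locally
# via Hugging Face transformers. The repo ID below is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openchat/openchat_3.5"  # assumed Hugging Face repo name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "In two sentences, what is alignment research?"}]
# apply_chat_template wraps the conversation in the model's expected prompt format
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```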

Study: Emotional Prompts Boost Large Language Models' Performance

First impacted: AI researchers, AI developers
Time to impact: Short

Ethan Mollick shared a research paper today with this explanation: "This was a study I was waiting for: does appealing to the (non-existent) 'emotions' of LLMs make them perform better? The answer is YES. Adding 'this is important for my career' or 'You better be sure' to a prompt gives better answers, both objectively & subjectively!" I just tested this on OpenChat 3.5 (see above) - I got a short paragraph when I asked for its advice on something, but a five-point checklist when I said "you better be sure - this is important for my career!" [Large Language Models Understand and Can Be Enhanced by Emotional Stimuli]
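
If you want to reproduce that little experiment, here's a minimal sketch of the before/after comparison, written against an OpenAI-compatible chat endpoint. The base URL and model name are placeholders for whatever server you point it at (a locally hosted OpenChat 3.5, for example), not values from the paper.

```python
# Minimal sketch: the same prompt with and without an "emotional stimulus"
# appended, in the spirit of the paper's approach. The endpoint and model
# name below are placeholders, not real values.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

question = "What should I consider before accepting a job offer?"
stimulus = " You better be sure - this is important for my career!"

for label, prompt in [("plain", question), ("emotional", question + stimulus)]:
    reply = client.chat.completions.create(
        model="openchat_3.5",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---\n{reply.choices[0].message.content}\n")
```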

AI-Powered Fusus System Expands Surveillance Capabilities Across US

First impacted: Law enforcement agencies, City councils
Time to impact: Short (manual override; the AIs say this is nothing urgent!)

According to a report from the new publication 404 Media, a company called Fusus is selling surveillance software across the US, stitching together formerly disconnected camera feeds into a unified hub for law enforcement and incorporating object-recognition capabilities. The report says Fusus now reaches nearly 150 jurisdictions, covering over 33,000 cameras across 2,400 US locations. [AI Cameras Took Over One Small American Town. Now They're Everywhere]

Runway Enhances Video Generation with Gen-2 Update

First impacted: Content Creators, Video Editors
Time to impact: Short

Runway's Gen-2 video generation tool has received an upgrade, enhancing its text-to-video and image-to-video capabilities, with users reporting a significant improvement in video quality. There is a learning curve, though; I'll spare you the nightmarish video I accidentally created with it. The tool is part of a suite of over 30 AI tools designed to streamline content creation. [Runway - Everything you need to make anything you want.]

Luma Labs Launches Genie, a Generative 3D Foundation Model

First impacted: 3D Model Designers, Discord Users
Time to impact: Medium

Luma Labs has unveiled Genie, a generative 3D foundation model in its research preview stage, enabling swift creation, prototyping, and customization of 3D models on Discord. You can see some examples of UGC here; this is reminiscent of Roblox's plans to release 3D generative AI later this year or early next year. [lumalabs.ai]

AI Risk Overestimated, Argues Security Expert Halvar Flake

First impacted: AI security experts, AI developers
Time to impact: Short (manual override; these discussions are happening now)

In a counter to the theory of an "intelligence explosion", security expert Halvar Flake (a nom de plume) posits that the risks associated with artificial intelligence are often overestimated, arguing that most processes are self-limiting, which makes deliberate exponential chain reactions difficult to engineer. He argues that even a malicious AI would need extensive real-world experimentation to inflict actual harm, limiting its potential for damage. As he puts it: "People are subconsciously incentivized to overestimate the risk of their own work, because it makes them feel important." [via @HalvarFlake]

That's it! More AI news tomorrow.