
Peek into AI workshops of industry leaders, today's top 8 stories (8.17.23)

Friends, today’s top stories in AI feel like a peek into the workshops of AI industry leaders large and small. I hope you find them thought-provoking and useful.

Cheers,

Marshall Kirkpatrick, Editor

RoboAgent: Meta AI and Carnegie Mellon's Skill-Learning Robot

Meta AI and Carnegie Mellon University have developed RoboAgent, a robot that can learn 12 manipulation skills (from "flap-open" to "slide") and apply them in scenarios it hasn't seen before. Trained over two years on just 7,500 trajectories, it can perform 38 tasks involving object manipulation. It learns efficiently, avoids overfitting, and improves as it gathers new experiences. Lots of video in the announcement.
[RoboAgent Towards Sample Efficient Robot Manipulation with Semantic Augmentations and Action Chunking]
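The "action chunking" in the paper's title refers to predicting a short sequence of actions per observation rather than one action at a time. Here's a minimal sketch of the idea in PyTorch; the ChunkedPolicy class, layer sizes, and chunk length are illustrative assumptions, not RoboAgent's actual architecture:

```python
import torch
import torch.nn as nn

class ChunkedPolicy(nn.Module):
    """Illustrative policy head: predicts a chunk of H actions per observation."""
    def __init__(self, obs_dim: int, act_dim: int, chunk_size: int = 8):
        super().__init__()
        self.chunk_size = chunk_size
        self.act_dim = act_dim
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256),
            nn.ReLU(),
            nn.Linear(256, chunk_size * act_dim),  # one forward pass -> H actions
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # (batch, obs_dim) -> (batch, chunk_size, act_dim)
        return self.net(obs).view(-1, self.chunk_size, self.act_dim)

policy = ChunkedPolicy(obs_dim=64, act_dim=7)
obs = torch.randn(1, 64)
action_chunk = policy(obs)   # execute these 8 actions before re-planning
print(action_chunk.shape)    # torch.Size([1, 8, 7])
```

Executing several predicted actions before re-planning reduces the number of decisions the policy must learn per trajectory, which is one way the paper pursues sample efficiency.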


Platypus Tops Open LLM Leaderboard with Innovative Approach
The open-source large language model Platypus has hit the #1 position on HuggingFace's Open LLM Leaderboard. It trains on the curated Open-Platypus dataset, from which similar questions were removed to boost performance, and it fine-tunes LoRA adapters applied to the model's non-attention (feed-forward) modules. Impressively, a 13-billion-parameter Platypus model can be trained on a single A100 GPU with 25,000 questions in 5 hours. [Platypus: Quick, Cheap, and Powerful Refinement of LLMs]
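For the curious, here is a minimal sketch of that non-attention LoRA setup using Hugging Face's peft library. The target module names match LLaMA-style models, and the r, alpha, and dropout values below are placeholders rather than the paper's exact hyperparameters:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")

# LoRA applied only to the MLP (non-attention) projections.
# r, lora_alpha, and lora_dropout are placeholder values, not the paper's settings.
config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the small adapter weights train
```

Restricting training to the adapters is what makes the single-GPU, five-hour budget plausible: the 13B base model's weights stay frozen.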

Google Research and MIT Unveil CHITA for Efficient Neural Network Pruning
Neural networks are pruned to prevent overfitting to training data, improve generalization, reduce computation time and memory usage, and improve interpretability. Google Research and MIT have developed CHITA, a combinatorial-optimization-based method for efficiently pruning large-scale pre-trained neural networks, which they report outperforms existing methods in both scalability and performance. [Neural network pruning with combinatorial optimization]
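CHITA's contribution is framing pruning as a combinatorial optimization problem and solving it at scale; the paper's solver isn't reproduced here. For orientation, this is what the much simpler magnitude-pruning baseline, the kind of method CHITA is compared against, looks like in PyTorch:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Zero out the 80% of weights with the smallest absolute value in each layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)
        prune.remove(module, "weight")  # make the sparsity permanent

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"fraction of zero parameters: {zeros / total:.1%}")
```

Magnitude pruning treats each weight independently; CHITA's advantage comes from jointly optimizing which weights to remove while accounting for their interactions.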


Andrew Ng Explores Open-Source Language Models
Andrew Ng, co-founder of DeepLearning.AI, highlights the growing range of open-source large language models for developers. He suggests starting with prompting for rapid application development, then adding more complex techniques as needed. Ng also discusses the rising trend of fine-tuning—improving a pre-trained LLM's performance with a specific, smaller dataset. He underscores the difficulty of choosing a model, stating that smaller models require less processing power, but larger ones provide better reasoning abilities. [The Batch, Issue 210]
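"Start with prompting" can literally be a few lines of code. A minimal sketch against an open-source model with the transformers library; the model name is just an example, and any instruction-tuned open model would do:

```python
from transformers import pipeline

# Example open-source model; substitute any instruction-tuned LLM you have access to.
generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")

prompt = "List three factors to weigh when choosing a model size for an application."
result = generator(prompt, max_new_tokens=100, do_sample=False)
print(result[0]["generated_text"])
```

Only once prompting hits a ceiling does it make sense to reach for the heavier tools Ng describes, such as fine-tuning on a task-specific dataset.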


Sakana AI Labs to Develop Collective Intelligence Inspired by Nature
David Ha, formerly of the Google Brain team, and Llion Jones, a co-author of the original Transformer paper, have co-founded Sakana AI Labs in Tokyo, aiming to develop a new kind of foundation model based on the intelligence found in nature. The company focuses on collective intelligence, evolutionary computation, artificial life, and neuro-inspired methods to advance foundation models beyond current AI paradigms. [Sakana AI]

Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI
A new paper from three AI industry leaders argues that major corporations predominantly control the resources needed to build large-scale AI systems, and that this reality is obscured when 1990s open-source ideologies are misapplied to AI. Some large corporations use 'open' AI to strengthen their dominance, the authors say, setting development standards and benefiting from the work of unpaid open-source contributors. [Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI]

Teach LLMs to Personalize -- An Approach inspired by Writing Education
A new paper proposes a method for personalized text generation with large language models. The authors say their multistage, multitask framework, inspired by how writing is taught, outperforms baseline approaches to personalized generation. The process moves through five stages: retrieval, ranking, summarization, synthesis, and generation (sketched below). The paper highlights LLMs' potential in areas like personal communication, dialogue, marketing, and storytelling, since the framework generates personalized text without relying on predefined user attributes. [Teach LLMs to Personalize -- An Approach inspired by Writing Education]
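Here is a schematic of those five stages as one function, assuming a generic llm callable. Every helper name below is hypothetical, and the toy retrieval and ranking stand in for the learned components the paper actually uses:

```python
from typing import Callable, List

def retrieve(history: List[str], query: str) -> List[str]:
    # Toy retrieval: keep past entries sharing any word with the query (illustrative only).
    terms = set(query.lower().split())
    return [doc for doc in history if terms & set(doc.lower().split())]

def rank(docs: List[str], query: str) -> List[str]:
    # Toy ranking: more shared words first (a real system would use a trained ranker).
    terms = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))

def generate_personalized(request: str, history: List[str],
                          llm: Callable[[str], str]) -> str:
    past = rank(retrieve(history, request), request)               # retrieval + ranking
    top = "\n".join(past[:5])
    summary = llm("Summarize these past writings:\n" + top)        # summarization
    style = llm("Describe this user's style and themes:\n" + top)  # synthesis
    return llm(f"Request: {request}\nSummary: {summary}\n"         # generation
               f"Style notes: {style}\n"
               "Write the document in this user's voice.")

# Usage: generate_personalized("Draft a blog post about sourdough", my_docs, my_llm)
```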

AI Coding Assistants Need Test-Driven Development
AI coding assistants like GitHub Copilot still require Test-Driven Development to get accurate feedback and solve problems reliably. Influential consultancy Thoughtworks has been pairing TDD with Copilot since early this year and shares the effective practices it has discovered in this blog post. [Latest Memo: TDD with GitHub Copilot]
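The workflow is the familiar red-green loop: write the failing test first, then ask the assistant to propose an implementation that makes it pass. A minimal pytest-flavored illustration (the slugify example is hypothetical, not from the Thoughtworks memo):

```python
# test_slugify.py -- written first, before any implementation exists.
from slug import slugify

def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_slugify_strips_punctuation():
    assert slugify("TDD, with Copilot!") == "tdd-with-copilot"
```

```python
# slug.py -- the implementation a coding assistant might be prompted to fill in.
import re

def slugify(text: str) -> str:
    text = text.lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse non-alphanumeric runs to hyphens
    return text.strip("-")
```

The tests give the assistant a precise target and give you an immediate check on whatever it generates.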

That's it! Who do you know who would find these stories interesting? Shoot this email their way!

More stories tomorrow…