MIT researchers “speak objects into existence” using AI and robotics. The speech-to-reality system combines 3D generative AI and robotic assembly to create objects on demand.
When applied to fields like computer vision and robotics, next-token and full-sequence diffusion models have capability trade-offs. Next-token models can produce sequences that vary in length.
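The trade-off comes down to how the two model families sample. A minimal sketch of the two sampling loops may make it concrete; `next_token_model` and `denoiser` here are hypothetical stand-ins, not the models discussed above. Next-token generation stops whenever a stop token appears, so its output length is decided on the fly, while full-sequence diffusion denoises the whole sequence jointly and must commit to a length before sampling begins.

```python
import numpy as np

# Illustrative sketch only; `next_token_model` and `denoiser` are
# hypothetical callables, not the models covered in the article.

def next_token_rollout(next_token_model, prompt, stop_token, max_len=256):
    """Autoregressive generation: the sequence grows one token at a time
    and stops when the model emits a stop token, so length can vary."""
    seq = list(prompt)
    while len(seq) < max_len:
        logits = next_token_model(seq)   # assumed: returns a vocab-sized score array
        token = int(np.argmax(logits))
        seq.append(token)
        if token == stop_token:
            break
    return seq                           # variable length

def full_sequence_diffusion(denoiser, length, steps=50):
    """Full-sequence diffusion: the whole sequence is denoised jointly,
    so its length has to be fixed before sampling starts."""
    x = np.random.randn(length)
    for t in reversed(range(steps)):
        x = denoiser(x, t)               # assumed: one reverse-diffusion step
    return x                             # fixed length
```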
Google DeepMind's new AI models, built on Google's Gemini foundation model, are making robots fold origami and slam dunk tiny basketballs. Gemini Robotics can interpret and act on text, voice, and ...
A vision-based control system called Neural Jacobian Fields enables soft and rigid robots to learn self-supervised motion control using only a monocular camera. The system, developed by MIT CSAIL researchers, combines 3D scene reconstruction with embodied representation and closed-loop control.
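One way to read the closed-loop part of that pipeline: observe the robot with a single camera, query a learned sensitivity map (a Jacobian) relating actuator commands to the motion of reconstructed 3D points, then command the actuators to shrink the visual error. The sketch below follows that reading under stated assumptions; `observe_points` and `jacobian_field` are hypothetical stand-ins for the learned model and camera pipeline, not the CSAIL implementation.

```python
import numpy as np

def control_step(q, target_points, jacobian_field, observe_points, gain=0.5):
    """One hypothetical closed-loop update with a learned Jacobian.

    q             : (num_actuators,) current actuator commands
    target_points : (N, 3) desired 3D positions of tracked points
    observe_points: stand-in for monocular reconstruction of the N points
    jacobian_field: stand-in for the learned (3N, num_actuators) sensitivity map
    """
    points = observe_points(q)                       # 3D points seen by the camera
    error = (target_points - points).reshape(-1)     # stacked visual error, shape (3N,)
    J = jacobian_field(q, points)                    # sensitivity of points to commands
    dq = gain * np.linalg.pinv(J) @ error            # least-squares actuator update
    return q + dq
```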
Capable of lifting and sorting packages of various sizes and shapes, Robin is a robotic system that Amazon designed to tackle some of the most complex logistical challenges in the world.
Isaac Asimov's Three Laws of Robotics have long guided discussions on robot ethics. As AI advances, a proposed Fourth Law aims to prevent AI deception by requiring robots and AI to identify ...
MIT Associate Professor Luca Carlone works to give robots a more human-like perception of their environment, so they can interact with people safely and seamlessly.
The Robotics and AI Institute, founded by Marc Raibert, presents new research that uses reinforcement learning to teach Boston Dynamics' Spot to run three times faster. The same technique is used ...