Google’s DeepMind robotics team has published a new research paper that details how it is changing robot navigation using Gemini 1.5 Pro -- one of the company’s advanced AI models.
According to the research paper, Gemini 1.5 Pro uses an extended context window, which lets the AI model process far more information than earlier versions. This allows the robots to remember and understand their environment, making them more flexible and adaptable.
The process of making the robots smarter starts with filming video tours of offices, homes, and other surroundings. A robot backed by Gemini 1.5 Pro then watches the footage to learn the layout, where things are stored, and generally get a better sense of the area.
DeepMind Already Testing Gemini-Powered Robots
The robots are then given commands and draw on their memory of the videos to navigate. Google said that when it evaluated the method, it achieved 86% and 90% success rates in office and home-like environments, respectively, which is 26% and 60% higher than the baseline method.
The DeepMind team has tested these Gemini-powered robots in a large 9,000-square-foot area, where they followed almost 50 different instructions with 90% accuracy. However, the researchers say there is still plenty of room for improvement. As of now, even with Gemini 1.5 Pro, the robots take 10 to 30 seconds to process each instruction, which is rather slow for real-world use.
Also, the testing so far has been done in controlled environments, so the robots aren’t ready to take over homes or offices just yet. Google, however, is working to make them navigate their surroundings more smartly and efficiently.
Apple Could Bring Google’s Gemini AI To iOS 18
Apple made some big announcements at its latest WWDC event, bringing exciting upgrades to everything from a calculator app on the iPad to iOS 18’s take on AI, Apple Intelligence. Since the event, however, some information has been floating around regarding iOS 18: a new report suggests that Apple could also bring Gemini to the operating system.
Some of the more notable announcements at the event involved the latest version of iOS, which comes with several AI goodies. Artificial intelligence is integrated deeply into the software, going far beyond what users currently get from the likes of Galaxy AI.
A few weeks before Apple’s annual event, there were rumors that the company was looking to partner with either OpenAI or Google to use their models. When Apple announced Apple Intelligence, it also mentioned OpenAI: when the system needs to access larger language models in the cloud, it will delegate the task to GPT-4o.