Google I/O: Hands-on With Project Astra, the AI Assistant of the Future – Gizmodo

As usual, Google I/O 2024 is an absolute whirlwind of news and announcements. Rather than focusing on hardware, Android, or Chrome, Google spent most of this year's developer conference convincing us that its AI features are worth prioritizing. One of those projects is Project Astra, a multimodal AI assistant you can semi-converse with that can simultaneously use the camera to identify objects and people.

I say “semi” because it was evident after the demo that this part of Gemini is in its infancy. I spent a few brief minutes with Project Astra on the Pixel 8 Pro to see how it works in real time. I didn't have enough time to test it to its full extent or try to trick it, but I got a sense of what the future might feel like as an Android user.

The point of Project Astra is to act as an assistant that also guides you through the real world. It can answer questions about your surroundings by identifying objects, faces, moods, and textiles. It can even help you remember where you last placed something.

There were four demonstrations to choose from for Project Astra: Storyteller mode, which asks Gemini to concoct a story based on various inputs; Pictionary, essentially a game of guess-the-doodle with the computer; an alliteration mode, in which the AI showed off its prowess at finding words with the same starting letter; and Free-Form, which let you chat back and forth.

The demo I got was a version of Free-Form on the Pixel 8 Pro. Another journalist in my group had requested it outright, so most of our demonstration focused on using the device and this assistant-like mode together.

With the camera pointed at another journalist, the Pixel 8 Pro and Gemini could identify that the subject was a person; we explicitly told it that the person identified as a man. Then, it correctly identified that he was carrying his phone. In a follow-up question, our group asked about his clothes. It gave a generalized answer that “he seems to be wearing casual clothing.” Then, we asked what he was doing, to which Project Astra answered that he appeared to be putting on a pair of sunglasses (he was) and striking a casual pose.

I took hold of the Pixel 8 Pro for a quick minute. I got Gemini to correctly identify a pot of faux flowers: they were tulips, and Gemini noticed they were colorful, too. From there, I wasn't sure what else to prompt it with, and then my time was up. I left with more questions than I had going in.

Buying into Google's AI still feels like a leap of faith. I can see how identifying a person and their actions could be an accessibility tool to aid someone who is blind or has low vision as they navigate the world around them. But that's not what this demonstration was about. It was meant to showcase the capabilities of Project Astra and how we'll interact with it.

My biggest question is: Will something like Project Astra replace Google Assistant on Android devices? After all, this AI can remember where you put your stuff and pick up on nuance—at least, that’s what the demo conveyed. I couldn’t get an answer from the few Google folks I did ask. But I have a strong inkling that the future of Android will be less about tapping to interact with the phone and more reliant on talking to it.

Original Author: Florence Ion | Source: Gizmodo.com

Published by Akshit Behera
