Originally, my interest in Google came down to how it does cool-to-wild things with technology in an accessible, affordable manner that nobody else matches.
When I bought into the ecosystem in 2012 (with the first-generation Nexus 7), Google had Google Now and was teasing Glass. The former was a centralized feed that showed the weather, calendar events, commute/travel info, and much more. This was mostly derived from data in first-party services, like Gmail, Calendar, and Maps, that Google already had by default.
Google Now was deeply integrated and easy to access on Android before coming to iOS, Chrome, and Chrome OS. Cards surfaced relevant information so users didn’t need to open various first- and third-party apps to see what was fundamentally their own information. As I said in 2021, “Google Now broke apart the siloing of data and put it in a consistent and familiar interface.”
Google Glass launched in 2013 and made these Google Now cards a central part of the UI. It felt wildly futuristic to me, with Google seemingly having a tremendous lead. Around the same time, Google announced Android Wear in a full embrace of next-generation form factors.
As a user of Google products, I found that three-year period exciting for the sheer consistency it brought. The company seemed to have a clear vision of your data being freed from app-based silos and made accessible through Google Now. The parallel development of Glass and Android Wear suggested that something was coming after the app-driven smartphone.
I thought this was Google building out a foundation for the future on both the software and hardware sides.
Then the inevitable happened. Google Now was phased out over the course of 2016 and eventually became Discover and Assistant. Meanwhile, what Google started with Glass is in no way the foundation of any future smart glasses hardware it may or may not be working on today.
It was emblematic of how Google has a habit of winding down projects and replacing them with something entirely different, UI- and UX-wise, rather than building on what end users are already familiar with.
The most recent reset is the ongoing transition from Google Assistant to Gemini. Until the mobile updates announced at I/O actually launch, I don’t think Gemini is a good phone assistant, which is what Android users want. To take just one example, not being able to play music conveys a lack of understanding of the product people need in their day-to-day lives.
Yet, coming out of I/O 2024, I think Google might be outgrowing its reset tendencies.
Fundamentally, Google Now and Assistant were at technological dead ends. Both offered assistive experiences through hard-coded rules, where it was very easy to hit the limits of what was possible.
AI today looks to be a meaningful step forward that can actually deliver on the promise of a virtual assistant. Google went ahead and laid out its vision for AI agents: “I think about them as intelligent systems that show reasoning, planning, and memory. Are able to think multiple steps ahead, work across software and systems, all to get something done on your behalf, and, most importantly, under your supervision.”
Sundar Pichai provided a pair of “agentive” examples, starting with taking a picture of shoes you purchased and now want to return. Gemini will search Gmail for the receipt, fill out a return form, and schedule a pickup. The more complex example was having Gemini and Chrome help you move to a new city, from finding a dry cleaner, dog walkers, and other services to updating your address across the apps/sites you use.
There are more than a few shades of Google Duplex here, the company’s 2018-era effort to train AI to accomplish tasks and save you time by making phone calls and filling out web forms on your behalf. The web aspect has since been canceled.
Meanwhile, Google DeepMind talked about its goal of building a “universal AI agent that can be truly helpful in everyday life.” To offer this, Project Astra has to:
- “…understand and respond to our complex and dynamic world just like we do.”
- “It would need to take in and remember what it sees so it can understand context and take action.”
- “And it would have to be proactive, teachable and personal, so you can talk to it naturally, without lag or delay.”
What Google could have done better with Gemini, as with all its other deprecations, is provide a smoother transition away from Assistant: feature parity first, then generative capabilities, like image creation, that I don’t think most people are asking for.
Meanwhile, I still don’t think Gemini is a better name than Google Assistant. Broadly, the company’s naming strategy is “Google” + purpose. It’s not interesting, but it’s naturally informative.
There are two ways to grow: from your own desire, or because external pressures force you to. In Google’s case, there’s serious competition.
That said, Google has always wanted to create a personal assistant, as seen in its 2016 proclamation that Assistant was a step toward building your own Google.
This time, it doesn’t seem like technology will be what limits Google’s ambition. Rather, the stumbling block will be execution.
Hopefully, Gemini is the last reset.