The big problem with Google trying to get us to talk to our phones

Google's new Assistant could change everything, but just not for this generation

05 October 2016

Hands up, all those who've asked Siri a question out loud today. Or vocalised a question to Cortana? Or uttered "Okay Google..." and rattled off an inane request about what time Sainsbury's closes?

Anyone? Anyone at all? At the back there - is that a hand? Just the four of you then. The point is that while we've been able to talk to our phones, smartwatches and tablets for a good few years now, pestering them with questions, only a minority of us bother using this sci-fi tech.

So when Google kicks off its major 2016 event by telling us that the future of computing is artificial intelligence, the present iteration of which lives in the new Google Assistant, a big, awkward question hangs heavy in the room: do any of us actually want to talk to it? Will we ever want to talk to it when we've spent years interacting with keyboards? And how is Google (et al) going to aid that transition?

Here's how Google introduced its new Assistant system, living at the heart of the new Pixel smartphone and set to roll out to more devices in the near future...

The Google Assistant is framed as the next step in the evolution of search and computers. Google has been at the heart of how humans interact with the internet and their gadgets for close to two decades, noticing how people search the web with human language rather than core search terms ("What was last night's Arsenal score?" rather than "Arsenal result"), and how they flick from a search result to an app to another app to carry out what should be one seamless sequence.

Google Assistant is the pinnacle of this process, and due to its 'personal' learning functions - adjusting to user behaviour - it's a pinnacle that's only going to get sharper. The technology involved is incredible; the artificial intelligence, the understanding of human language, the capacity to translate a search result back into human language to generate a 'conversation' - all of that is staggering. 

It's just that, well... do we want it? Because, in a way, we've already got most of it available to us.

The above video demonstrates Google's voice search functions back in 2012. They were impressive then, and they're impressive now. Yet if you're on the bus, in a bar, sitting around with your flatmates or on the loo, I bet my smartphone that you've not used this voice feature, have you? You've typed the search in with your thumbs.

Google has been tweaking and improving vocal search and its search apps ever since - and you've probably not been using them, have you? 

If you've got an Android phone, you might have played around with the delights on offer by services like Google Now - which pulls in useful information from all your Android apps and search in the form of 'Cards' - and Google Now on Tap - which lets you hold down your home button, allowing Google to scan all the information on your screen and offer you relevant search results to browse through. Watch the video below for an idea of how it works.

A very crude approximation of what Google was emphasising with its new Assistant: you can now do all the stuff you've been able to do with Google's numerous services, but with a vocal assistant. It's easier. It's smarter. It's interactive. It understands you and your preferences. But the challenge for Google - and Siri/Cortana/Amazon Alexa - is this: how the hell do you get us to give up our preference for the privacy that comes with typing out a personal search, message or query, and say it out loud instead?

Because I don't want people to know I'm booking a table for two at a Spanish tapas bar in Brixton. I don't want the blissful silence of the bus commute home to be ruptured by me asking Google to remind me to put the washing on when I get home. I don't need everyone else around me to know I'm interested in buying the new Dan Brown novel. 

New gadgets are going to help this process become less weird than it currently is: smart speakers like Google Home and the Amazon Echo don't have a screen, they don't have a keyboard, they only do vocal interaction. Amazon has sold some three million Echoes in the US since the device's 2014 launch, and it finally touched down in the UK this month. Google Home will turn up "sometime next year". 

Those of us who have a habit of typing out our searches will find them useful; we'll ask them to read us an audio book, or a recipe, or ask what's on our Gmail calendar. But the transition to vocal interaction is going to take more than a few new gadgets and apps. For this generation, only a gradual drip of new interactions will chip away at our rock-solid typing habits.

For the kids growing up with these devices - the digital natives - vocal search won't be weird. They will form habits that rub older users up the wrong way. They will embrace this change, while we spend an extra four seconds typing out the same requests and booking tables at tapas bars with our thumbs. 

The Google Assistant will change everything, just not immediately. And not for us, but for our kids - or for those who don't blush when a lift full of strangers hears them order grubby Chinese takeaway and an Uber home.