Google is right. Artificial Intelligence can help us, dammit.
The notion of a coming AI apocalypse has grown tiresome, especially because it invariably makes the leap from the nascent forms of AI we experience now to a terrifying future where every robot can out-think and, eventually, annihilate us.
I’m not saying it’s not an eventuality, but it’s also decades away, if not more.
It’s time to focus on the now, which is why I was so pleased with Google’s I/O 2017 developer keynote on Wednesday.
In it, Google CEO Sundar Pichai described the fundamental shift from a mobile-first landscape to an AI-first one.
Putting AI first doesn’t cut out mobile. In fact, mobile hardware and software remain a crucial part of Google’s strategy, but now all of it is infused, at some level, with artificial intelligence and, especially, machine learning.
Google’s approach to neural nets and deep learning — core components of machine learning and AI — stands in contrast to Facebook’s, which, during its own developer conference, took us right to the edge of computer-human interface insanity.
Google, on the other hand, appeared less interested in wowing us than in showing us, repeatedly, the practical applications of AI and machine learning in our everyday lives.
The company also smartly extended last year’s open-source TensorFlow machine learning platform to the Google Cloud. This puts the awesome power of a machine learning brain in the hands of the broadest array of people — who, perhaps unlike Google, might see applications beyond the knowledge graph, Google Platforms and making sure apps don’t suck the life out of Android phones.
That said, I’m a big fan of virtually every machine-learning integration Google showed off on Wednesday.
To understand what Google is doing with AI and machine learning, you need to look at the speech and vision systems. The two are set to transform how we engage with images, search and the unknown.
Broadly, Google AI wants to answer two of our biggest and most basic questions: “What is that?” and “Now what?”
Google’s object identification (the “What is that?” part) in Google Lens is impressive, but not necessarily groundbreaking. Everyone from Samsung to Pinterest uses image recognition tools to identify objects.
The second part is, obviously, where the machine-learning magic comes in.
Google’s ability to help you act (that’s the “Now what?” part) on what it identifies is another level of AI utility. My favorite example, and the one that generated the most applause during the keynote, was the ability to point Google Lens at a router’s wireless settings label and have it automatically pick up the SSID and password, enter them into your system settings, and connect you.
Google Photos has so many machine-learning capabilities, I could scarcely mention them all here. But the ability to identify faces in photos and auto-share new images with those people, along with the automated creation of physical photo books, makes the cloud-based backup tool worth a second look.
Google Photos’ prowess is also a reminder of how much artificial intelligence can do for you when you’re not paying attention. Yes, everyone likes to make fun of AI assistants that can’t answer questions or carry on a decent conversation (though Google Assistant handles conversation better than Alexa and Siri), but, as with robotics, AI’s best work is done in the background.
Google is more than willing to tout the work it’s doing with AI, but the results can be subtle. Google for Jobs, for example, is a powerful machine learning enhancement for Google Search designed solely to help people find jobs. That’s the least showy kind of AI, yet it could have the most meaningful and direct impact on individuals. That, I believe, is the true promise of AI.
My point is that while Elon Musk is busy working on ways to connect human brains to computers so we can get ahead of the AI apocalypse, Google is figuring out ways to make AI work for everyone.
Google’s approach to AI reminds me of Microsoft’s: less flash, more function. Microsoft’s intelligence lives mostly inside its productivity tools, but it is quickly bleeding out across the rest of its strategy. There is also a crucial difference between the two companies. Microsoft’s business model is not built on advertising, so it’s unlikely to also use the data it collects for profit.
“For consumers, Google’s AI strategy seems a lot more compelling than Facebook’s, but no less scary,” said Patrick Moorhead, president and principal analyst for Moor Insights.
“For Google’s AI to work well, it needs loads and loads of personal information. That personal information will improve functionality for Home and Photos, and it will be used commercially to create denser user profiles.”
I know there are many who, like Moorhead, believe that Google is using the world’s search data to drive advertising and fill Google executives’ and shareholders’ pockets with cash – and maybe it is.
Yet, the Google I/O keynote reminds us that Google also has the potential to do, and help others do, enormous good with artificial intelligence.
As Sundar Pichai said in his blog post on AI:
We believe huge breakthroughs in complex social problems will be possible if scientists and engineers can have better, more powerful computing tools and research at their fingertips.
Google even launched an AI website to connect everyone with its best work and tools in the space.
You may not love AI, and perhaps you still fear the rise of our robot overlords, but I encourage you to pause for just a moment and appreciate the beauty of a selection tool that doesn’t assume you’ve highlighted six unrelated words on your phone when you’re really selecting an address you want to find on a map right now.
Google’s AI knows the difference and you should say, “Thank you.”