Mountain View, California: As it continues to further its core mission of “organizing the world’s information”, Google (Alphabet Inc.) is moving from a mobile-first to an artificial intelligence (AI)-first world, said CEO Sundar Pichai at the Google I/O 2017 developers conference at the Shoreline Amphitheatre in Mountain View, California.
“In an AI-first world, we are rethinking all our products,” Pichai said, adding that the company is using machine learning (ML), deep learning (DL) and computer vision in all its products—be it search, data centres, medical imaging, cloud, Google Assistant, the newly-launched Google Lens for Google Assistant, Google Home, or hands-free calling on Google Home.
All these innovations are now being clubbed under an umbrella unit called Google.ai, which comprises Research, Tools and Applied AI.
“Mobile brought multi-touch. Now we have voice and vision,” explained Pichai. He pointed out that computers are getting much better at understanding speech. “Similar is the case for Vision (with) great improvements in computer vision. Clearly at an inflection point with vision. So today we are announcing Google Lens, which will be first included in Google Assistant,” Pichai said.
As part of Google’s AI-first strategy, Pichai also unveiled its second-generation Tensor Processing Unit (TPU)—a cloud-computing hardware and software system that is part of Google’s AI-first data centre strategy. TPUs, first revealed last year, are chips designed specifically for machine learning. Pichai pointed out that TPUs were used by AlphaGo, the DeepMind AI system that created a stir when it beat Go champion Lee Sedol.
Google uses TPUs to train and run the ML models behind products such as Google Translate and Google Photos. Google said its Cloud TPUs are now being deployed across Google Compute Engine—a platform that companies and researchers can tap for computing resources, similar to Amazon Web Services Inc.’s offerings and Microsoft Corp.’s Azure.
Google also announced that the Assistant is coming to iOS devices. Users will be able to open up the Google App, press the voice button, and speak to the Assistant.
Google wants to improve intelligence in cars too. Even as cars rapidly transform into connected, intelligent machines that open up the opportunity for a rich app ecosystem, they still present a challenging environment: driver distraction, varying screen sizes and shapes, different input mechanisms and local regulations, to name a few.
Google on Wednesday said that two billion devices are now actively running Android. Google is using Android Auto to let developers deliver “seamless experiences” to drivers through the growing number of Android Auto-compatible cars and the new standalone phone app. The company is now beginning to integrate Android, its ecosystem, and the Google Assistant more deeply into cars.
Further, it was only on 12 May that Google announced Project Treble, insisting that it was “re-architecting Android to make it easier, faster and less costly for manufacturers to update devices to a new version of Android”. Project Treble restructures the Android ecosystem so that the latest updates can be rolled out to end users faster, regardless of the device’s make.
Android was unveiled in 2007 as a free, open-source mobile operating system. Project Treble will come to all new devices launched with Android O and beyond, according to Google’s blog post.
In 2007, Google held its first annual developer conference, which it called Google Developer Day. In 2008, this evolved into a two-day developer gathering at the Moscone Centre in San Francisco and became the Google I/O conference we know today. The “I” and the “O” stand for “input/output” and for Google’s commitment to “Innovation in the Open”. The goal of the event is to empower developers with the resources they need to create experiences on Google’s platforms, including Android, Chrome, and Cloud.
Technology companies such as Google, Facebook Inc., Amazon.com Inc. and Nvidia Corp. want to claim the AI mindspace. For instance, just as Google has its TensorFlow framework for Deep Learning, Facebook has Caffe2. Amazon’s Alexa, Microsoft’s Cortana and Apple’s Siri compete with Google Assistant.
Moreover, other than newer entrants like the Bridge Explorer Edition from Occipital, there are existing products like Amazon Echo and Microsoft’s Project Evo that compete with Google Home. Echo connects to the Alexa Voice Service to play music, provide information, news, sports scores, weather and more. Google Home, on its part, is a voice-activated speaker powered by the Google Assistant.
The question, though, is whether Google has what it takes to deliver the goods, especially in the enterprise market where it does not have a significant presence.
“I think Google has everything it takes to deliver consumer services that are enhanced and improved by AI. Google derives 95% of their profit from consumers, so I’m a bit sceptical if they can convert that to enterprises. Their business model of mining personal information could also clash with what enterprises really want, which are ways to make more and save more money. There’s no doubt Google has the experience. I question whether they have the enterprise mindset,” said Patrick Moorhead, president and principal analyst at Moor Insights & Strategy.