
Google I/O 2021: AI is the Answer...Now What Was the Question?

For anyone doubtful about the trajectory of AI, or its propensity to be offered as the solution to all the world’s problems, Sundar Pichai’s keynote at Google I/O showcased several persuasive examples.

AI now underpins the majority of Google’s search algorithms, allowing the company to present even more contextually relevant results and to surface insights that would have been impossible with conventional search engine programming. Google has already gone beyond textual search: it uses AI to generate metadata that accurately describes images and to identify content in videos – Google Lens, for example, is used more than three billion times per month for image recognition and search – and, more importantly, it can now deliver far more intelligent search results in near-real time.
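Google does not disclose the internals of these pipelines, but the underlying idea – generating textual metadata from an image so it can be indexed and searched – can be sketched with openly available tools. The Python snippet below is a minimal illustration using the open-source Hugging Face transformers library; the captioning model and image filename are illustrative stand-ins, not Google’s production system:

```python
# Minimal sketch: generating searchable text metadata from an image.
# The public captioning model below is an illustrative stand-in for
# Google's (unpublished) image-understanding pipeline.
from transformers import pipeline

# Vision-to-text pipeline; the model choice here is illustrative.
captioner = pipeline("image-to-text",
                     model="nlpconnect/vit-gpt2-image-captioning")

def describe(image_path: str) -> str:
    """Return a short natural-language description of an image,
    which a search index could store as textual metadata."""
    result = captioner(image_path)
    return result[0]["generated_text"]

if __name__ == "__main__":
    # "holiday_photo.jpg" is a hypothetical input file.
    print(describe("holiday_photo.jpg"))
```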

The company described a new AI for understanding the context of information, called the “Multitask Unified Model”, or MUM for short. The model is built upon Google’s Transformer architecture but is 1,000 times more powerful than the BERT model that preceded it. Trained across more than 75 languages and many different contexts, using both text and images, MUM not only understands natural language but can also generate it, producing textual and spoken results. This training gives the AI a more comprehensive understanding of information and world knowledge than previous models, yielding search responses similar to those a friend or subject-matter expert might provide.
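MUM itself is not publicly available, but the text-to-text, multitask style of Transformer it builds upon can be illustrated with Google’s openly released T5 model. In this minimal Python sketch (a small stand-in, not MUM), a single model handles different tasks selected purely by a text prefix:

```python
# Minimal sketch of a multitask, text-to-text Transformer in the style
# MUM builds on. The small open-source T5 checkpoint below is purely
# an illustrative stand-in; MUM itself is not publicly available.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# The same model handles different tasks, selected by a text prefix --
# the "multitask" idea behind a unified text-to-text Transformer.
for prompt in [
    "translate English to German: Where can I hike next autumn?",
    "summarize: Mount Fuji and Mount Adams are both popular hiking "
    "destinations, but they differ in altitude, terrain and season.",
]:
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```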

Futuresource’s Virtual Assistant Tracking service has found that consumer usage patterns with voice-based platforms, including smart speakers and smart displays, are largely invariable year on year. Consumers play music, control smart home appliances and ask random questions of their virtual assistant, yet they rarely venture beyond these competencies to explore the true capabilities of platforms such as Google Assistant. Futuresource recommend that virtual assistants move swiftly towards conversational AI to improve engagement and expand their applications. Google recognise that voice must provide sensible responses in order to keep a dialogue flowing; they presented ongoing research into a new natural language processing (NLP) model called LaMDA – Language Model for Dialogue Applications.

LaMDA builds on earlier Google research which showed that Transformer-based language models trained on dialogue could learn to talk about virtually anything. Google have been fine-tuning the model to significantly improve the sensibleness and specificity of its responses. During training it forms “learned concepts”, allowing LaMDA to guide a conversation through multiple stages, keeping the dialogue open and never repeating the same pathway. Google are now working on “multi-modal” training for LaMDA, extending the machine learning phase beyond textual inputs: in future you could ask Google Maps to “choose a route with beautiful mountain views”, or play a segment of a movie by asking “show me the part where the car chase happens”. LaMDA isn’t live in any products today; however, the expectation is that LaMDA-based AI will make computing even more accessible, and it will certainly form the basis of future versions of Google Assistant.
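While LaMDA cannot be tried directly, the multi-turn dialogue pattern it embodies – carrying the full conversation history into each new response – can be sketched with an open-source dialogue model. The snippet below uses Microsoft’s DialoGPT purely as an illustrative stand-in:

```python
# Minimal sketch of multi-turn, open-domain dialogue with a Transformer
# language model. DialoGPT stands in for LaMDA here; it is far smaller
# and is used purely for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

history = None  # token ids of the whole conversation so far
for turn in ["Tell me about Pluto.", "Why is it not a planet any more?"]:
    # Append the user's utterance (plus end-of-sequence marker).
    user_ids = tokenizer.encode(turn + tokenizer.eos_token,
                                return_tensors="pt")
    history = user_ids if history is None else torch.cat(
        [history, user_ids], dim=-1)
    prompt_len = history.shape[-1]

    # Generate a reply conditioned on the full dialogue history.
    history = model.generate(history, max_length=prompt_len + 50,
                             pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(history[0, prompt_len:],
                             skip_special_tokens=True)
    print("Bot:", reply)
```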

Whilst LaMDA looks destined to solve the conversational aspects of virtual assistants, Futuresource identify deeper, and potentially ethical, questions over how far the personalities and behaviours of virtual assistant AI might be allowed to develop. Much of what is achieved with virtual assistants today surrounds factual questions with definitive answers, or instructions to control products and services in the user’s immediate environment. In effect, the virtual assistant works within a tightly defined frame of reference, aiming to be realistic, honest and truthful in handling various intents. By contrast, today’s virtual assistants are not suited to debate and opinion; they hold no judgement or particular beliefs, and cannot derive their own interpretation from a set of facts or assumptions. Furthermore, virtual assistants have no true personality and exhibit no genuine interest; nor is the AI particularly friendly or tactful in building relationships. Many of these aspects are now holding back the next stage of virtual assistant development. Whether machines could convincingly mimic emotional intelligence and empathy, producing behaviour that humans would genuinely respond to, is still an open question. Arguably, because empathy can be learned, AI could be equipped with artificial empathy in the years to come.

To power all of this AI, and make it accessible to third parties, Google announced a new Tensor Processing Unit – the TPU v4 – which is more than twice as fast as the previous generation. These are assembled into pods to deliver highly available, cloud-based AI compute: a single TPU v4 Pod contains 4,096 chips and delivers over one exaFLOP of computing performance (more than 10¹⁸ floating-point operations per second).
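As a back-of-envelope check on those figures (illustrative arithmetic only), spreading one exaFLOP across 4,096 chips implies roughly 2.4 × 10¹⁴ operations per second, around 244 teraFLOPS, per TPU v4 chip:

```python
# Back-of-envelope check on the quoted TPU v4 Pod figures.
pod_flops = 1e18       # ~1 exaFLOP per pod, as quoted in the keynote
chips_per_pod = 4096   # TPU v4 chips per pod

per_chip = pod_flops / chips_per_pod
print(f"{per_chip:.3e} FLOP/s per chip")  # ~2.441e+14, i.e. ~244 teraFLOPS
```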

By the end of the decade, Futuresource predict that the trajectory for AI leads to an intersection between quantum computing and neural computation; indeed, the combined proficiencies are destined to deliver astonishing levels of AI performance. Google showcased their latest research into quantum machines, which they believe provide the best chance of understanding the natural world. Achieving quantum advantage – calculating results beyond the reasonable means of classical computation – was a massive milestone; however, the challenges of scaling quantum computing to new levels remain: stable qubits must operate at temperatures near absolute zero (−273.15°C) and be shielded from physical and electrical noise. Google are now working to create an error-corrected logical qubit to deliver more stable computation; they then aim to assemble thousands of these into an error-corrected quantum computer of around 10⁶ (one million) physical qubits. It’s incredibly challenging and, unsurprisingly, no specific timeframe was placed upon this research.
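The principle behind an error-corrected logical qubit can be shown at toy scale. The sketch below uses Google’s open-source Cirq library to build a three-qubit bit-flip repetition code: one logical bit is spread across three physical qubits, a deliberate error is injected, and a majority vote over the measurements still recovers the logical value. Real logical qubits use far larger codes, such as the surface code; this is conceptual only:

```python
# Toy illustration of error-corrected encoding with Google's Cirq
# library: a 3-qubit bit-flip repetition code. Real logical qubits use
# far larger codes (e.g. the surface code); this is conceptual only.
import cirq

q = cirq.LineQubit.range(3)
circuit = cirq.Circuit(
    cirq.X(q[0]),            # prepare logical |1> on the first qubit
    cirq.CNOT(q[0], q[1]),   # spread the basis state across...
    cirq.CNOT(q[0], q[2]),   # ...three physical qubits: |111>
    cirq.X(q[1]),            # inject a deliberate bit-flip error
    cirq.measure(*q, key="m"),
)

result = cirq.Simulator().run(circuit, repetitions=10)
for bits in result.measurements["m"]:
    # Majority vote over the three physical qubits recovers logical 1
    # despite the single bit-flip error.
    print(bits, "-> logical", int(sum(bits) >= 2))
```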

AI is clearly a rapidly developing area amongst companies with the necessary knowledge in deep learning and the required heft in compute resources. Many appear to use AI as a buzzword or marketing tool, having found no particular advantage in replacing traditional algorithms already capable of the task at hand. Arguably, Google I/O demonstrated that there are indeed several genuinely useful applications of AI technology. The outcomes will be subtle, but also profound.

For more information about the Futuresource Virtual Assistants Tracker Report, please visit here.


About the author

Simon Forrest

As Principal Technology Analyst for Futuresource Consulting, Simon is responsible for identifying and reporting on transformational technologies that have the propensity to influence and disrupt market dynamics. A graduate in Computer Science from the University of York, his expertise extends across broadcast television and audio, digital radio, smart home, broadband, Wi-Fi and cellular communication technologies.

He has represented companies across standards groups, including the Audio Engineering Society, DLNA, WorldDAB digital radio, the Digital TV Group (DTG) and Home Gateway Initiative.

Prior to joining Futuresource, Simon held the position of Director of Segment Marketing at Imagination Technologies, promoting development in wireless home audio semiconductors, and Chief Technologist within Pace plc (now Commscope) responsible for technological advancement within the Pay TV industry.
