I'm so excited to share this industry update from an AI-packed week of the Microsoft Build and Google I/O annual tech conferences. This article is about Google's AI digital assistant - Duplex.
During my graduation in the early 90's, I had a paper on Artificial Intelligence. To be honest, I never dreamed that in my lifetime I would see and experience the theoretical concepts I studied in college. You know what? It's real now, and this disruptive technology has reached new heights.
During this week's Google I/O summit, Sundar Pichai demonstrated Google Duplex, which is designed to sound convincingly human, with enough human-like conversational ability to make mundane phone calls in the real world.
As Sundar said, Google's AI technology has come a very long way. This demo was pretty incredible; do watch it if you haven't seen it yet.
This new AI-based digital assistant helps improve our daily lives by intelligently making simple but tedious phone calls on our behalf. In this demo, we witnessed a real leap in the ability of computers to understand and to generate natural speech, especially with the application of deep neural networks.
The Google Duplex technology is built to sound natural and to make the conversation experience comfortable. How is this technically possible?
Duplex is a recurrent neural network (RNN), built using TensorFlow Extended (TFX). The network uses the output of Google’s automatic speech recognition (ASR) technology, as well as features from the audio, the history of the conversation, the parameters of the conversation and more.
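To make that architecture concrete, here is a minimal, purely illustrative sketch (not Google's actual Duplex code) of how a recurrent network could combine the ASR transcript, audio features, conversation history, and task parameters to predict the next response. Every input name, dimension, and layer size below is my own assumption for the example.

# Illustrative sketch only: a toy RNN that merges the signal sources the
# Duplex description mentions. Shapes and sizes are assumptions, not Google's.
import tensorflow as tf

VOCAB_SIZE = 8000        # assumed text vocabulary size
AUDIO_FEATURE_DIM = 40   # assumed per-frame audio feature size (e.g. filterbanks)

# ASR transcript of the other speaker's last utterance, as token ids
asr_tokens = tf.keras.Input(shape=(None,), dtype="int32", name="asr_tokens")
# Frame-level audio features (prosody and intonation cues)
audio_frames = tf.keras.Input(shape=(None, AUDIO_FEATURE_DIM), name="audio_frames")
# Token ids encoding the conversation history so far
history_tokens = tf.keras.Input(shape=(None,), dtype="int32", name="history_tokens")
# Task parameters, e.g. desired time and party size (assumed fixed-size vector)
task_params = tf.keras.Input(shape=(8,), name="task_params")

embed = tf.keras.layers.Embedding(VOCAB_SIZE, 128)   # shared text embedding
asr_vec = tf.keras.layers.LSTM(128)(embed(asr_tokens))
audio_vec = tf.keras.layers.LSTM(64)(audio_frames)
history_vec = tf.keras.layers.LSTM(128)(embed(history_tokens))

# Merge all signals and predict the next response token (greatly simplified)
merged = tf.keras.layers.Concatenate()([asr_vec, audio_vec, history_vec, task_params])
hidden = tf.keras.layers.Dense(256, activation="relu")(merged)
next_token = tf.keras.layers.Dense(VOCAB_SIZE, activation="softmax")(hidden)

model = tf.keras.Model(
    inputs=[asr_tokens, audio_frames, history_tokens, task_params],
    outputs=next_token,
)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()

The point of the sketch is simply to show several signal sources feeding a single recurrent model, as the description above outlines.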
Getting the training data ready is quite tricky: the understanding model is trained separately for each task (for example, restaurant reservations versus salon appointments), while a shared corpus is leveraged across tasks. In the final stage, hyper-parameter optimization from TFX is leveraged to improve the model further.
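As a toy illustration of the hyper-parameter optimization idea (this is plain Keras rather than the TFX Tuner, and the data and search space are invented for the example), one could sweep a few candidate settings and keep the configuration with the best validation loss:

# Toy hyper-parameter sweep: train a tiny model for each candidate setting
# and keep the best one by validation loss. Data and search space are made up.
import itertools
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
x_train, y_train = rng.normal(size=(512, 20)), rng.integers(0, 4, size=512)
x_val, y_val = rng.normal(size=(128, 20)), rng.integers(0, 4, size=128)

search_space = {
    "units": [32, 64],              # hidden-layer width candidates (assumption)
    "learning_rate": [1e-3, 1e-2],  # optimizer step-size candidates (assumption)
}

best_config, best_val_loss = None, float("inf")
for units, lr in itertools.product(search_space["units"], search_space["learning_rate"]):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(units, activation="relu"),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy")
    model.fit(x_train, y_train, epochs=3, verbose=0)
    val_loss = model.evaluate(x_val, y_val, verbose=0)
    if val_loss < best_val_loss:
        best_config, best_val_loss = (units, lr), val_loss

print("Best (units, learning_rate):", best_config, "val loss:", best_val_loss)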
As artificial intelligence continues to improve, voice quality will improve and the AI will become better and faster at answering more and more types of questions. We’re obviously still a long way from creating a conscious AI.