Ash Munshi, CEO of Pepperdata, considers AI's recent advances, its worth, and its staying power.
AI will affect everything. It will fundamentally alter the way we interact with our environment, solve problems, conduct research, and live our everyday lives. Below I describe just some of the changes we are likely to see in the near term. These are only the tip of the iceberg; there are sure to be many other advances in AI that will affect the world in ways we can't yet imagine.
Images and Video
Traditionally, a specialized image processing chip handles digital signal processing in digital cameras, mobile phones, and other devices. Image processors perform a range of tasks to clean up images with different filters, such as noise reduction or image sharpening. It has now been shown that deep neural networks produce significantly better results than these dedicated image processors, and consequently deep learning is dramatically reshaping embedded processing. The applications extend far beyond traditional low-level image processing: phone cameras now use embedded facial recognition to unlock your phone, security cameras can follow individuals, and car cameras can “read” signs or “see” pedestrians.
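To make the low-level filtering concrete: a classic sharpening filter of the kind these image processors apply is just a small convolution over pixel neighborhoods. The sketch below is purely illustrative (a naive NumPy implementation, not any camera's actual pipeline); it applies cross-correlation, which equals convolution for a symmetric kernel like this one.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive sliding-window filter with edge padding (illustrative, not fast).

    This computes cross-correlation, which is identical to convolution
    for the symmetric kernels used here.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Classic 3x3 sharpening kernel: boosts the center pixel relative to
# its neighbors. The kernel weights sum to 1, so flat regions pass
# through unchanged while edges are amplified.
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)

flat = np.full((5, 5), 10.0)
print(convolve2d(flat, sharpen))  # a flat region is preserved: all 10.0
```

A deep network replaces this single hand-designed kernel with many layers of learned kernels, which is why it can outperform a fixed filter pipeline.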
Similar advances are happening in video. It is now possible to track objects in video streams in real time. A very interesting example is dubbing, most familiar to audiences as a means of translating foreign-language films into the audience’s language: the translated dialogue is carefully matched to the lip movements of the actors in the film. Perhaps you’ve experienced the frustration of watching a film in which the lip movements are completely out of sync with the spoken language. With AI, we can make the lips move in exactly the right way, so characters appear to be completely in sync with the audio, as if the actors were speaking the dubbed language themselves.
We’re already seeing this tremendous progress in processing, recognition, and manipulation of images and video. One of the consequences of this progress is that it is becoming increasingly difficult to tell the real image or video from one that has been altered or even completely computer generated.
Driverless cars fundamentally depend on processing video streams along with signals from other sensors mounted on the car. Video processing and understanding have come such a long way that it may in fact be possible to build these cars using video alone. Think of the amount of data and the speed of processing that requires!
Natural Language Understanding
Another hugely impactful application of AI is in natural language understanding. Just a few short years ago, language translation and speech understanding were lofty goals. Today, they are becoming commonplace.
A good example is Google Translate, an incredibly useful tool that quickly and accurately translates text between many different languages. Its quality continues to improve as more data are captured and the models get better and better, thanks to so-called sequence-to-sequence translation using deep learning.
Language understanding is also improving dramatically. This means that products like Siri, Alexa, and Google Home will get much better at understanding intent from what’s being said. Think about actually conversing with devices instead of just issuing commands. Fluid conversations are on the horizon that will allow us to dramatically change the way we use and interact with technology and our environment.
Another area where AI will make a massive difference is robotics. Robots combine vision, language, and the ability to perform actions in the environment. These actions have consequences that must be processed and used to guide further actions. Progress in this field, aided significantly by advances in deep reinforcement learning, has grown by leaps and bounds.
Using deep learning, researchers have been able to make robots that can learn how to perform tasks after being guided by a human. This is especially true for “pick and place” robots in the factory but is also true for medical robots.
Biped robots can walk, run, and even do backflips. More importantly, they can navigate in and understand unfamiliar surroundings, as they are able to adapt and learn from their own mistakes.
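The trial-and-error loop behind "learning from mistakes" can be sketched in miniature. The example below uses tabular Q-learning on a toy one-dimensional corridor; real robots use far richer deep reinforcement learning, but the core cycle of act, observe reward, and update values is the same idea. The corridor, rewards, and hyperparameters are all hypothetical choices for illustration.

```python
import random

# Tabular Q-learning on a toy 1-D "corridor": the agent starts at cell 0
# and is rewarded only for reaching cell 4. A stand-in for the deep
# reinforcement learning used in robotics, shrunk to a lookup table.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                 # step left, step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit current knowledge, occasionally explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)   # environment transition
        r = 1.0 if s2 == GOAL else 0.0          # reward only at the goal
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        # Q-learning update: move the estimate toward reward + discounted future.
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy walks straight toward the goal.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1]: step right from every cell
```

Deep reinforcement learning replaces the Q table with a neural network so the same update rule can cope with camera images and continuous joint angles instead of five discrete cells.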
An interesting but perhaps mundane example is a dishwashing robot. Human dishwashers are hard to hire and retain at restaurants. Dishwashing is important but menial work that a lot of people don’t want to do. It turns out that it is possible to build a dishwashing robot that really works, thanks to advances in sensors, control systems, and AI.
AI is Going Vertical
AI will be integrated into many vertical markets. We will see AI being applied to everything from agriculture to manufacturing to medicine. In agriculture, for example, it will help answer questions about plant health, optimal feeding and watering, and the perfect time to pick fruits or vegetables. In medicine, it will “read” X-rays and MRIs and will be able to detect genes that might cause cancer. In many cases it will supplement a physician by doing tasks that a physician’s assistant performs today. In manufacturing, it will detect errors, know when a part needs to be replaced before it fails, and improve yield. The sheer number of startups attempting to address problems in various vertical markets guarantees that progress will be rapid and far-reaching.
One killer combination is deep learning paired with DNA sequencing. Sequencing costs are declining even faster than semiconductor costs did under Moore’s Law, so the ability to sequence is getting extremely cheap extremely fast. Combine that with the data that sequencing spins off and you have a perfect set of applications for deep learning. This will be massively transformative for personalized healthcare.
An interesting question to ponder is whether progress in AI will be “owned” by companies that collect massive amounts of data like Facebook, Google, and Baidu.
Fortunately, this is an active area of research. Hinton and others are experimenting with learning methods different from deep learning that have shown promising early results. So there is hope that AI will be applicable more broadly, even with smaller datasets.
Privacy is becoming a bigger issue daily as more and more data is gathered, harvested, and used to target individuals. As a result, there is considerable interest in doing AI where no single entity can see all the data and personal data are well secured. Leveraging powerful end-user devices such as phones, and computing over encrypted data, is an active area of research that appears quite promising. In the near future, we may in fact be able to decide with whom we share what data, and what we allow on our device versus in the cloud.
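One concrete pattern for learning without centralizing data is federated averaging: each device improves a shared model on its own private data and sends back only updated weights, which a server averages. The sketch below is a toy version under assumed conditions (a linear model, three hypothetical devices with synthetic data), not any production privacy system; real deployments add encryption and aggregation protections on top.

```python
import numpy as np

np.random.seed(0)

def local_step(w, X, y, lr=0.1, epochs=20):
    """Gradient descent on one device's private (X, y); only w leaves the device."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three hypothetical devices, each holding private samples of the same
# underlying relation y = 3x. The raw (X, y) pairs never leave a device.
devices = []
for _ in range(3):
    X = np.random.rand(20, 1)   # private features
    y = 3.0 * X[:, 0]           # private labels
    devices.append((X, y))

w = np.zeros(1)                 # shared global model
for round_num in range(10):
    updates = [local_step(w, X, y) for X, y in devices]  # runs on-device
    w = np.mean(updates, axis=0)                         # server sees only weights

print(w)  # converges toward [3.], recovering y = 3x without pooling the data
```

The server learns the relationship in the combined data while observing nothing but weight vectors, which is the core privacy trade the paragraph above describes.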
AI is already changing our daily lives, and we can only expect it to accelerate. Strap in, because this is going to be a ride that is the equivalent of the first industrial revolution. It will be equally exciting and frightening. It will cause some industries to fail and others to rise.
Since lives and livelihoods are likely to be drastically affected, it is incumbent upon those of us shepherding this revolution to be always mindful of the human cost associated with this massive transformation. We can and must do better than what we did in the past. I am confident that we are now all better informed and more capable than ever before.