Artificial Intelligence has been pushing the boundaries of human imagination. Machines today are capable of things we could not have imagined 20 years ago. Artificial Intelligence has changed the way we look at learning and inventing. From drug discovery to sports analytics to protecting the oceans, AI has marked its presence everywhere. But is artificial intelligence outperforming humans? The answer is an emphatic yes. In this article, we will tell you where and how. Here we go:
AI Impersonating Celebrities
LyreBird, a Canadian artificial intelligence company, can produce a realistic-sounding copy of a voice after listening to just a minute of audio. By analyzing the voices of Donald Trump, Hillary Clinton, Barack Obama and others, the system was able to reproduce them with remarkable accuracy. Its Trump impression beat that of Alec Baldwin!
LyreBird takes a sample of any voice and analyzes its waveforms and cues. It then picks out the deviations from a platonic ideal of an English voice and instructs its voice-synthesis component to make the same adjustments to its own audio waveforms as those sound curves are generated. This process delivers not only the right accent and general sound but also the speaker's minor quirks and jerks.
At the 2014 ImageNet Large Scale Visual Recognition Challenge (ILSVRC), Google came in first with a convolutional neural network approach that achieved just a 6.6 percent error rate, almost half the previous year's rate of 11.7 percent. In one study on sketch recognition, the program correctly identified 74.9 percent of the sketches it analyzed, while the human participants correctly identified objects in the sketches only 73.1 percent of the time.
This started an arms race among research groups around the world, with ever-deeper neural network architectures changing the state of the art. Residual networks (ResNets), dense networks (DenseNets) and, very recently, Dirac networks (DiracNets) have kept pushing architectures deeper to increase machines' accuracy at visual recognition.
These convolutional neural networks have even enabled machines to write suitable captions for images. There are still situations where machines stumble, but the continuous advancement looks promising.
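The building block behind all of these architectures is the convolution: sliding a small learned kernel over an image and summing element-wise products, so that each kernel becomes a detector for one visual pattern. A toy sketch in plain Python (illustrative only, and nothing like Google's actual model; deep-learning "convolution" layers actually compute this unflipped cross-correlation):

```python
# A toy 2D convolution: slide a small kernel over an image and sum
# element-wise products. CNNs stack many such learned kernels.

def conv2d(image, kernel):
    """Valid (no-padding) 2D convolution of a 2D list by a 2D kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# A vertical-edge detector: responds strongly where brightness
# increases from left to right.
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]

# 3x7 image: dark left half (0), bright right half (9).
img = [[0, 0, 0, 0, 9, 9, 9],
       [0, 0, 0, 0, 9, 9, 9],
       [0, 0, 0, 0, 9, 9, 9]]

print(conv2d(img, edge_kernel))  # → [[0, 0, 27, 27, 0]]
```

The output is nonzero only where the dark-to-bright edge falls inside the kernel's window; a trained network learns thousands of such kernels instead of hand-coding them.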
Google’s AI for Detecting Cancer
Alphabet, Google’s parent company, has been working to diversify its research so as to have a broader impact on human life. In a white paper, Detecting Cancer Metastases on Gigapixel Pathology Images, Google disclosed its research on the diagnosis of breast cancer using deep learning.
To test the system, Google’s experts used a data set of images courtesy of the Radboud University Medical Center. After customizing and training the model to examine images at different magnifications, it exceeded the performance of human pathologists. Google’s algorithm produced improved prediction heatmaps, and its localization score reached 89%, higher than humans’ 73%. It also completed the diagnosis in far less time than the humans did.
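A prediction heatmap of this kind is typically built by breaking a gigapixel slide into small patches, scoring each patch with a classifier, and arranging the scores into a grid. A minimal sketch of that patch-scoring idea, where the "classifier" is a deliberately trivial stand-in (the real system uses a trained deep network):

```python
# Hedged sketch of patch-based heatmap prediction. The stand-in
# "classifier" below is illustrative only, not Google's model.

def patch_probability(patch):
    """Stand-in for a trained classifier: the fraction of
    'suspicious' (nonzero) pixels in the patch."""
    flat = [px for row in patch for px in row]
    return sum(1 for px in flat if px > 0) / len(flat)

def heatmap(image, patch_size):
    """Slide a non-overlapping patch_size x patch_size window over a
    square image and score each patch, yielding a coarse grid."""
    n = len(image)
    grid = []
    for i in range(0, n, patch_size):
        row = []
        for j in range(0, n, patch_size):
            patch = [r[j:j + patch_size] for r in image[i:i + patch_size]]
            row.append(patch_probability(patch))
        grid.append(row)
    return grid

# 4x4 "slide": suspicious tissue in the top-left quadrant only.
slide = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
print(heatmap(slide, 2))  # → [[1.0, 0.0], [0.0, 0.0]]
```

The resulting grid is the heatmap: high values flag regions a pathologist should examine first, which is also what the localization score measures.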
AI is expert at Lip-Reading
Lip reading has long been considered a human art. However, recent state-of-the-art deep learning models have outperformed even the best human lip readers. One such model is LipNet.
LipNet is the first end-to-end sentence-level deep lip-reading model, simultaneously learning spatiotemporal visual features and a sequence model. On the GRID corpus, LipNet achieves 95.2% sentence-level accuracy on the overlapped-speaker split task, outperforming experienced human lip readers and the previous word-level state of the art of 86.4%.
Your very Own Language Translator
Following the success of neural machine translation systems, researchers at Google explored what they called “zero-shot translation”. The idea: if the machine was taught to translate English to Korean and vice versa, and also English to Japanese and vice versa, could it translate Korean to Japanese without resorting to English as a bridge between them? The answer is yes! The translations are quite reasonable for the two languages, despite there being no explicit link between them.
There is one more side to this achievement. If the computer is able to establish connections between concepts and words that have no previous connection, does that mean the machine has formed a concept of shared meaning for those words? Simply put, is it possible that the computer has developed its own internal language? Based on how various sentences relate to each other in the memory space of the neural network, Google’s language experts and AI researchers believe so.
(Figure: a visualization of the translation system’s memory.)
On arXiv, you can read the paper describing this work on efficient multilingual translation. The paper also scratches the surface of the above-mentioned “interlingua” question, though much more research is required before any firm conclusion can be drawn. Until then, we can live with the idea of the possibility.
Speech Transcription AI better than Human Professionals
A paper from Microsoft claims a transcription error rate better than that of human professionals. To test how their algorithm stacked up against humans, Microsoft hired a third-party service to tackle a piece of audio for which they had a confirmed 100% accurate transcription. The professionals worked in two stages: one person typed up the audio, and then a second person listened to the audio and corrected any errors in the transcript. Measured against the correct transcripts for the standardized tests, the professionals had 5.9% and 11.3% error rates. After being trained on 2,000 hours of human speech, Microsoft’s system tackled the same audio and scored 5.9% and 11.1% error rates; the differences are minute but significant.
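The percentages above are word error rates (WER): the word-level edit distance between the system's transcript and the reference, divided by the number of reference words. A minimal implementation of this standard metric (illustrative; not Microsoft's evaluation pipeline):

```python
# Word error rate: edit distance over words / reference length.

def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

ref = "the quick brown fox jumps over the lazy dog"
hyp = "the quick brown fox jumped over a lazy dog"
print(word_error_rate(ref, hyp))  # ≈ 0.222 (2 errors / 9 reference words)
```

Counting insertions and deletions as well as substitutions matters, since transcripts rarely line up word for word; a 5.9% WER means roughly one word in seventeen is wrong.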
These were some of the niche domains where Artificial Intelligence is all the rage. Machines outperforming humans is probably one of the greatest human achievements. In the next article, we will return with yet another interesting list of domains where AI has outperformed humans.