Here is another very useful project, the Universal Translator Project: a computer was taught to read lips and does so with 93% accuracy, while trained humans manage only about 52% (apparently it's not like in the movies).
Joking aside, this is a genuine artificial sense now, no longer confined to SF books: the computer was trained to recognize lip movements not through the classical sound-based approach, but by being exposed to tens of thousands of video recordings, which allowed it to analyze the small differences between them and learn to tell them apart.
The applications are enormously many and just as important for the digitization of information as the book scanner I showed you the other day. With powerful enough computers, huge numbers of videos could get an indexed text version: all speeches, vlogs, TED Talks and so on, without any extra effort. Moreover, good enough algorithms could "write" the subtitles for any movie, and other algorithms of this kind could translate them into any language.
Another example for this Universal Translator: I struggled for a while to find a dictation program in English that would even partially recognize what I say, to cut down the huge amount of time it takes to write these articles. I never found a good one. What if, instead of sound recognition, we moved to lip-motion recognition?
And what if, when I'm talking with a Chinese person, I'm wearing something like Google Glass that analyzes her lips, detects what she said, and speaks the Romanian translation in my ear?
Think about this: a universal translator is no longer something worthy only of Star Trek. We already have all the technologies needed for what I described above; they just need to be refined and integrated.
I realized that a similar machine-learning process was used for the Kinect system for Xbox: it was exposed to tens of thousands of photos of people in various positions and learned from there to recognize them.
The analogy I was given was how a new driver learns to navigate a city. If you take him 10 times through the same neighborhood, he learns only that place. If you take him to 10 totally different places, he doesn't really learn anything. The secret is to have training cases that vary enough among themselves to learn something general, but are similar enough to learn accurately. Finding that ideal band is quite difficult, and errors can occur. In other words, the computer that learned to read lips could just as well have ended up learning a wrong pattern from its set of 28,000 videos and have to start over from the beginning.
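The trade-off above can be seen with a toy nearest-neighbour classifier. This is only a sketch of the general principle, not the actual method behind the lip-reading system or Kinect, and all the data here is invented:

```python
# Toy illustration of the variety-vs-accuracy trade-off:
# a 1-nearest-neighbour classifier labels a number "low" if it is
# below 50 and "high" otherwise, based only on its training examples.

def nearest_label(x, training):
    """Return the label of the training point closest to x."""
    return min(training, key=lambda pair: abs(pair[0] - x))[1]

# Held-out test cases the model never saw during "training".
test_set = [(5, "low"), (30, "low"), (70, "high"), (95, "high")]

# Like driving through the same neighbourhood 10 times:
# every example comes from one tiny region of the problem.
narrow = [(10, "low"), (12, "low"), (14, "low"), (16, "low")]

# Varied but still consistent examples cover the whole range.
varied = [(10, "low"), (40, "low"), (60, "high"), (90, "high")]

def accuracy(training):
    hits = sum(nearest_label(x, training) == y for x, y in test_set)
    return hits / len(test_set)

print(accuracy(narrow))  # 0.5 -- it calls everything "low"
print(accuracy(varied))  # 1.0 -- varied examples generalize
```

The narrow set has only seen one "neighbourhood", so it labels everything the same way; the varied set learns the whole range. Push variety too far (inconsistent, noisy examples) and accuracy drops again, which is exactly the band the text describes.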
If you want to see how an AI learns to recognize shapes, play with Quick, Draw! from Google. Let's see more about this Universal Translator Project: