Scientists at Kyoto University have developed a form of AI able to scan brain activity and visually recreate a person's thoughts. The resulting images are somewhat abstract, but the experiments already show promise.
Artificial intelligence is getting better by the day and has apparently already reached a point where it can read our minds. Before you start to panic about an impending robot apocalypse, it's worth noting that the technology is still very much a work in progress. Perhaps unsurprisingly, it comes from Kyoto University in Japan and was first detailed in a report published late last month. While the result is not mind reading in the traditional sense, the AI does come surprisingly close to that definition.
The deep image reconstruction technology is exactly what its name suggests: it allows artificial intelligence to scan a person's brain patterns and recreate a visual representation of their thoughts. The full process takes several months, however, and the resulting images are a bit abstract. The AI can't yet reproduce with 100% accuracy what a person is thinking, but it's still impressive that machine learning is making serious progress toward that goal.
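The article doesn't spell out the mechanics, but the general idea behind this kind of reconstruction can be sketched in a few lines. The toy code below is purely illustrative and not the researchers' actual method: it assumes features have already been decoded from brain scans, stands in a random linear map for a real feature extractor, and uses plain gradient descent to nudge a noise image until its features match the decoded target.

```python
import numpy as np

# Illustrative sketch only -- all names and numbers here are assumptions,
# not details from the Kyoto study.
rng = np.random.default_rng(0)

n_pixels, n_features = 64, 16
W = rng.normal(size=(n_features, n_pixels))  # stand-in "feature extractor"

true_image = rng.normal(size=n_pixels)       # what the subject saw
target_features = W @ true_image             # pretend these were decoded from brain scans

# Start from noise and iteratively adjust pixels so the image's features
# approach the decoded target features (gradient descent on squared error).
image = rng.normal(size=n_pixels)
lr = 0.005
for _ in range(2000):
    err = W @ image - target_features
    image -= lr * (W.T @ err)

# The residual feature mismatch shrinks toward zero as the loop runs.
print(np.linalg.norm(W @ image - target_features))
```

Note that matching features doesn't pin down a unique image (many pixel arrangements share the same features), which is one intuition for why the reconstructions look abstract rather than photographic.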
The technology was demonstrated using three test subjects who were shown a variety of images over a period of around 10 months. The experiment actually had two parts, as the subjects' brain activity was scanned both while they were directly looking at the images and while they were merely thinking about them. As expected, the scientists found that the artificial intelligence was much better at recreating the images while the subjects were looking at them. When the subjects were simply thinking about the pictures, the AI did yield some interesting results, but the recreations looked more distorted.
The researchers uploaded a video showcasing the results and later explained why their artificial intelligence performed better when the subjects were looking at the images as opposed to when they were merely thinking about them. As it turns out, our brains are actually not that good at recreating accurate images of an object or shape, even when we know exactly what that object or shape looks like. And since most people aren't able to recall something with a high degree of accuracy, the AI finds it harder to put together a visual representation of our thoughts.
While this may seem like new and groundbreaking technology, similar experiments have been conducted in the past. For example, back in 2011 the University of California, Berkeley combined brain imaging and computer simulation to achieve a similar result. That said, using actual AI has already yielded better results and will undoubtedly continue to improve moving forward.