Artificial intelligence is getting better at reading your mind. An AI can now guess what videos someone is watching from their brainwaves alone.

Grigory Rashkov at Russian research company Neurobotics and colleagues trained an AI using video clips of different objects and brainwave recordings of someone watching them. The recordings were made with an electroencephalogram (EEG) cap, and the video clips included nature scenes, people on jet skis, and human emotions.
The AI then tried to classify and recreate the videos from the EEG data alone. In 210 out of 234 trials, it correctly classified each video, assigning tags such as extreme sports, waterfalls or human features.
Visually, the AI had the most success in recreating the primary themes of the videos, such as large shapes and colors. Finer details, such as those of human faces, were harder to reproduce, with most appearing distorted beyond recognition.
Mind-reading AIs are still only scratching the surface of human thought, says Victor Sharma at the University of Arizona. “What we are currently witnessing is a caricature of human experience but nothing remotely close to an accurate re-creation,” he says.