Wow. This will be incredible. Sure, there are some scary sides, but the potential here is limitless.
Check it out:
A team of Yale researchers, led by a then-undergraduate student, has made a remarkable step forward in brain science. The (perhaps unsettling) breakthrough allows scientists to use a medical imaging machine and a well-trained algorithm to visually reconstruct faces seen by test subjects. As seen below, their technique returns some results with a truly astonishing level of accuracy. Oddly, their results seem to have been possible specifically because the brain processes faces in such a unique and distributed way. This study takes the field’s greatest and most intractable problem and leverages it to truly impressive effect.
Faces have historically been very difficult to reconstruct from brain activity. Ever since brain scientists first identified our visual processor (the occipital lobe, at the back of the head), they have tried to read and interpret its activity to reconstruct visual data. They reasoned that a detailed-enough model of how each “pixel” we see appears in the visual cortex would allow a one-to-one reconstruction, but that’s only true some of the time. When viewing images like buildings or furniture, simple and inherently unemotional objects, we see mostly with our eyes. When we view a human face, on the other hand, we “see” it in both the visual and emotional brains, evaluate it on a visual and a personal level, and look at it through rotating lenses of trust, safety, sex, and more.
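To make the “pixel model” reasoning concrete, here is a toy sketch of the idea: if each pixel left a roughly linear signature across visual-cortex voxels, a decoder could learn to invert that mapping. Everything below, including the dimensions, the linear forward model, and the ridge-regression decoder, is an invented illustration of the general approach, not the Yale team’s actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented dimensions for the sketch: a tiny "image" and a small voxel grid.
n_pixels, n_voxels, n_images = 16, 64, 200

# Assumed forward model: each voxel responds as a noisy linear mix of pixels.
true_weights = rng.normal(size=(n_pixels, n_voxels))
images = rng.normal(size=(n_images, n_pixels))
voxels = images @ true_weights + 0.1 * rng.normal(size=(n_images, n_voxels))

# Ridge-regression decoder: learn the voxels -> pixels mapping from examples.
lam = 1.0
A = voxels.T @ voxels + lam * np.eye(n_voxels)
decoder = np.linalg.solve(A, voxels.T @ images)  # shape (n_voxels, n_pixels)

# Reconstruct a held-out "image" from its simulated voxel pattern.
test_image = rng.normal(size=n_pixels)
test_voxels = test_image @ true_weights
reconstruction = test_voxels @ decoder

# Correlation between the true and reconstructed pixels gauges success.
corr = np.corrcoef(test_image, reconstruction)[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```

In this idealized linear world the reconstruction correlates almost perfectly with the original. The article’s point is that faces break the assumption baked into `true_weights`: face processing is distributed across visual and emotional systems, so no single simple mapping like this exists to invert.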