Politicians and technocrats from around the world convened earlier this month at the World Economic Forum in Davos, Switzerland, to discuss how best to orient humanity’s future on its behalf. Among the speakers who had their ear was a so-called futurist and ethicist who hyped the adoption of neurotechnology that would allow employers, governments, and others to decode “brain activity in ways we never before thought possible.”
“What you think, what you feel: It’s all just data,” said Nita Farahany, professor of law and philosophy at Duke Law School and faculty chair of the Duke MA in bioethics and science policy. “And large patterns can be decoded using artificial intelligence.”
Farahany explained in her Jan. 19 presentation, entitled “Ready for Brain Transparency?” that when people think or emote, “neurons are firing in your brain, emitting tiny little electrical discharges. As a particular thought takes form, hundreds of thousands of neurons fire in characteristic patterns that can be decoded with EEG, or electroencephalography, and AI-powered devices.”
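The decoding Farahany describes amounts to pattern recognition: recordings of brain activity are matched against characteristic templates learned for each mental state. A minimal toy sketch of that idea, using entirely synthetic data and a simple nearest-centroid classifier (the channel counts, "states," and numbers below are invented for illustration; real EEG decoding uses far richer signals and far more sophisticated AI models):

```python
import numpy as np

# Toy illustration only: synthetic "EEG" feature vectors standing in for
# the characteristic firing patterns described in the talk.
rng = np.random.default_rng(0)

N_CHANNELS = 8          # hypothetical number of EEG electrodes
SAMPLES_PER_STATE = 50  # labeled recordings per mental state

# Two made-up mental states, each with its own characteristic pattern.
pattern_a = rng.normal(0.0, 1.0, N_CHANNELS)
pattern_b = rng.normal(0.0, 1.0, N_CHANNELS)

# Noisy training recordings of each state.
train_a = pattern_a + rng.normal(0.0, 0.3, (SAMPLES_PER_STATE, N_CHANNELS))
train_b = pattern_b + rng.normal(0.0, 0.3, (SAMPLES_PER_STATE, N_CHANNELS))

# "Training" here is just averaging recordings into a template per state.
centroid_a = train_a.mean(axis=0)
centroid_b = train_b.mean(axis=0)

def decode(recording):
    """Label a new recording by whichever stored template it sits closest to."""
    dist_a = np.linalg.norm(recording - centroid_a)
    dist_b = np.linalg.norm(recording - centroid_b)
    return "state_a" if dist_a < dist_b else "state_b"

# A fresh, noisy recording of state A is matched back to its template.
new_recording = pattern_a + rng.normal(0.0, 0.3, N_CHANNELS)
print(decode(new_recording))
```

The point of the sketch is only that once labeled recordings exist, classifying new brain activity is a routine machine-learning task; the privacy questions in the talk follow from how cheap and wearable the recording hardware is becoming.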
Once decoded, the resultant data can be used for a multitude of purposes, good and bad.
The ethicist did not dwell long on the resemblances between the possibilities at hand and the dystopian nightmares previously imagined by science fiction writers such as Philip K. Dick and William Gibson.
Instead, Farahany focused on what she perceived to be the positive outcomes of monitoring and socially engineering the species, emphasizing that the adoption of these technologies could help address “some of the root causes of human suffering, from neurological disease and degeneration to mental illness, but also of unlocking a lot of the secrets of the human brain.”
What would it look like in the field?
The WEF speaker indicated that wearable neurotechnology — as opposed to implanted neurotechnology of the kind Elon Musk’s Neuralink deals in — will herald an era wherein “you can have an EEG sensor in each ear as part of your ear pods, where you also take conference calls and you listen to music but you have brainwave activity that is being monitored all day every day.”
Farahany began her talk with a series of hypothetical uses of such wearable neurotechnology, which she likened to “Fitbits for the brain”:
- An office employee monitors her stress levels as a deadline approaches and, noticing an unusual trend, sends her readings to her doctor for an update.
- Her technology-inflamed neuroses momentarily dissipate and she begins entertaining romantic thoughts about a male coworker. However, an alert appears on her desktop reminding her to refrain from intra-office romance. The employee focuses back on her work. Her prompt obedience is recognized by her boss — also monitoring the worker’s brain waves — who then rewards her with a bonus.
- The next day, in an unrelated incident, her coworker is carted off by police, having been deemed guilty of wire fraud on the basis of his mental activity. Police will eventually scrutinize brain-wave data from around the office for possible co-conspirators.
According to Farahany, it won’t just be white-collar environments where workers’ minds will be policed and tracked in this fashion.
With the purported aim of preventing distracted-driving accidents on the road, Farahany suggested that truck drivers could be equipped with hats containing embedded electrode sensors that would score their level of alertness at any given moment.
Concerning the prospect of using brain surveillance to know when to preemptively intervene, Farahany underscored, “We as a society should want that.”
Farahany also championed the use of attention-tracking ear pods, noting that employers are now able to determine whether an employee’s attention is on the assigned task, drifting toward something adjacent, or focused on something else entirely. She did, however, couch her support with the caveat that it should be optional on the part of employees, used to determine benefits as opposed to penalties.
Although erring on the side of positivity, Farahany noted that the widespread adoption of these technologies will “change the way that we interact with other people and even how we understand ourselves.”
Furthermore, as technologies improve, she said, “more and more of what’s in the brain will become transparent.”
The WEF speaker suggested that there will be legal, privacy, and human rights implications: societies will have to decide how to safeguard, or litigate around, cognitive liberty protections, and big business will have to settle on best practices for deploying the technology.
Technologies here, adoption coming
Farahany’s sense that this technology will inevitably and widely be adopted is grounded in part in her observation of the meteoric rise in businesses’ adoption of employee surveillance tools during the pandemic. It is also grounded in the understanding that the technology involved in the above hypotheticals is already available today.
Presently, there are earbuds, headphones, and behind-the-ear tattoos that can pick up and decode emotional states, as well as extract mentally pictured faces, shapes, and numbers.
Neurotechnology company Brain Scientific has produced an ink composed of graphene that, when tattooed upon human flesh, can monitor brain activity.
Fast Company noted that Boston-based Neurable developed headphones that read the wearer’s brainwaves and adjust noise cancellation levels in response.
Neuroscientists at the University of Toronto Scarborough developed a technique in 2018 by which they could reconstruct images of what people perceive based on their brain activity.
U of T News reported that Dan Nemrodov hooked up human test subjects to EEG equipment and then showed them images of faces. That brain activity was recorded and then used as the basis of a digital recreation of the image with the assistance of machine learning algorithms.
In addition to businesses’ past interest in employee surveillance tools and the present availability of brain-monitoring tools, Farahany noted that 5,000 companies are already tracking their employees’ “fatigue levels.”
The South China Morning Post reported in 2018 that Hangzhou Zhongheng Electric, like many Chinese companies, had been mining data from workers’ brains on an industrial scale to redesign workflows and adjust production speeds.
Sensors were hidden in safety helmets and hats that monitored wearers’ brainwaves and streamed the data to computers where AI algorithms would note emotional states.
In the Q&A following the presentation, moderator and Atlantic CEO Nicholas Thompson said, “Speaking as a CEO, I’m sure all CEOs will use [this technology] completely responsibly,” eliciting laughter from Farahany and the audience.
Didi Rankovic, writing for Reclaim the Net, said that with this new technology, for employers, it’ll be “just like prodding cattle.”