Thought detection: AI has infiltrated our last bastion of privacy

Our thoughts are private – or at least they were. New advances in neuroscience and artificial intelligence are changing that assumption, while raising new questions about ethics, privacy, and the horizons of brain-computer interaction.

Research published last week by Queen Mary University of London describes an application of a deep neural network that can determine a person’s emotional state by analyzing wireless signals used like radar. In the study, participants watched a video while radio signals were transmitted toward them and measured as they reflected back. Analysis of the resulting body movements revealed “hidden” information about an individual’s heart and breathing rates. From these signals, the algorithm can determine one of four basic emotions: anger, sadness, joy, and pleasure. The researchers proposed that this work could help with the management of health and well-being and be used for tasks such as detecting depressive states.
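To make the pipeline concrete, here is a minimal sketch, and emphatically not the authors’ code, of the final classification step. It assumes heart-rate and breathing-rate features have already been extracted from the reflected radio signal; the placeholder data, feature names, and the small scikit-learn network below are stand-ins for the paper’s deep model.

```python
# Minimal sketch (not the study's code): classify emotion from vital-sign
# features assumed to be already extracted from the reflected radio signal.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

EMOTIONS = ["anger", "sadness", "joy", "pleasure"]

rng = np.random.default_rng(0)
# Placeholder data: 400 time windows x 4 features
# [heart_rate, heart_rate_variability, breathing_rate, breathing_variability]
X = rng.normal(size=(400, 4))
y = rng.integers(0, len(EMOTIONS), size=400)  # stand-in labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small fully connected network stands in for the paper's deep model.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
print("predicted emotion:", EMOTIONS[clf.predict(X_test[:1])[0]])
```

With real labeled windows in place of the synthetic arrays, the same fit/predict loop is all the application layer would need.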

Ahsan Noor Khan, a PhD student and first author of the study, said: “We’re now looking to investigate how we could use existing low-cost systems, such as Wi-Fi routers, to detect the emotions of a large number of people gathered together, for instance in an office or work environment.” Among other things, this could help HR departments assess how new policies introduced in a meeting are being received, regardless of what the recipients might say. Outside the office, police could use the technology to look for emotional changes in a crowd that could lead to violence.

The research team plans to examine public acceptance and the ethical concerns surrounding the use of this technology. Such concerns would not be surprising, and they evoke a very Orwellian idea: the “Thought Police” of 1984. In that novel, Thought Police observers are experts at reading people’s faces to root out unsanctioned beliefs, even though they never learn exactly what a person is thinking.

This is not the only thought-reading technology on the horizon with dystopian potential. In “Crocodile,” an episode of the Netflix series Black Mirror, a memory-reading technique is used to investigate accidents for insurance purposes. The “corroborator” device, a small square node placed on the subject’s temple, displays their memories of an event on a screen. The investigator explains that the memories “may not be completely accurate, and they’re often emotional. But by collecting a range of recollections from you and any witnesses, we can help build a corroborative picture.”

Above: Black Mirror, “Crocodile”

If this seems far-fetched, consider that researchers at Kyoto University in Japan have developed a way to “see” inside people’s minds using an fMRI scanner, which detects changes in blood flow in the brain. Using a neural network, they correlated the scans with the images shown to participants and projected the reconstructions onto a screen. Although far from polished, the output was essentially a reconstruction of what the participants were thinking about. One forecast estimates that this technology could be in use by the 2040s.
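As a rough illustration of the decoding idea only, and nothing like the Kyoto team’s actual pipeline, the toy sketch below learns a mapping from simulated fMRI voxel responses to the pixels of a viewed image and then “reconstructs” an image from a new response. All data here is synthetic, and a simple linear decoder stands in for the deep network used in the real study.

```python
# Toy sketch of image decoding from brain activity (synthetic data only).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

n_trials, n_voxels, img_side = 200, 500, 8
images = rng.random((n_trials, img_side * img_side))       # viewed images, flattened
mixing = rng.normal(size=(img_side * img_side, n_voxels))   # fake brain-response model
voxels = images @ mixing + 0.1 * rng.normal(size=(n_trials, n_voxels))

# Linear decoder: learn image pixels as a function of voxel responses.
decoder = Ridge(alpha=1.0).fit(voxels[:150], images[:150])

# "Reconstruct" an image from a held-out brain response.
reconstruction = decoder.predict(voxels[150:151]).reshape(img_side, img_side)
print("reconstruction shape:", reconstruction.shape)
```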

Brain-computer interfaces (BCIs) are making steady progress on several fronts. In 2016, research at Arizona State University showed a student wearing what looked like a swim cap containing about 130 sensors connected to a computer to detect his brain waves.

Above: A mind-controlled drone flight demonstrated by an Arizona State University doctoral student in 2016.

The student controlled the flight of three drones with his mind. The device allowed him to move the drones simply by thinking directional commands: up, down, left, right.
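In software terms, the interesting step is turning a decoded intention into a flight command. The sketch below is purely illustrative: it assumes a trained brain-wave classifier (stubbed out here with a random choice) that labels each window of sensor data as one of the four directions, and the velocity values and print statement stand in for whatever interface a real flight controller would expose.

```python
# Illustrative only: translating a decoded "thought command" into a
# drone velocity setpoint. The EEG classifier itself is assumed and stubbed.
import random

COMMAND_TO_VELOCITY = {
    "up":    (0.0, 0.0, +0.5),   # (vx, vy, vz), hypothetical m/s values
    "down":  (0.0, 0.0, -0.5),
    "left":  (-0.5, 0.0, 0.0),
    "right": (+0.5, 0.0, 0.0),
}

def classify_brainwaves(eeg_window) -> str:
    """Stand-in for a trained classifier that decodes a directional intent."""
    return random.choice(list(COMMAND_TO_VELOCITY))

def control_step(eeg_window):
    command = classify_brainwaves(eeg_window)
    vx, vy, vz = COMMAND_TO_VELOCITY[command]
    # A real system would send this setpoint to the drone's flight controller.
    print(f"decoded '{command}' -> velocity setpoint ({vx}, {vy}, {vz})")

for _ in range(3):
    control_step(eeg_window=None)
```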

Fast-forward a few years to 2019, and the headgear had become much simpler. Now there are brain-drone races.

Above: Flying drones with your brain in 2019. Source: University of South Florida

Beyond flight demos, BCIs are being developed for medical applications. MIT researchers have developed a computer interface that can transcribe words the user verbalizes internally but does not actually speak aloud. A wearable device with electrodes picks up neuromuscular signals in the jaw and face that are triggered by these internal verbalizations, also known as subvocalizations. The signals are fed to a neural network trained to correlate particular signals with particular words. The idea behind the project is to fuse humans and machines “in a way that computing, the internet, and AI intertwine with human personality as a ‘second self.’” People who cannot speak could use the technology to communicate, since the subvocalizations can be routed to a synthesizer that speaks the words aloud.
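Here is a hedged sketch of that last step, with everything about the real MIT system assumed away: placeholder “neuromuscular” feature vectors are matched against a tiny four-word vocabulary, and the decoded word is printed where a speech synthesizer would take over. The vocabulary, features, and nearest-neighbor model are all illustrative choices, not MIT’s.

```python
# Hedged sketch of the subvocalization idea (not MIT's code): map features
# from jaw/face neuromuscular signals to a small vocabulary of words.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

VOCABULARY = ["yes", "no", "help", "stop"]

rng = np.random.default_rng(2)
# Placeholder training data: 40 examples x 16 signal features per word.
X_train = np.vstack([rng.normal(loc=i, size=(40, 16)) for i in range(len(VOCABULARY))])
y_train = np.repeat(np.arange(len(VOCABULARY)), 40)

model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

def transcribe(signal_features: np.ndarray) -> str:
    """Map one window of neuromuscular features to the most likely word."""
    return VOCABULARY[model.predict(signal_features.reshape(1, -1))[0]]

sample = rng.normal(loc=2, size=16)   # drawn near the "help" cluster in this toy setup
print("decoded word:", transcribe(sample))  # a synthesizer would speak this aloud
```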

Above: Interfacing with devices through silent speech. Source: MIT Media Lab

Chip implants may arrive soon

The ultimate BCI could be the one proposed by Elon Musk’s Neuralink. Unlike the previous examples, Neuralink promises direct brain implants. The near-term goal of Neuralink and others is to build a BCI that can treat a wide variety of diseases. In the long run, Musk has a broader vision: he believes such an interface will be necessary for humans to keep pace with increasingly powerful AI. Last week, Musk said human testing of the implants could begin later this year. He says the company already has a monkey with “a wireless implant in [its] skull with tiny wires” that can “play video games with [its] mind.”

The advances being made in BCIs are beginning to match what science fiction authors have dreamed up. In The Resisters, a recent novel by Gish Jen, a “RegiChip” is implanted at birth in everyone deemed “Surplus,” meaning there will be no work for them after mass automation. Instead, they receive a universal basic income and have no responsibilities other than consuming, to keep the automated economy operating at an efficient level. Among other things, the RegiChip is used to track everyone, not only their physical location but also their activities, completing the surveillance society. And of course the RegiChip, like all digital technologies, has the potential to be hacked.

Cognitive scientists have said that the mind is the brain’s software. Increasingly, actual software is able to merge with and augment the human mind. If the achievements of AI-enabled BCIs already seem remarkable, it stands to reason that BCI breakthroughs in the not-too-distant future could be truly dramatic. Will the technology be used for positive ends, such as curing disease, or for mind control? As with most technologies, there will likely be both good and bad outcomes. Software is poised to eat the mind. For now, our unexpressed thoughts remain private, but that may not be true for much longer.

Gary Grossman is the senior vice president of technology practice at Edelman and a global leader at the Edelman AI Center of Excellence.
