The ongoing development of technology that can read minds is worrying human rights and privacy advocates, as new research paves the way for using artificial intelligence to extract meaningful thoughts from brain activity.
A recent study from the University of Texas demonstrated the potential of training AI on a person’s brain activity to roughly capture the gist of what they were hearing, seeing, or thinking.
Researchers had participants lie in a functional magnetic resonance imaging (fMRI) scanner while they listened to hours of podcasts, generating data that was used to train a model that could decode their brain activity.
After the model was trained, participants went back under the scanner and listened to a new story – one that hadn’t been used to generate training data – or told themselves a story.
The result wasn’t a word-perfect transcript, but rather a rough gist of what was being thought.
“Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech, and even silent videos, demonstrating that a single decoder can be applied to a range of tasks,” the paper’s abstract states.
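To make the approach concrete, here is a minimal, purely illustrative sketch of a semantic decoder. It is not the Texas team’s actual pipeline (the published decoder combines an encoding model of brain responses with a language model that proposes candidate word sequences); it simply shows the general idea of learning a mapping from brain recordings into a semantic embedding space and then choosing the closest candidate “gist”. All data, dimensions, and candidate sentences below are hypothetical stand-ins.

```python
# Purely illustrative: a toy semantic decoder in the spirit of the study,
# NOT the researchers' actual method. All data here are random stand-ins.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_voxels, embed_dim, n_train = 5000, 64, 200  # hypothetical sizes

# Training data: brain scans recorded while the participant heard known
# sentences, paired with semantic embeddings of those sentences.
train_scans = rng.normal(size=(n_train, n_voxels))
train_embeddings = rng.normal(size=(n_train, embed_dim))

# Learn a linear map from brain activity into semantic space.
decoder = Ridge(alpha=1.0).fit(train_scans, train_embeddings)

# Decoding a new scan: predict its embedding, then return the candidate
# sentence whose embedding scores highest -- a rough gist, not a transcript.
candidates = ["she opened the door", "the storm was coming", "he drove home"]
candidate_embeddings = rng.normal(size=(len(candidates), embed_dim))  # stand-in for a real text encoder

new_scan = rng.normal(size=(1, n_voxels))
predicted = decoder.predict(new_scan)        # shape (1, embed_dim)
scores = candidate_embeddings @ predicted.T  # similarity to each candidate
print("decoded gist:", candidates[int(np.argmax(scores))])
```

Ridge regression appears here only because it is a common, simple choice for high-dimensional fMRI features; the point is that the output recovers meaning rather than exact words, which is why the real decoder generalises across perceived speech, imagined speech, and silent video.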
An early version of the research raised questions about potential misuses of technology that can literally read minds, leading the researchers to reassure people that the model could only be trained and used on willing participants.
“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” said Jerry Tang, a PhD student who led the study.
“We want to make sure people only use these types of technologies when they want to and that it helps them.”
Still, there are other ways of gleaning information from people’s minds, including a device called iCognative that US company Brainwave Science is marketing as “the perfect lie detector”.
Brainwave Science claims the device – a headset that captures brain signals using electrodes on the outside of the skull – can identify whether information “is present in the suspect’s brain or not” during an interrogation.
In a new paper for the Australian Journal of Human Rights, Dr Allan McCay from the University of Sydney asks whether Australia is prepared for the potential applications of neurotechnologies, including “whether it would be a human rights infringement to monitor a suspect’s brain in the course of police interview”.
Dr McCay writes that the whole field is “under-theorised” in Australia and “lacks a response from regulatory/human rights institutions”.
“Given the emerging challenges this seems unsatisfactory,” he said. “While work focussing on other emerging technologies is likely to be of assistance, there is something particular about the fusion of human and machine that requires special consideration.”
Neurotechnology raises human rights concerns
Australian Human Rights Commissioner Lorraine Finlay told Information Age she recognises the “profoundly positive impacts” of neurotechnology that can help people walk again, use computers for everyday tasks, and treat chronic health conditions.
“However, neurotechnologies also raise profound human rights problems,” Finlay said. “I am concerned about how neurotechnology will impact the right to privacy, among other things.
“The boundary between the external world and a person’s internal mental cognition has historically been impenetrable, but neurotechnologies challenge this by enabling scrutiny of a person’s thoughts and feelings, posing new and unique threats to human rights.”
Internationally, an organisation called the Neurorights Foundation is pushing to have companies, governments, and the United Nations recognise the potential implications of neurotechnologies.
It’s encouraging the recognition of five specific neurorights: mental privacy, personal identity, free will, fair access to mental augmentation, and protection from bias.
“Boundaries must be developed to prohibit technology from disrupting the sense of self,” the foundation said.
“When neurotechnology connects individuals with digital networks, it could blur the line between a person’s consciousness and external technological inputs.”
The Human Rights Commissioner told Information Age that it ought to be “a strategic priority for both government and industry” to ensure human rights are “central to the design, deployment, and regulation of neurotechnology”.
A spokesperson from the Attorney-General’s Department told Information Age that Australians “rightly expect greater protections, transparency and control over their personal information and a privacy system that keeps pace with new technology”.
The spokesperson said the government is currently considering submissions to the Privacy Act review.