This three-part series looks at brain-computer interfaces: what they are, what they could become, and the implications of connecting our brains directly to computers.

In part three, we look at what happens when the makers of brain implants go bust, and consider how the law will have to adjust to crimes committed by BCIs.

There is a grand vision for devices that connect people to machines to augment our intelligence, redefine the way we interact with the world, and alter what it is to be human.

But those lofty goals, as often happens, have come up against the cold hard facts of reality. The brain is immeasurably complex. Collecting neural activity is a difficult enough task, as is decoding that data into something meaningful.

As we step toward a future with ever greater connection to the machines that already govern much of our lives, it’s important to consider the broader implications of technology that is, by design, embedded in people’s lives.


A lot of technological development happens in the private sector. Scaling up a product for mass-market consumption is a crucial part of creating new technologies and getting them used by ever more people.

But sometimes companies simply go bust.

On 25 June 2020, auction house Global Partners ran a sale of 700 items from a lab of what it described as a “major manufacturer of medical technology devices”.

The listing shows all manner of lab equipment: electrometers, power supplies, soldering irons, assorted glassware, storage cabinets, bandsaws, drills, microscopes, scales, a scattering of black lab stools.

These were the physical assets of US neural implant firm Second Sight, which ran into serious financial trouble early in the COVID-19 pandemic.

By March 2020, Second Sight had sacked most of its staff and was preparing to liquidate.

Most of the time, when a tech company collapses and leaves behind a pile of defunct silicon and plastic, its frustrated customers can pick up an equivalent product from a competitor.

But Second Sight was different. It sold products that gave artificial vision to people living with blindness. The devices comprised a camera attached to a pair of glasses that sent a low-resolution video stream to a processing unit which spoke, via a transmitter, to either an implant on the patient’s retina, or one in the brain.

For patients, the device was incredible. One man, Benjamin Spencer, told IEEE Spectrum – which covered the effects of Second Sight’s collapse in depth – that a device he was trialling had let him see his wife for the first time. But the company’s sudden financial troubles left him feeling vulnerable, with its tech implanted in his skull.

Another customer, Ross Doerr, began experiencing extreme vertigo, and his doctors recommended an MRI to rule out a brain tumour. But when they tried to call Second Sight for advice on how to proceed – given the potential impact an MRI could have on his implant – no one answered the phone.

Support for Barbara Campbell’s retinal implant ended suddenly one day as she was switching trains in the New York City subway system.

“I was about to go down the stairs, and all of a sudden I heard a little ‘beep, beep, beep’ sound” – it was the sound of her device shutting off, leaving Barbara in darkness.

Her retinal implant never worked again and Barbara opted against a risky surgery to have it removed, according to Spectrum.

Last November, the UK’s Regulatory Horizons Council published a report about the development of neurotechnology that referenced Second Sight, which, a few months prior, had found a buyer in biopharmaceutical company Nano Precision Medical.

The council recommended that manufacturers of implant devices “present a plan describing how they intend to manage long-term implants installed in patients” which should include not just a commitment to repair and upgrade the device, but “specific instructions on how to maintain and remove the device that can be followed by a third-party in case the company folds”.


Regulating long-term hardware support is an important consideration for the development of BCI technology, as are the rights of people with neural implants.

A few jurisdictions are looking ahead to think about the implications of these technologies and how to protect citizens.

Spain adopted a Digital Bill of Rights in 2021 that directly references neurotechnologies.

The bill suggests future regulation that will ensure people with implants “preserve individual identity”, are guaranteed “individual self-determination, sovereignty and freedom in decision-making”, and that they maintain “confidentiality and security of the data obtained or related to their cerebral processes”.

The Digital Bill of Rights goes as far as to suggest future regulation of BCIs that enhance people’s cognitive abilities.

Chile, currently in the midst of major constitutional reform, went even further in 2021, passing a constitutional amendment to ensure technological development happens “at the service of people” and is done “with respect for life and physical and mental integrity”.

The change specifically mentions protecting “brain activity” and “the information coming from [the brain]”.

It was hailed as a major win for neurorights, an area concerned with how technology is advancing to capture and influence not just what we say and do, but how we think.

Spanish neurobiologist Rafael Yuste, in the wake of Chile’s constitutional changes, spoke about how external forces naturally change our brains and why the brain is so important to protect.

“When you learn a foreign language, there are things that change in your brain,” he told Interferencia. Likewise, our interactions with social media slowly change how our brains work.

“The difference with neurotechnologies is that changes instead of coming from outside, now come from within,” Yuste continued.

“In other words, if you change your brain with neurotechnology, you'll think that's you. Imagine being manipulated by a Facebook page with fake news. You always know that it comes from outside.

“On the other hand, if they put that information directly into you, you are going to think ‘this is what I know and what I am’”.

How will the law view cyborgs?

As we saw in part two of this series, even the most advanced BCI technology is far from transmitting fake news directly into your mind today. For the most part, it is used to help people living with severe paralysis use a computer or control a prosthesis.

These existing use-cases create an interesting test-bed for potential legal complications that might arise as neural implants continue to develop.

Dr Allan McCay is a criminal law lecturer at the University of Sydney who has taken particular interest in the legal implications of BCIs and how courts might have to answer some questions that strike at the heart of what it is to be human.

“For the prosecution, if it’s a serious offence, they’ve got to prove two things beyond reasonable doubt,” he told Information Age.

“One is the mens rea which is the person’s mental state, the other is the actus reus, which is normally a bodily act.”

Dr McCay has published a paper considering this in relation to a law in NSW that criminalises the non-consensual distribution of intimate images, commonly known as revenge porn.

He proposes a hypothetical scenario in which a person uses an implanted device to control a mouse cursor through an imagined action – like thinking about waving their hand. In this manner, the person assembles a collection of intimate images of another person, which they intend to upload to social media.

When the person thinks about moving their hand and the cursor clicks the ‘upload’ button, the crime is committed – but what was the voluntary action that caused it to happen, the actus reus?

Dr McCay explores several possibilities for how to respond to this problem. His conclusion is that whichever approach is taken, there is some uncertainty that courts around the world will have to face, and how they respond – the precedent they set – is bound to have “important downstream consequences”.

Perhaps it is the mental act of imagining the hand-wave – or whatever mental action corresponds with clicking the mouse for this person – that is the voluntary act. But how is this distinguished from the mental state, the mens rea?

Or it could be that the specific neural activity, the firing of a set of neurons, was the act. This also raises questions, including around how to handle potential device malfunctions.

Another possibility is that the BCI device is treated as part of the person, rather than as a tool they are using. In this view, the person is essentially a cyborg – the voluntary action they performed was to send a signal through the inorganic, wired parts of their body, which caused the cursor to click the ‘upload’ button.

Dr McCay says this raises further questions around what it is to be human.

“One might wonder where defendants end and the devices they use, or even cyberspace, begins.”