12 Jan 2020

Facebook Is Building Tech to Read Your Mind ~ The Ethical Implications Are Staggering

Technology Is a Massive Threat to Humanity
By Sigal Samuel: Facebook wants to create a device that can read your mind — literally. It’s funding research on brain-machine interfaces that can pick up thoughts directly from your neurons and translate them into words, the company announced in a blog post last week.
The short-term goal is to help patients with paralysis, by decoding their brain signals and allowing them to “speak” their thoughts without ever having to move a muscle. That could be a real public good, significantly improving quality of life for millions of people. In the US alone, 5.4 million people currently live with paralysis.
But Facebook’s long-term goal is to reach a much, much wider audience: The aim, it says, is to give all of us the ability to control digital devices — from keyboards to augmented reality glasses — using the power of thought alone. To do that, the company will need access to our brain data. Which, of course, raises some ethical concerns.
The Facebook-financed research is taking place at the University of California, San Francisco. Scientists there published the results of a study in a recent Nature Communications paper. In a first for the field, they say, they’ve built an algorithm that’s able to decode words from brain activity and translate them into text on a computer screen in real time.
The human participants in their study — three volunteers with epilepsy — already had electrodes surgically implanted on the surface of their brains as part of preparation for neurosurgery to treat their seizures. They listened to straightforward questions (like “How is your room currently?”) and spoke their answers out loud. The algorithm, just by reading their brain activity, decoded the answers with accuracy rates as high as 61 percent.

That’s pretty impressive, but so far the algorithm can only recognize words from a small vocabulary (like “cold,” “hot,” and “fine”). The scientists are aiming to grow its lexicon over time. Importantly, Facebook also wants to develop a way of decoding speech that doesn’t require surgery. The ideal would be a noninvasive wearable headset, though that’s harder to build.
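To get a feel for what decoding a small vocabulary from brain activity means in practice, here is a deliberately simplified sketch. It is not the UCSF team’s actual model: it just trains an off-the-shelf classifier to pick one of three answer words from simulated neural features. Every detail in it (electrode counts, time bins, signal strengths) is an illustrative assumption.

```python
# Hypothetical sketch of a small-vocabulary brain-to-text decoder.
# NOT the UCSF/Facebook model; all sizes and signals are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
WORDS = ["cold", "hot", "fine"]                     # tiny vocabulary, as in the study
N_TRIALS, N_ELECTRODES, N_TIMEBINS = 300, 16, 20    # assumed, illustrative sizes

# Simulate band-power features per electrode and time bin, with a weak
# word-specific signature added so the classifier has something to learn.
labels = rng.integers(0, len(WORDS), size=N_TRIALS)
signatures = rng.normal(0, 1, size=(len(WORDS), N_ELECTRODES, N_TIMEBINS))
features = rng.normal(0, 1, size=(N_TRIALS, N_ELECTRODES, N_TIMEBINS))
features += 0.5 * signatures[labels]
X = features.reshape(N_TRIALS, -1)                  # one flat feature vector per trial

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"decoded-word accuracy: {clf.score(X_test, y_test):.0%} on {len(WORDS)} words")
```

The real system works from intracranial recordings and far richer models, but the basic shape is the same: extract features from neural signals, then map them onto a limited set of words.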

In the meantime, we have a chance to consider the ethical implications of this neurotechnology — and it’s crucial to do that, especially since Facebook isn’t the only one exploring brain-computer interfaces (BCIs).

Various scientists, the US military, and companies like Kernel and Paradromics are also working in this space. Elon Musk’s company Neuralink recently revealed that it’s developing flexible “threads” that can be implanted into a brain and could one day allow you to control your smartphone or computer with just your thoughts. Musk said he hopes to start testing in humans by the end of next year.

It’s necessary to discuss the ethical implications of these neurotechnologies now, while they’re still in development. They have the potential to interfere with rights that are so basic that you may not even think of them as rights: your mental privacy, say, or your ability to determine where your self ends and a machine begins. Neuroethicists like Marcello Ienca have argued that we may need new legal protections to safeguard these rights from emerging tech. But lawmakers move slowly, and if we wait for devices like Facebook’s or Neuralink’s to hit the market, it might already be too late to enshrine new rights for the neurotechnology age.

Brain-computer interfaces’ fast slide from sci-fi to reality


If you haven’t heard about BCIs before, it can be hard to believe this is now real life, not something out of a Neal Stephenson or William Gibson novel. But this research really is happening. And over the course of the past dozen years, it’s begun to actually change people’s lives.

BCI tech includes systems that “read” neural activity to decode what the brain is already saying, often with the help of AI processing software, and systems that “write” to the brain, giving it new inputs to actually change how it’s functioning. Some researchers are interested in developing bidirectional interfaces that both read and write.

There are different reasons why you might be interested in developing this tech. On one end of the spectrum are useful, quotidian applications like translating paralyzed people’s thoughts into speech or helping them operate prosthetic limbs. As The Verge explained, early success in the field — which focused not on speech but on movement — dates back to 2006:

The first person with spinal cord paralysis to receive a brain implant that allowed him to control a computer cursor was Matthew Nagle. In 2006, Nagle played Pong using only his mind; the basic movement required took him only four days to master, he told The New York Times. Since then, paralyzed people with brain implants have also brought objects into focus and moved robotic arms in labs, as part of scientific research. The system Nagle and others have used is called BrainGate and was developed initially at Brown University.

Some futurists have decidedly more fantastical motivations. Musk has said he ultimately aims “to achieve a symbiosis with artificial intelligence.” His goal is to develop a technology that enables humans “merging with AI” so that we won’t be “left behind” as AI systems become more and more advanced.

For now, the general invasiveness of BCI — implanting electrodes in or on the brain — drastically limits the commercial potential of this tech. But companies like Facebook are researching noninvasive methods, like a system using near-infrared light that could detect blood-flow changes in the brain while staying outside of it.
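For a rough sense of how a near-infrared readout could work, here is a hypothetical sketch of the standard calculation used in fNIRS-style systems: changes in detected light at two wavelengths are converted into relative changes in oxygenated and deoxygenated hemoglobin via the modified Beer-Lambert law. The article does not describe Facebook’s actual hardware, and the numbers below are illustrative assumptions only.

```python
# Hypothetical sketch of a near-infrared (fNIRS-style) blood-flow readout:
# convert changes in detected light intensity at two wavelengths into relative
# changes in oxy- and deoxy-hemoglobin via the modified Beer-Lambert law.
# All coefficients are illustrative assumptions, not specs of any real device.
import numpy as np

def hemoglobin_changes(I, I0, d_cm=3.0, dpf=6.0):
    """I, I0: detected vs. baseline light intensity at [~760 nm, ~850 nm]."""
    delta_od = -np.log10(np.asarray(I) / np.asarray(I0))   # optical density change
    # Rows: wavelengths; columns: extinction coefficients for [HbO2, HbR]
    # (order-of-magnitude illustrative values, per cm per mM).
    eps = np.array([[1.5, 3.8],    # ~760 nm: deoxy-hemoglobin absorbs more
                    [2.5, 1.8]])   # ~850 nm: oxy-hemoglobin absorbs more
    # delta_od = (eps @ delta_conc) * d * dpf  ->  solve for delta_conc
    delta_conc = np.linalg.solve(eps * d_cm * dpf, delta_od)
    return {"HbO2_mM": delta_conc[0], "HbR_mM": delta_conc[1]}

# Example: a small drop in detected light at both wavelengths during neural activity
print(hemoglobin_changes(I=[0.98, 0.97], I0=[1.0, 1.0]))
```

The appeal of this approach is that light passes through the skull, so no surgery is needed; the trade-off is that blood flow is a slower, blurrier proxy for neural activity than implanted electrodes.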

The ethical risks of brain-reading technology


As with many cutting-edge innovations, this one stands to raise ethical quandaries we’ve never even considered before. The scientists involved in the Facebook project acknowledged that they cannot, on their own, foresee or fix all the ethical issues associated with this neurotechnology.

“What we can do is recognize when the technology has advanced beyond what people know is possible, and make sure that information is delivered back to the community,” Mark Chevillet, who helms the project, says in the company blog post. “Neuroethical design is one of our program’s key pillars — we want to be transparent about what we’re working on so that people can tell us their concerns about this technology.”

In that spirit, here are five concerns about the tech Facebook is helping to develop.

1. Privacy: Let’s start with the obvious. Our brains are perhaps the final privacy frontier. They’re the seat of our personal identity and our most intimate thoughts. If those precious three pounds of goo in our craniums aren’t ours to control, what is?

Facebook took care to note that all brain data in the study will stay onsite at the university. And Chevillet told MIT Tech Review, “We take privacy very seriously.” Nevertheless, given that Facebook has been embroiled in a series of privacy scandals — of which Cambridge Analytica is only the most glaring — the public may not take such assurances to heart.

“Facebook is already great at peering into your brain without any need for electrodes or fMRI or anything. They know much of your cognitive profile just from how you use the internet,” Roland Nadler, a neuroethicist at the University of British Columbia, told me. “This is why I worry about this research program in the hands of Facebook in particular. It’s being able to couple that dataset with actual in vivo brain data that has the potential for any number of unforeseen consequences.”

What if Facebook were to, say, sell our brain data to companies for the purposes of advertising? Advertisers are already working on figuring out how the brain makes purchasing decisions and how to nudge those decisions along. That field, called neuromarketing, is still in its infancy. But Nadler warned that a powerful tech giant like Facebook could catalyze its growth to the point of “influencing purchaser behavior in potentially scary ways.”

2. Algorithmic accountability: One of the major problems with algorithmic decision-making systems is that as they grow in sophistication, they can become black boxes. The specifics of how they arrive at their decisions can get so complex that they’re opaque, even to their creators.

If that’s the case with the algorithm used by Facebook’s project, the consequences could be serious. If nobody can explain how or why the machine erroneously decoded your thought as X, and X turns out to be something very bad (“I intend to murder so-and-so”), then that lack of transparency means you will have a hard time demanding redress for the harm that befalls you as a result of the misread thought.

“There’s a risk that we come to trust what the machine says as gospel, without wondering if it goes wrong or how we even know if it goes wrong,” Nadler said. “The opacity of the machine is a real worry.”

3. Changing norms: Another big risk is that this neurotechnology might normalize a culture of mind-reading, causing us to give up — so slowly and subtly we almost don’t notice it’s happening — our expectations of mental privacy.

One day, our interiority could become a thing of the past, with the technology decoding not just the thoughts we’d like it to transcribe for our own convenience but also the thoughts we want to keep private. That could include everything we keep hidden in our inner sanctum, from sexual fantasies to political dissent.

“A lot of my concerns about Facebook accumulating this data are surveillance and civil liberties concerns. You’d worry about the way that Facebook would be helping build a surveillance state,” Nadler said, adding that being able to peer into the brain would be game-changing for law enforcement.

If you find it hard to imagine that a project incubated by Facebook could dramatically change norms around surveillance and law enforcement, just think for a minute about facial recognition technology. Facebook rolled out that tech years ago in an innocent context: tagging your friends in photos you posted on the social-media network. But now the tech is used for policing and surveillance, disproportionately harming people of color. And other giants like Apple, Amazon, and Microsoft are all mired in controversy over it.

4. Existential alienation: Rubbing out the distinction between mind and machine also comes with more philosophical risks, like the risk that we might feel alienated from ourselves. The more you meld with a machine, the more you might grow confused about your own agency — where you end and the device begins.

A recent article in Nature noted that the predictive nature of some BCI algorithms raises this concern:

Such algorithms learn from previous data and guide users towards decisions on the basis of what they have done in the past. But if an algorithm constantly suggests a user’s next word or action, and the user merely approves that option, the authorship of a message or movement will become ambiguous. “At some point,” [neuroethicist Philipp] Kellmeyer says, “you have these very strange situations of shared or hybrid agency.” Part of the decision comes from the user, and part comes from the algorithm of the machine.
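To see how quickly authorship can blur, consider a toy sketch, purely hypothetical and not any real BCI’s software: a predictor trained only on a user’s past messages proposes each next word, and the user does nothing but approve.

```python
# Hypothetical sketch of the "shared agency" worry described above: a predictor
# trained on a user's past messages keeps proposing the next word, and if the
# user simply accepts every proposal, most of the resulting message is authored
# by the model rather than the user. Purely illustrative.
from collections import Counter, defaultdict

past_messages = [
    "i am feeling fine today",
    "i am going to the lab today",
    "i am going home now",
]

# Learn a simple bigram model from the user's history.
bigrams = defaultdict(Counter)
for msg in past_messages:
    words = msg.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def suggest(prev_word):
    """Propose the continuation seen most often in the user's past."""
    options = bigrams.get(prev_word)
    return options.most_common(1)[0][0] if options else None

# The user types one word, then only approves the machine's suggestions.
message = ["i"]
while (proposal := suggest(message[-1])) is not None and len(message) < 6:
    message.append(proposal)        # "approve" without composing anything new

print(" ".join(message))            # prints "i am going to the lab"
```

After the first word, every word in the output came from the model’s statistics about the user’s past, which is exactly the ambiguity Kellmeyer is pointing to.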

The article also gives the example of an epileptic woman, identified only as Patient 6, who’d been given a BCI to warn her when one of her seizures was coming on so she could take medication to ward it off. She came not only to rely on the device, but to feel such a radical symbiosis with it that, she said, “It became me.” Then the company that implanted the device in her brain went bankrupt and she was forced to have it removed. She cried, saying, “I lost myself.”

On the other side of the spectrum, another epileptic patient who’d had the same device implanted in his brain became depressed because he felt it compromised his autonomy. It “made me feel I had no control,” he said.

In either case, the risk of the device was that it fundamentally shifted the patient’s sense of self. A BCI that reads and writes our thoughts could, if it becomes sophisticated enough, do something similar.

5. Oversight: One big risk — so big, in fact, that it could be considered a meta-risk that inflects all the rest — is the lack of existing regulation in this space. It’ll take time for politicians and lawmakers to catch up to the new realities that brain-reading tech makes possible. For now, tech giants can swoop into this legal vacuum with little or no oversight as to how they can gather, store, and monetize our brain data.

“Facebook has a track record of playing fast and loose even with already-established regulatory rules,” Nadler said. “And with an entity of their size, it’s difficult to imagine what kind of regulatory oversight would be truly effective. What would an agency action even do? There’s a certain level of impunity that they already operate with because there’s no fine big enough to sting.”

After all, when the Federal Trade Commission fined Facebook a record-breaking $5 billion for privacy violations last month, the company didn’t fold. It didn’t even blink. In fact, its stock shot up.


Image: Brain-computer interfaces like the mindBEAGLE system shown here can help people with paralysis communicate. Facebook wants to take the technology to the next level. AP 
