Brainwave-reading devices: privacy, security, discrimination, and manipulation
Whether humans will soon be telepathic, and what implications brain-hacking devices will have for the human species, is a question engaging scientists across many fields.
Who will control human thoughts? Will personal ideas become a corporate resource, or will connecting the brain to different types of devices enhance individuals and human society as a whole? How close are we to a new definition of a human being?
Among the risks highlighted by some ethics experts is the prospect of thoughts or moods being accessed by big corporations, alongside the larger question of whether such devices fundamentally change what it means to be human.
Today, computers are used to perform or facilitate an enormous variety of tasks and activities of daily living, including, but not restricted to, banking, trading, scheduling and organizing events, learning, entertaining, gaming and communicating.
Computer usage is not restricted to the social and economic domain. Several activities considered inherent to our psychological and biological dimensions are now supported or facilitated by computing. Examples include the use of GPS systems for geolocation and spatial navigation, the use of wearables to monitor bodily processes such as calorie intake, heart rate, and weight loss, and the use of personal computers for cognitive tasks such as arithmetic calculation, writing, and memory.
At the same time, the benefits that new brain-hacking technologies bring to patients whom mainstream medicine cannot greatly help may allay fears of losing our humanity. Brain-computer interfacing technologies are used as assistive technologies that let patients, as well as healthy subjects, control devices through brain activity alone. Yet the risks associated with the misuse of these technologies remain largely unexplored.
Over the past few months, Facebook and Elon Musk's Neuralink have announced that they're building tech to read your mind — literally.
Mark Zuckerberg's company is funding research on brain-computer interfaces that can pick up thoughts directly from your neurons and translate them into words. The researchers say they've already built an algorithm that can decode words from brain activity in real time.
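At its core, this kind of decoding is a classification problem: map a vector of neural features onto a vocabulary of candidate words. The real systems use far richer models (deep networks over invasive recordings); the sketch below is purely illustrative, using a nearest-centroid classifier over synthetic "neural feature" vectors. Every name, dimension, and number here is an assumption, not a description of Facebook's actual algorithm.

```python
import numpy as np

# Toy illustration only: pretend each word evokes a characteristic
# 8-dimensional neural feature vector, and decode by finding the
# nearest centroid. All values are synthetic.
rng = np.random.default_rng(0)
WORDS = ["yes", "no", "stop"]
centroids = {w: rng.normal(size=8) for w in WORDS}

def decode(features: np.ndarray) -> str:
    """Return the word whose centroid is closest to the feature vector."""
    return min(WORDS, key=lambda w: np.linalg.norm(features - centroids[w]))

# Simulate a noisy recording of the word "no" and decode it.
sample = centroids["no"] + rng.normal(scale=0.1, size=8)
print(decode(sample))
```

In a real-time system this `decode` step would run continuously on a stream of recorded activity, which is precisely what makes the privacy stakes so high: the same pipeline that restores speech could, in principle, expose it.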
And Musk’s company has created flexible “threads” that can be implanted into a brain and could one day allow you to control your smartphone or computer with just your thoughts. Musk wants to start testing in humans by the end of next year.
Other companies such as Kernel, Emotiv, and NeuroSky are also working on brain tech. They say they're building it for ethical purposes, like helping people with paralysis control their devices.
As neural computation underlies cognition, behavior, and our self-determination as persons, a careful analysis of the emerging risks of malicious brain-hacking is paramount, and ethical safeguards against these risks should be considered early in design and regulation. This contribution aims to raise awareness of the emerging risk of malicious brain-hacking and takes a first step toward developing an ethical and legal reflection on those risks.
The number and quality of human activities enabled or mediated by computers are increasing rapidly. Emerging trends in information and computer technology such as big data, ubiquitous computing, and the “Internet of Things” are accelerating the expansion of computer use in our societies.
Brain-computer interface technology includes systems that “read” neural activity to decode what it’s already saying, often with the help of AI-processing software, and systems that “write” to the brain, giving it new inputs to change how it’s functioning. Some systems do both.
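The read/decode/write loop described above can be sketched as a simple data flow. None of the functions below correspond to a real device API; they are hypothetical placeholders showing how a bidirectional system chains the "read", interpretation, and "write" stages together.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BCISystem:
    """Hypothetical sketch of a bidirectional brain-computer interface."""
    read: Callable[[], list[float]]       # "read": sample raw neural activity
    decode: Callable[[list[float]], str]  # interpret it, often via ML models
    write: Callable[[str], None]          # "write": deliver new input back

    def step(self) -> str:
        signal = self.read()
        intent = self.decode(signal)
        self.write(intent)  # a read-and-write system closes the loop
        return intent

# Toy instantiation: threshold a one-channel "signal" into two states.
log = []
bci = BCISystem(
    read=lambda: [0.9],
    decode=lambda s: "move" if s[0] > 0.5 else "rest",
    write=log.append,
)
bci.step()
```

A read-only system stops after `decode`; a write-capable system feeds its output back into the brain, which is where concerns about manipulation enter the picture.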
The problems of technology misuse and security of biological information are particularly critical in the context of neurotechnology, as this type of technology applies (either directly or indirectly) to a very important organ in the human body, the brain. The brain not only contributes significantly to life-maintaining processes (such as nutrition and respiration) but also to faculties such as consciousness, perception, thinking, judgment, memory, and language, and is of great importance to our behavior and our self-identification as sentient beings or persons. Therefore, misusing neural devices for cybercriminal purposes may not only threaten the physical security of the users but also influence their behavior and alter their self-identification as persons.
Another threat to psychological continuity comes from the nascent field of neuromarketing, where advertisers try to figure out how the brain makes purchasing decisions and how to nudge those decisions along. The nudges operate below the level of conscious awareness, so these noninvasive neural interventions can happen without us even knowing it. One day a neuromarketing company is testing a subliminal technique; the next, you might find yourself preferring product A over product B without quite being sure why.
Several countries are already pondering how to handle "neurorights". The OECD is expected this year to release a new set of principles for regulating the use of brain data. In Chile, two bills that would make brain data protection a human right will come before parliament for a vote in November.
Some neuroethicists argue that the potential for misuse of these technologies is so great that we need revamped human rights laws, a new "jurisprudence of the mind", to protect us. The technologies have the potential to interfere with rights so basic that we may not even think of them as rights, such as our ability to determine where our selves end and machines begin. Our current laws are not equipped to address this.
Lawmakers move slowly, and if we wait for devices like Facebook’s or Neuralink’s to hit the market, it could already be too late.