Computers powered by AI can decipher your brain activity yet there is limited discussion of AI safety or ethics

As many people already know (or maybe some do not?), there is a technology called fMRI, which stands for functional magnetic resonance imaging. New AI can allow computers to begin to decipher the meaning behind the brain activity captured by imaging technology like fMRI. Given that AI will likely continue to get better, more effective, cheaper, and more ubiquitous, and that this technological development is unlikely to stop, now would be a good time to open up discussions on the ethical implications of a world where smart devices (the Internet of Things) can read our brains, or where the government, for national security reasons, decides that brains need to be scanned much as bodies have been scanned at airports since 9/11.

https://www.youtube.com/watch?v=VvsfHDpkHOU
https://youtu.be/nvB9hAarzw4
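
To make the idea concrete, here is a minimal sketch (in Python, on synthetic data I made up for illustration) of how "brain decoding" typically works: a classifier is trained to predict what a person was perceiving from their pattern of voxel activations. Real pipelines, such as those built with nilearn on actual fMRI volumes, involve preprocessing, masking, and cross-validation, but the core machine learning step looks roughly like this:

```python
# Minimal sketch of fMRI "brain decoding" on synthetic data.
# All data here is invented; this only illustrates the technique.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_trials, n_voxels = 200, 500          # 200 scans, 500 voxels each
labels = rng.integers(0, 2, n_trials)  # 0 = "face" trial, 1 = "place" trial

# Synthetic voxel patterns: a small subset of voxels carries signal
# about the stimulus category, the rest is pure noise.
signal = np.outer(labels, np.r_[np.ones(20), np.zeros(n_voxels - 20)])
X = signal + rng.normal(scale=2.0, size=(n_trials, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

# A linear classifier learns which voxel pattern goes with which stimulus.
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"decoding accuracy: {decoder.score(X_test, y_test):.2f}")
```

The point is that nothing exotic is required: once brain activity is just data, standard machine learning applies, and the same methods will keep improving as AI improves.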

An interesting paper discusses the ethics of using brain scanning and deciphering technology for national security, and its potential implications. The paper, titled "Neuroscience, Ethics, and National Security: The State of the Art," offers the abstract below:

> National security organizations in the United States, including the armed services and the intelligence community, have developed a close relationship with the scientific establishment. The latest technology often fuels warfighting and counter-intelligence capacities, providing the tactical advantages thought necessary to maintain geopolitical dominance and national security. Neuroscience has emerged as a prominent focus within this milieu, annually receiving hundreds of millions of Department of Defense dollars. Its role in national security operations raises ethical issues that need to be addressed to ensure the pragmatic synthesis of ethical accountability and national security.

National security organizations are not known for taking ethics seriously, so the fact that there is any discussion of ethics at all is an improvement.

Elon Musk wants to ban killer robots
--

Now if we connect the dots, we can take into account that the robots of the future will more than likely be able to read our minds. The AI of the future will likely also be able to take into account what people are thinking, since AI is simply the "smart part" of the robot. The fact is, AI and the ability to decipher the brain are the two most disruptive technological developments in all of human history. The unfortunate problem we face is that our social institutions haven't even considered the implications of either of these disruptions, and even most AI researchers haven't fully considered the risks that go along with weaponization.

It is my opinion that "AI experts" are typically biased in favor of their own toys: if you're developing something, you are not necessarily the best person to conduct the risk assessment on it. It is true that AI experts may have a deeper understanding of the current state of AI technology, just as cryptocurrency experts or cybersecurity experts have a deep understanding of the state of the art in their fields, but this is separate from conducting a risk assessment. A risk assessment should take input from experts in the form of opinions on what is possible and on timelines, but that doesn't mean that, for example, Kurzweil or Ng have done an actual risk assessment of AI; neither of these AI experts is a cybersecurity specialist. In fact, most AI experts who are underestimating the risk of AI have no background at all in security. It must be noted that security is a conservative, risk-based field where the whole idea is to minimize or manage risk, while AI researchers are focused not on security or risk but on advancing the science to the state of the art.

So I would like to see more input from qualified cybersecurity experts in the AI safety debate. Only by having security experts weigh in can we have any real idea what the risks and dangers are. At the same time, we have to consider how various technologies are converging, as is the case with IoT, BCI, and AI. The Internet of Things can result in a world of ubiquitous smart devices as AI continues to improve, while BCI, depending on how that technology evolves, could result in devices which can actually scan our brains.
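
As a toy illustration of the division of labor I am arguing for, the sketch below (with scenario names and numbers entirely invented) shows how expert opinions on what is possible could feed a simple likelihood-times-impact score, which is the standard starting point of a security risk assessment:

```python
# Toy risk assessment: domain experts supply capability estimates,
# the assessor combines them with impact into a ranked risk score.
# Scenarios, probabilities, and impacts below are invented examples.
from dataclasses import dataclass

@dataclass
class ExpertEstimate:
    scenario: str
    probability: float  # expert's estimate that this becomes feasible
    impact: int         # assessed severity on a 1-5 scale

estimates = [
    ExpertEstimate("weaponized autonomous AI", 0.4, 5),
    ExpertEstimate("ubiquitous covert brain scanning", 0.2, 5),
    ExpertEstimate("IoT devices leaking BCI data", 0.6, 3),
]

# Classic risk formula: risk = likelihood x impact.
for e in sorted(estimates, key=lambda e: e.probability * e.impact, reverse=True):
    print(f"{e.scenario}: risk score {e.probability * e.impact:.1f}")
```

The expert supplies the probabilities; the security specialist owns the scoring, ranking, and mitigation. Those are two different jobs.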

What would a ubiquitous, autonomous world of brain-scanning devices look like?
---

Honestly, I have no idea, but a lot of things would be disrupted. First, the entire justice system is built on the fact that a jury cannot read the brain and intentions of a suspect. In a world where juries can look inside the thoughts of every criminal, what would crime look like, and would we still need prisons? We would be able to determine who is lying, who genuinely feels guilty, and who genuinely made a mistake versus who acted on purpose, and we would know the motivations of everyone. When all motivations are known, prison becomes very archaic, yet there has been no debate at all about what to do with the justice system in a world with no privacy and no secrets.

How should justice work in a world where our brains are open books?
---

https://youtu.be/fEfXY995JVY
https://www.youtube.com/watch?v=tRpM0teSvlk

This is a question rather than an answer, because no single individual can answer it alone. In a world where all of our brains are open books, where all of our motivations are known to AI, whether to a company like Google or to the government, what purpose does a justice system serve? Should prisons be abolished? And if we are going to have that world, is Internet access indeed a human right? A lot of questions should be addressed. What do you think justice should look like, assuming we get the world some people hope for: a fully transparent, open society where no one can lie and all motivations are known, either to each other or to the AI?



References
----

Pallarés-Dominguez, D., & González Esteban, E. (2016). The Ethical Implications of Considering Neurolaw as a New Power. Ethics & Behavior, 26(3), 252-266.

Tennison, M. N., & Moreno, J. D. (2012). Neuroscience, Ethics, and National Security: The State of the Art. PLoS Biology, 10(3), e1001289.

Willmott, C. (2016). Use of Genetic and Neuroscientific Evidence in Criminal Cases: A Brief History of “Neurolaw”. In Biological Determinism, Free Will and Moral Responsibility (pp. 41-63). Springer International Publishing.