How ethical is Artificial Intelligence?


Aadhar Sharma, Deepak Singh, Raamesh Gowri Raghavan, and Sukant Khurana

(Please note that a small portion of the contents of this article is an improved or updated version of what Aadhar and Sukant have previously written on the topic.)

A woman finds out about her pregnancy shortly after the death of her partner. Unable to cope with the grief, she orders a robotic replica of him. It’s strikingly witty, like his former self; it even outperforms him in some ways. However, it repeatedly fails to understand her intimate emotions, and their life together becomes a mess. That is the synopsis of “Be Right Back”, an episode of Black Mirror, a sci-fi TV series that grotesquely portrays humanity’s obsession with technology. Most episodes are set in an alternative present, so some depictions speak directly to the contemporary issues we face with Artificial Intelligence (AI). It’s imperative to analyze the impacts of AI and how it may shape our future.


Holistically, AI is defined as algorithms and models targeted at thinking, perception, and action. While the idea has fascinated humanity for centuries, AI had its real beginning in the summer of 1956, at Dartmouth College in the USA. A team of ten eminent scientists, sharing a passion for the modeling of biological intelligence, convened for a seminal six-week summer project and laid the foundation of AI. The effort was based on the conjecture that “intelligence can in principle be so precisely described that a machine can be made to simulate it”.

“In the six decades since this brash beginning, the field of artificial intelligence has been through periods of hype and high expectations alternating with periods of setbacks and disappointments” — Nick Bostrom, Superintelligence (2014)

While the original purpose of AI was to understand intelligence, the field has since come a long way, generating both admiration and apprehension. Those who promote it believe that it will ‘aid human effort’ (like any other technology); others think that it walks into a minefield of moral and ethical obligations that are not fully understood.

Unemployment and AI:

A 2013 study from the University of Oxford estimated that about 47% of US jobs could be automated within the next two decades; in developing countries, these estimates can go beyond 70%. Another study reports that more than 60% of people in the UK feel AI will steal their jobs. Unemployment is undoubtedly the most cited drawback of AI. The ever-increasing need for accuracy, efficiency, and economy compels industries to automate processes and terminate traditional jobs. In many cases, this is met by vehement protests and strikes that governments fail to allay.

Every technology that revolutionizes society on a global scale also becomes a target of accusations. For instance, the industrial revolution arguably transformed humanity for the better, but societies initially blamed it for creating unemployment. Wendell Wallach, an ethicist and scholar at Yale University, comments:

“It’s a long-running concern — the Luddite concern going back 200 years ago — that each new form of technology will rob more jobs than it creates. Up to now, we haven’t seen that. Each new technology eventually creates more secondary jobs than it eliminates.”

The effects of mechanization (automation) on the economy are well studied, and the general perception that AI steals jobs and drives up unemployment appears to be exaggerated. At present, AI can only automate physically intensive and monotonous jobs; it performs poorly at tasks that require high levels of cognitive skill. One may ponder, as Marvin Minsky once pointed out, why no artificially intelligent robots were successfully deployed to contain the Fukushima nuclear disaster.

Evidently, future jobs will be more complex and cognitively challenging; this may in turn augment education and lead to a more skilled workforce. However, the effects of this transition on blue-collar and white-collar jobs are concerning. Ravi Shankar Prasad, India’s Electronics & IT minister, has found this a recurring theme in talks with companies on the roadmap for a $1 trillion digital Indian economy; he promises to create more jobs by training personnel in AI.

Master of one trade, Jack of none:

Hans Berliner, a computer science professor at Carnegie Mellon University (CMU), wrote BKG 9.8, a program that played and defeated Luigi Villa, the world backgammon champion, in 1979. As Berliner put it, “This was the first time that a world champion of a recognized intellectual activity had been defeated by a man-created entity in a head-to-head test of skill”.

IBM’s Deep Blue vs. Kasparov (Wikimedia Commons); AlphaGo vs. Lee Sedol (ceva-dsp)

Garry Kasparov, a pre-eminent chess grandmaster, lost to IBM’s Deep Blue in 1997. Lee Sedol, a champion of the ancient Chinese board game Go, lost to Google’s AlphaGo in 2016. And in 2017, AlphaGo Zero (a successor to AlphaGo) mastered Go by playing against itself. Beating world champions at games such as chess and Go, which society widely perceives to epitomize human intellect, doesn’t necessarily mean that AI has become smarter than humans.

At present, the expertise of AI is limited to a handful of domains; this limits its generality. For example, a cow may give milk, but it can’t learn to fly an airplane. In the cow’s defense, one may argue that it’s not engineered by nature to fly an airplane. And that’s exactly why AlphaGo (designed to play Go) can’t beat even a toddler at chess until it learns to play chess (but then it would be called AlphaChess). Applied AI may outperform a domain expert, but it is very domain-specific. Domain specificity raises many ethical concerns: if the AI is perceived (a case of mis-attribution) to be more general than it is, it may lead not only to dissatisfaction (which is somewhat acceptable) but also to health problems [Article 2-AI&Society]. Also, if a domain-specific AI gets deployed to do a substantially more general job (a case of overselling), it’s highly probable that it will fail miserably.

What could possibly go wrong with AI?

AI can fail: in this case, Google Translate completely changes the meaning of a sentence.

Neural networks are a class of algorithms inspired by the way the brain works. Industries use them extensively in products such as translators and chatbots. Although neural networks have set benchmarks in performance, they are not hard to fool (look at the image above). Some systems, such as AlphaGo, may outperform humans, but a good majority of them either underperform or fail occasionally; there is always a chance of error. Intelligent personal assistants like Siri claim to improve one’s productivity but abysmally fail to understand the simplest of statements.
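To make “not hard to fool” concrete, here is a minimal, hypothetical sketch (not from the article) of an adversarial example: a tiny, almost invisible change to an image that can flip a trained classifier’s prediction. It assumes PyTorch and a pretrained torchvision model; the input here is random noise standing in for a real photo.

import torch
import torch.nn.functional as F
import torchvision.models as models

# Any pretrained image classifier will do for the illustration.
# (Newer torchvision versions prefer the weights= argument.)
model = models.resnet18(pretrained=True).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real photo
label = model(image).argmax(dim=1)  # whatever the model currently predicts

# Gradient of the loss with respect to the input pixels.
loss = F.cross_entropy(model(image), label)
loss.backward()

# Fast Gradient Sign Method: nudge each pixel slightly in the direction
# that increases the loss. Epsilon is small enough to be imperceptible.
epsilon = 0.01
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print(model(image).argmax(dim=1), model(adversarial).argmax(dim=1))

With a real photograph and a slightly larger epsilon, the second prediction routinely differs from the first even though the two images look identical to a human; that brittleness is exactly the kind of failure the paragraph above alludes to.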

Another set of concerns emerges from autonomous applications of AI, such as driverless vehicles. What would an autonomous AI do if it’s stuck in a catch-22, a situation where any action will lead to casualties? For illustration, consider the trolley problem.

The trolley problem: Should you pull the lever to divert the runaway trolley onto the side track? (Wikimedia Commons)

Would the AI in control flip the lever? If yes, why? If no, why not?

What would be the values of such an AI? And since the definition of human values is fuzzy, how is one to model and program them? Philosophical and technical questions such as these obligate researchers to collaborate and develop strategies for a future where humanity co-exists with smart machines.
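To see why “programming values” is so slippery, here is a deliberately crude, hypothetical sketch (not from the article) of the trolley case as code. Whatever rule the controller follows, some ethical commitment ends up hard-coded, and a different school of ethics would demand a different function.

from dataclasses import dataclass

@dataclass
class Outcome:
    people_harmed: int
    requires_intervention: bool  # True if the controller must actively pull the lever

def choose(stay: Outcome, divert: Outcome) -> Outcome:
    # A purely utilitarian rule: minimize harm, regardless of who acts.
    # A deontological rule might instead never return an outcome that
    # requires active intervention, no matter how the harm counts compare.
    return min((stay, divert), key=lambda o: o.people_harmed)

print(choose(Outcome(people_harmed=5, requires_intervention=False),
             Outcome(people_harmed=1, requires_intervention=True)))

The single comparison inside choose() is the entire “value system”; changing it changes who lives, which is precisely why such decisions cannot be treated as ordinary engineering details.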

Experts argue over whether or not it is too early to talk about ethics in AI. Stephen Hawking created a ruckus in the AI community with his 2014 interview with the BBC, in which he warned:

“The development of full artificial intelligence could spell the end of the human race”.

However, his contemporary Michio Kaku is not so gloomy about the prospects; he doesn’t expect a technological singularity (a time when AI becomes smarter than humans) anytime soon. Kaku is a proponent of the “off-switch” theory: if AI behaves undesirably, it can be stopped by switching off the power. The disagreement mainly stems from the split within AI (science vs. engineering). Founders of the field have said that somewhere in the ’80s its focus turned to engineering. As neural networks gained prominence, they were employed for too many applications; this led to overselling, disappointment, and a stagnation of quality AI research.

Deep learning (modern neural networks) collects heavy criticism because, unlike approaches such as Bayesian learning (algorithms based on probability theory), no one quite knows how its algorithms reach their answers. Some scientists argue that if the algorithms are not interpretable, i.e., we don’t know how they are doing something, then they are not an appropriate model for the problem. Even if these algorithms produce a solution that appears promising, it is often not the best solution for the problem. Noam Chomsky has consistently derided such models; he comments,

“Can machines think is like asking, can submarines swim? If you want to call that swimming, that’s fine. Do planes fly? In English, they do, in Hebrew, they glide.”

“Really knowing semantics [of the problem] is a prerequisite for anything to be called intelligence,” says Barbara Partee, a fellow scientist who shares his view. The argument is that AI research should focus on understanding how intelligent systems work and then apply that understanding to a problem. Blind computation not only drifts away from the original question but also exacerbates the situation.

The industrialization of AI has immense potential not only for the overall technological advancement of society but also for unethical practices. Though Snowden’s revelations helped elucidate the threat that data acquisition poses to privacy, we continue to bleed data. Big Data (massive datasets that aid decision making) has become an excuse for unsolicited data collection and trade.

AI can also be used to turn society into a test-bed for social experiments without informed consent. In 2014, Facebook published a study in which it manipulated users’ news feeds to test whether their emotions could be influenced. Academics slammed the research for the methods used in the experiment. Google’s job-recommender AI was branded sexist for showing ads for prestigious jobs mostly to male candidates, and its Photos app tagged dark-skinned people of African origin as gorillas!

Human computation (crowdsourcing) gets exploited for cryptojacking: millions of websites mine cryptocurrencies (such as Bitcoin) by smartly analyzing and controlling (load-balancing) the host machines, leaving users with slow computers and a degraded experience. Internet service providers monitor internet traffic to throttle speeds by detecting the type of payload (an application of AI), an ethical issue that is being contested by the net-neutrality movement.

Autonomous weapons have also started to emerge from defense research. However, a dumb technology without a ‘human in the loop’ can lead to cataclysm. Consider the 1983 Soviet nuclear false alarm incident. On September 26, a Soviet military early-warning system reported that the US had launched multiple missiles. Stanislav Petrov, the officer in charge, judged this to be a false alarm and disobeyed the protocols for a retaliatory strike. An investigation confirmed that the warning system had indeed malfunctioned. Human intervention saved millions of lives; the consequences of retaliation (or even a security breach) could have been Ragnarok.

While some issues in AI demand immediate attention, others are more futuristic. Researchers often debate and warn about a technological singularity: the moment when AI transcends human intelligence. The idea of the singularity brings uneasiness because it launches prospects into the world of science fiction (think of Frankenstein).

Would a super-intelligence dominate humanity?

Would such an AI be malevolent or empathetic to humans?

It is a tremendous technological challenge just to engineer an AGI (artificial general intelligence, or super-intelligence) in the first place; aligning it with human values and providing it with sentience appears to be an impossible task. For that reason, current attempts to create an AGI require deep introspection. Even though concerns about AGI may seem far-fetched, we will have to address them sooner or later.

Transhumanism, however, raises more contemporary issues. It aims to enhance the human condition, overcome biological limitations, and experience a post-human self by using present and future technologies: genetic engineering, AI, and brain-computer interfaces, among others. The prospects have immense potential, but if subjected to malicious motives they may lead to the eradication of all intelligence, a claim made by Nick Bostrom.

Ethical Brainstorming:

Industries and governments have recently started to recognize the importance of ethics in AI.

The ‘Future of Life Institute’ (FLI), co-founded by physicist Max Tegmark, leads the international movement to promote the engagement of ethics in AI research. Its scientific advisory board includes industrialist Elon Musk and eminent scientists such as Stephen Hawking, Stuart Russell, and Christof Koch (and, curiously, Morgan Freeman). FLI has been instrumental in organizing conferences on ethical AI and securing funds for relevant research projects. Musk recently donated $10 million to the foundation with a focus on safe-AI research.

FLI organized the Asilomar AI conference in 2017; it became a confluence of researchers, philosophers, and industrialists. Ray Kurzweil proposed that guidelines should be published for AI, just as they once were for biotechnology; this worked well for biotech and should work for AI too. Shane Legg, co-founder of DeepMind, pitched the need to understand the internal mechanisms and representations of neural networks and machine-learning algorithms.

The Partnership on AI is another organization like FLI. Founded by the industrial giants Amazon, Facebook, Google, DeepMind, Microsoft, and IBM, it is a consortium that brings together organizations, academic institutions, and companies to govern and invest effort in creating AI that contributes to solving humanity’s most significant challenges.

DeepMind, the UK-based research company that developed AlphaGo, was acquired by Google for £400m in January 2014; the terms of the acquisition ensured that it would not be obliged to carry out any unethical research ordered by Google. Recently, it launched an internal team to scrutinize the societal impacts of the technologies it develops and to tackle “key ethical challenges” such as privacy, transparency, governance, and morality, among others. Companies such as Microsoft and Google have started to follow the trend internally as well, establishing ethics teams headed by top scientists and philosophers to ensure safe development. A team representative from Google said,

“We do not want to stifle the development of innovation but providing a closely monitored structure to these systems is crucially important to us and society.”

Openness has a great impact on software: it creates better programmers, detects bugs, and improves the quality of code. It also makes the source code, science, data, safety techniques, capabilities, and goals public. Hence, the global desirability of openness in AI is becoming apparent. OpenAI is a non-profit AI research company (funded, among others, by Elon Musk) that aims to promote and develop friendly AI and distribute its benefits to humanity as a whole.

Academic institutions are the best place to introduce AI ethics, since an early orientation towards ethical development would solve a chunk of the problem beforehand. Universities have started to add philosophy, humanities, and ethical studies to the AI curriculum. CMU, one of the world leaders in machine-learning research, received a donation of $10 million from the K&L Gates Foundation to promote ethical research in AI and robotics. Prof. Subra Suresh, then president of CMU, said,

“It is not just technology that will determine how this century unfolds. Our future will also be influenced strongly by how humans interact with technology, how we foresee and respond to the unintended consequences of our work, and how we ensure that technology is used to benefit humanity, individually and as a society.”

Learning to use AI:

AI, the Internet of Things, and other technologies have transformed humanity as we know it. By collecting data and tracking our activities, websites know more about us than we may know about ourselves. Our algorithms make art but don’t understand it. Vehicles come factory-equipped with autopilots that may not understand the consequences of a decision.

AI has raised many issues in the past and will continue to do so; the aim is to play it smart by taking precautions. A great deal of introspection, science, and capital is being invested in the ethics, economics, and philosophy of AI. Researchers and industrialists have begun to recognize the appropriate role of AI in society. However, government policies haven’t yet attained a substantial stature.

Climate change and a growing population have drastically reduced the forest cover of the Sundarbans. As a result, humans are forced to kill the endangered Bengal tiger upon encounter. A single tiger’s territory may span 60–100 square kilometers; clashes are bound to happen. This is a problem that matters; the reason no one cares is that the economic returns from solving it are too low.

AI is the harbinger of immense potential and boundless possibilities, but to realize them fully, governments should graduate from the shallow motive of merely countering unemployment. We must strategically deploy AI to target more significant issues such as climate change, healthcare, education, security, and research. Addressing the most significant questions in the universe will reveal the real potential of AI. It’s not AI per se that deserves respect or hate in the first place; instead, it’s the will and actions of humanity that determine the course of our future.

— —

About:

Aadhar Sharma was a researcher working with Dr. Sukant Khurana’s group, focusing on the ethics of Artificial Intelligence. Dr. Deepak Singh, a Ph.D. from Michigan, is now a postdoc at the Physical Research Laboratory, Ahmedabad, India, and is collaborating with Dr. Khurana on the ethics of AI and science popularization.

Raamesh Gowri Raghavan is collaborating with Dr. Sukant Khurana on various projects, ranging from popular writing on AI to the influence of technology on art and mental-health awareness.

Mr. Raamesh Gowri Raghavan is an award-winning poet, a well-known advertising professional, a historian, and a researcher exploring the interface of science and art. He is also championing a massive anti-depression and suicide-prevention effort with Dr. Khurana and Farooq Ali Khan.

You can learn more about Raamesh at:

https://sites.google.com/view/raameshgowriraghavan/home and https://www.linkedin.com/in/raameshgowriraghavan/?ppe=1

Dr. Sukant Khurana runs an academic research lab and several tech companies. He is also a known artist, author, and speaker. You can learn more about Sukant at www.brainnart.com or www.dataisnotjustdata.com, and if you wish to work on biomedical research, neuroscience, sustainable development, artificial intelligence, or data science projects for public good, you can contact him at skgroup.iiserk@gmail.com or reach out to him on LinkedIn: https://www.linkedin.com/in/sukant-khurana-755a2343/.

Here are two small documentaries on Sukant and a TEDx video on his citizen science effort.
