If there is one thing about artificial intelligence (AI) that most people agree on, it is that AI, in its many ‘forms’ (cobots, sentiment analysis applications, autonomous decision-making in smart buildings, intelligence at the edge of the Internet of Things (IoT), or any other solution), should serve human, business and societal goals in one way or another.
As artificial intelligence grows in its capabilities and its impact on people’s lives, businesses must move to “raise” their AIs to act as responsible, productive members of society (Accenture, Citizen AI research)
That is of course easier said than done; it is an area of AI research for a reason. There are fears, differing views on what societies need, technologies that can be used by any industry and any person (regardless of their activities), questions about future applications enabled by rapidly evolving ‘forms’ of AI such as deep learning, the prospect of regulation as lawmakers start looking at AI, debates about which goals are ethical, and ample pioneers, researchers and thinkers exploring machine ethics, computational ethics and even ethics towards robots.
One of them is Nell Watson, a speaker at the AI for business event. Nell is, among other roles, an adjunct within the Artificial Intelligence and Robotics track at Singularity University, where she mainly lectures on machine intelligence, the relationship between people and robots, and the future of society.
Machine ethics and the ethics of a beneficial AI
One of Nell’s many fields of expertise and favorite topics is machine ethics. Another one is the human factor in machine intelligence.
How do we make the most of the capabilities of machine intelligence while ‘ensuring that the spirit and wisdom of the human isn’t lost in a sea of cold algorithms?’, as she describes it on her site.
In the summer of 2017 we had a long chat with Nell on these topics and on her EthicsNet (previously OpenEth) initiative. In the scope of the influence of ethics within trade and economic systems, EthicsNet is trying to map the space of these ethical machine and computing topics, for instance to make machines safer and to let machines behave in more pro-social ways or, in short, ‘to create moral and ethical machines’.
At the same time, these ethical aspects can be given an economic dimension, Nell said: for instance, preferential trading with people who hold similar ethical values, or even a moral trade, enforced with smart contracts (blockchain) and a reward.
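To make the idea concrete, here is a toy, plain-Python sketch of such a ‘moral trade’: an escrow releases a trade reward only when the two parties’ declared ethical values overlap enough. All names, values and thresholds are hypothetical illustrations, not part of any real smart-contract platform.

```python
# Toy simulation of a "moral trade": the reward is released only when
# the two parties' declared ethical values overlap sufficiently.
# On a real blockchain this logic would live in a smart contract;
# this sketch only illustrates the mechanism.

class MoralTradeEscrow:
    def __init__(self, reward: float, min_overlap: float = 0.5):
        self.reward = reward          # amount held in escrow
        self.min_overlap = min_overlap  # required ethical-value similarity

    @staticmethod
    def value_overlap(values_a: set, values_b: set) -> float:
        """Jaccard similarity of the two parties' declared values."""
        if not values_a and not values_b:
            return 0.0
        return len(values_a & values_b) / len(values_a | values_b)

    def settle(self, values_a: set, values_b: set) -> float:
        """Release the reward only if the ethical overlap is high enough."""
        if self.value_overlap(values_a, values_b) >= self.min_overlap:
            return self.reward
        return 0.0

escrow = MoralTradeEscrow(reward=100.0)
payout = escrow.settle({"fair-trade", "low-carbon", "privacy"},
                       {"fair-trade", "low-carbon", "open-source"})
# two of four distinct values shared -> overlap 0.5 -> reward released
```

Real moral-trade contracts would of course need verifiable, not self-declared, value commitments; the sketch only shows how a reward can be made conditional on shared ethics.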
Machines are intended to help liberate us, let’s help ensure they take our society in the right direction (Nell Watson)
According to Nell, we are going to see ethics and economics linked in another layer: a fourth layer of the Web.
The ethics debate is one of several linked with the idea that AI and its applications should serve (hu)mankind. As another speaker at the event, Microsoft BeLux CTO Myriam Broeders, said, recalling the words of CEO Satya Nadella: “The most critical next step in our pursuit of AI is to agree on an ethical and empathic framework for its design.” Food for more AI research, on top of the ethical dimension in AI research we look at now.
Ethics in AI research and research into the malicious use of artificial intelligence
In 2015, Elon Musk, one of the most vocal entrepreneurs warning about the dangers of AI, put 10 million USD into the Future of Life Institute, as we described in an article on artificial intelligence fears.
That institute had just published its famous AI Open Letter, calling for ‘robust and beneficial AI’ as a priority in AI research: an ethical commitment indeed. We signed it at the time, as did various speakers at the AI for Business Summit, such as Robovision CEO Jonathan Berte, a pioneer in combining deep learning, machine vision and robotics.
Musk also co-founded OpenAI at the end of 2015; in February 2018 it published the Malicious AI report (together with the Future of Humanity Institute at the University of Oxford, the Centre for the Study of Existential Risk at the University of Cambridge, the Center for a New American Security and the Electronic Frontier Foundation).
Later in February, Musk left the OpenAI board but keeps funding and advising the organization. According to a blog post, Musk left the board because Tesla continues to become more focused on AI, which could create a potential conflict of interest in the future.
Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI (The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation)
The same blog post also mentions new donors and announces that OpenAI is about to articulate the principles that will guide its next stage, as well as the policy areas in which the organization wants changes to ensure that, indeed, AI benefits all of humanity. In case you haven’t read the Malicious AI report yet (“The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”), doing so will probably give you an idea of those policy areas (check it out here in PDF).
Obviously, Elon Musk is far from the only prominent technology and science figure to have warned about the dangers of AI.
Evolutions in AI: applied deep learning benefits in practice – and risks
Back to the AI for business event and a look at just one example of how deep learning delivers value today, but also of how it can lead to risks.
While Nell Watson speaks about the Transparency–Privacy paradox in a machine-intelligence-driven world at the event, Jonathan Berte will cover several sessions on deep learning in practice. We mention this for a reason within this scope of ethics and nurturing AI.
Robovision is, among other things, a leader in AI-based software for the automated rooting of plants using robots. The combination of AI-based image processing and deep learning even enables the robots to root different types of plants, thanks to their self-learning capabilities. It’s an excellent example of artificial intelligence in business.
Simply said: the robot sees a plant, and the robot doesn’t need reprogramming when it needs to root other types of plants. Imagine the possibilities.
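That ‘retrain, don’t reprogram’ idea can be illustrated with a deliberately tiny sketch. Real systems such as Robovision’s use deep learning on images; the toy perceptron below only shows the principle that supporting a new plant type means supplying new example data, not writing new code. The 2-D feature vectors (say, leaf width and stem length) are invented for illustration.

```python
# A minimal perceptron: the same training code learns any (linearly
# separable) distinction between plant types from labelled examples.
# New plant type? Feed it new data; the code itself never changes.

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Learn weights for a binary classifier; labels are +1 / -1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != y:  # misclassified: nudge the decision boundary
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def classify(model, x1, x2):
    """Return +1 or -1 for a new feature vector."""
    w, b = model
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1

# "Plant type A" (-1) vs "plant type B" (+1): same code, just data.
model = train_perceptron([(1.0, 1.0), (1.2, 0.9), (3.0, 3.2), (3.1, 2.9)],
                         [-1, -1, 1, 1])
```

A production system would replace this with a deep neural network trained on camera images, but the workflow is the same: collect labelled examples of the new plant, retrain, deploy.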
While such advanced applications of AI in agriculture probably raise fewer questions, the future of artificial intelligence in work, business and society does. People like Nell are among those diving deep into it. And deep learning is one of the essential aspects. Why? We just said it: imagine the possibilities.
The most critical next step in our pursuit of AI is to agree on an ethical and empathic framework for its design (Satya Nadella)
With deep learning (which, although change is coming, isn’t fully understood yet) and machine vision, one can root plants faster and more cheaply than ever because, simply put, the self-learning curve is so short. With deep learning and pattern recognition, one can also make weapons that are more efficient, faster and cheaper than ever.
On top of the exponential growth of computing power, among other factors, improved machine learning algorithms (of which deep learning is part), especially in the sphere of deep neural networks, are recognized as major contributors to progress in AI in recent years, as the Malicious AI research report also reminds us.
There is a difference between AI and AI – there is a difference between benefits and benefits
The question remains: how do we make sure AI is used for the benefit of mankind? And what about the fact that what is a benefit for one might not be a benefit for the other at all, or might even be worse?
There is a reason why some governments are starting to draft ethics rules in the scope of, among other things, self-driving cars (yes, they have killed, as you know and as mentioned in our article on the Internet of Trusted Things; people like Bruce Schneier do warn about what happens when digital really goes physical, beyond the mere connection). And there are reasons why several organizations long ago started campaigning against killer robots, with governments starting to take decisions here as well (think back to the debates, which still exist, on drones in warfare).
Let’s leave the overhyped, single-entity AI that can do anything to science fiction writers (Maarten Hertoghe, Consultant & Team Lead Data Science at delaware)
What are benefits? What ethical laws guarantee that the benefit of the one doesn’t override the benefit of the other, or of the many? Who decides? The reality is that many lawmakers don’t even see the tip of the AI iceberg, but that won’t last much longer (and there are exceptions).
We’re not going to dive deeper into all the debates here and now. What we do want to touch upon is that dimension of AI (and robots and so on) being at the service of society in the broadest sense: “Purposeful AI”, as Infosys calls it. It’s clear that there are so many different cultures, purposes and contexts in which AI is leveraged that it’s hard to generalize or summarize.
AI is and remains an umbrella term for very different things. You can’t exactly compare a chatbot with robots that root plants, applications in the treatment of cancer, AI for handling inbound unstructured communications in a contact center, or what Musk is doing with SpaceX and Tesla, to cite just a few examples.
Moreover, signing an open letter regarding research priorities for robust and beneficial artificial intelligence, stating that “the progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI” (all signatories so far here), or an open letter from AI and robotics researchers on autonomous weapons, doesn’t mean that all is OK, of course.
Between human intent and reality stand cultural differences, views of what is beneficial and ethical, and moral judgements and actions that are often called rational but are really emotional, to name a few.
And we haven’t even touched upon the notion of moral machines. As stated on the site of Nell’s EthicsNet: “We believe that the future of a thriving humanity in an artificially intelligent world is predicated on the ability to create moral and ethical machines”.
AI research: Citizen AI – raising AI as if “it” were a child to become a responsible member of business and society
Anyway, what is clear is that a lot of work is being done in all these areas by many parties, from industry bodies and governments to researchers, NGOs and so forth. Some of it looks at the shorter term, while other work is decades ahead in its thinking.
The short term brings us to the here and now, and to the February 2018 “Citizen AI – Raising AI to Benefit Business and Society” work of Accenture in the scope of its Technology Vision 2018.
Here are two quotes from the page of Accenture on “Citizen AI – Raising AI to Benefit Business and Society”:
- Raising responsible AI means addressing many of the same challenges faced in human education and growth.
- AI is more than a program. It’s becoming a citizen that must be raised responsibly.
Everybody is starting to use AI as a fundamental way of how they are actually building their applications (Michael Biltz, MD Accenture Technology Vision)
Hold those thoughts: raising AI as a valuable member of society, among other things in its combination with machines and ample other technologies and applications. And also: how businesses should raise AI to be a good citizen, as explained in the Accenture video at the bottom of this post.
In the video, AI is also described as needing nurturing, as you would nurture a child. That nurturing, moreover, is a journey, as it is with a child, with in this case the ‘child’ being at the service of mankind, business and so forth. It’s part of the essence of ‘raising AI responsibly’.
Of course these are not the only aspects covered in the video and in the Accenture report, which you can download here. Other topics regarding raising AI relate to data and to avoiding bias as much as possible, since AI is of course fed by data.
Accenture Technology Vision 2018 – report PDF
Also mentioned is the need to stop looking at AI as one single thing, as said before and as we’ll keep saying: different forms of AI hardly overlap at all, so away with that monolithic view. That’s also what another speaker at the mentioned event, Maarten Hertoghe, Consultant & Team Lead Data Science at delaware, says, as you can read here.
Pointing to Gartner’s words that “Although using AI correctly has the power to disrupt, it is not the holy grail where a general AI is able to learn any task that a human can learn and do. Instead, organizations should integrate highly scoped machine learning solutions that target a specific task, with algorithms chosen that are optimized for that task”, he urges us to leave the overhyped, single-entity AI that can do anything to science fiction writers.
Raising artificial intelligence and robots – parenting and autonomy
However, let’s take a step back and think about raising a child. Do we raise children to serve mankind, or whatever purpose in whatever scope? Children are indeed raised to find a place in society and contribute, among other things by working, for instance to serve customers.
Most of all, we try to raise children to become their true selves and find their own place in society autonomously: empowered, above all, with the human and emotional skills to find their way, whatever comes their way.
In our chat, Nell Watson also referred to that idea of nurturing and raising a child, albeit in a somewhat different context: people’s relationship with robots, on top of robots serving people.
Regarding the latter, watch the presentation from Accenture embedded below: Citizen AI as raising AI to benefit business and society.
Now, if AI (and machines) need to be raised to become responsible (also think about the autonomy aspect) representatives of the business, contributing members of society, moral and ethical, empathic, conforming to the norms of the society in which they operate, and trusted (to mix a few quotes from the presentation, from our talk with Nell, from some speakers at the event and from the Future of Life Institute), then we have ample very human, by definition culturally different (norms, ethics) and often even highly individual and emotional terms, such as trust, empathy and morality, that deserve further exploration. And with responsibility we even enter other domains, such as liability, to name just one.
One doesn’t need a license to raise a child. One can have very different approaches and values in raising a child.
One can also raise a child in such a way that the child will suffer for as long as it lives, sometimes even making one wonder whether a license now and then wouldn’t be a good idea. Of course that’s impossible: there are no moral background checks, no universal way to define who would be a good parent, and a lot of people simply trying to do their best, as they too were children once and have their limits.
The real shift will be when computers think in ways we can’t even begin to understand (Thomas Koulopoulos)
If AI is a citizen and a child that must be raised responsibly to serve mankind and society, then who decides over, or worse “owns”, the ethics and capacities, and who controls that? Or is AI a child that becomes autonomous, just as we try to raise our children so they become autonomous?
Or perhaps it’s as Nell Watson said at a TEDx event: that we falsely believe we are separate, that there is something intrinsic in each of us that appears to be personal.
As people such as Ray Kurzweil and Joël de Rosnay (more here) predicted, we are perhaps indeed evolving towards a ‘new lifeform’, in and beyond the symbiosis of man and machine, or a global superorganism. And perhaps, as Nell wrote, “machines are intended to help liberate us” and, in a sense, the child needs the parent less than the parent needs the child called AI.
One thing is for sure: AI is not a thing (we said it again) and it’s definitely not a biological child. So if we want to raise “it” (and “it” is not really an it), maybe we should further question which trust, norms, benefits and so forth AI needs to be nurtured with for the further future, as the main food in raising it today really is data.
Companies must use care when selecting taxonomies and training data to actively minimize bias (Accenture Citizen AI – teaching AI to make unbiased decisions)
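One small, practical way to act on that advice is to inspect how labels are distributed in the training data before training, and to flag classes that are badly under-represented. The sketch below is a hypothetical illustration; the labels and the 20% threshold are invented, and real bias auditing goes far beyond counting labels.

```python
# Check a training set's label distribution and flag classes whose
# share falls below a chosen threshold; a skewed distribution is one
# common (though by no means the only) source of biased AI decisions.

from collections import Counter

def label_distribution(labels):
    """Return each label's share of the dataset."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

def flag_underrepresented(labels, min_share=0.2):
    """List labels whose share falls below the chosen threshold."""
    dist = label_distribution(labels)
    return sorted(label for label, share in dist.items() if share < min_share)

# Invented example: 80% of training decisions are "approve".
training_labels = ["approve"] * 80 + ["review"] * 15 + ["reject"] * 5
flagged = flag_underrepresented(training_labels)
# flagged -> ["reject", "review"]: collect more examples or rebalance
```

Counting labels catches only the crudest imbalance; auditing for bias across sensitive attributes, label quality and taxonomy design is the harder part of the advice.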
Unless, of course, we drop the notion of intelligence, emotions and being human, and overcome the fear that, as we once wrote in an article on the future and the Gen Z effect of Thomas Koulopoulos, “the real shift will be when computers think in ways we can’t even begin to understand”. That shift may even lead to machines that decide, ‘feel’ and are ‘responsible’ or even ‘empathic’ in ways we can’t even begin to understand.
Or can we, to conclude with a quote from the homepage of EthicsNet, start from the “principle that ethics and morality can be measurable, definable and computable, across cultures, time, and geography, using techniques similar to The Scientific Method”? We don’t know. Do you?