Do we value the convenience and power of Artificial Intelligence over the sanctity of our human values?

Our hi-tech business community is being challenged by the game changing Artificial Intelligence conundrum. 

Value Generated 

Each of us on our smartphones and computers interact and use Artifiical Iintelligence as much as 30-40 times daily; we are not even aware of its integration in our lives. Google search is dependent on AI; Siri, Google Now, ECHO, and Cortana are our helpful AI “friends,” and Facebook targets ads and news after its Artificial Intelligence reads your posts.

A recent global study documents that senior executives see the value of AI.  They acknowledge its benefits – 79% said it provides better data analysis and 70+% said it make organizations more creative and helps make better management decisions.

In terms of investment, there is a global fascination with opportunities for growth. The UAE has launched the AI Regional Strategy with the aim “make UAE the first in the world in AI investment to create a new vital market with high economic value.”

Progress of Artificial Intelligence Super intelligence 

Will AI super intelligence far surpass human cognitive abilities?

The progress of AI will not be seen in robots taking over the world per se, but rather in the subtle infiltration into our daily lives and consciousness. With products getting smarter and better connected via AI, it is easy to project that we will grow more dependent and “trusting” of the information delivered by our machines and personal devices. “AI will affect how we “trust.”

In the gaming world, AI has proven its vast potential … from Watson winning the game show Jeopardy to in 2016 to an AI powered robot beating the world’s most recognized South Korean master in Go (in fact, Google’s Alpha Go was 60-0 against the top International Go Players). Recently, a robot even learned to “bluff,” outsmarting elite poker players.

In August 2017, AI took a jump by entering the ecommerce world. Facebook Artificial Intelligence Research Lab artificial intelligence and humanityreported the testing of a computer bot that is negotiating with consumers and online retailers. This bot learned to get better at making deals as often as humans. To accomplish this, the bot learned to lie. This trait was not programmed: the bots “learned” to emulate the behavior of a consumer that has the “desire” to win the negotiation at all costs. Such a trait could get ugly, unless future bots are programmed with a moral compass.

The results of this research … Those participating could not differentiate between the real person and the bot! This breakthrough in artificial intelligence sends very bright flares regarding the potential ethical conflicts of scaling such software and triggers a concern on what it means for our future.

The power and effect of such software on our current values and way of life is threatening and scary, to say the least. The threat of these bots was summarized by the Newsweek reporter of this story:

Put all of these negotiation-bot attributes together and you get a potential monster: a bot that can cut deals with no empathy for people, says whatever it takes to get what it wants, hacks language so no one is sure what it’s communicating and can’t be distinguished from a human being? If we’re not careful, a bot like that could rule the world.”

This begs the issue of bots developing aspirations. If they can learn these very ‘human’ qualities of basic deception, what else will they learn? And why are they learning only the negative qualities of humanity? 

Who will design and who will manage the implementation of such software?

What are the “values” and intentions that will be programmed into the Artificial Intelligence bot?

Will humanity be devoid of human-compatible values?

Elon Musk has been sounding the Artificial Intelligence alarms for years.  He reported that his investment in Deep Mind, an AI firm bought by Google, was not focused on the financial return but rather keeping a “wary eye on the arc of AI.” He suggested that Google Executives could have perfectly good intentions, but AI goes far beyond the motivations of Silicon Valley execs.  They could ”produce something evil by accident—including possibly, a ”fleet of AI enhanced robots capable of destroying the world.“  

Mark Zuckerberg countered that the fear of AI is ‘far-fetched, much less likely than disasters due to disease or violence…Choose hope over fear. “He warned, “if we slow down progress in deference to unfounded concerns, we stand in the way of real gains.”

AI brings up more questions that we have surfaced, like… 

  • What are the values and the moral compass of the entrepreneurs who would create these future technologies and driver businesses?
  • How do executives integrate a moral compass into the mindset as a criterion for sustainability of the planet?

Ethics Compromised… what do experts say?

How can Elon Musk, Professor Stephen Hawking and Bill Gates all raise the same warnings about AI and yet the global alarm is faintly heard?

Professor Hawking said primitive AI has proven to be useful, but he fears “the development of full AI could spell the end of human race .. It could take oven on its own and re-design itself at an even increasing rate.”

Elon Musk has launched a billion dollar crusade to focus attention,  “AI is our biggest existential threat … the most serious threat to the survival of mankind.”

In 2015, Gates wrote “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern.”

In 2017, Gates moderated his positionThe so-called control problem that Elon is worried about isn’t something that people should feel is imminent…We shouldn’t panic about it.”

Satya Nadella, Microsoft CEO says “The core AI principle that guides us at this stage is: How do we bet on humans and enhance their capability? There are still a lot of design decisions that get made, even in a self-learning system, that humans can be accountable for… There’s a lot I think we can do to shape our own future instead of thinking, this is just going to happen to us. Control is a choice. We should try to keep that control.”

In his recent book, WTF? What’s the Future and Why It’s Up to Us, technology visionary Tim O’Reilly exhorts “businesses to DO MORE with technology rather than just using it to cut costs and enrich their shareholders… Companies must commit themselves to continuously educating and empowering people with the tools of the future, What’s the future? It’s up to us.”

The Facebook revelation is the first of many ethical challenges that will surface related to AI applications influencing our lifestyles and consumer behaviors. Experts are clear as AI grows in sophistication and power: there is a potential threat to the future of mankind. 

Who will be the entrepreneurs and coders managing and guiding this change?

Will the NextGen determine our fate?

Here is a quick picture of the executives, entrepreneurs and coders designing, managing, guiding and scaling the “near future period” of exponentially expanding AI applications. NextGen (aka Millennials and Gen Z) will have major responsibility for addressing this ethical threat. 

 

  • By 2020 Millennials (born 1981-2000) will comprise 50% of the global workforce. If we add in the Gen Z (born after 2000) the total will be 70% of the global labor force.
  • Both Millennials and Gen Z have drawn a line in the sand when it comes to social responsibility, sustainability and social impact causes.
  • 60% of Gen Z want to have an impact on the world, compared to just 39% of Millennials. Social entrepreneurship is one of the most popular career choices for this generation.
  • More than 9-in-10 Millennials would switch brands to one associated with a cause.
  • 76% of Generation Z is concerned about human impact on the planet and believe they can operate as a change agent.
  • Nielsen Global Survey on Corporate Social Responsibility across 60 countries
    • 55% global online consumers will pay more for products and services committed to positive social and environmental impact.
    • In Asia-Pacific and Middle East/Africa regions. Millennials favored sustainability actions 3X more than older generations.
  • NextGen is also predisposed toward entrepreneurship; 49% of Millennials hope to start a business within the next 3 years.
  • Across 38 nations, 75% have a positive attitude towards entrepreneurship; Millennials were 80%.

 

These surveys paint a picture of NextGen as more attracted to become entrepreneurs that generate social impact. They have very positive attitudes to supporting sustainable actions and purchasing and working for social impact-driven businesses.

Proposed solutions?

With this research as a backdrop, here are four potential approaches to address this threat. (I’m sure there are more)

  1. Lobby technology  companies to adopt the role of standard bearer of ethical values.
  2. Gain global cooperation for a mandate to limit the use of such artificial intelligence applications that may prove to have questionable long-term effects.
  3. Adopt Isaac Asimov’s “Three laws of Robotics.”
  4. Build a network of NextGen (Millennials, Gen Z) entrepreneurs and coders to adopt a values-driven, ethical approach which promotes sustainable, win-win outcomes for all stakeholders.

The initial solution suggests that technology giants, businesses and startups will adopt a high standard of ethics for all projects. We can imagine and intend for technology companies to integrate ethical values. But based on the track record of business and other major institutions, 82+% Millennials mistrust the press, Wall Street, advertising and Congress.

This first suggestion will require that these same companies commit to the UN Sustainable Development Goals (SDGs) and values that are reflected in a Quadruple Bottom Line, where sustainable outcomes are optimized for People, Planet, Profit and Prosperity of communities.

Executives must speak to the ethical concerns and not dismiss or obfuscate them. Eric Schmidt, CEO of Google’s parent company, Alphabet, dismissed the dystopic threat of AI with a cynical comment, “Robots are invented, Countries arm them, an evil dictator turns robots on humans, and all humans are killed. Sounds like a movie to me.” Software designers must gain agreement on what is ethical and consistent with values-based approach. Based on Facebook’s and Twitter’s  recent response to the investigation of the Russian meddling in the US election, these companies are choosing profit over ethical and social impact!

The second suggestion is a global mandate (e.g., through a super agency). Currently there is no  US public policy on AI and the associated technologies are largely unregulated. This contrasts the model of US federal agencies that oversee drones, automated financial trading and self-driving cars.  This super AI agency and mandate is supported by Musk and Sam Altman, President of Silicon Valley startup, Y Combinator, through their billion-dollar nonprofit, OpenAI. They are spearheading the process to craft tech’s own ‘Constitution’. Altman has spoken to hundreds of tech leaders and investors about creating a set of core values that all tech companies can get behind. The document does not have a title or known release date.  It is challenging them to mobilize the countries and their businesses to agree on such a proposal.  Furthermore, it is difficult to determine what is questionable ethical behavior and how to monitor and enforce it.

The third proposal was presented as literary device some 53 years ago by Isaac Asimov. His Three Laws of Robotics — a set of rules designed to ensure friendly robot behavior — can be regarded as a ready-made prescription for avoiding the robopocalypse. These Three Laws  could be adopted by companies and a Super AI agency:

  1. “A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Later, he added a fourth, or zeroth law, that preceded the others in terms of priority:

  1. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

Finally, the last suggestion is focused on building a “Network of Trust” of social impact ventures. Through collaboration of values-based Baby Boomers and Gen X experts and impact investors and NextGen coders and entrepreneurs, a mindset change and “values-based culture”  will be generated  necessary to build this “Network of Trust.” To build “trust in AI,” we need to have detailed discussions at all levels and generations (not only among the AI specialists). We need to focus on what AI means to mankind, how it is already affecting our lives, and how it will affect our lives in the future.

Legacy International, a nonprofit with four decades of building capacity and leadership skills in 100 + countries, has launched Global Transformation Corps to address this need. It is working to build a global network of multi-generational venture teams with the mindset and moral compass to design sustainable impact, values-based enterprises.

The uncontrolled application of AI can threaten the way we conduct our lives and interact with businesses.  These four suggestions are not mutually exclusive.

It’s a very exciting time to be alive, because in the next few decades we are either going to head toward self-destruction or toward human descendants colonizing the universe.” (Sam Altman)

It is up to US!

We invite you to join an open conversation on the Artificial Intelligence Conundrum.