
Artificial Intelligence: How Fast Should We Go?

Published May 24, 2018; updated September 7, 2018

Artificial intelligence is difficult to define, primarily because we do not really understand human intelligence. One way to arrive at a definition is to consider the two words separately. The Cambridge Dictionary defines artificial as “made by people, often as a copy of something natural”, and intelligence as “the ability to learn, understand, and make judgments or have opinions that are based on reason”. It is worth keeping in mind that something artificial is generally a copy of something natural. As a branch of computer science, AI can therefore be defined as “the study of how to produce machines that have some of the qualities that the human mind has, such as the ability to understand language, recognize pictures, solve problems, and learn”.[1] Other working definitions include the automation of activities associated with human thinking, the art of creating machines that perform such activities, and the study of brain and mental functions through computational models.

Artificial intelligence is an umbrella term covering specific technologies such as machine learning and cognitive computing. Duncan Anderson, CTO for Watson Europe at IBM, describes natural language and image recognition, reasoning, and machine learning as the three dimensions of cognitive computing. Among these technologies, deep learning, a subset of machine learning, has become an especially hot topic. This is partly the result of renewed exploration of neural network models (Geoff Hinton, a professor at the University of Toronto, has recently explored neural networks inspired by the varying dynamics of the synapses in our brains) and partly the result of shedding old assumptions about what the brain can and cannot do, which, according to MIT neurotechnologist Adam Marblestone, has made scientists more receptive to ideas stemming from AI.
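To make the idea of a neural network less abstract, here is a minimal illustrative sketch (not from the article, and far simpler than the deep networks Hinton and others study): a single artificial neuron, the building block that deep learning stacks into many layers, trained with gradient descent to learn the logical OR function.

```python
import math

def sigmoid(z):
    """Squash a number into the range (0, 1), like a neuron 'firing rate'."""
    return 1.0 / (1.0 + math.exp(-z))

# Training data: two binary inputs and the OR truth table as the target.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # one weight per input
b = 0.0         # bias
lr = 0.5        # learning rate

# Repeatedly nudge the weights to reduce the prediction error.
for _ in range(2000):
    for (x1, x2), y in data:
        pred = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = pred - y  # gradient of the cross-entropy loss w.r.t. the pre-activation
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # matches the OR truth table: [0, 1, 1, 1]
```

A “deep” network is, in essence, many such neurons arranged in layers, with the same kind of gradient-based learning propagated backward through them.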

IBM’s Deep Blue, the first computer chess program to defeat world chess champion Garry Kasparov, in 1997, and Google DeepMind’s AlphaGo, the first computer Go program to beat the world’s top-ranked Go player, Ke Jie, in 2017, are perhaps the two AI achievements people hear about most through the media. However, these successes rest on narrow learning for a specific task. There is more to AI than this.

Starting in the early 2000s, AI re-emerged commercially for the second time, after its first boom in the 1950s. The primary reasons for this re-emergence are the development of deep learning methods and algorithms, advances in computing power, and new techniques that can handle massive data sets, also known as Big Data. Together, the technological developments of the mid-20th and early 21st centuries led to growing connections between AI and cognitive science, neuroscience, and developmental psychology. AI is now helping experts across industries diagnose and solve problems faster. Scholars like Tom Griffiths, a professor of psychology and computer science, believe that AI will exceed human capabilities in certain areas by considering far more information when making a decision. AI will arguably be the world’s most disruptive technology.

So, are we ready to integrate it into our lives?

First, there seems to be agreement on the potential issues that might stem from AI. We need to address concerns such as the security of autonomous machines (like vehicles), data privacy, stock manipulation, cyberbullying, and terrorist threats. At the same time, more extreme scenarios, such as AI posing an existential threat to humanity or robots taking over the world, are the sort of news that, when circulated online, only distracts us from the potential benefits of AI in almost every sector: health and life sciences, natural resources, advanced manufacturing, bioinformatics, and more.

Second, AI is a market projected to reach $70 billion by 2020, and the potential is enormous. However, safety and security concerns need to be adequately addressed; that is the only way to earn public trust. People will trust AI as long as they know it is safe to use. Are there specific divisions or departments dedicated to the safety and security of AI technologies at the giant tech companies, or within the relevant government departments in Canada?

Third is the legal and ethical aspect of AI. Is our legal system ready to deal with the issues that might emerge from AI? Who is responsible if a person is physically or psychologically harmed by an AI system: the AI itself, the developers and engineers, or a third party who taught it bad behaviors? Should an AI system be subject to the full gamut of laws that apply to its human operator? We need to develop state-of-the-art policies and regulations.

These are shared concerns, and they demand shared responsibility. The way forward is to facilitate collaboration across federal and provincial government departments, higher education institutions, not-for-profit organizations, and industry, both nationally and internationally. The problem is that advancing AI has become such a competition that it seems difficult to slow companies or nations down, to keep them from pushing AI technologies forward dangerously without addressing safety, security, legal, and ethical issues. We may need a speed limit.

References: Cambridge Dictionary, Nvidia, The New York Times, PwC

Further Reading:

Artificial Intelligence: Teaching Machines to Think Like People (2017)

Artificial intelligence in Canada: where do we stand? (2015)

What is AI? (2016)

Advances in artificial intelligence: 27th Canadian Conference on Artificial Intelligence, Canadian AI 2014, Montreal, QC, Canada, May 6-9, 2014, proceedings