
Artificial intelligence

Article curated by Holly Godwin

Leaps and bounds are being taken in the field of artificial intelligence, but will they benefit humanity? Are we capable of designing superintelligent beings and if so, is our curiosity with furthering technology going to overtake our understanding of the consequences that could result from superintelligence?

Could artificial superintelligence become a reality? If it does, then will we be able to control it? Image credit: © Gerd Leonhard (CC BY-SA 2.0) via Flickr

The Artificial Intelligence Revolution

Nowadays, we use machines to assist us with our everyday lives, from kettles to laptops, but technology is advancing at such a rate that machines may soon be more capable than we are at all our day-to-day tasks. Household robots, once the domain of science fiction, are already becoming a reality.

Some argue that this is an exciting advancement for human society, while others argue that it may have many negative consequences that we need to consider. Either way, it seems unlikely that artificial intelligence research will not be pursued further, or that our concerns over AI will outweigh our scientific drive to advance.

What if machines are created that can exceed our capabilities? Having machines independently carry out menial tasks such as cooking and cleaning is one thing: a robot in every household would revolutionise society, but we would not have been surpassed. Superintelligence is defined as ‘an intellect that is more capable than the best human brains in practically every field, including scientific creativity, general wisdom and social skills’. So we must ask: is superintelligence possible, and is this definition accurate?


For now, however, there are still discrepancies between the abilities of machines and humans. While machines can be more efficient in some domains, such as mathematical processing, humans are far better equipped in many others, such as facial recognition.


Can Artificial Intelligence Learn?

If artificial intelligence had the capacity to learn, a significant constraint on its abilities would be lifted, and a huge step towards superintelligence would have been made.

Recent research at Google assessed how effective artificial intelligence is at recognising images and sounds[1]. The machines in question are programmed with artificial neural networks, which allow them to ‘learn’. These artificial networks are statistical learning models, based on the neural networks found in the human brain.

An individual network consists of 10-30 stacked layers of artificial neurons. These layers collaborate: when an input stimulus, such as an image, is detected, the first layer passes its analysis on to the next layer, and so on until the final layer is reached. This final layer then produces a particular output, which serves as the network’s ‘answer’, allowing us to assess how well it recognised the input image.
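To make the idea concrete, here is a minimal sketch of such a stack of layers in Python (using numpy). It is purely illustrative, not Google's system: the layer sizes are invented and the weights are random rather than learned, whereas a real network acquires its weights by training on labelled examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

# A toy 'stack' of layers: each layer transforms the output of the one before it.
# The weights are random here purely for illustration; a real network learns
# them from many labelled examples.
layer_sizes = [784, 128, 64, 10]           # e.g. a 28x28 image in, 10 possible labels out
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def recognise(image_vector):
    activation = image_vector
    for w in weights[:-1]:
        activation = relu(activation @ w)  # each layer passes its result to the next
    scores = activation @ weights[-1]      # the final layer produces one score per label
    return int(scores.argmax())            # the index of the highest score is the 'answer'

print(recognise(rng.random(784)))          # prints a label index between 0 and 9
```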

Understanding how each layer analyses the image differently can be challenging, but we do know that each layer focuses on progressively more abstract features of the image. For example, one of the earlier layers may focus on edges, while a later layer may focus on fine details. This is very similar to the process carried out in our brains: we break images down into features (shape, colour and so on) to understand them, before recombining them into a whole image.

To visualise how this systematic recognition worked, the researchers decided to observe the reverse process. Starting with white noise, the neural networks were asked to output an image of bananas based on their interpretation. Some statistical constraints were put in place to make sense of the output, but the interpretation was definitely recognisable as bananas (bananas as viewed through a kaleidoscope, admittedly, but still bananas). This demonstrated that networks designed to discriminate between images had acquired much of the information necessary to generate those images. This ability of the artificial networks has been dubbed ‘dreaming’ because of the slightly surreal images produced.
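The ‘dreaming’ step can be thought of as gradient ascent on the input rather than on the network’s weights: starting from noise, the image is repeatedly nudged in whatever direction makes the network’s ‘banana’ score increase. The sketch below (using PyTorch) shows the general idea only; the small untrained stand-in network and the class index are assumptions, whereas Google’s experiment used its trained Inception classifier together with extra statistical constraints.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a trained image classifier. Google's experiment used a trained
# network, which is why its 'dreams' contained real objects; this untrained
# toy will only produce structured noise.
net = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

banana = 3                                  # hypothetical index of the 'banana' label
x = torch.randn(784, requires_grad=True)    # start from white noise

for step in range(200):
    score = net(x)[banana]                  # how 'banana-like' the network thinks x is
    score.backward()                        # gradient of that score w.r.t. the input pixels
    with torch.no_grad():
        x += 0.1 * x.grad                   # nudge the noise towards 'more banana'
        x.grad.zero_()                      # clear the gradient for the next step
```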

While this process is defined by well-known mathematical models, we understand little about why it works, and why other models don’t.

Google's new AI can output this image of 'bananas' independently from white noise. Image credit: Google Research Blog

The problems with AI: Emotions

Is a being with no capacity for emotions capable of more intelligent thought than one who is? Are emotions purely obstacles to reason, or is intelligent thought the product of an interaction between emotion and reason?[2]

For the true meaning of superintelligence to be understood, we would first need to understand whether or not emotion was key to intelligence.

It has been suggested that decision making is of particular interest with regards to emotions. When facing difficult decisions with conflicting choices, people are often overwhelmed, and their cognitive processes alone are no longer enough to determine the better choice. The neuroscientist Antonio Damasio has proposed that, in order to come to a conclusion in these situations, we rely on ‘somatic markers’ to guide our decisions[3]. Before we can understand this hypothesis, we must first take a closer look at what we mean by emotions.

While we label emotions as happy, sad, angry and so on, these are just primary emotions, which are broadly experienced across cultures. Secondary emotions are more evolved: they are influenced by previous experiences and can depend on the specific culture you are immersed in, so they manifest differently in different individuals. Somatic markers are the physical changes in the body that occur when we have an emotion associated with a thing or event from a previous experience or cultural reference. They are, in essence, a physical product of a secondary emotion that lets the brain know we have an emotion associated with the choices we face. This in turn gives us some context, which makes the decision easier to deal with.
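A toy illustration may help, though it should not be mistaken for Damasio’s formal theory; the options, numbers and scoring rule below are entirely invented. The idea is simply that when deliberate reasoning leaves two choices nearly tied, a learned bodily bias can tip the balance.

```python
# Purely illustrative toy, not Damasio's model: each option gets a 'cognitive'
# score from deliberate reasoning plus a 'marker' bias, a gut feeling learned
# from past experience with similar situations. All values are invented.
options = {
    "take the new job":   {"cognitive": 0.51, "marker": -0.30},  # bad memory of a similar move
    "stay where you are": {"cognitive": 0.49, "marker": +0.10},
}

def decide(options):
    # When the cognitive scores are nearly tied, the marker tips the balance.
    return max(options, key=lambda name: options[name]["cognitive"] + options[name]["marker"])

print(decide(options))   # -> 'stay where you are'
```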

We cannot say conclusively that emotions are key in difficult decisions. However, if after further research the somatic marker hypothesis gathers the necessary supporting evidence, this would shed some light on the benefit of emotions both to human beings and possibly to artificial intelligence. It is also possible that while emotions may help in difficult decisions, they are not beneficial to our intelligence in other situations.


There has been much debate over whether programming a machine to experience emotions would be possible. While the gap between reality and science fiction still seems vast, there has been a lot of progress in this area. Physiological changes such as facial expressions, body language and sweating are the cues that allow us to read emotions. Computers can now simulate facial expressions when given a particular emotion as input; there has even been a robotic model of a face, ‘Kismet’, that can display expressions relating to emotions[4]. However, these are just simulations of feelings, not real emotions.

Are advances in this area merely going to produce better fakes, improvements on current simulations, or can we program a machine to feel? To achieve this we surely have to explore what it is to truly feel an emotion, rather than simply display its characteristics. Feeling relies on consciousness, so a machine would need to be independently responsive to its surroundings before it could be said to experience emotions.

Before we can recreate the essence of consciousness in a machine, further studies would need to be carried out into the workings of our own minds. Is there something key in the human mind that cannot be recreated by complex electrical signals? Something that cannot be mathematically defined?

Whether it is achievable or not, maybe we should ask ourselves whether man-made consciousness would be a good thing.

In popular culture, giving robots emotions has commonly had very negative consequences. But could emotion be the key to improved AI decision making? Image credit: Adapted from Robot Love by Steve Snodgrass (CC BY 2.0) via Flickr

The problems with AI: Morality and consequences

When faced with difficult moral dilemmas, it may be beneficial to link an emotion to an outcome in order to weigh up the best decision, though this is up for debate. Let us assume briefly that emotions and some form of morality are conducive, and not harmful, to a being's intelligence; we must then decide how they should be implemented.

It has been suggested by Eliezer Yudkowsky, of the Machine Intelligence Research Institute (MIRI), that a mathematically defined ‘universal code of ethics’ would need to be devised in order to instil morality in a man-made being[5]. But can a universal code of ethics exist, and if it does, is it too complex to define?


Would a machine with no concept of right or wrong have a positive or negative effect on society?

An example of the potential for destructive behaviour resulting from a lack of morality is ‘tiling the universe’[2]: imagine a robot that has been given the task of making paper clips; it could attempt to turn the whole world into paper clips, under the impression that it was simply carrying out the given task. A code of ethics could prevent this situation from arising, meaning instructions wouldn’t have to be so carefully specified and the robot could be spoken to like a human employee.
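As a rough illustration of why instructions would otherwise need such careful specification, the sketch below contrasts a literal-minded optimiser given only the goal ‘make paper clips’ with the same optimiser subject to a crude hand-written rule standing in for a code of ethics. The resources and the rule are invented for the example.

```python
# Toy sketch of 'tiling the universe': a literal-minded optimiser told only to
# 'make paper clips', versus the same optimiser with a crude hand-written rule
# standing in for a code of ethics. The resources and the rule are invented.
world = ["steel stock", "office furniture", "the neighbours' car"]

def naive_plan(world):
    # With no other values, everything is raw material for paper clips.
    return {resource: "convert to paper clips" for resource in world}

def constrained_plan(world, permitted=frozenset({"steel stock"})):
    # Only touch resources that were actually provided for the task.
    return {resource: "convert to paper clips" if resource in permitted else "leave alone"
            for resource in world}

print(naive_plan(world))
print(constrained_plan(world))
```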

Conversely, what if a universally defined code of ethics is equally harmful, if not more so, than having no code at all? Can morality, with so many grey areas, truly be defined, or would you end up with a situation where robots act ‘for the greater good’ as an attempt to reduce their desired behaviour to a single defining principle?

You could argue that acting for the greater good can never be a bad thing: working towards situations where the fewest people are adversely affected. Take, for example, the problem of the runaway train; five people are tied to the tracks ahead of you, and you can either do nothing and run them over, or choose to redirect the train onto another track where just one person is tied down. The philosopher Kant argued that you can never treat someone merely as a means to an end, a principle known as the categorical imperative[6], hence making the choice to kill the one man is worse than doing nothing and killing five. However, many people, when faced with this problem, say they would take a utilitarian standpoint. Arguing that doing nothing and killing the five is, in a sense, just as much of an action as deciding to change the path of the train and killing the one, they would redirect the train so fewer people died: the end justifies the means.

If put into the context of another analogy, however, responses can alter quite dramatically: imagine a hospital ward where five sickly patients are each in need of a different transplant. Without them they will die. Now, if your sole motive were to act for the greater good, you should kill a healthy passer-by and use their organs to cure the patients. You have in essence killed one person in order to let five live, an act that works in favour of the greater good and is comparable to the runaway train analogy. At this point many who previously believed you should act for the greater good change their mind and take the stance that you shouldn’t take a life.
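The pair of dilemmas shows why a simple ‘greater good’ rule is so hard to encode. A naive count of lives lost, sketched below, returns the answer most people accept for the runaway train and the answer most people reject for the transplant ward, even though the arithmetic is identical in both cases.

```python
# A naive 'greater good' rule: choose whichever action leaves the fewest people dead.
def fewest_deaths(outcomes):
    return min(outcomes, key=outcomes.get)

train    = {"do nothing": 5, "redirect the train": 1}
hospital = {"do nothing": 5, "kill the passer-by": 1}

print(fewest_deaths(train))     # -> 'redirect the train'  (most people agree)
print(fewest_deaths(hospital))  # -> 'kill the passer-by'  (most people object)
```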

So from this we can already see that any bold code of ethics would face many problems and a lot of criticism, illustrating that developing such a code would be a massive undertaking, since there would be so many cases and exceptions; it is likely that it could never be agreed upon. However, if we could not, or did not want to, program machines with emotions or morality, could they ever rival us or be considered superintelligent?

Another problem arises when we consider the distribution of these artificial beings. Presumably they would have some cost, meaning they would only be available to those who could afford them. What could this mean for social equality and warfare?[2] Additionally, should questions of ownership even be considered? If these beings rival, if not outdo, our own intelligence, should we have the right to claim ownership over them? Or is this merely enslavement?

Which would you choose? Action or inaction? Image credit: TWDK

The Artificial Intelligence ‘Explosion’!

If superintelligent machines are an achievable possibility, they could act as a catalyst for an ‘AI explosion’[2].

A superintelligent machine would have a greater general intelligence than a human being (not just intelligence in a single domain) and would hence be capable of building machines equally or more intelligent than ourselves. This could be the beginning of a chain reaction: given the superintelligent machines’ efficiency, it could be a matter of months, days or even minutes until the ‘AI explosion’ had occurred and the world was overrun with robots. It is a plausible argument that this would be the point at which we could lose control over machines.
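The chain reaction can be illustrated with a toy feedback loop in which each generation of machine designs a slightly smarter successor, and a smarter designer finishes the next design sooner. The growth rate and timescales below are invented numbers, chosen only to show how quickly such a loop could, in principle, run away.

```python
# Toy feedback loop, with invented numbers: each generation designs a smarter
# successor, and a smarter designer finishes the next design sooner.
intelligence = 1.0        # 1.0 = roughly human-level, by assumption
months = 0.0
design_time = 6.0         # months for the first machine to design its successor

while intelligence < 1000:
    months += design_time
    intelligence *= 1.5   # assumed improvement per generation
    design_time /= 1.5    # the smarter machine designs the next one faster
    print(f"after {months:5.1f} months: intelligence x{intelligence:.1f}")
```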

The questions related to this field are numerous. It is a topic at the frontier of science that is developing rapidly, and the once futuristic fiction is now looking more and more possible. While the human race always has a drive to pursue knowledge, in a situation with so many possible outcomes, maybe we should exercise caution.


This article was written by the Things We Don’t Know editorial team, with contributions from Ginny Smith, Johanna Blee, and Holly Godwin.

References

[1] Mordvintsev, A., Olah, C., Tyka, M. ‘Inceptionism: Going Deeper into Neural Networks’ Google Research Blog, 2015.
[2] Evans, D. ‘The AI Swindle’ SCL Technology Law Futures Conference, 2015. (with further information from following interview).
[3] Damasio, A., Everitt, B., Bishop, D. ‘The Somatic Marker Hypothesis and the Possible Functions of the Prefrontal Cortex’ Royal Society, 1996.
[4] Evans, D. ‘Can robots have emotions?’ School of Informatics Paper, University of Edinburgh, 2004.
[5] Bostrom, N., Yudkowsky, E. ‘The Ethics of Artificial Intelligence’ Cambridge Handbook of Artificial Intelligence, Cambridge University Press, 2011.
[6] Kant, I. ‘Grounding for the Metaphysics of Morals’ (1785) Hackett, 3rd ed. p.3-, 1993.


