Can Machines Think?

February 5th, 2010, by Fred

I wrote this article as an interest piece as part of my responsibilities as a senior researcher in the Mobile Intelligent Autonomous Systems (MIAS) research group at the CSIR.  The article will be published by the CSIR at a later date.  The CSIR should be assumed to be the copyright holder of this article.

The article is currently in draft form; I duplicate it here to receive feedback on its content.

Can Machines Think?

Can machines think?  It fills the hearts and minds of small children with dreams of an exciting future, where robotic pets and intelligent servants will fulfil their every fantasy.  It perplexes scientists and engineers who wish to discover the secrets and inner workings of intelligence.  It makes for great science fiction and spectacular, blockbuster Hollywood movies.

Can machines think?  It is a thought that terrifies the general public, filled with visions of a future where mankind will be enslaved by their more intelligent, silicon-based creations.  It leaves lawmakers puzzled as to the legal implications of living side-by-side with intelligent machines, what rights they should have and what responsibility they should assume.  It challenges our beliefs about our own uniqueness, our inherent sense of being special, our souls.

Can machines think?  It is probably a question that has been around almost as long as thinking Homo sapiens itself.

A changing world

We live in a time where great technological advances seem to be occurring quite regularly and at a greater pace than ever before.  In the last hundred years, we went from horses and wagons to supercars and massive freight ships, from letters and postal routes to the Internet, e-mail, mobile phones and complex communication networks, from pinhole cameras to digital camcorders, high-definition televisions and video-on-demand, from dreaming of going to space to having landed on the moon to having robotic machines roving on Mars and feeding us valuable scientific data.

The technological state of the world a hundred years from now seems almost unimaginable.  Most futurists envision a world in which we will live side-by-side with machines as intelligent as, or more intelligent than, ourselves – machines that we created.  But is this even possible?  What does it mean to think, and can we develop technology to do so?  How will these machines be able to cope with the difficulties of understanding complex human environments?

Contemporary thinking machines

Many people view a game such as chess as a good measure of intelligence (at least in a limited domain), requiring careful planning and reasoning, pattern recognition and experience with a wide variety of positions and tactics.  A machine that is good at chess can certainly be called intelligent, right?

To measure chess ability, the Elo rating system is widely adopted.  In this rating system, an entry-level player may have a rating of 1200, a strong player could have a rating of about 2000 and a grandmaster would have a rating of about 2500 or more.  The very best players attain ratings of about 2800.  At the peak of his career, former world champion Garry Kasparov achieved a rating of 2851.  The current world champion, Viswanathan Anand, has a rating of 2790 (as of 1 January 2010).   In comparison, the best chess-playing software, currently considered to be a program named Rybka, has an estimated rating of about 3200.  That would give even the world champion only about a 5% chance of beating the program.  In the few cases where Rybka played against grandmaster chess players, the results were indeed mostly favourable for the machine.
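The Elo system predicts game outcomes from the difference between two ratings.  As a minimal sketch (the function name is mine, not part of any chess program), the standard expected-score formula looks like this:

```python
# Expected score of player A against player B under the Elo model.
# A draw counts as half a point, so an expected score of, say, 0.09
# corresponds to an even smaller chance of an outright win.
def expected_score(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

print(expected_score(2790, 3200))  # world champion vs. Rybka: about 0.09
print(expected_score(2000, 2000))  # equal ratings: exactly 0.5
```

Plugging in the ratings quoted above, a 410-point gap gives the lower-rated player an expected score of under 9%; since draws count as half a point, the chance of an outright win is smaller still, in line with the slim winning chances mentioned above.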

How does Rybka work?  Internally, it uses a well-studied algorithm in computer science known as alpha-beta search.  When playing chess, a player needs to make the best move given the current position.  The best move is the one that will put the player in a better future position, ultimately leading to a checkmate.  The problem is that the number of future positions that need to be evaluated quickly grows very large.  For example, at the very beginning of the game, white can choose among twenty different moves, to which black can reply with twenty different moves.  Thus, after a single move from both players, there could be any one of 400 different possible positions on the board.  On average, a player can make about 35 different moves in most positions.  Thus, looking only three moves ahead for both white and black would require nearly two billion positions to be evaluated.  The alpha-beta algorithm is a technique for eliminating moves deemed inferior from the evaluation, much as humans do when playing chess.  Rybka uses this alpha-beta algorithm with aggressive pruning to “look far ahead” in the game.  This ability, together with a finely-tuned heuristic for evaluating a position, allows it to play at superhuman strength.

Is Rybka an intelligent machine then?  Yes, when limited to playing chess.  But it cannot calculate anything it isn’t explicitly programmed to do, even something as simple as calculating the sum of two small numbers.  You will need a calculator for that…

Rybka (and the more primitive calculator, for that matter) does illustrate two good points though.  Firstly, it is possible to create machines that exhibit intelligent behaviour in a limited domain.  Secondly, the mechanisms giving rise to the intelligent behaviour can sometimes be precisely described.

One might argue that chess is a well-understood game, with defined rules that make it possible to develop algorithms that excel at playing chess.  Rightly so, and the leap to create machines that can interact in a human environment – an environment which is only partially observable, stochastic and dynamically changing – seems quite hard indeed.  Is it possible to create machines or robots that can behave intelligently in a real human environment?

Robots are often found in manufacturing environments working on an assembly line.  These robots have a limited number of conditions for which they need to be programmed, reducing the complexity of the problem and making it possible to perform their tasks efficiently.  The way in which scientists and engineers are making progress, is by incorporating similar principles in robots designed to operate in human environments.

Perhaps one of the best such examples is the recent development of autonomous vehicles, also known as driverless cars.  The idea is to create vehicles capable of driving by themselves under normal traffic conditions, offering a taxi-like experience, but without a human driver.  Recently, a series of Grand Challenges organised by the Defense Advanced Research Projects Agency (DARPA) in the United States brought autonomous vehicles to public attention.  In the first two challenges, the vehicles had to drive autonomously under off-road conditions on a course in the Mojave Desert, using only onboard sensors and systems.  In the second of the two challenges, five vehicles completed the course successfully.  The winner, a vehicle named Stanley from Stanford University, completed the 212 km course in just under 7 hours.  In the 2007 Urban Challenge, vehicles had to drive in an urban area, obeying all traffic regulations while negotiating other traffic and obstacles and merging into traffic.  Six teams successfully completed the course, with a vehicle named Boss from Carnegie Mellon University finishing first.  DARPA’s aim with the competition is to achieve autonomy in a third of its military vehicles by 2015.

But even in earlier years, there was much interest in autonomous vehicles.  As early as 1995, an S-Class Mercedes-Benz undertook a 1600 km autonomous trip, achieving speeds of up to 175 km/h on the German Autobahn, even overtaking other vehicles.  The vehicle achieved 95% autonomous driving, at one stage covering 158 km without human intervention.  With such successes, many car manufacturers are investing heavily in technology to make autonomous driving possible.  General Motors is rumoured to begin testing autonomous vehicles by 2015 and possibly to have such vehicles on the road by 2018.

As is the case with the intelligent chess-playing Rybka, autonomous vehicles such as Stanley or Boss achieve their perceived intelligence through a good understanding of their problem domain.  These vehicles are fitted with an array of sensors that give them an appreciation of their environment, route- and path-planning systems that enable them to decide how best to reach their destinations, and complex control and actuation systems that let them traverse safely.  As with Rybka, each of these subsystems is relatively well understood, and the apparent intelligence comes about as an emergent property of putting all the systems together.

So it would seem that it is possible to build machines that exhibit intelligent behaviour.  We could even understand the building blocks that are necessary for such intelligent behaviour to occur.  But it does seem that we have only achieved success in limited domains.  Is it possible to achieve thinking in a deeper sense, in the same way as humans, being able to generalise and excel across multiple problem domains?

What does thinking mean anyway?

Until now, we have used the terms “thinking”, “intelligence” and “intelligent behaviour” in a somewhat interchangeable manner.  Scientists in artificial intelligence distinguish between machines that think intelligently and machines that act intelligently.  A calculator might be good at thinking without making any errors, but it does not act much in the world apart from allowing a user to enter values and displaying results on a screen.  There is also a distinction between thinking or acting the way humans do and thinking or acting rationally.  Rationality can be defined as making the optimal choice under the current circumstances to achieve the best possible outcome, something humans might not always be good at.

Opinions about exactly what intelligence or thinking means differ vastly among scientists.  To illustrate the difficulty, consider the question:  “Can machines fly?”  It is a question that seems innocent enough and one that most people would answer in the affirmative.  Have we not built aeroplanes with truly marvellous characteristics?  The Lockheed SR-71 (commonly known as the Blackbird) is a manned airplane that can fly at speeds of up to 3,500 km/h, many times faster than the Peregrine Falcon, the fastest bird, which can reach horizontal cruising speeds of 105-110 km/h.  The massive 277-ton Airbus A380 can transport up to 853 passengers simultaneously in an all-economy configuration.  And the Voyager 1 spacecraft has travelled over 16.5 billion kilometres from the sun, having passed the orbit of Pluto long ago, and is currently underway to unknown reaches outside our solar system!

But some would argue that no airplane can fly with the elegance and manoeuvrability of even the smallest bird.  None can change direction abruptly or gracefully set down wherever it chooses.  Flying is perhaps not just about the ability to stay in the air, but also about the way in which that is achieved.  Airplanes fly in a very different way than birds do: they use engines while birds use muscle, their wings are fixed while birds can move theirs about freely, and so on.  And so it may seem that birds have some kind of inherent property that allows them to truly fly, something that machines will never be able to achieve.

By analogy, some view true thinking as perhaps not just about being able to exhibit some kind of intelligent-looking behaviour, but also about the mechanisms required to achieve that intelligence.  Some might say that although Rybka exhibits phenomenal chess-playing capability, this ability comes about from a well-studied and very deterministic algorithm, not from a brain that allows it to adapt and learn, and so it is not intelligent at all.  To be able to truly think, machines would have to have a brain that can show intelligence across many facets of the human experience, which can learn from experience, can generalise across multiple problem domains, can show creativity and emotion and can interact effortlessly with its environment.  Some might even say that to truly think, to have consciousness, requires a “soul”, something that machines would never have.  But reserving such properties for humans could be dangerous, for a future might await in which we are disillusioned by just how “machine-like” we are.

Challenging human uniqueness

Humans inherently feel, or want to feel, that they are somehow special.  For centuries, man viewed the earth as the centre of the universe.  Only in the 16th century did scientists like Copernicus, Kepler and Galileo start to suggest a heliocentric view, something that was widely opposed by the authorities at the time, but now seems quite obvious and intuitive to most people.  Historically, man has viewed himself as some kind of special creature on this planet.  Today, we understand that we are but part of a larger organisation of different species and that many evolutionary principles have brought us to where we are.

Perhaps similarly, out of our need to feel special, man bestows upon himself the right to be the only entity that can truly be intelligent.  If we were to truly create a thinking machine, such a machine would challenge our sense of uniqueness, our need to feel special.  But there is no scientific evidence to suggest that we cannot create a truly thinking machine.

At a biological level, all of our higher-level thinking processes occur in the central nervous system, and predominantly in the human brain.  The human brain is an extremely complex organ, and little is understood about large regions of it even in our modern age.  Scientists know that structures known as neurons are responsible for computation, memory and the transmission of signals between different parts of our brains, giving rise to our higher-level thinking processes.  These neurons are also able to adapt over time, making it possible for us to learn from experience, to adapt to changing circumstances and to generalise over different problem domains.

Although complex and often weakly understood biological processes are at work, fundamentally, all intelligence in humans can be explained as the emergent behaviour brought about by the interplay of complex electrochemical processes in our brains.  Human intelligence stems from nothing more than biological computation.

Fundamentally then, it seems that there is nothing preventing us from studying the human brain and applying some of the same principles in machines, with the expectation of achieving similar results.  In artificial intelligence, a widely used technique known as artificial neural networks (ANNs) is an example in which (albeit extremely crude, simplified and much smaller) models based on the human brain are created to solve complex pattern recognition problems.  ANNs have the ability to “learn” patterns over time, and to predict outcomes for patterns they have never seen before.  Even though the ANN is a very crude and heavily abstracted model of how the brain works, it is good enough to be used in many real-world systems, from predicting future values on the stock exchange, to detecting credit card fraud, to military systems that detect hostile aircraft and missiles.
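The learning idea can be illustrated with the simplest possible “network”: a single artificial neuron (a perceptron) trained on the logical AND function.  This is a toy sketch of the learn-from-examples principle, not a model of any of the real-world systems mentioned above:

```python
# A single neuron: output 1 if the weighted sum of its inputs exceeds 0.
# Training nudges the weights towards the target whenever the neuron errs,
# mirroring (very crudely) how connections between biological neurons adapt.
def train_perceptron(samples, epochs=20, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out
            w1 += lr * err * x1   # strengthen or weaken each "connection"
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# The four examples of the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0
```

After a handful of passes over the examples, the neuron classifies all four inputs correctly.  Real ANNs combine many such units in layers, which is what lets them capture far richer patterns and generalise to inputs they were never trained on.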

If we knew perfectly how the brain worked, and could translate that knowledge into hardware and software, it would seem then that, at least in principle, there is nothing stopping us from creating machines at least as intelligent as ourselves.  Thus, in the same way that we could create a chess-playing program that could play chess at superhuman strength by studying the algorithms required in the limited domain of playing chess, we could achieve success in the much more complex human world by studying an already existing “machine” – the human brain – that is already extremely well adapted to our human environment, and as such create intelligences similar to our own.

Of course, some would argue that the brain is so complex that we will never be able to understand it.  Others theorise that it may use much more complex computation, such as quantum computing, which may be impossible to recreate in an artificial machine.  And indeed, advancement in the creation of thinking machines and artificial intelligence would seem to be somewhat dependent on progress in other fields.  This has been the case for many branches of artificial intelligence, making it a very multidisciplinary area of research.  For example, one of the recent surges in popularity is in the field of computer vision, perhaps due to the availability of cheap digital cameras and computational power, making the field accessible to a broader audience of researchers.  But the field also relies on a good understanding of physics, optics, cognitive vision, machine learning, signal processing, and other fields.

Although the brain is quite a complex organ, recent advances in neuroscience and especially neuroimaging have shed new light on the structure and functioning of the brain.  Perhaps illustrating the success of these disciplines is the emergence of brain-computer interfaces (BCI) in the last decade.  BCI devices are able to create a direct communication pathway between the brain and an external device.  A BCI can be invasive, requiring brain implants; partially invasive, residing inside the skull but not connected to the grey matter; or non-invasive, typically requiring the human to wear a device connected to their head.  Because of the amazing ability of the brain to adapt, the brain can interpret signals from the BCI and treat the device like a natural sensor or effector.  BCIs have been used successfully to restore damaged vision, hearing and movement.  For example, vision has been restored in a number of patients with non-congenital (acquired) blindness, by connecting external cameras to electrodes implanted on the patients’ visual cortex.  Such patients report being able to see different shades of light and being able to distinguish different shapes, enabling them to interact with their natural environment.  Although such research is only in its infancy, it promises to bring new hope to many people with disabilities and to become more mainstream and widely accepted.

In the author’s opinion, it seems reasonable to suggest that we will make many more discoveries about the human brain in the decades to come and that we will reach a point where, although we may not have a perfect understanding, we will be able to conceptualise its functioning to such a degree that we can implement similar functionality in machines.

If it is then possible that one day machines could be thinking in the truest sense of the word, what would the implications be for the human race?

Should machines be allowed to think?

Futures in which intelligent thinking machines have evolved often make for great science fiction.  Movies such as The Matrix, I, Robot or Terminator frequently take a somewhat dystopian outlook (perhaps because it makes for better stories), depicting futures in which the human race is enslaved by robots.

Prominent futurists theorise that once we build machines of equal or better intelligence than humans, such machines would be able to build even better machines, which will in turn be able to build even more improved machines, etc.  This could lead to an “intelligence explosion” where the rate of change in technological advancement would grow exponentially, reaching a “technological singularity” where the rate of change would seem almost infinite.  Machines would be far more intelligent than humans, leading to the situations depicted in the movies.

Although such scenarios seem quite dramatic, it is quite conceivable that living side-by-side with machines or intelligent software systems would radically alter civilisation and have a number of serious consequences.  Many people fear that numerous current jobs could be done by machines, leading to large-scale job losses and economic decline.  But on the other hand, as illustrated by the industrial revolution, automation also provides the opportunity to create new types of (typically higher-paying) jobs.  Some people are concerned about losing their privacy rights.  Digital surveillance is becoming more prominent, with technologies such as object recognition and speech recognition making it possible to identify and track humans much more easily.  As an example, London in the United Kingdom has cameras in most public areas, making it possible to find a specific person almost anywhere in the city.  Others are willing to give up a bit of their privacy in exchange for the benefits surveillance provides, such as the ability to recognise criminals and alert authorities, or to detect a vehicle accident and automatically request emergency services.

Machines that can think could also result in a loss of accountability.  For example, if a medical system makes an incorrect diagnosis of a patient’s illness, which results in the patient being treated with the wrong medicine, to what extent should the physician be held accountable, given that accepted practice is for machines to make such diagnoses?  Or if a vehicle that drives autonomously were to cause an accident, who should assume responsibility?  The problem is that if a machine is truly intelligent and acting autonomously, to what extent is its owner or manufacturer responsible for its actions?  This brings to attention another interesting consequence – to what extent should such a machine have rights of its own?  If a machine is truly thinking, should humans be allowed to enslave and/or discriminate against it?

As some of the movies would suggest, there is also the possibility that robots could enslave humans, or wipe them completely off the face of the planet.  Where perhaps previously the greatest threat was for nuclear or biological weapons to fall into the wrong hands, it could become quite possible for the “wrong hands” to be the technology itself.

In the author’s view, there will be a gradual development and adoption of more intelligent technologies and machines in the next century.  This gradual development would provide the opportunity for society to adapt to the changes such technology brings and to sort out many of the issues raised above.  It would also make a doomsday scenario very unlikely.  Furthermore, the author believes that technological development will bring about many more positives than negatives, just as the printing press, telephones, television, the Internet and motor vehicles have provided society with more benefits than detriments.

Conclusion

We live in exciting times.  The world around us has changed considerably over the last century; who knows where we will have advanced to by the turn of the current century?  One thing is for certain – intelligent technologies, and perhaps even truly thinking machines, will become ever more important and dramatically alter daily living.

At the moment, it is possible to create machines that exhibit intelligent behaviour in limited problem domains, such as playing chess or driving autonomously.  These problem domains are often well understood, as are the algorithms that are necessary to create the perceived intelligence.

Although no universally accepted definitions exist for what is meant by thinking or intelligence, it is generally taken to mean possessing cognitive qualities similar to those of humans.  Our current best examples of intelligent machines are lacking in many regards, and cannot rightfully be said to be thinking in the way humans do – being able to learn from, adapt to and operate in a general and complex human environment.

However, at a fundamental level, human intelligence is nothing more than biological computation, and so in principle there is nothing preventing us from studying our own biology, extracting the principles behind it and implementing them in hardware and software to create machines as capable as ourselves.  Although at the moment we do not understand our own intelligence and the functioning of our brains in great detail, research is constantly shedding light on them, and creating machines in our own cognitive image is a real possibility.

If we were to succeed in creating truly thinking machines, it would have great consequences for society.   It may bring many benefits and improve the lives of many, but it may also bring about many societal changes that we will need to adapt to.  Perhaps one of the biggest perceived threats that thinking machines hold is that they challenge humankind’s sense of uniqueness, of being somehow special.  We may learn in the process that we are not that special and that we are more machine-like than we care to admit.  Or, in our struggles to create machines in our own image, we might just learn to appreciate how wonderful and complex we truly are.

Whatever the future that awaits us, it sure is going to be exciting!

What is your opinion?  Can machines think?  Will they ever be able to think at the same level as humans do?

This article is posted in the Opinions category.  Please comment and express your opinion, but be sure to Follow the Rules.