When I started my bachelor’s degree in cognitive science, older students recommended a book to us young freshmen. The book was described as “the bible of cognitive science”, and the promise was that, by reading the gospel, we would be inside the club and true followers of the cult. It was “Gödel, Escher, Bach”, an 800-page tome about mathematics, art, computer science, biology, Zen Buddhism and much more, written by Douglas Hofstadter.

Naturally I bought the book and devoured page after page and finally understood – very little. The book explores how self-referential formal rules might allow systems to acquire a high-level state like “meaning”, despite being made of “meaningless” elements. In order to make his case, Hofstadter jumps between the details of knowledge representation theory and philosophical discussions of the notion of “meaning” itself. Heavy stuff.

At its heart the book explores the question Alan Turing posed in 1950:

“Can machines think?”

The answer Hofstadter gives in his book is “Probably. If you can formalize the right model”. His intuition is that something he calls “strange loops” might be crucial for consciousness to emerge from a system. Strange loops create self-referential systems that, by moving only upwards or downwards through a hierarchy, find themselves back where they started. Strange loops remain one of the most interesting attempts to model consciousness. However, a functional AI based on that model has not been built so far.


M. C. Escher’s pictures, like this one, in which a pair of hands draw each other, are visual examples of strange loops. (M. C. Escher, Drawing Hands, 1948)

But Hofstadter wasn’t the first to try to answer Turing’s question.

 

The Birth of Artificial Intelligence

The brood chamber from which artificial intelligence crawled out into existence was the Dartmouth conference, held in the summer of 1956, where a dozen brilliant researchers met at Dartmouth College in Hanover, New Hampshire. Those guys were the crème de la crème of computer science: John McCarthy, Herbert Simon, Claude E. Shannon and Marvin Minsky, among others. Their goal was to formally describe the process of learning and every other feature of intelligence, in order to create a machine that could simulate “thinking”. At this conference the term “artificial intelligence” was coined, and the event has gone down in history as the birth of AI.

Originally planned to be finished by the end of the summer, not one of the group’s goals has been fully reached to this day. We still lack an understanding of how the brain creates those peculiar cognitive states of mind that we experience every day.

However, they had some early successes that were pretty revolutionary for their time. One attempt was to formulate reasoning as a simple brute-force search algorithm. Given a task like winning a game or solving a puzzle, the program searches through all possible choices it can make, like moving through a maze and backtracking if it reaches a dead end. Such a “General Problem Solver” was able to solve highly formalized problems with a small number of possible choices, like the Towers of Hanoi. But it failed at any real-world problem, because there the number of choices is far too high, which leads to a combinatorial explosion. Because time complexity grows exponentially with problem size, a lot of computing power is needed to go through all possible choices in a reasonable time. That kind of computing power wasn’t around back then.
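
To make the idea concrete, here is a minimal sketch of such a brute-force search with backtracking, applied to the Towers of Hanoi. This is not the original General Problem Solver code; the state representation and function names are invented purely for illustration.

```python
# Brute-force search with backtracking, in the spirit of the early
# "General Problem Solver" programs. A state is a tuple of three pegs,
# each listing disk sizes from bottom to top, e.g. ((3, 2, 1), (), ()).

def moves(state):
    """Generate all legal moves (and resulting states) from a state."""
    for src in range(3):
        if not state[src]:
            continue
        disk = state[src][-1]
        for dst in range(3):
            if dst != src and (not state[dst] or state[dst][-1] > disk):
                pegs = list(state)
                pegs[src] = pegs[src][:-1]
                pegs[dst] = pegs[dst] + (disk,)
                yield (src, dst), tuple(pegs)

def solve(state, goal, visited=None):
    """Depth-first search: try every move, backtrack from dead ends."""
    if visited is None:
        visited = set()
    if state == goal:
        return []
    visited.add(state)
    for move, nxt in moves(state):
        if nxt not in visited:
            path = solve(nxt, goal, visited)
            if path is not None:
                return [move] + path
    return None  # dead end: backtrack

start = ((3, 2, 1), (), ())
goal = ((), (), (3, 2, 1))
print(solve(start, goal))  # some (not necessarily shortest) sequence of moves
```

With three disks the search space is tiny; the combinatorial explosion mentioned above is exactly what happens when the same blind strategy meets a real-world problem with astronomically many choices.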

Yet their initial successes led to a little hype and a phase in which AI was heavily funded by the Advanced Research Projects Agency, which later became what is known today as DARPA. AI was blossoming.

But it didn’t last long. In 1973, James Lighthill – a British mathematician – published a scathing critique of the lack of progress in AI research and its failure to produce any real-world applications. This led to political pressure from Congress and a stop in funding for undirected AI research from the U.S. and British governments. The resulting halt in progress, which lasted into the 80s, is called the “AI Winter”. During this time people were generally disillusioned with artificial intelligence, and the claims of AI researchers were heavily attacked by philosophers like Dreyfus and Searle, who argued that the processes inside the machines could never be described as “thinking”. The hype was over.

 

The Rise of Expert Systems

Seven years later, in 1980, an expert system called XCON was created at Carnegie Mellon University for the Digital Equipment Corporation. Expert systems operate within a small domain of specific knowledge instead of trying to simulate “general intelligence”. Their simple design made the programs relatively easy to build and to modify once they were in place. XCON was an enormous success and saved the company 40 million dollars per year by 1986. Other corporations from all over the world were impressed and started to develop and deploy expert systems en masse, creating a new hype. By 1985, over a billion dollars was being spent on expert systems. By the end of the eighties, it seemed as if people no longer cared whether machines achieved the metaphysical goal of “thinking”; they were just happy to use automated computer systems that actually got some work done.
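
Conceptually, an expert system is little more than a collection of if-then rules over domain facts, fired until nothing new can be derived. The toy forward-chaining engine below sketches that idea; the rules, facts and part names are invented and are not taken from XCON.

```python
# A toy forward-chaining rule engine, illustrating how expert systems encode
# narrow domain knowledge as if-then rules. Rules and part names are invented.

facts = {"order has CPU", "order has disk drive"}

rules = [
    ({"order has disk drive"}, "needs disk controller"),
    ({"order has CPU", "needs disk controller"}, "needs wide cabinet"),
    ({"needs wide cabinet"}, "add cabinet model W-40 to order"),  # "W-40" is made up
]

changed = True
while changed:                      # keep firing rules until nothing new is derived
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # the rule "fires" and asserts a new fact
            changed = True

print(facts)
```

The appeal for companies like DEC was exactly this simplicity: domain experts could state their knowledge as rules, and adding or changing behavior meant editing rules rather than rewriting the program.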

During this time, AI researchers also began to use sophisticated mathematical tools. There was a realization that many AI problems had already been solved by mathematicians, economists and other researchers. AI became a more rigorous scientific discipline and made rapid progress by adopting methods like Bayesian networks, hidden Markov models, neural networks and evolutionary algorithms.

Also, thanks to Moore’s law, processing power became much cheaper and much more plentiful. New algorithms could be implemented, enabling applications that had not been possible before.

Fast-forward to May 11th, 1997. IBM’s supercomputer Deep Blue beats world champion Garry Kasparov in a game of chess, broadcast live over the internet to 74 million viewers. This was the “moon landing event” of artificial intelligence. That day, AI arrived in the collective consciousness as a force that will shape our world. Ironically, Deep Blue wasn’t even an AI in a technical sense, as IBM pointed out in a statement:

“Deep Blue, as it stands today, is not a ‘learning system.’ It is therefore not capable of utilizing artificial intelligence to either learn from its opponent or ‘think’ about the current position of the chessboard. …  Any changes in the way Deep Blue plays chess must be performed by the members of the development team between games. Garry Kasparov can alter the way he plays at any time before, during, and/or after each game.”

In 2011, in a media spectacle similar to Deep Blue’s, another AI was pushed into the arena against human opponents. This time the machine was called Watson and the game was Jeopardy!. Watson won the match against the two human champions by a wide margin, demonstrating a revolutionary capacity to understand natural language.

Since then, IBM has poured a lot of money into improving Watson even further, making it one of the most advanced AIs around, with an impressive range of applications in its quiver.

What Can AI Do Today?

Some fields, like stock market trading, search engines and the classification of DNA sequences, have relied on machine learning algorithms for decades. But AI is spreading into more fields, becoming more ubiquitous each year. Here are just a few highlights of what artificial intelligence can do:

As of July 2016, Google had test-driven their fleet of driverless cars for over 2.4 million km. The newest model of their autonomous vehicle has no steering wheel or pedals. Every major car company has some sort of effort underway to sell autonomous cars by around 2020.

A car that can drive itself is a car that can deliver itself to you. It can refuel or recharge itself without you having to worry about it. It can also store (or what we used to call “park”) itself. Self-driving vehicles can make transportation enormously energy-efficient, since, instead of using a bulky all-purpose car, most trips can be made in a very small electric on-demand vehicle. Autonomous driving will revolutionize our whole concept of mobility.

But the terror attack in Nice on July 14, 2016 reminded us, in a terrible way, that cars are also deadly weapons that we put our bodies into. In certain situations, an artificial intelligence that controls such a weapon has to make decisions about life and death. Can something like a “moral code” be programmed?

Researchers at Duke University are trying to create a moral machine by letting an artificial intelligence observe real humans making ethical decisions and learn to identify general patterns in those choices. Another approach, followed by researchers from Northwestern University, is to use a model based on the structure-mapping theory of analogy and similarity by the influential psychologist Dedre Gentner. No matter what form morality takes in self-driving cars, their sheer ability to be better drivers than humans will result in significantly fewer deaths than we have today.
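
The first approach – learning general patterns from observed human choices – boils down to fitting a model on examples of decisions. The sketch below shows that general idea only; the features, data and labels are invented here, and real research systems are far more sophisticated.

```python
# A minimal sketch of "learning morality from observed decisions": fit a
# classifier on examples of human choices, then predict the choice in a new
# situation. All numbers and labels below are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each situation: [pedestrians at risk, passengers at risk, speed in km/h]
situations = [
    [2, 1, 50],
    [0, 1, 80],
    [3, 1, 30],
    [1, 4, 60],
]
# Observed human choice: 1 = brake hard / swerve, 0 = stay on course
choices = [1, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2).fit(situations, choices)
print(model.predict([[2, 1, 40]]))  # predicted choice for a new situation
```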

Games

The big breakthrough here is that the algorithm – Google DeepMind’s deep Q-network, trained on classic Atari games – was just let loose on the games without any prior knowledge of how to play them. The AI learned to play these games by itself, purely through trial and error, using the score as a reward signal for its actions.

This means that Google is pretty advanced when it comes to machines that can learn pretty much anything, as long as the problem space is relatively confined, like in games. The deep Q-network is, in fact, a first version of a general-purpose agent that is able to continually learn without human intervention.
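
To make the trial-and-error idea more concrete, here is a minimal sketch of tabular Q-learning, the simpler ancestor of the deep Q-network. The toy corridor environment, the hyperparameters and the variable names are my own invention for illustration – the real system learns from raw screen pixels with a deep neural network.

```python
# Tabular Q-learning on a toy 5-cell corridor with a reward at the right end:
# the agent tries actions, receives a score, and slowly learns which action
# is worth taking in which state.
import random
from collections import defaultdict

N_STATES, ACTIONS = 5, [-1, +1]          # move left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration
Q = defaultdict(float)                   # Q[(state, action)] -> expected return

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # update the value estimate from the observed reward (trial and error)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# learned policy: should be +1 (move right) in every state
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```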

Image Recognition

This is scary. A system that knows what is going on in pictures and automatically classifies them is the perfect tool for already omniscient surveillance systems.

Marketers are also interested in this software: they want to track customers who pay in cash, in order to throw personalized ads at them in the future. Until now, cash purchases have been impossible to track. But this is about to change once marketers start to employ facial recognition software, using the cameras in a brand’s stores to monitor the products shoppers physically carry out. This will effectively overcome the cash payment barrier.

Similar systems that are able to automatically process videos are also in development.

Combat

In 2016, an AI fighter pilot called ALPHA repeatedly beat a retired US Air Force colonel in simulated dogfights. The colonel remarked: “I was surprised at how aware and reactive it was. It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed.”

On July 28, 2015, an open letter was announced at the opening of the IJCAI conference urging governments to ban autonomous weapons. To date, the letter has been signed by over 20,000 people, including Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky and many more. The main fear behind the ban is that networks of autonomous weapons could accidentally ignite a war that quickly spirals out of control. However, the open letter has had little impact on high-tech militaries like those of the US, China, Israel, South Korea, Russia and the United Kingdom, all of which are developing fully autonomous weapon systems.

Model

The AI used an evolutionary approach, developing simulation after simulation, until it was able to come up with a gene network model that matched the experimental data perfectly. Creating a scientific model is one of the most creative things a human can do. This particular problem was too hard for humans, who had tried to develop such a model for over a century. The computer solved it in just three days (although the programmers had to work for several years to describe the scientific experiments humans had carried out in a mathematical language the computer could understand).
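
The evolutionary approach itself is simple to sketch: keep a population of candidate models, score each one by how well it reproduces the experimental data, and breed the best candidates with small mutations. The toy example below is my own illustration with an invented target; the actual gene-network search was vastly more complex.

```python
# A minimal evolutionary search: a "model" here is just a vector of parameters,
# and fitness measures how closely it reproduces some (invented) target data.
import random

target = [0.2, 0.8, 0.5, 0.1]                     # stand-in for experimental data

def fitness(candidate):
    # negative squared error: higher is better
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

population = [[random.random() for _ in target] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                     # keep the best candidates
    population = [
        [p + random.gauss(0, 0.05) for p in random.choice(parents)]  # mutate
        for _ in range(50)
    ]

best = max(population, key=fitness)
print(best)  # should end up close to the target data
```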

The “Data Science Machine” developed by an MIT startup can already run on any raw data set and create predictive models within a couple of hours. The strength of AIs compared to humans lies in speed: a human has to work a long time to develop even a single model of high complexity.

“Humans can typically create one or two good models a week; machine learning can create thousands of models a week.” (Thomas H. Davenport, analytics thought leader)
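
This is not the Data Science Machine itself, but the underlying idea of automated model search can be sketched in a few lines: fit many candidate models on a data set and keep whichever predicts best under cross-validation. The data set and the list of models below are arbitrary choices for illustration.

```python
# Automated model search in miniature: score several candidate models with
# cross-validation and keep the best one. A real system would also generate
# features and search far more model families and hyperparameters.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

candidates = [
    LogisticRegression(max_iter=1000),
    DecisionTreeClassifier(max_depth=3),
    KNeighborsClassifier(n_neighbors=5),
]

scores = {m: cross_val_score(m, X, y, cv=5).mean() for m in candidates}
best = max(scores, key=scores.get)
print(type(best).__name__, round(scores[best], 3))
```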

Developing models is one of the most exciting applications for AI in fields where predicting the future is crucial, like finance or meteorology. The dark side of this kind of AI is that the US military uses it to create models of who might be a terrorist, based on metadata like phone calls and geo-data. When a threshold is reached and the algorithm predicts that someone is a terrorist, a drone is ordered to hammer its justice down from heaven and kill the person and everybody nearby. Without a trial. Without any actual evidence. Just based on an artificial intelligence that created a model of a guy being a terrorist.

Tay

Chatbot “Tay” was built to represent a female teenager and was able to learn from its conversations on social media. The people of the internet, of course, exploited this and fed the bot all kinds of racist, misogynistic and hateful content. Only a few hours after its birth, Tay was tweeting attacks on feminists and gems like “I just hate everybody” and “Hitler was right. I hate the Jews”. Microsoft pulled the plug.

Critics see in Tay an example of the limitations of artificial intelligence and declare the chatbot to be nothing more than a parrot that repeats whatever is presented to it, no matter how stupid. What these critics are missing is that this is exactly how humans learn too. Imitation is our most crucial learning tool, and only very rarely is our behavior based on deep analysis or innovation. You only have to take a quick look at Twitter to realize that – most of the time – people are parrots too.

 

What Can AI Not Do?

Robots have been trusted with specific tasks in the controlled environment of factories for decades and are very efficient there. To test how robots perform at tasks that cannot be formalized in a straightforward way, DARPA designed the 2015 Robotics Challenge.

The tasks of the challenge were based on emergency-response scenarios like “open a door and enter a building” or “locate and close a valve near a leaking pipe”. DRC-HUBO, a bipedal humanoid, won the Challenge Finals and can therefore be considered the most advanced humanoid robot to date. Yet the whole event made it clear how hard it is for a robot to navigate an unstructured environment.

It will probably be a long time before we have to welcome our new robot overlords, as this compilation of robots failing the challenges vividly shows:

 

The Frequency Bias

A common shortcoming of AIs is that they are not good at dealing with outliers. Take Netflix’s recommendation algorithm, for example. The more data I feed into it, the more its recommendations are based on my past decisions. But taste in movies is not something that can easily be formalized.

I generally like classic action movies like The Matrix, Die Hard, Lethal Weapon and Mad Max, yet one of my favorite movies is Jean-Pierre Jeunet’s wonderful film Amelie. The more movies I watch that match my “main taste”, the more likely the recommender AI is to miss unexpected options that I would also like. Humans are not free from this flaw – called frequency bias – but we are much better at dealing with it, using our intuition to separate important outliers from unimportant ones. This frequency bias is a common flaw in modern machine learning algorithms.
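
A toy example makes the bias visible: if recommendations are drawn in proportion to how often I have watched each genre, a rarely watched but much loved genre almost never surfaces. The watch history below is invented.

```python
# Frequency bias in a recommender, in miniature: suggestions follow the
# relative frequency of each genre in my (invented) watch history, so the
# rare favorite gets buried.
from collections import Counter

watch_history = ["action"] * 40 + ["quirky french comedy"] * 1

counts = Counter(watch_history)
total = sum(counts.values())

for genre, count in counts.items():
    print(f"{genre}: recommended with probability {count / total:.2%}")
# action: ~97.6%, quirky french comedy: ~2.4% -- Amelie-like films get buried
```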

The Frame Problem

Natural language software is the next frontier in AI. All the big IT companies are trying to create a reliable personal agent that can have a genuinely intelligent conversation with a person. So far Google, Microsoft, IBM, Apple and Amazon are neck and neck. But despite recent progress, the task of understanding and producing language reveals a crucial limit of modern AI. At the heart of the difficulty is what is known as “The Frame Problem”.

We humans have an intuitive understanding of what is relevant during each moment of our lives. We don’t have to think about what’s relevant, we just know. Grasping what is relevant and ignoring what is not, in real time, has proven incredibly hard for machines.

It is such a difficult problem because the environment around us is constantly changing. What is relevant now can be irrelevant just three seconds later, and things that are irrelevant can quickly become relevant.

Even building a machine that possesses a comprehensive database from which it can create a detailed model of the world is not enough. It would also need to know which facts are relevant in each particular context. Without the ability to tell what is relevant, stupid decisions are bound to be made. This is why Google Translate gives the following translation from German to English:

  • German: „Max spielt Fußball. Es macht ihm viel Spaß.“ (meaning: Max plays football. He enjoys it a lot.)
  • English: “Max plays football. It makes it a lot of fun”

The underlying AI does not have a dynamic perspective on language. It doesn’t know which particular frame to use and therefore produces a sentence that clearly makes no sense.
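
A crude sketch shows why a system without a frame for the context produces exactly this kind of nonsense: a literal word-by-word lookup has no way of knowing that „ihm“ refers to Max, a person. The tiny dictionary and the first-option rule below are invented for illustration.

```python
# A context-blind word-by-word "translator": ambiguous words are resolved by
# simply taking the first option, because nothing tells the system what the
# pronoun refers to. Dictionary invented for illustration.
word_to_english = {
    "Es": "It",
    "macht": "makes",
    "ihm": ["it", "him"],   # ambiguous: needs context to choose
    "viel": "a lot of",
    "Spaß": "fun",
}

def naive_translate(sentence):
    words = []
    for w in sentence.rstrip(".").split():
        choice = word_to_english[w]
        # a context-blind system just takes the first option it knows
        words.append(choice[0] if isinstance(choice, list) else choice)
    return " ".join(words)

print(naive_translate("Es macht ihm viel Spaß."))  # "It makes it a lot of fun"
```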

The skill of “knowing what is relevant” is at the core of any intelligent behavior. Engineers have worked on this problem for a long time, but suitable solutions for current machine learning algorithms have yet to be developed. Current data-driven models fail to capture the human “magic” of recognizing relevance. This means that, unfortunately, personal agents like the one depicted in the movie “Her” will probably remain science fiction for a long time to come.

Maybe, in order to create a machine that can solve the frame problem, we shouldn’t just trust in Moore’s Law. Maybe we have to go back to Douglas Hofstadter’s “Gödel, Escher, Bach” and think about what a formal model of consciousness might look like, instead of hoping that an AI will just understand what’s relevant in any situation given enough processing power.

Still, modern AIs are pretty advanced. We learned that machine learning algorithms are used these days to develop scientific models (see above). Maybe in some lab, in some part of this world, a machine is currently searching for a model that will lead to its own evolution. What a strange loop that would be…

 

Further Reading

  • Hofstadter, D. R. (1979). Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books.