
Philosophizing about what intelligence is and how we might recreate it has accompanied humanity for a long time and has produced a long list of papers dedicated to the question. Popular culture has also tried to picture what artificially created intelligence might look like. From Maria of “Metropolis”, to Philip K. Dick’s writings, to HAL 9000 from “2001: A Space Odyssey” and all sorts of smart computers in sci-fi, scientists and artists have been trying to wrap their minds around how and in what form artificial intelligence could exist, and whether it will serve humanity’s benefit or bring harm.
As we discussed in our previous article, it will take some time before artificial intelligence becomes as smart as people are. To understand what stands in the way of achieving artificial general intelligence (AGI), one needs to look back at how artificial intelligence developed as a science and what its milestones were. Thus, in this article, SOLVVE is going to give you a brief tour of the history of artificial intelligence.
Up to the 1950s: the beginnings of cybernetics
Defining intelligence is difficult, and there is still no universally agreed definition; these discussions are rooted deep in ancient philosophy. Scientists moved from abstract theories to concrete work in the 1930s and 1940s, when it was shown that the human brain can be described as a neural network operating on so-called all-or-nothing pulses, and researchers began to play with the idea of building artificial electronic brains.
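To make the “all-or-nothing” idea concrete, here is a minimal Python sketch of a unit that fires only when the weighted sum of its inputs reaches a threshold, in the spirit of early neuron models; the weights and threshold below are invented purely for illustration.

```python
# A minimal sketch of an "all-or-nothing" threshold unit: it either fires
# (outputs 1) or stays silent (outputs 0), with nothing in between.
# Weights and threshold are illustrative, not taken from any real model.

def threshold_unit(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0  # all-or-nothing response

# A unit wired to behave like a logical AND of two binary inputs.
print(threshold_unit([1, 1], [1, 1], threshold=2))  # -> 1 (fires)
print(threshold_unit([1, 0], [1, 1], threshold=2))  # -> 0 (stays silent)
```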
Early approaches to artificial intelligence in these decades assumed that thinking could be represented as the mechanical manipulation of symbols. This very idea gave the world the first programmable computers in the 1940s, machines that could carry out a limited form of mathematical reasoning by manipulating numerical values.
However, further research ran into two mundane obstacles. Firstly, using computers was extremely expensive, as much as USD 200,000 a month, according to Harvard Business Review, so simply experimenting around was impossible without significant funding. Secondly, computers could execute commands but could not store them, meaning they could not remember what they had done or why, and therefore could not learn.

The 1950s: the first breakthrough
This decade saw a major shift in how computers operated, as John von Neumann and Alan Turing moved from decimal logic (values from 0 to 9) to binary logic (chains of 0s and 1s representing values). This approach proved universal and still underlies all computer architectures today.
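As a quick, purely illustrative example of the “chains of 0s and 1s” idea, here is the same value written in decimal and in binary using a couple of lines of Python:

```python
# The same quantity in two notations: decimal and binary.
n = 42
print(bin(n))             # '0b101010' -> a chain of 0s and 1s
print(int("101010", 2))   # 42        -> back from binary to decimal
```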
It was also the decade when Turing published his paper “Computing Machinery and Intelligence”, suggesting that, given enough information, machines could make decisions just as humans do. By this point, computers could already store a limited amount of data, many scientists became fascinated with the idea, and general interest in the topic began to grow.
In 1955, a program called Logic Theorist, created by Allen Newell, Herbert Simon, and J.C. Shaw, managed to prove 38 of the first 52 theorems from “Principia Mathematica” and even found more elegant proofs in several cases.
These results were presented in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence, organized by John McCarthy and Marvin Minsky. Besides being a milestone that spurred the development of the technology, the event gave the field its name: artificial intelligence.
Although today many regard the Dartmouth conference as the birthplace of contemporary artificial intelligence as we know it, the organizers felt it fell short of their expectations because of poor attendance and the participants’ inability to agree on research methods. Nevertheless, this event shaped everything that would happen over the next couple of decades.
The 1960s – mid-1970s: the golden age of artificial intelligence
From the late 1950s until the mid-1970s, AI saw its best days. On the one hand, technical capabilities grew and computers could store more and more data. On the other hand, researchers gained enough experience with machine learning algorithms to choose suitable methods for different kinds of problems. Success stories followed one after another, and scientists voiced bold predictions that artificial general intelligence would be achieved within a decade or two.
A lot was done in the field of natural language processing. Joseph Weizenbaum presented ELIZA, a program that could hold a conversation and could reasonably be called an early chatbot. It showed that working with natural language was possible. Many institutions, as well as the government, became interested in funding such research. The popularity of artificial intelligence grew, and a lot of expectations were attached to the next breakthroughs in the field.
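To give a flavor of how such early conversational programs worked, here is a minimal, hypothetical sketch of ELIZA-style keyword matching in Python. The real ELIZA relied on far richer scripts and pronoun transformations, so treat this only as an illustration of the rule-based idea; the patterns and responses are invented.

```python
# A toy sketch of ELIZA-style conversation: scan the input for keyword
# patterns and fill a canned response template with the matched text.
import re

RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bmy (\w+)\b", "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default prompt when no rule matches

print(respond("I feel tired of debugging"))  # Why do you feel tired of debugging?
```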
Reasoning as search also evolved: systems could mimic the reasoning process of an expert by searching through relevant knowledge. In 1965, Stanford presented DENDRAL, a system for problems in molecular chemistry; for example, it could identify unknown molecules using logic and reasoning close to that of chemists themselves. It was followed in 1972 by MYCIN, a system that helped diagnose blood infections and suggested appropriate medication much as a doctor would.
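To illustrate the general expert-system idea rather than MYCIN’s actual knowledge base, here is a toy Python sketch that encodes a few invented if-then rules and applies them to observed facts by forward chaining; both the rules and the facts are made up for the example.

```python
# A toy expert system: domain knowledge as if-then rules, applied by
# forward chaining until no new conclusions can be derived.
# The rules are invented for illustration and are not MYCIN's.

RULES = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis", "positive_culture"}, "recommend_antibiotics"),
]

def forward_chain(facts):
    """Repeatedly fire rules whose conditions are all satisfied."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"fever", "stiff_neck", "positive_culture"}))
# -> includes 'suspect_meningitis' and 'recommend_antibiotics'
```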

The mid-1970s – 1980s: the AI winter
However amazing these achievements may sound, it became obvious that fulfilling the promise of creating an AI within a decade or two was impossible. Firstly, the lack of computational power was hard to ignore. Secondly, less and less funding was flowing into AI projects.
At that time, scientists were doing undirected research, freely experimenting with ideas to see what could be done with AI. Unfortunately, open-ended experiments without concrete results did not please investors, and over time many of them withdrew their financial support.
Year after year, investors grew more and more frustrated with the lack of new breakthroughs, and by the end of the 1980s securing funding for artificial intelligence projects had become almost impossible. Anyone working in the field avoided the very term when applying for funding.
Nevertheless, the lack of funding never stopped scientists from pursuing their ideas. In the late 1970s, researchers explored commonsense reasoning, something that comes easily to humans but proved to be a huge roadblock for further AI development. A key realization was that intelligent behavior requires very detailed knowledge of a specific domain, an insight that gave birth to the expert systems of the 1980s and revived funding.
Discussing the new rise of artificial intelligence that we are still witnessing today would take about as much time as you have already spent reading this article, so we will continue our historical journey next week. In the meantime, if you have any questions or ideas about applying artificial intelligence in your projects, you can read some of our case studies or contact us directly. Let us make it happen!