
A brief history of AI

  • Writer: Asad Naqvi
  • Apr 7, 2019
  • 4 min read

Updated: Jun 10, 2019

If you work in technology, it is rare to go long without hearing someone bring up AI. The origins of artificial intelligence can be traced back to the first half of the 20th century. By 1950, Alan Turing had already argued for the mathematical possibility of simulated intelligence. The logical framework of his paper, "Computing Machinery and Intelligence", was built around the premise that machines, just like humans, can decipher information and apply logical reasoning to reach a conclusion.


However, before 1950 computers lacked a key prerequisite for intelligence: they could not store commands, only execute them. Computing was also extremely expensive, which made funding research and development a risky proposition.


The DSRPAI conference of 1956


By 1956, Allen Newell, Cliff Shaw, and Herbert Simon had developed the Logic Theorist, the first proof-of-concept program designed to simulate the logical problem-solving skills of a human. The program was based on "symbolic reasoning", which uses physical patterns (symbols) and combines them into structures (expressions). The expressions can then be manipulated (using processes) to generate new expressions. It was first presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) conference in 1956. The conference saw a number of high-profile researchers and scientists acknowledge the sound logic behind machine intelligence. However, the attendees failed to agree on a standard set of methods for the field. Since then, the growth of AI has been a rollercoaster, with a couple of dry periods known as "AI winters".
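
To give a flavour of the idea, here is a minimal sketch in Python (purely illustrative, not the Logic Theorist itself): symbols are atoms, expressions are structures built from them, and a "process" rewrites existing expressions into new ones, in this case by removing double negations.

# Symbols are atoms; expressions are nested tuples built from symbols.
# A "process" manipulates expressions to produce new expressions; here the
# process rewrites ('not', ('not', X)) into X.
def simplify(expr):
    if isinstance(expr, tuple):
        if expr[0] == "not" and isinstance(expr[1], tuple) and expr[1][0] == "not":
            return simplify(expr[1][1])
        return tuple(simplify(part) for part in expr)
    return expr

expression = ("and", ("not", ("not", "raining")), "cold")
print(simplify(expression))  # ('and', 'raining', 'cold')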


The first boom


From 1956 to 1974, AI saw rapid growth, catalyzed by the falling cost of computational power. Moreover, the high-profile attendees of the DSRPAI conference convinced the US government to pump more money into AI research through agencies such as the Defense Advanced Research Projects Agency (DARPA). In the midst of the Cold War, the US government was particularly interested in the automatic, instant translation of Russian documents and scientific reports, and it aggressively funded efforts to build tools with such capabilities.


However, researchers had underestimated the problem of "word-sense disambiguation" (WSD). This is a still-open problem in linguistics concerned with identifying which sense of a word is being used in a sentence. In order to translate a sentence, a machine needed to have some idea of what the sentence was about. A famous example is "the spirit is willing but the flesh is weak": translated to Russian and back, it reportedly became "the vodka is good but the meat is rotten." Similarly, "out of sight, out of mind" became "blind idiot".
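
Modern NLP toolkits expose simple baselines for this problem. As an illustration (assuming NLTK with its WordNet data is installed; this is only a sketch, not what 1960s translation systems used), the Lesk algorithm picks a word sense by its overlap with the surrounding context:

# Word-sense disambiguation with NLTK's Lesk baseline.
# Setup assumption: pip install nltk, then download the WordNet data once.
import nltk
from nltk.wsd import lesk

nltk.download("wordnet", quiet=True)

sentence = "the spirit is willing but the flesh is weak".split()

# Which sense of "spirit" is meant here: soul, mood, or distilled liquor?
sense = lesk(sentence, "spirit")
if sense is not None:
    print(sense.name(), "-", sense.definition())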


Additionally, the earliest work in AI was based on circuits of connected artificial neurons as the way to simulate intelligence. However, following the success of programs like the Logic Theorist and the General Problem Solver in the late 1950s, researchers began to treat symbolic reasoning as the essence of building intelligent systems.


The winter of 1974 - 1980


While the 1960s saw near-unlimited funding of AI research, the Mansfield Amendment of 1969 was a major setback for AI research funding. Under the amendment, the military could only fund research that had a direct bearing on a specific military function, and AI research proposals were subsequently held to a much higher standard of scrutiny.


Moreover, a 1973 report by Professor Sir James Lighthill (a British applied mathematician), commissioned by the UK's Science Research Council, argued that AI's most successful algorithms only solved "toy" versions of bigger real-world problems. The poor results of the Speech Understanding Research (SUR) program, funded by DARPA in the hope of developing a system that could respond to voice commands from pilots, also reinforced the conclusions of the Lighthill report.




AI is back


The early 1980s saw re-ignited growth in AI, powered by the introduction of "expert systems": programs that emulate the decision-making of a human expert by applying rules elicited from domain experts. AI also received a boost from the "Fifth Generation Computer Systems" project, a program launched by the Japanese government to create computers geared towards logic programming and artificial intelligence.
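
As a rough sketch of the idea (the rules below are made up for illustration and do not come from any real 1980s system), an expert system pairs a knowledge base of if-then rules with a simple inference engine that fires rules until no new conclusions follow:

# Toy forward-chaining inference engine. Each rule maps a set of required
# facts to a conclusion; the engine keeps firing rules until nothing new is added.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "shortness_of_breath"}, "see_doctor"),
]

def infer(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "shortness_of_breath"}, RULES))
# -> includes 'flu_suspected' and 'see_doctor'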


Expert systems were a big hit and were widely adopted by enterprises around the world. By 1985, corporations were spending over a billion dollars developing in-house expert systems, and an entire industry was born to support these operations. Among the most successful companies of the era were the "Lisp machine" makers, such as Symbolics, which sold specialized hardware optimized to run Lisp, the preferred programming language for AI.


The second AI winter


However, in 1987 the market for specialized AI hardware collapsed. Workstations from companies like Sun Microsystems offered a powerful alternative to Lisp machines, and companies like Lucid offered a Lisp environment for this new class of workstations.


By the early 1990s, the earlier successful expert systems, such as XCON, had proved too expensive to maintain. They were difficult to update, unable to learn, and failed to overcome predicaments such as the "qualification problem" (the impossibility of listing all the preconditions required for a real-world action to have its intended effect).


The present age


In the new century, many impediments to the growth of AI, such as limited memory and computational speed, have largely fallen away. But for how long? Will AI innovation always hit an upper cap and then wait for Moore's law to catch up? We have seen a similar pattern over the last 50 years.


Or could AI follow its own variation of Moore's law, allowing us to keep increasing the performance of our systems while keeping price and computational power constant? Other innovations such as big data, IoT, and blockchain might also help us unlock new, disruptive applications of AI.


Moreover, the idea of the "singularity" has drawn warnings from some of the great minds of the present age, such as the late Stephen Hawking and Elon Musk. It also remains to be seen how ethics and government policy around AI will be defined in the next few years.


For me, AI is a buzzword that companies will need to adopt in their operations and services. But the idea of emotionally and intellectually intelligent robots living among us is still a few decades away (if it arrives at all).





