AI, the Great Promise

The official authorship of the ideas that laid the first foundations for the creation of an intelligent computer can be attributed to Alan Turing. These ideas were presented in his article Computing Machinery and Intelligence, published in the journal Mind in 1950, which introduced the famous thought experiment for measuring the intelligence of a machine that now bears its author's name: the Turing Test.

However, the real origin of the central ideas of Artificial Intelligence (AI) is even older and more diffuse. The first design of an artificial neuron is attributed to McCulloch and Pitts in 1943 and is considered the first work done in this field. But it was not, in fact, until 1956 that the term Artificial Intelligence was formally coined, at a scientific conference at Dartmouth College. Thirty years then had to pass (until 1987) before there was a formal definition of the attributes a computer must have to give rise to so-called Intelligent Agents, as defined by the A.L.I.C.E. project (Artificial Linguistic Internet Computer Entity). Nine years later, in 1996, IBM's famous Deep Blue became the first machine to defeat a reigning world chess champion, beating Garry Kasparov in a game of a match that has gone down in history. And in spite of everything, it took almost another twenty years, more than sixty years after its definition, for a machine to be credited with passing the test proposed by Turing: the chatbot Eugene Goostman, which in 2014 was given the merit of deceiving its interviewers by posing as a 13-year-old boy.

AI is now the great promise of the moment. Advances in recent years (e.g., DeepMind's AlphaGo) have demonstrated to the general public the real applicability of AI and have awakened the interest of industry, encouraging its adoption at scale.

Today's AI makes it possible to perform tasks that are extremely complicated for traditional systems, fully automating end-to-end processes that were unthinkable a few years ago. Its presence is felt in facets of social activity that were once considered exclusively human, awakening the fascination of outsiders. Little by little, human decision-making is being delegated to machines, while the lack of precedent for their use makes it difficult to assess the suitability of this technology in terms of responsibility and ethics. Some examples are:

  • The automobile industry, where a computer system makes decisions and drives autonomously.
  • Complex business systems, in which enormous volumes of data are analyzed to extract business conclusions that will serve to underpin the executive decisions made by the company.
  • The high-frequency trading systems that operate on the stock exchange, making decisions in thousandths of a second and autonomously executing movements of enormous amounts of capital, directly impacting share values.
  • Tools such as COMPAS, designed to predict the recidivism of defendants in criminal cases, which has, de facto, been used by judges in the United States to inform sentencing decisions.

Such is the impact of AI that the great European powers are beginning to understand that this technology must have its own regulation. For example, the Italian Agency for Digitization (AgID) recently created a working group of experts, a pioneering initiative worldwide, with the aim of advising the Italian Government on how to adopt this technology.

A justified wait

Taking a step back, it is easy to see that the evolution of AI has been punctuated by long technological winters. Given that the formal foundations of AI were laid around 1960, it is natural to ask why it is only now that the technology is really starting to deliver. Several factors explain this:

  • AI research needed a clear framework to manage public expectations and channel efforts and investments. From this framework came the definitions of weak AI, strong AI and expert or generalist systems that provide a clear picture of the scope and applicability of this technology.
  • AI depends enormously on data, and only now is an orderly supply of quality data possible, thanks to the advance of specialized technologies developed over the last decade.
  • The proposed algorithms have always been limited by the capabilities of the hardware they run on, and only now have specific improvements and adaptations of that hardware allowed the algorithms to demonstrate to the general public that they can really deliver on their promises.

Given the close link between hardware and AI, it is not surprising that a race has begun to develop specialized hardware. Examples are Google's TPUs (Tensor Processing Units), the NPU (Neural Processing Unit) in Huawei's HiSilicon Kirin 970, and the NVIDIA Deep Learning Accelerator (NVDLA).

All this hardware can perform the large volumes of calculations orchestrated by the main machine learning frameworks, greatly reducing computational cost, energy consumption, error rate and time to solution compared with conventional hardware.
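To make that hardware-software link concrete, the sketch below (not part of the original article; it uses JAX purely as an illustrative framework) shows how a modern machine learning library detects the available accelerator and compiles a typical dense-layer computation for it, so the same code runs unchanged on CPU, GPU or TPU:

    import jax
    import jax.numpy as jnp

    # Show which backends are available; on a machine with a GPU or TPU,
    # the compiled computation below runs there instead of on the CPU.
    print(jax.devices())

    @jax.jit  # compile the function for whatever accelerator is present
    def dense_layer(x, w, b):
        # A single dense layer: the kind of large matrix multiplication
        # that TPUs and NPUs are built to accelerate.
        return jnp.tanh(x @ w + b)

    key = jax.random.PRNGKey(0)
    x = jax.random.normal(key, (128, 512))   # a batch of 128 inputs
    w = jax.random.normal(key, (512, 256))   # weights of a 512 -> 256 layer
    b = jnp.zeros(256)

    y = dense_layer(x, w, b)
    print(y.shape)  # (128, 256)

The specific framework is not the point: the algorithm is expressed once and the library maps it onto whichever specialized hardware happens to be available.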

DainProject

DAIN is a next-generation artificial intelligence platform: a decentralized, geo-dispersed public computing network governed through blockchain and specialized in addressing and solving artificial intelligence problems.
