
The two souls of Artificial Intelligence: the Smart and the Clever


Recently, Deep Q-network (a system of software algorithms) learned to play 49 classic Atari video games from scratch, relying only on the pixels on the screen and the scoring method. Impressive? Yes, from an engineering point of view. Not so much in terms of getting closer to true artificial intelligence (AI). It takes less “brain” to win at Space Invaders or Breakout than to be a chess champion. So it was only a matter of time before some clever human beings devised a way to make a Turing Machine smart enough to play Atari games proficiently. Why is all this not conclusive evidence that we are getting close to some primordial form of artificial intelligence? The answer requires a clarification.
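
For readers curious about the mechanics, the core of Deep Q-network is the classic Q-learning update, which the system approximates with a deep neural network fed raw screen pixels. Here is a minimal, illustrative tabular sketch in Python; the names and parameter values are hypothetical stand-ins for exposition, not DeepMind’s actual code:

    import random
    from collections import defaultdict

    # Minimal tabular Q-learning sketch: the update rule that DQN
    # approximates with a deep network. All names are illustrative.

    ALPHA = 0.1    # learning rate (assumed value)
    GAMMA = 0.99   # discount factor for future rewards (assumed value)
    EPSILON = 0.1  # exploration rate (assumed value)

    Q = defaultdict(float)  # Q[(state, action)] -> estimated value, defaults to 0.0

    def choose_action(state, actions):
        """Epsilon-greedy: mostly exploit the best known action, sometimes explore."""
        if random.random() < EPSILON:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state, actions):
        """One Q-learning step: nudge Q toward reward + discounted best future value."""
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

DQN’s contribution was to replace this lookup table with a convolutional network reading raw pixels, stabilised by engineering devices such as experience replay; the underlying learning rule is essentially the one sketched above.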

There are two kinds of AI. One is a branch of engineering and seeks to reproduce the outcome of human (indeed animal) intelligent behaviour by non-biological means.

Deep Q-network belongs to this kind. Then there is another kind of AI, which is more a branch of cognitive science: it seeks to produce the non-biological equivalent of our intelligence. This is currently science fiction. As a branch of engineering interested in reproducing intelligent behaviour, AI has been astoundingly successful. We increasingly rely on smart technologies (AI-related applications) to perform tasks that would be simply impossible for unaided or unaugmented human intelligence.

For smart technologies, the sky is the limit, and Deep Q-network has just eliminated another area where humans are better than machines.

However, as a branch of cognitive science interested in producing intelligence, AI has been a dismal disappointment. Productive AI does not merely underperform with respect to human intelligence; it has not yet joined the competition.


The fact that Watson – IBM’s system capable of answering questions asked in natural language – can win against its human opponents at Jeopardy! only shows that artefacts can be smart without being clever. It says more about the human engineers, their amazing skills and expertise, and the game itself than about biological intelligence of any kind, that of a rat included. John McCarthy, who coined the expression “artificial intelligence” and was a true believer in its possibility, made similar remarks about Deep Blue, the IBM chess computer that defeated world champion Garry Kasparov in 1997.

The two souls of AI, the engineering one (smart technologies) and the cognitive one (clever technologies), have often engaged in fratricidal feuds for intellectual predominance, academic power, and financial resources. That is partly because they both claim a common ancestry and a single intellectual inheritance: a founding event, the Dartmouth Summer Research Conference on Artificial Intelligence in 1956, and a founding father, Turing, with his machine and its computational limits, and then his famous test. It hardly helps that the same simulation might be used to check both whether the simulated source (i.e., your intelligence) has been produced, and whether the target’s behaviour or performance (i.e., what you achieve thanks to your intelligence) has been reproduced or even surpassed.

The misalignment of their goals and results has caused endless and mostly pointless diatribes.

Defenders of AI point to the strong results of reproductive, engineering AI, whereas detractors of AI point to the abysmal results of productive, cognitive AI. Many of the current speculations on the so-called singularity issue have their roots in such confusion. Sometimes I cannot help suspecting that this is done on purpose.

In order to escape this dichotomy, one needs to realise that AI cannot be reduced to a “science of nature”, or to a “science of culture”, because it is a “science of the artificial”, to borrow Herbert Simon’s phrase. As such, AI pursues neither a descriptive nor a prescriptive approach to the world: it investigates the constraining conditions that make it possible to build artefacts, embed them in the world, and have them interact with it successfully. In other words, it inscribes the world, for such artefacts are new logico-mathematical pieces of code, that is, new texts written in Galileo’s mathematical book of nature.

Until recently, the widespread impression was that this process of adding to the mathematical book of nature (inscription) required the feasibility of productive, cognitive AI. After all, developing even a rudimentary form of non-biological intelligence may seem not only the best but perhaps the only way to implement technologies sufficiently adaptive and flexible to deal effectively with a complex, ever-changing, and often unpredictable, when not downright unfriendly, environment. The impression is not incorrect, but it is distracting: while we were unsuccessfully pursuing the inscription of productive AI into the world, we were actually modifying (re-ontologising) the world to fit reproductive, engineering AI.

The world is becoming an infosphere increasingly well adapted to AI’s bounded capacities.

In robotics, an envelope is the three-dimensional space that defines the boundaries a robot can reach. We have been enveloping the world for decades without fully realising it. Enveloping used to be either a stand-alone phenomenon (you buy the robot with the required envelope, like a dishwasher or a washing machine) or implemented within the walls of industrial buildings, carefully tailored around their artificial inhabitants. Nowadays, enveloping the environment into an AI-friendly infosphere has started to pervade every aspect of reality and is visible everywhere, on a daily basis. If drones or driverless vehicles can move around with decreasing trouble, it is not because productive AI has finally arrived, but because the “around” they need to negotiate has become increasingly suitable to reproductive AI and its limited capacities.


Enveloping is a trend that is robust, cumulative, and progressively self-refining. It has nothing to do with some sci-fi singularity, for it is not based on speculations, unrealistic as far as our current and foreseeable understanding of AI and computing is concerned, about some super AI taking over the world in the near future. But it is a process that raises the risk that our technologies might shape our physical and conceptual environments and constrain us to adjust to them, because that is the best, or sometimes the only, way to make things work.

By becoming more critically aware of the re-ontologising power of reproductive AI and smart applications, we might be able to avoid the worst forms of distortion (rejection), or at least consciously tolerate them (acceptance), especially when doing so does not matter or is only a temporary solution while waiting for a better design. In the latter case, being able to imagine what the future will be like, and what adaptive demands technologies will place on their human users, may help us devise technological solutions that lower their anthropological costs. In short, human intelligent design (pun intended) should play a major role in shaping the future of our interactions with forthcoming smart artefacts and the environments we share with them.

After all, it is a sign of intelligence to make stupidity work for you.

Originally published on chefuturo.it
