Would you eat artificial meat? How about filling your home with artificial plants? Most of us would do neither. When we hear the word artificial we think: unnatural, contrived, false or hollow. But with the advent of Artificial Intelligence (AI), the word could just as well signal a spark of technological originality. AI as a concept is far from fresh. Star Trek’s USS Enterprise was furnished with the LCARS computer system, which could effortlessly converse with crewmates. Going further back, the 1927 film Metropolis arguably showed the first screen iteration of a self-determined machine. We could go further back still, to the ancient Greeks, who imagined mechanical men such as Talos that mimicked our behaviour.
AI today is perhaps none of those visions of the future, and perhaps it never will be. The difference now is that companies are applying AI to far more mundane tasks (at least for the moment).
Machines and medicine
A discussion of AI would not be complete without the incremental advancements in computing and medicine that made it possible.
The buzz around AI is partly down to how computers have become fast enough, powerful enough and – more importantly – cheap enough to make machine learning far more meaningful. The development stems partly from Moore’s Law: the observation that the number of transistors in a dense integrated circuit doubles approximately every two years. So in 20 years, processing power is not 10 times greater but roughly 1,000 times greater. Researchers have used this projection to justify R&D budgets and to ensure the estimation lives on. There have been times in the last 30 years when many saw a breakdown in Moore’s Law, but scientific discoveries have always managed to keep the improvements rolling.
Similarly, Kryder’s Law, which offers a projection for hard drive data density, predicts that areal density will double every thirteen months. The implication of Kryder’s Law is that as areal density improves, storage will become cheaper.
Combining the exponential developments in processing power and data density, modern technology has advanced by many orders of magnitude.
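As a rough back-of-the-envelope sketch (using the idealised doubling periods quoted above, not measured industry data), the compounding effect of the two laws can be illustrated in a few lines of Python:

```python
# Rough sketch of the compounding implied by Moore's Law and Kryder's Law.
# The doubling periods below are the idealised figures quoted above,
# not measured industry data.

MOORE_DOUBLING_YEARS = 2.0       # transistor count doubles roughly every 2 years
KRYDER_DOUBLING_YEARS = 13 / 12  # areal density doubles roughly every 13 months

def growth_factor(years: float, doubling_period: float) -> float:
    """How many times larger a quantity becomes after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

for years in (10, 20):
    print(f"After {years} years:")
    print(f"  transistor count  x{growth_factor(years, MOORE_DOUBLING_YEARS):,.0f}")
    print(f"  storage density   x{growth_factor(years, KRYDER_DOUBLING_YEARS):,.0f}")
```

Twenty years of Moore’s Law alone works out to 2 to the power of 10, or roughly a thousand-fold increase – which is where the figure above comes from.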
Medicine has delivered important breakthroughs that progress our notions of intelligence and of how our minds actually work. It is true that we know more about black holes than about our own minds, but in terms of enlarging the field of neurology we have come a long way. For instance, Dr Norman Doidge has researched extensively how the brain is malleable and plastic in its physiology. We have even identified a gene involved in synaptic plasticity – neuronal calcium sensor-1 (NCS-1) – making it ever more possible to describe human cognitive behaviour and ultimately rebuild those traits.
What is machine learning?
The breakthrough in how to get machines to think was realised by Arthur Samuel in 1959. Rather than giving humans the slow and repetitive task of teaching machines to think, it was much faster – given their computational power – to have machines learn for themselves. He realised it would be far more efficient to code machines to learn the way people do than to pre-program layer upon layer of data, facts, concepts and judgements. The hope of one day switching a full AI machine on (or is that waking it up?) and having it absorb its environment and react to it in real time is ever nearer.
Now that we know what to do, how do we do it? The answer from computer science was the development of neural networks. Essentially, a network classifies the objects it is given, using algorithms to assess the elements it detects. The output is then retained, or sometimes fed back into the decision-making process, to influence the next set of objects to analyse. This programmed feedback loop enables “learning” through a “memory”, while new experiences offer the chance to verify its decisions. The innate advantages of computers mean this can happen in milliseconds, and the accumulated “knowledge” quickly accrues.
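As a toy illustration of that classify-then-feed-back loop (a minimal sketch only, not any particular production system – the dataset, learning rate and boundary are invented for the example), here is a single artificial neuron in Python that guesses a label, compares the guess with the correct answer, and feeds the error back into its “memory” of weights:

```python
# Minimal sketch of the classify -> feedback -> adjust loop described above.
# A single artificial neuron learns to separate two classes of 2-D points.

import random

weights = [0.0, 0.0]   # the network's "memory"
bias = 0.0
LEARNING_RATE = 0.1

def classify(point):
    """Return 1 or 0 depending on which side of the learned boundary the point falls."""
    activation = weights[0] * point[0] + weights[1] * point[1] + bias
    return 1 if activation > 0 else 0

# Tiny labelled dataset: points above the line y = x are class 1, below are class 0.
data = [((x, y), 1 if y > x else 0)
        for x, y in [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]]

# The feedback loop: each wrong guess nudges the weights toward the right answer.
for epoch in range(20):
    for point, label in data:
        error = label - classify(point)          # feedback signal
        weights[0] += LEARNING_RATE * error * point[0]
        weights[1] += LEARNING_RATE * error * point[1]
        bias += LEARNING_RATE * error

print("learned weights:", weights, "bias:", bias)
print("new point (0.2, 0.9) classified as:", classify((0.2, 0.9)))
```

After a few passes over the data, the stored weights encode what the neuron has “learned”, and new points can be classified without any further human programming.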
The other great technological emergence – unavailable to the pioneers of AI – is the internet. The largest human-made structure on Earth is, in a cultural sense at least, an organic repository of human interaction, emotion and history. Every possible conversation has probably been codified on the internet in some way, and images, video and sound provide massive quantities of stimuli for AI to ponder. Combining powerful machine learning with the resources freely available online provides fertile ground for future AI testing.
What is AI? What can it do?
Little distinction is usually drawn between machine learning and AI; however, AI itself can often be separated into three divisions – applied, strong and cognitive. Applied AI is the most common in our landscape and is usually tasked with a single purpose.
Applied AI – advanced information processing – mostly appears in commercial “smart” products that offer a speed or technical advantage over people. Applications such as stock trading – used in high-frequency trading – and medical diagnosis – like that available from IBM – have been major successes. For consumers, encounters will increasingly take the form of self-driving cars and vehicles (like buses or trams) navigating busy roads across large, sprawling cities.
Right now, “smart” voice-activated assistants such as Alexa, Siri and Google Assistant are our most immediate point of contact.
Strong AI – perhaps the hardest to achieve – is teaching machines to develop an intellectual capability that is indistinguishable from a person’s. John Searle coined the phrase “strong AI” in a now-famous 1980 paper, “Minds, Brains, and Programs”, to describe the claim that an appropriately programmed computer would not merely simulate a mind but actually have one – a claim he then set out to challenge with his “Chinese room” argument, as follows:
Imagine that a person who knows nothing of the Chinese language is sitting alone in a room. In that room are several boxes containing cards on which Chinese characters of varying complexity are printed, as well as a manual that matches strings of Chinese characters with strings that constitute appropriate responses. On one side of the room is a slot through which speakers of Chinese may insert questions or other messages in Chinese, and on the other is a slot through which the person in the room may issue replies. The person in the room, using the manual, acts as a kind of computer program, transforming one string of symbols introduced as “input” into another string of symbols issued as “output.” Searle claims that even if the person in the room is a good processor of messages so that his responses always make perfect sense to Chinese speakers, he still does not understand the meanings of the characters he is manipulating. Thus, contrary to strong AI, real understanding cannot be a matter of mere symbol manipulation. Like the person in the room, computers simulate intelligence but do not exhibit it.
The example is critical because it casts the room’s occupant as something akin to a CPU, which merely processes data without understanding the instructions. The input and output of Chinese characters are predetermined and separate from the occupant, so what takes place is symbol manipulation, not interpretation. Even if the occupant were to memorise the entire Chinese writing system, they could still operate the room without necessarily understanding what those symbols mean or how they are used independently of the surrounding environment.
A modern-day example of this is Google DeepMind’s AlphaGo software, which learned to play the board game Go. Originating in ancient China several thousand years ago, Go has simple rules yet vastly more ways for a game to unfold than chess: chess’s game tree is estimated at around 10 to the power of 120 possible games, while Go’s runs to roughly 10 to the power of 360. AlphaGo came to prominence in a 2016 match against the South Korean Go master Lee Sedol, whom it defeated four games to one. A later version, AlphaGo Zero, went further still, teaching itself the game from scratch – without any human game data – and surpassing every earlier version after around 40 days of self-play.
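To give a sense of that scale, the gulf between the two game trees can be computed with a rough back-of-the-envelope estimate. The branching factors and game lengths below are the commonly quoted averages (about 35 moves per position over roughly 80 moves for chess, about 250 over roughly 150 for Go), used purely for illustration:

```python
# Back-of-the-envelope comparison of chess and Go game-tree sizes,
# using commonly quoted average branching factors and game lengths.
from math import log10

games = {
    "chess": {"branching_factor": 35, "typical_moves": 80},
    "go":    {"branching_factor": 250, "typical_moves": 150},
}

for name, g in games.items():
    # Possible game continuations grow roughly as branching_factor ** typical_moves.
    exponent = g["typical_moves"] * log10(g["branching_factor"])
    print(f"{name}: roughly 10^{exponent:.0f} possible games")
```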
Lastly, cognitive AI seeks to harness computer intelligence to test theories of how our minds function. If researchers could reproduce a brain ex situ, they could run a variety of tests and simulations to see how it reacts – a tool that would be immensely valuable to neuroscience. It is also this strand of AI that brings to life the much-publicised Turing test. The task Turing laid out is simple: if a machine can engage in conversation (typically as written text) with a human being without arousing any suspicion about its true identity, it can be said to have passed the test. Elegant as it is, the test is perhaps a rather subjective measure of intelligence.
Comment
The future is never easy to predict, despite multi-billion-dollar companies trying and failing miserably. What is certain, for the moment, is that even proponents of AI are warning of a displacement of workers across society. As with every previous technological revolution, that displacement offers both opportunities for society and potential political risks. The coming change will engulf doctors, lawyers, engineers of all types and, dare I say, writers. Future employers and consumers will have the choice of buying a voice-activated brain in a box for the price of an iPhone – an opportunity cost too good to pass up. As we adapt to a new equilibrium, many questions need to be discussed if we are to avoid disaster. Work is a pivotal part of our identity; it shapes our education, our friendships, and sometimes takes us on the occasional overseas adventure. How we define ourselves will change radically. So what will we do with our time? What will become of our shared public spaces? These questions are yet to be answered. Meanwhile, the march of AI goes on silently, ever onwards.