ISSUE A: The Robots Are After Us!

The Robots Are After Us! But Are They Really? 

by Josh Artus

The robots are after us! At least, that is what nihilist tech writers and commentators in the media constantly feed us. But are they really? Will they replace everything we do and render the human race redundant in the history of Planet Earth?

Most AI being explored and used today is based predominantly on artificial neural networks, typically trained through back-propagation. But the neural networks of the human brain are only one aspect of how the brain functions. What we do know to date is that the brain works like an orchestra, every part playing in relation to the others and nearly useless without them. Since we understand only one aspect, out of many, of how our brain works, many groups within neuroscience are sceptical of the notion that the AI we are developing is as flexible as a human brain. To program algorithms of comparable complexity we would first need a better understanding of other cells, such as glial cells, in which we are not yet fully versed. Until we understand more about the biology and mechanics of the brain, then, assuming that computers can replace humans is premature. You can't build a motorbike from scratch when you don't know what all the moving parts are or what they do.
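To make that terminology concrete, here is a minimal sketch of a neural network trained by back-propagation, written in Python with NumPy. It illustrates the technique only, not any system discussed in this article; the toy task (XOR) and every parameter choice are my own.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy task: XOR, a classic problem a single linear layer cannot solve.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Weights and biases for a tiny two-layer network.
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        # Forward pass: compute the network's current predictions.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass (back-propagation): push the error back layer by layer.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Gradient-descent updates.
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2))  # should approach [[0], [1], [1], [0]]

The point of the sketch is scale: even this trivial learned behaviour takes thousands of update steps, which hints at why the brain's vastly richer machinery resists being reduced to code.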

The $1.2bn US BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies), launched under President Barack Obama, ran out of funding in its mission to map the brain. The difficulty is that, much like buying a larger telescope to explore outer space, each advance reveals an entirely new area that changes your view of what already exists. In historical terms, our understanding of the brain is roughly where astronomy was when Galileo argued that the Earth orbits the Sun; far greater understanding came only from centuries of further exploration. Even with the advanced technologies we have today, much more research is required to truly make sense of the brain, let alone to interpret its vast network into code.

However, we are progressing in certain fields. Computational power continues to grow at an exponential rate, and big data is helping solve problems previously inaccessible to us because of their complexity. This is leading to more intelligent computer systems. But is it leading to intelligent robot systems that can replace and mimic human activity? Moravec's paradox is the discovery, by artificial intelligence and robotics researchers, that contrary to traditional assumptions, high-level reasoning requires very little computation, while low-level sensorimotor skills require enormous computational resources.

We therefore find ourselves in a position where the problems we have created in recent centuries (the mathematics of banking, for example) can be solved easily, while the abilities that evolved long before civilisation (empathic communication, for example) are far more complicated to understand and reproduce.

Rather than trying to create an intelligent artificial being, should we not be using the technology we already have to solve specific human problems, where the acuteness of the problem allows the technology to be precise in its delivery?

Pascale Fung, Professor of Electronic and Computer Engineering at the Hong Kong University of Science and Technology, talks of AI and the value of empathy: it is vital in relationships between different cultures and religions, she argues, and it naturally holds the key to the future of human-machine interaction. A number of technology companies are indeed deploying algorithms in machines, within specific human environments, that embody emotion and respond in an empathic, human-like manner. However, many appear not to be solving the problems that actually need solving. Should research and technology be devoted to machines that transcribe and take minutes at meetings, machines programmed to detect tension in the room and tell a joke, or holographic receptionists that tell you which of your colleagues are in and what your diary looks like?

Personally, I'm not quite sure these are problems that need solving; I think we should instead be developing more human-enhancement tools such as Nao.

Nao is a 23-inch robot developed by the French company Aldebaran Robotics (now owned by SoftBank) that has recently been used in schools to assist with teaching children with Autism Spectrum Disorder (ASD), a condition that affects social interaction, communication and behaviour. The company used ‘technology that is built into the robot so that he will track the child’s face…[and whilst] the child keeps eye contact with the robot, the robot will continue with what he is doing, but if the eye contact is lost, the robot will stop and wait’.

‘Robots are simpler’, says Dr Lila Kossyvaki, a research fellow at the Autism Centre for Education and Research. ‘Everything is slowed down. There are no subtle non-verbal behaviours to confuse them. So they focus on what the robot does, or says, and usually these are not at the same time, so it is much easier for them to focus on the social demands.’

Nao can perform body movements, such as the dance routine ‘Heads, Shoulders, Knees and Toes’, and adjusts its speed to help the child follow along easily. It is also programmed to understand that children with ASD often leave long pauses between sentences: it detects that there is a pause and waits patiently for the child to continue. Some individuals with ASD suffer from high anxiety, and being constantly interrupted is an unwelcome feeling likely to increase it. Robots like Nao help children learn by mitigating problems in situations that some of us take for granted.
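Behaviour like this amounts to a simple engagement-contingent control loop: proceed only while the child is engaged, otherwise stop and wait. As a rough Python sketch only (the robot methods below are hypothetical, not Aldebaran's actual API):

    import time

    def run_activity(robot, steps, poll_interval=0.2):
        """Run an activity step by step, pausing whenever engagement is lost.

        `robot` is assumed to expose two hypothetical methods:
          robot.child_is_engaged() -> bool  (e.g. face tracking shows eye
                                             contact, or a pause has ended)
          robot.perform(step)               (executes one piece of the activity)
        Illustrative sketch only; not Aldebaran's real interface.
        """
        for step in steps:
            while not robot.child_is_engaged():
                time.sleep(poll_interval)  # stop and wait, as the quote describes
            robot.perform(step)            # continue while the child is engaged

The value for the child lies precisely in this simplicity: the loop never rushes, never interrupts, and never sends mixed signals.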

Zeno, another two-foot robot, developed in America, can interact with children through nonverbal communication such as body movements and facial expressions, and it has been suggested that it may speed up diagnosis. Its creators programmed three ways for it to interact with children. The first is a scripted mode, in which a certain sequence of motions is pre-programmed. In the second, a control system lets an operator or therapist drive the robot by tele-operation, so that it mirrors the instructor's motions. In the third, the child takes control of the robot.
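Those three modes map naturally onto a simple mode switch in software. Purely as a hypothetical sketch (none of these names come from Zeno's actual control system):

    from enum import Enum, auto

    class Mode(Enum):
        SCRIPTED = auto()       # replay a pre-programmed sequence of motions
        TELEOPERATED = auto()   # mirror the therapist's movements live
        CHILD_CONTROL = auto()  # the child drives the robot directly

    def next_motion(mode, script, therapist_pose, child_command):
        """Choose the robot's next motion according to the active mode.

        Hypothetical sketch; Zeno's real software is not described here.
        """
        if mode is Mode.SCRIPTED:
            return next(script)      # first mode: follow the scripted sequence
        if mode is Mode.TELEOPERATED:
            return therapist_pose    # second mode: mirror the operator
        return child_command         # third mode: the child takes control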

Another example is Milo, a humanoid robot also designed to interact and engage with children with ASD. Because a common characteristic of autism is difficulty in reading and connecting with the emotions of others, the designers gave Milo an expressive face. Interacting with Milo, children are asked to identify the emotion it is showing from multiple choices on an iPad. Milo's eyes are cameras recording feedback, and the child additionally wears a chest monitor that records changes in heart rate. Here we see empathy embodied in technology, helping those at a disadvantage to share in equal measure the simple joys of life.
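The interaction described is essentially a feedback-instrumented quiz loop. A hypothetical sketch of one round (every name here is my own invention, not Milo's real software):

    import random

    EMOTIONS = ["happy", "sad", "angry", "surprised"]

    def emotion_round(robot, tablet, chest_monitor):
        """One round of a Milo-style emotion game (all interfaces hypothetical)."""
        target = random.choice(EMOTIONS)
        robot.show_expression(target)              # expressive face shows an emotion
        answer = tablet.multiple_choice(EMOTIONS)  # child answers on the iPad
        return {
            "correct": answer == target,
            "gaze": robot.eye_cameras.frame(),   # the eyes are cameras recording feedback
            "heart_rate": chest_monitor.read(),  # chest monitor logs heart-rate changes
        }

The returned record is what makes the session useful beyond the game itself: a therapist can review it to see not just whether the child answered correctly, but how engaged and anxious they were while doing so.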

PARO is another ‘friendly’ robot, used to enhance the lives of those with another serious condition: dementia. ‘When I first saw PARO on YouTube I thought it was very twee,’ says Claire Jepson, an occupational therapist at The Grange, an NHS specialist assessment unit for dementia patients. When inanimate, it resembles a child’s toy of the kind sold in many stores, but on closer inspection its 3kg weight gives it the feeling of being very much a living thing, and once it is switched on it is clearly not a toy. Its eyes are surprisingly affecting, and as it moves and responds to your touch there is a musculature apparent in its face that conspires to give it a real ‘living’ feel.

PARO has five kinds of sensor: tactile, light, auditory, temperature and posture, with which it perceives people and its environment. It can recognise light and dark, and senses whether it is being stroked, beaten or held. With its audio sensor it can also recognise the direction of a voice and words such as its name, greetings and praise. PARO learns to behave in the way its user prefers, and to respond to a new name. For example, if you stroke it every time you touch it, PARO will remember the previous action and try to repeat it in order to be stroked. If instead you hit it, PARO remembers the previous action and tries not to repeat it.
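The stroke/hit behaviour described above is, in effect, a simple form of reinforcement: actions followed by stroking become more likely, actions followed by hitting less so. A minimal sketch of the idea (my own illustration, not PARO's actual software):

    import random

    class PreferenceLearner:
        """Toy model of PARO-style feedback learning (illustrative only)."""

        def __init__(self, actions):
            # Every behaviour starts out equally likely.
            self.weights = {action: 1.0 for action in actions}

        def choose(self):
            actions = list(self.weights)
            picks = [self.weights[a] for a in actions]
            return random.choices(actions, weights=picks)[0]

        def feedback(self, action, stroked):
            # Stroking reinforces the last action; hitting suppresses it.
            factor = 1.5 if stroked else 0.5
            self.weights[action] = max(0.05, self.weights[action] * factor)

    paro = PreferenceLearner(["purr", "turn_head", "flap_flippers"])
    action = paro.choose()
    paro.feedback(action, stroked=True)  # that behaviour now becomes more likely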

With so much still to discover about the human brain, attempts to predict human behaviour are limited in their effectiveness. We should be asking ourselves whether prediction is really what we want from leading programmers, machine-learning specialists and computer scientists. Do we need a robot programmed to make us a cup of coffee when it thinks we are tired, or to force a joke in an awkward moment? On that view it becomes merely a dumb human. The examples above showed instead that when empathy is embodied in technology to help understand and enhance our sense of being human, we get more fulfilment. Human-like skills such as empathy, embodied in technology for specific interventions, have the opportunity to assist where humans can't.

Too often the dream of robots leads us to run before we can walk. The more we try to ‘predict’ the human brain in the form of technology, the more advanced and expensive versions of Microsoft's helper paperclip we will get.


The above article was one of many in THECUBE's first issue, which tackled the theme of ‘EMBODIMENT’ through the lens of art, science and technology. We are currently accepting submissions for the upcoming magazine examining ‘TRUTH’. If you are interested in contributing, just email us!