Robots: by land, by sea, and in your bed

Gotta love 'em!


1 Robotics at the RUG
1-1 intro

Building robots and thinking about their future: It is all happening at the RUG. But how far along are we really, in a world where self-driving cars are no longer a thing of the future and self-checkouts at Albert Heijn mean the supermarket needs fewer cashiers?
And where will these developments take us? Will we be marrying robots in the future? What abilities will these robots have? Will robots ever become more intelligent than human beings and take over our jobs? The UK takes you into the world of robotics.
1-2 Jayawardhana

Smart solutions

Self-teaching

Lambert Schomaker is the scientific director of the ALICE institute. There, students of Artificial Intelligence (AI) and Industrial Engineering & Management are working together to develop the ALICE robot, which is supposed to teach itself new skills.

Collaboration

In the robotics lab at Nijenborgh 4, professor Bayu Jayawardhana is working on efficient automation solutions for companies. His goal is to make small robots work together.

Our brain

AI research is closely linked to research into the human brain. Cognitive scientist Fred Keijzer shares his vision of the social consequences of increasing automation.

1-3 Schomaker and Keijzer

Sleeping with robots

Ethics

RUG instructor and PhD student Ronald Hünneman, a philosopher with a background in IT, has a clear vision of the role of robots in our lives. He also explains the ethical ramifications of increasing automation.

Sex

British professor David Levy is the foremost authority in the field of robots and love. The author of the book ‘Love and Sex with Robots’ answers burning questions in his field.

Power

Leon Geerdink teaches Philosophy of Cognitive Sciences. He does not think robots will rule the world any time soon. First, robots would have to learn to think the way human beings do. ‘And we’re not nearly there yet.’

2 Nao the soccer player

He shoots…

Nao. That is the name of the humanoid (a robot that looks like a human being) that the Artificial Intelligence students use to learn the art of programming. At the Bernoulliborg, which houses one of the RUG’s two robotics labs, they are working on programming the robot. Can UK reporter Tim, who is pretty good at soccer, best Nao?

Video: Nao

3 Alice
3-1

Alice: trial and error

Robot Alice should ultimately save money, time, and energy. In the lab in the Bernoulliborg, under the watchful eye of scientific director Lambert Schomaker, Artificial Intelligence students are working on her development. Because Alice is a self-teaching robot, she can correct her own mistakes. But so far, she has been unable to do so without human control.
3-2

These days, assembly lines in large factories are almost always computer-controlled. But that does not mean they make no mistakes, explains Schomaker. ‘People have become so used to relying on computers, so they also think that it’s easy to make systems using large machines reliable. But creating perfection is extremely difficult.’ It takes just one mistake for the whole assembly line to fail. All it takes is a little bit of dirt on a machine or a badly oiled part. That costs money, and not just because production is halted.

‘If a machine makes a mistake, it often leads to a new, better and more expensive machine being built. But mistakes are made all over an assembly line, so we keep building increasingly expensive machines to try and solve small mistakes. That’s not very efficient. Besides, creating the perfect machine is really difficult.’

3-3

That is why professor Schomaker is working on a more efficient solution: a system in which the robots solve their own mistakes. In the MANTIS project, partially funded by the European Commission, he and researchers at other European universities are working on developing a self-teaching robot. This robot could save manufacturers a lot of money. Schomaker: ‘The robot we’re developing can perform maintenance tasks in an assembly line. It can clean parts, lubricate them or top up the oil.’ For now, the robot can only perform small, preventative tasks. ‘We have to be realistic. Large repairs are so complicated that a robot is unable to do them for us’, according to Schomaker.

Robot Alice was built for Schomaker’s research. In the robotics lab in the Bernoulliborg, Schomaker and his students are working on her.

3-4

This is Alice

Right now, Alice is still kind of a clumsy cleaner. Anyone who hires her would have to be extremely patient. She is not yet able to keep an assembly line going. Alice just needs a bit of education – literally.

3-5

Educating robots

Schomaker: ‘We have to show a self-teaching robot how to perform tasks. It’s not magic. We still have to input the information into the robot ourselves. And then we have to tell the robot whether it performed its task correctly. It’s called reinforcement learning. If it’s done a good job, we give it a little pat on the head. Just like with pets, really.’
Reinforcement learning is much more efficient than traditional programming, according to Schomaker. ‘Right now, if we want to make a robot perform a new task, we can. But we’d first have to reprogramme it. With a self-teaching robot, we don’t. All we have to do is feed it the new information.’ That means that a robot that dusts machines can also learn how to top up oil.
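Schomaker's description maps onto the basic reinforcement-learning loop: try an action, get a 'pat on the head' when it worked, and adjust. A minimal illustrative sketch of that idea (this is not ALICE's actual software; the faults, actions, and rewards here are invented for illustration):

```python
import random

# Toy reinforcement learning in the spirit Schomaker describes:
# the robot tries an action and gets a "pat on the head" (reward)
# when the action actually fixed the fault.
faults = ["dirty part", "dry part"]
actions = ["clean", "lubricate"]

# Value table: how promising each action currently looks per fault.
q = {(f, a): 0.0 for f in faults for a in actions}

def reward(fault, action):
    # Human feedback: 1 if the action fixes the fault, else 0.
    fixes = {("dirty part", "clean"), ("dry part", "lubricate")}
    return 1.0 if (fault, action) in fixes else 0.0

random.seed(1)
for _ in range(300):
    fault = random.choice(faults)
    if random.random() < 0.2:                       # sometimes explore...
        action = random.choice(actions)
    else:                                           # ...mostly exploit
        action = max(actions, key=lambda a: q[(fault, a)])
    # Nudge the value estimate toward the received reward.
    q[(fault, action)] += 0.1 * (reward(fault, action) - q[(fault, action)])

for f in faults:
    print(f, "->", max(actions, key=lambda a: q[(f, a)]))
```

After a few hundred trials, the table points each fault to the action that earned the reward, and the same loop would learn a new task (topping up oil, say) from new feedback alone, with no reprogramming.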
Does that mean that we will all have our own Alice in our homes in the near future? Schomaker: ‘As it stands, domestic robots aren’t very good. It would be like having a very clumsy pet.’ But it will not be long before robots take over our housework – although you will have to teach a robot chef to cook before it can make you breakfast in the morning.

4-1

Smitten with a machine

Falling in love with a robot: It happens in shows and films like ‘Westworld’ and ‘Her’. And in reality, it is apparently not that outlandish, either. Indeed, a French woman who built her own robot fell in love with it and would love to marry it tomorrow if she could. How does one fall in love with a robot, and could our feelings ever be returned? And what about sex? Will robots ever be as good as, or even better than, human beings in bed? David Levy, author of the book ‘Love and Sex With Robots’, Ronald Hünneman, instructor and PhD student at Arts, Culture & Media, and Philosophy of Cognitive Sciences instructor Leon Geerdink shed light on the matter.

4-2

Can I fall in love with a robot?

Yes, you can. In fact, some people already have. Will it ever be normal, or will these robosexuals remain a minority? Hünneman: ‘That is really going to depend on the quality of the robots. A robot should be more than just an intelligent being. When it comes to being in love, the body plays a big role as well.’ Levy: ‘Of course we’re going to fall in love with robots! We’re already seeing people having real affection for robot pets, for example. Robot seals are incredibly popular in Japan.’ According to Levy, the age of robosexuals will come sooner than we might think. In a daring prediction, he thinks that the first robot marriages will be performed by 2050. And while it might sound strange, Levy thinks it will be a positive development: ‘Remember that many people are very lonely. These people might be much happier with a robot partner than with no partner at all.’

4-3

Can a robot fall in love with me?

With this question, we get right down to one of the most fundamental issues of artificial intelligence: can a robot have consciousness? There is no unequivocal answer to that question just yet. According to Levy, however, the question of whether a robot can truly fall in love is not even important. ‘A robot will be able to convince you that it’s in love with you. Whether it actually is, is irrelevant, as long as you’re made to feel that your love is real. In the future, many people won’t even notice the difference between robots and human beings.’

4-4

A robot roll in the hay

‘In the future, some people will only want to have sex with robots’, says Hünneman. ‘To what extent that will actually happen depends on the quality of those robots.’ The lifelike sex dolls currently being made are nowhere near being robots. ‘In a few decades, they’ll be able to move and respond like human beings’, predicts Hünneman. Levy thinks we will be unable to distinguish robots from human beings by 2050.
‘I don’t think advanced sex robots will be possible any time soon. It’ll take a really long time before we’ll have created robots with general intelligence: the ability to make choices and learn new skills’, says Geerdink. ‘Besides which, society needs to change quite a bit before something like that will be accepted. People still hide their sex toys even now. They’re very secretive about them. I can’t imagine anyone openly admitting to having sex with robots in 50 years.’
But Hünneman thinks our sexual morals will change drastically over the next few years. ‘This is something that happens constantly. Take the Greeks, for instance: they had sex with young boys. We might be having sex with robots in the future.’

4-5

Why should we make sex robots?

Designing robots can teach us a lot about ourselves. ‘By building robots, we realise how much we have underestimated what it’s like to be human. I think that sexual robots will be very successful because they’ll be able to tell us so much about our own sexuality’, Hünneman says. ‘Besides, sexuality and intimacy are so very complex. If we pull this off, creating artificial intelligence will be a snap.’ According to Hünneman, the complexity lies in the contact between humans and robots. ‘A truly good sex robot should have more than just fixed programs. It should do things that make you want to have sex, take initiative, and try new things. Sex is good with someone you have a relationship with: someone who actually responds and takes their partner’s reactions into account.’
According to cognitive scientist Fred Keijzer, a good sex robot does not need to be able to think and act by itself. ‘The specific parts of sex mentioned by Hünneman already exist in some cases. Google algorithms, for instance, know exactly what you like and adjust their ads accordingly. And robots are learning to read emotions.’

4-6

Sex robots as a solution

Sex robots could help lonely people who have trouble finding a human partner. But they could also offer a solution for sex offenders or paedophiles. ‘Already, many people are using particular tools to suppress their urges.’ According to Hünneman, using robots to help these people would simply be a continuation of existing techniques. ‘Sexual impulses are very strong. The ability to channel them like this would be great. The closer to reality a tool is, the better it can help people.’
The question is whether people would accept treatment options like these. What if someone builds a highly realistic robot that looks like a six-year-old child? ‘That might be a little weird’, Hünneman admits. And then there is the ‘e-cigarette argument’: the possibility that it would make it easier for people to do something for real. But Hünneman does not think that will happen. Because, he says, most paedophiles know that their sexual ethics are not okay.

5 Self-driving cars
5-1

Crash dummy or perfect driver?

Transportation of the future? Flying saucers, the creators of the ’60s cartoon The Jetsons thought. But the future is closer than we thought: self-driving cars no longer surprise us. Well, not exactly self-driving: so far, this autopilot technology is only allowed a supporting role in the Netherlands. The driver of a self-driving car must always have their hands on the wheel and is responsible for the car’s actions. It turns out that even when you are not driving, a lot can happen.

5-2

From flying saucer to hypermodern car

Fred Keijzer, senior lecturer in philosophy and psychology who wrote the book ‘Philosophy of the Future’, did not expect self-driving cars to be here this soon. They operate on the basis of rules and learning systems, supplemented by extra gadgets that execute tasks.
But the future is still hard to predict, he thinks. ‘Some stories from a hundred years ago predicted we’d have flying cars by now.’ And although we are not quite there yet, we have gone further than Keijzer would have guessed. ‘Ten years ago, I said it would be a long time before we’d have self-driving cars.’
According to Keijzer, we base many machines and software on how we ourselves think and work. Self-driving cars, however, do not drive the way people do. People give meaning to everything around them and get distracted. ‘Self-driving cars work like machines, much simpler and much more focused. They were here faster than I’d thought.’
5-3

Bad drivers

Conversely, Hünneman is very enthusiastic about self-driving cars, if only because human beings are such bad drivers. He thinks self-driving cars will largely put an end to traffic jams. ‘And we’ll be able to prevent so many accidents and fatalities. Drunk drivers, people speeding, people on their phones: they’re the biggest killers! While there will be situations where cars make strange decisions, we will see an overall drop in traffic fatalities.’
He finds it strange that a fatal accident caused by a self-driving car would be more high-profile than one caused by a driver who was not paying attention. ‘We can compare it to the introduction of the seatbelt. It saves so many lives, and yet it has also been the reason for people’s deaths. But the balance is ultimately in favour of the seatbelt. We’re just afraid of losing our autonomy.’
5-4

What if…?

That leads to an interesting question about the self-driving car. What kind of choice should the car make when something goes wrong? Hünneman: ‘Imagine you’re heading towards an intersection and your brakes fail. Do you drive yourself into a wall or do you hit an innocent pedestrian? What if there are five pedestrians? That is a choice self-driving cars will have to make, and there are different ethical theories that choice can be based on.’ An ethical committee will have to approve the software for self-driving cars. Whichever choice they make, it will not stop Hünneman from getting into a self-driving car. ‘Those situations barely even happen anyway.’
5-5

Who loses?

In the animation, three pedestrians are crossing the street. The car carries only a driver. The car can be programmed according to three approaches, each with the following consequences:

According to Kantianism, we are not allowed to actively kill in order to prevent other deaths. Not even when that means that everyone will die. There will be four deaths if the car is programmed in accordance with this theory.

In the utilitarian approach, the goal is to kill as few as possible. Because there are three pedestrians on the zebra crossing, the driver is toast in this case.

If the car is programmed according to the selfish approach, the three innocent pedestrians will not survive the accident. The driver is unharmed.

These ethical principles do not just apply to self-driving cars. All robots with the ability to think for themselves will have to be programmed like this.
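Put programmatically, the three approaches boil down to three different decision rules. A hypothetical sketch (the function names and death counts simply encode the article's scenario of one driver and three pedestrians; real automotive software is vastly more involved):

```python
def kantian(driver, pedestrians):
    # Never actively kill to prevent other deaths: the car does not
    # swerve, and in this scenario everyone dies.
    return driver + pedestrians

def utilitarian(driver, pedestrians):
    # Minimise the total number of deaths: sacrifice the smaller group.
    return min(driver, pedestrians)

def selfish(driver, pedestrians):
    # Protect the occupant at all costs: the pedestrians lose.
    return pedestrians

for rule in (kantian, utilitarian, selfish):
    print(rule.__name__, "->", rule(driver=1, pedestrians=3), "deaths")
# kantian -> 4, utilitarian -> 1, selfish -> 3
```

The point of the sketch is that the ethics sit in a single, explicit line of code: whichever rule an ethical committee approves is the one the car will mechanically apply.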

6-1

Robot army

Imagine: you are walking down a dark street when you suddenly see a group of robots heading your way. Menacingly, they march towards you. You are terrified, but then you are jolted awake. You are back in the here and now, and robots marching in formation are a thing of the future.

Professor Bayu Jayawardhana wants to use his research to change that. The Industrial Engineering & Management robotics lab can be found a few hundred yards from the Bernoulliborg, at Nijenborgh 4. Here, Jayawardhana and his students are working on ways to make multiple robots move in formation. But they want to use them for far more peaceful things than robot armies.

6-2

Walking systems

The task Jayawardhana has set himself is not a simple one. ‘Imagine moving a table from one room to another with a group of friends. As you are lifting the table, you have to keep walking at the same distance from your friends. If you don’t, you drop the table. But there’s no need for any verbal communication. You just keep an eye on the others to determine the distance you need to keep.’ For human beings, carrying a table together is a fairly simple task. But robots have a much harder time working together like this.

Coordinating multiple autonomous robots is difficult because of measurement uncertainties. It goes like this: if two robots have to keep a distance of one metre from each other, they do that by measuring how far away they are from one another. But these measurements have small differences. Robot 1 will measure a distance of 0.9 metres. That means that it has to move away from number 2. Robot 2 will measure a distance of 1.0 metres. The moment robot 1 moves away, robot 2 will follow. And then the whole system takes a walk.
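That runaway behaviour is easy to reproduce. A toy one-dimensional simulation (the 0.9/1.0 m numbers come from the article; the gains and control law are invented for illustration, and real controllers are far more sophisticated): robot 1's sensor reads the gap 0.1 m too short, so each robot keeps 'correcting' and the pair walks off together.

```python
x1, x2 = 0.0, 1.0   # positions on a line; true gap starts at 1.0 m
BIAS = -0.1         # robot 1's sensor reads the gap 0.1 m too short
GAIN = 0.5          # how strongly each robot corrects per time step

for _ in range(100):
    gap = x2 - x1
    d1 = gap + BIAS            # robot 1's biased measurement
    d2 = gap                   # robot 2's correct measurement
    x1 += GAIN * (d1 - 1.0)    # gap looks too small -> back away (move left)
    x2 += GAIN * (1.0 - d2)    # robot 2 follows to restore its 1.0 m

print(round(x2 - x1, 2))   # the gap itself settles near 1.05 m...
print(round(x1, 1))        # ...but both robots have drifted metres leftward
```

The two robots quickly agree on a compromise gap, yet because they can never both be satisfied, the formation as a whole keeps creeping in one direction: the system 'takes a walk'.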

6-3

Mission accomplished

Jayawardhana and his students managed to solve this problem. They were able to make several autonomous robots move in a pattern. The robots keep an eye on their positions in relation to each other and do not use GPS or video.

6-4

Multi-purpose robots

Coordinating several robots by way of GPS was already possible. There is also the option of controlling the robots with video taken from above. However, there are many conceivable situations in which video or GPS is not possible or desirable – in a large factory, for example. In a place like that, completely independent robots that work without any help are preferable. Jayawardhana developed the new algorithm in collaboration with industry partners, who stand to save a lot of money with it. Jayawardhana: ‘To move an airplane wing, you could build one large robot that does the work on its own. But it would be much more economical to build four smaller robots that can lift and move the wing together.’ These small robots would not only be cheaper, but also multi-purpose.

6-5

Drones and gravity

But there are more applications, according to Jayawardhana. ‘One of the most amazing is the use of drones for 3D videos. We can send multiple drones to one particular ‘target’, such as a human being. The drones then fly in formation around the person, surrounding him or her.’ The space industry is also interested in techniques like these. In 2034, ESA and NASA hope to send three spacecraft into space to look for gravitational waves. Because gravitational waves are so small, the spacecraft cannot move in relation to each other if they want to be able to detect the waves. Jayawardhana’s technique is needed to make the three spacecraft form an immobile triangle.

7 Robots in power
7-1

Robots in power

It is clear that the future is hard to predict – especially when it comes to robotics. Twenty years ago we were not even dreaming of self-driving cars, but here they are. But there is still a lot of work and research to be done in other fields. What consequences will all these developments have on us? How will robots influence our future society? Will they make our lives easier or will they kill us all?
7-2

Technology and society

The changes that are currently happening in the field of robotics are not necessarily new. Technological developments have always influenced our society, says Keijzer. ‘Our way of thinking is pretty much still based on ancient forms of communication. Just twenty years ago, we had no cell phones or Facebook. That puts a lot of pressure on society and politics.’

Because of new technology, we are faced with unknown situations in daily life. At times like that, a society has to decide how to handle them, according to Keijzer. Otherwise, private organisations would gain control.

7-3

Less work?

Are we soon going to be replaced by robots because of sweeping automation in the job market? While some people fear that jobs will disappear, others are saying that new jobs will also be created. According to Keijzer, that is true. ‘But I don’t know where the idea of this being in balance comes from, because I don’t think it will be.’

‘It’s obviously very difficult to predict what will happen exactly. There are so many unknowns’, says Keijzer. He thinks that we’ll start to find paid jobs less important. ‘It’s definitely a discussion we should be having. We can talk about things such as the idea of basic income. Or we can switch over to the Italian system, where everyone has a job but not everyone works.’

Hünneman also predicts robots taking over our jobs. A rather inviting prospect, he feels. ‘I think it’d be great. We’d have so much more free time. We’re always so focused on work. But we haven’t given enough thought to free time or the meaning and end of life. This will also be a golden age for the entertainment industry, by the way.’

Scared or just curious? The video ‘Humans Need Not Apply’ shows more about the development of robotics and the influence it will have on society.

Fred Keijzer, senior lecturer in philosophy and psychology and author of ‘Philosophy of the Future’: ‘Our way of thinking is pretty much still based on ancient forms of communication.’

7-4

Or other work?

‘Jobs probably won’t disappear all that fast’, says Geerdink, disagreeing with his colleagues. ‘Robotics’ influence on our labour is often overestimated. The social element will always play a role. And a lot of production work that is currently being done in low-wage countries will be even cheaper if it’s automated. That labour will return to the Netherlands and that will also create jobs.’

Obviously, robots will be able to take some things off our hands. ‘Those tasks will be mainly done by expert systems, programs that have the ability to learn and execute one specific task really well.’ According to Geerdink, it will be a while before we have developed a properly functioning general-purpose robot. ‘We could try to integrate multiple systems into one robot, but that is really complicated, it turns out. When a robot has to switch from one task to another, it would first have to shut down one system and start up another. That would take a while, and they’d still be separate systems.’

7-5

Man versus machine

The general-purpose robot is the Holy Grail of artificial intelligence (AI). These robots would possess general intelligence, which means that they would have a lot of general knowledge that they would be able to use in any situation without switching systems. But we are not even remotely there yet, Geerdink thinks. And that is not just because we have not figured out how to do it. ‘The problem also lies in the fact that we simply lack the physical computing power. We’d need quantum computers and we just haven’t come far enough yet.’

This means he is not afraid that the world will be taken over by a machine army or crazed drones. He thinks that humans will retain control over robots in the future. Today’s supercomputers can only beat humans in a few games. Computers have been better at chess for a while now, and humans have recently started losing to computers in the Chinese board game Go. StarCraft players, be warned, though: Google DeepMind, one of the most advanced companies in the field of artificial intelligence, is currently working on mastering that game too.

Leon Geerdink, philosophy of cognitive sciences instructor: ‘The influence of robotics on our labour is often overestimated.’

7-6

What makes us human?

According to Hünneman, much research into AI and robotics – in addition to the practical side – has a larger, philosophical motivation. ‘Our history is full of stories of people trying to create life. You see it everywhere. In the Bible, we were created in God’s image. The story of Frankenstein is another good example.’ There is a development to these stories, though. While it used to be about capturing a life source, we are currently pondering the question of what makes us human.

‘That also leads to the question of what will ultimately separate human beings from robots. That difference will mainly lie in the on/off button. Human beings will always be the rulers. But other than that, I don’t think there’s any difference between alive and not alive. That gives rise to questions such as: can robots make art? Will they eventually have rights? I think they will.’

What life will look like in the distant future remains guesswork, even for experts. That is exactly what makes it so difficult and interesting at the same time, says Keijzer. ‘That is what science fiction is all about. It’s really more about the open question of what the future will look like. Past philosophies of the future show two things very clearly: that it is extremely difficult to predict the future, and that it is difficult to predict how people in any given era will think about the future. They often magnify the present.’

Ronald Hünneman, philosopher: ‘There will be people in the future who only want to have sex with robots.’

8-1

Credits

This is a production by:

Text

Tim Bakker

Simone Harmsen

Leonie Sinnema

Translation

Sarah van Steenderen

Video

Johan Siemonsma

Beppie van der Sluis

Photography

Traci White

Animation and design

René Lapoutre

Voice-overs

Leonie Sinnema

Sjef Weller

mobile versie
Building robots and thinking about their future: It is all happening at the RUG. But how far along are we really, in a world where self-driving cars are no longer a thing of the future and the self-checkouts at the Albert Heijn mean they need fewer cashiers?

This is a simplified version for mobile display. The desktop version is a richly styled article.

And where will these developments take us? Will we be marrying robots in the future? What abilities will these robots have? Will robots ever become more intelligent than human beings and take over our jobs? The UK takes you into the world of robotics.

Smart solutions

Self-teaching

Lambert Schomaker is the scientific director of the ALICE institute. There, students of Artificial Intelligence (AI) and Industrial Engineering & Management are working together to develop the ALICE robot, which is supposed to teach itself new skills.

Collaboration

In the robotics lab at Nijenborgh 4, professor Bayu Jayawardhana is working on efficient automation solutions for companies. His goal is to make small robots work together.

Our brain

AI research is closely linked to research into the human brain. Cognitive scientist Fred Keijzer shares his vision of the social consequences of increasing automation.

Sleeping with robots

Ethics

RUG instructor and PhD student Ronald Hünneman, a philosopher with a background in IT, has a clear vision of the role of robots in our lives. He also explains the ethical ramifications of increasing automation.

Sex

British professor David Levy is the foremost authority in the field of robots and love. The author of the book ‘Love and Sex with Robots’ answers burning questions in his field.

Power

Leon Geerdink teaches Philosophy of Cognitive Sciences. He does not think robots will rule the world any time soon. First, robots would have to learn to think the way human beings do. ‘And we’re not nearly there yet.’

He shoots…

Nao. That is the name of the humanoid (a robot that looks like a human being) that the Artificial Intelligence students use to learn the art of programming. At the Bernoulliborg, which houses one of the RUG’s two robotics labs, they are working on programming the robot. Can UK reporter Tim, who is pretty good at soccer, best Nao?

Alice: trial and error

Robot Alice should ultimately save money, time, and energy. In the lab in the Bernoulliborg, under the watchful eye of scientific director Lambert Schomaker, Artificial Intelligence students are working on her development. Because Alice is a self-teaching robot, she can correct her own mistakes. But so far, she has been unable to do so without human control.

These days, assembly lines in large factories are almost always computer-controlled. But that does not mean they make no mistakes, explains Schomaker. ‘People have become so used to relying on computers, so they also think that it’s easy to make systems using large machines reliable. But creating perfection is extremely difficult.’ It takes just one mistake for the whole assembly line to fail. All it takes is a little bit of dirt on a machine or a badly oiled part. That costs money, and not just because production is halted.

‘If a machine makes a mistake, it often leads to a new, better and more expensive machine being built. But mistakes are made all over an assembly line, so we keep building increasingly expensive machines to try and solve small mistakes. That’s not very efficient. Besides, creating the perfect machine is really difficult.’

That is why professor Schomaker is working on a more efficient solution: a system in which the robots solve their own mistakes. In the MANTIS project, partially funded by the European Committee, he and other European universities are working on developing a self-teaching robot. This robot could save manufacturers a lot of money. Schomaker: ‘The robot we’re developing can perform maintenance tasks in an assembly line. It can clean parts, lubricate them or top up the oil.’ For now, the robot can only perform small, preventative tasks. ‘We have to be realistic. Large repairs are so complicated that a robot is unable to do them for us’, according to Schomaker.

Robot Alice was built for Schomaker’s research. In the robotics lab in the Bernoulliborg, Schomaker and his students are working on her.

This is Alice

Right now, Alice is still kind of a clumsy cleaner. Anyone who hires her would have to be extremely patient. She is not yet able to keep an assembly line going. Alice just needs a bit of education – literally.

Educating robots

Schomaker: ‘We have to show a self-teaching robot how to perform tasks. It’s not magic. We still have to input the information into the robot ourselves. And then we have to tell the robot whether it performed its task correctly. It’s called reinforcement learning. If it’s done a good job, we give it a little pat on the head. Just like with pets, really.’
Reinforcement learning is much more efficient than traditional programming, according to Schomaker. ‘Right now, if we want to make a robot perform a new task, we can. But we’d first have to reprogramme it. With a self-teaching robot, we don’t. All we have to do is feed it the new information.’ That means that a robot that dusts machines can also learn how to top up oil.
Does that mean that we will all have our own Alice in our homes in the near future? Schomaker: ‘As it stands, domestic robots aren’t very good. It would be like having a very clumsy pet.’ But it will not be long before robots will take over our housework, although you will have to teach a robot chef to cook before it can make you breakfast in the morning.

Smitten with a machine

Falling in love with a robot: It happens in shows and films like ‘Westworld’ and ‘HER’. And in reality, it is apparently not that outlandish, either. Indeed, a French woman who built her own robot fell in love with it and would love to marry it tomorrow if she could. How does one fall in love with a robot, and could our feelings ever be returned? And what about sex? Will robots ever be as good as, or even better than, human beings in bed? David Levy, author of the book ‘Love and Sex With Robots’, Ronald Hünneman, instructor and PhD student at Arts, Culture & Media, and Philosophy of Cognitive Sciences instructor Leon Geerdink shed light on the matter.

Can I fall in love with a robot?

Yes, you can. In fact, some people already have. Will it ever be normal, or will these robosexuals remain a minority? Hünneman: ‘That is really going to depend on the quality of the robots. A robot should be more than just an intelligent being. When it comes to being in love, the body plays a big role as well.’ Levy: ‘Of course we’re going to fall in love with robots! We’re already seeing people having real affection for robot pets, for example. Robot seals are incredibly popular in Japan.’ According to Levy, the age of robosexuals will come sooner than we might think. In a daring prediction, he thinks that the first robot marriages will be performed by 2050. And while it might sound strange, Levy thinks it will be a positive development: ‘Remember that many people are very lonely. These people might be much happier with a robot partner than with no partner at all.’

Can a robot fall in love with me?

With this question, we get right down to one of the most fundamental issues of artificial intelligence: can a robot have consciousness? There is no unequivocal answer to that question just yet. According to Levy, however, the question of whether a robot can truly fall in love is not even important. ‘A robot will be able to convince you that it’s in love with you. Whether it actually is, is irrelevant, as long as you’re made to feel that your love is real. In the future, many people won’t even notice the difference between robots and human beings.’

A robot roll in the hay

‘In the future, some people will only want to have sex with robots’, says Hünneman. ‘To what extent that will actually happen depends on the quality of those robots.’ The lifelike sex dolls currently being made are nowhere near being robots. ‘In a few decades, they’ll be able to move and respond like human beings’, predicts Hünneman. Levy thinks we will be unable to distinguish robots from human beings by 2050.

‘I don’t think advanced sex robots will be possible any time soon. It’ll take a really long time before we’ll have created robots with general intelligence: the ability to make choices and learn new skills’, says Geerdink. ‘Besides which, society needs to change quite a bit before something like that will be accepted. People still hide their sex toys even now. They’re very secretive about them. I can’t imagine anyone openly admitting to having sex with robots in 50 years.’

But Hünneman thinks our sexual morals will change drastically over the next few years. ‘This is something that happens constantly. Take the Greeks, for instance: they had sex with young boys. We might be having sex with robots in the future.’

Why should we make sex robots?

Designing robots can teach us a lot about ourselves. ‘By building robots, we realise how much we have underestimated what it’s like to be human. I think that sexual robots will be very successful because they’ll be able to tell us so much about our own sexuality’, Hünneman says. ‘Besides, sexuality and intimacy are so very complex. If we pull this off, creating artificial intelligence will be a snap.’ According to Hünneman, the complexity lies in the contact between humans and robots. ‘A truly good sex robot should have more than just fixed programs. It should do things that make you want to have sex, take initiative, and try new things. Sex is good with someone you have a relationship with: someone who actually responds and takes their partner’s reactions into account.’

According to cognitive scientist Fred Keijzer, a good sex robot does not need to be able to think and act by itself. ‘The specific parts of sex mentioned by Hünneman already exist in some cases. Google algorithms, for instance, know exactly what you like and adjust their ads accordingly. And robots are learning to read emotions.’

Sex robots as a solution

Sex robots could help lonely people who have trouble finding a human partner. But they could also offer a solution for sex offenders or paedophiles. ‘Already, many people are using particular tools to suppress their urges.’ According to Hünneman, using robots to help these people would simply be a continuation of existing techniques. ‘Sexual impulses are very strong. The ability to channel them like this would be great. The closer to reality a tool is, the better it can help people.’

The question is whether people would accept treatment options like these. What if someone makes a really realistic robot of a six-year-old child? ‘That might be a little weird’, Hünneman admits. And then there is the ‘e-cigarette argument’: the possibility that such robots would make it easier for people to do something for real. But Hünneman does not think that will happen, because, he says, most paedophiles know that their sexual urges are not okay.

Crash dummy or perfect driver?

Transportation of the future? Flying saucers, thought the creators of the ’50s cartoon The Jetsons. But the future is closer than we thought: self-driving cars no longer surprise us. Well, not exactly self-driving: so far, this autopilot technology is only permitted in a supporting role in the Netherlands. The driver of a self-driving car must always have their hands on the wheel and is responsible for the car’s actions. It turns out that even when you are not driving, a lot can happen.

From flying saucer to hyper modern car

Fred Keijzer, senior lecturer in philosophy and psychology and author of the book ‘Philosophy of the future’, did not expect self-driving cars to be here this soon. They operate on the basis of rules and learning systems, supplemented by extra gadgets that execute tasks.

But the future is still hard to predict, he thinks. ‘Some stories from a hundred years ago predicted we’d have flying cars by now.’ And although we are not quite there yet, we have gone further than Keijzer would have guessed. ‘Ten years ago, I said it would be a long time before we’d have self-driving cars.’

According to Keijzer, we base many machines and software on how we ourselves think and work. Self-driving cars, however, do not drive the way people do. People give meaning to everything around them and get distracted. ‘They work like machines, much simpler and much more focused. They were here faster than I’d thought.’

Bad drivers

Conversely, Hünneman is very enthusiastic about self-driving cars, if only because human beings are such bad drivers. He thinks self-driving cars will largely put an end to traffic jams. ‘And we’ll be able to prevent so many accidents and fatalities. Drunk drivers, people speeding, people on their phones: they’re the biggest killers! While there will be situations where cars make strange decisions, we will see an overall drop in traffic fatalities.’

He finds it strange that a fatal accident caused by a self-driving car would attract far more attention than one caused by a driver who was not paying attention. ‘We can compare it to the introduction of the seatbelt. It saves so many lives, and yet it has also been the reason for people’s deaths. But the balance is ultimately in favour of the seatbelt. We’re just afraid of losing our autonomy.’

What if…?

That leads to an interesting question about the self-driving car. What kind of choice should the car make when something goes wrong? Hünneman: ‘Imagine you’re heading towards an intersection and your brakes fail. Do you drive yourself into a wall or do you hit an innocent pedestrian? What if there are five pedestrians? That is a choice self-driving cars will have to make, and there are different ethical theories that choice can be based on.’ An ethical committee will have to approve the software for self-driving cars. Whichever choice they make, it will not stop Hünneman from getting into a self-driving car. ‘Those situations barely even happen anyway.’

Who loses?

In the animation, three pedestrians are crossing the street. The car only has a driver. The car can be programmed according to three approaches, each with the following consequences:

According to Kantianism, we are not allowed to actively kill in order to prevent other deaths. Not even when that means that everyone will die. There will be four deaths if the car is programmed in accordance with this theory.

In the utilitarian approach, the goal is to kill as few as possible. Because there are three pedestrians on the zebra crossing, the driver is toast in this case.

If the car is programmed according to the selfish approach, the three innocent pedestrians will not survive the accident. The driver is unharmed.
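As a hypothetical sketch, the three approaches above could be expressed as decision rules. The function below is an illustration, not real autopilot software: it simply returns the number of deaths each theory produces in the scenario from the animation (one driver, a crossing with pedestrians, failed brakes).

```python
def choose(pedestrians, theory):
    """Return the number of deaths for a car carrying one driver,
    given the ethical theory it was programmed with."""
    if theory == "kantian":
        # Never actively kill to prevent other deaths: the car does
        # nothing, and in this scenario everyone dies.
        return pedestrians + 1
    if theory == "utilitarian":
        # Minimise total deaths: sacrifice whichever side is smaller.
        return min(pedestrians, 1)
    if theory == "selfish":
        # Protect the occupant at any cost.
        return pedestrians
    raise ValueError(f"unknown theory: {theory}")

for theory in ("kantian", "utilitarian", "selfish"):
    print(theory, choose(3, theory))  # kantian 4, utilitarian 1, selfish 3
```

With three pedestrians, the outcomes match the article: four deaths under Kantianism, one (the driver) under utilitarianism, three under the selfish approach.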

These ethical principles do not just apply to self-driving cars. All robots with the ability to think for themselves will have to be programmed like this.

Robot army

Imagine: you are walking down a dark street when you suddenly see a group of robots heading your way. Menacingly, they march towards you. You are terrified, but then you are jolted awake. You are back in the here and now, and robots marching in formation are a thing of the future.

Professor Bayu Jayawardhana wants to use his research to change that. The Industrial Engineering & Management robotics lab can be found a few hundred yards from the Bernoulliborg, at Nijenborgh 4. Here, Jayawardhana and his students are working on ways to make multiple robots move in formation. But they want to use them for far more peaceful things than robot armies.

Walking systems

The task Jayawardhana has set himself is not a simple one. ‘Imagine moving a table from one room to another with a group of friends. As you are lifting the table, you have to keep walking at the same distance from your friends. If you don’t, you drop the table. But there’s no need for any verbal communication. You just keep an eye on the others to determine the distance you need to keep.’ For human beings, carrying a table together is a fairly simple task. But robots have a much harder time working together like this.

Coordinating multiple autonomous robots is difficult because of measurement uncertainty. It goes like this: if two robots have to keep a distance of one metre from each other, each robot does so by measuring how far away it is from the other. But the two measurements differ slightly. Robot 1 measures a distance of 0.9 metres and concludes it is too close, so it moves away from robot 2. Robot 2 measures exactly 1.0 metres, so the moment robot 1 moves away, robot 2 follows. And then the whole system takes a walk.
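The drift can be reproduced in a few lines. This is a toy sketch, not Jayawardhana's algorithm: two robots on a line each steer toward a 1-metre gap, but robot 1's sensor under-measures the distance by 10 cm (all numbers are made up for illustration).

```python
def simulate(steps=100, gain=0.5, bias=-0.1):
    """Two robots on a line, each trying to keep a 1 m gap.
    Robot 1's distance sensor is off by `bias` metres; robot 2's is exact.
    Returns the true positions after `steps` control updates."""
    x1, x2 = 0.0, 1.0  # true positions: the real gap starts at exactly 1 m
    for _ in range(steps):
        gap = x2 - x1
        # Robot 1 measures gap + bias (0.9 m at the start), thinks it is
        # too close, and backs away.
        x1 += gain * ((gap + bias) - 1.0)
        # Robot 2 measures correctly and moves to restore its own 1 m gap.
        x2 += gain * (1.0 - gap)
    return x1, x2

x1, x2 = simulate()
print(f"gap:    {x2 - x1:.2f} m")        # settles at 1.05 m: they split the disagreement
print(f"centre: {(x1 + x2) / 2:.2f} m")  # keeps moving: the whole system takes a walk
```

Neither robot can satisfy its own sensor: the gap settles halfway between their two targets, and the pair drifts sideways at a constant speed, which is exactly the runaway behaviour described above.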

Mission accomplished

Jayawardhana and his students managed to solve this problem. They were able to make several autonomous robots move in a pattern. The robots keep an eye on their positions in relation to each other and do not use GPS or video.

Multi-purpose robots

Coordinating several robots by way of GPS was already possible. There is also the option of taking videos from above to control the robots. However, there are many conceivable situations in which video or GPS is not possible or desirable – in a large factory, for example. In a place like that, completely independent robots that work without any help are preferable. Jayawardhana developed the new algorithm in collaboration with industry, which stands to save a lot of money with it. Jayawardhana: ‘To move an airplane wing, you could build one large robot that does the work on its own. But it would be much more economical to build four smaller robots that can lift and move the wing together.’ These small robots would not only be cheaper, but also multi-purpose.

Drones and gravity

But there are more applications, according to Jayawardhana. ‘One of the most amazing is the use of drones for 3D videos. We can send multiple drones to one particular ‘target’, such as a human being. The drones then fly in formation around the person, surrounding him or her.’ The space sector is also interested in techniques like these. In 2034, ESA and NASA hope to send three spacecraft into space to look for gravitational waves. Because gravitational waves are so small, the spacecraft cannot move in relation to each other if they want to be able to catch the waves. Jayawardhana’s technique is needed to make the three spacecraft form an immobile triangle.

Robots in power

It is clear that the future is hard to predict – especially when it comes to robotics. Twenty years ago we were not even dreaming of self-driving cars, but here they are. But there is still a lot of work and research to be done in other fields. What consequences will all these developments have on us? How will robots influence our future society? Will they make our lives easier or will they kill us all?

Technology and society

The changes that are currently happening in the field of robotics are not necessarily new. Technological developments have always influenced our society, says Keijzer. ‘Our way of thinking is pretty much still based on ancient forms of communication. Just twenty years ago, we had no cell phones or Facebook. That puts a lot of pressure on society and politics.’

Because of new technology, we are faced with unknown situations in daily life. At times like that, a society has to decide how to handle them, according to Keijzer. Otherwise, private organisations would gain control.

Less work?

Are we soon going to be replaced by robots because of sweeping automation in the job market? While some people fear that jobs will disappear, others are saying that new jobs will also be created. According to Keijzer, that is true. ‘But I don’t know where the idea of this being in balance comes from, because I don’t think it will be.’

‘It’s obviously very difficult to predict what will happen exactly. There are so many unknowns’, says Keijzer. He thinks that we’ll start to find paid jobs less important. ‘It’s definitely a discussion we should be having. We can talk about things such as the idea of basic income. Or we can switch over to the Italian system, where everyone has a job but not everyone works.’

Hünneman also predicts robots taking over our jobs. A rather inviting prospect, he feels. ‘I think it’d be great. We’d have so much more free time. We’re always so focused on work. But we haven’t given enough thought to free time or the meaning and end of life. This will also be a golden age for the entertainment industry, by the way.’

Scared or just curious? ‘Humans need not apply’ shows more about the development of robotics and the influence it will have on society.

Fred Keijzer, senior lecturer in philosophy and psychology and author of ‘Philosophy of the future’: ‘Our way of thinking is pretty much still based on ancient forms of communication.’

Or other work?

‘Jobs probably won’t disappear all that fast’, says Geerdink, disagreeing with his colleagues. ‘Robotics’ influence on our labour is often overestimated. The social element will always play a role. And a lot of production work that is currently being done in low-wage countries will be even cheaper if it’s automated. That labour will return to the Netherlands and that will also create jobs.’

Obviously, robots will be able to take some things off our hands. ‘Those tasks will be mainly done by expert systems, programs that have the ability to learn and execute one specific task really well.’ According to Geerdink, it will be a while before we have developed a properly functioning general-purpose robot. ‘We could try to integrate multiple systems into one robot, but that is really complicated, it turns out. When a robot has to switch from one task to another, it would first have to shut down one system and start up another. That would take a while, and they’d still be separate systems.’

Man versus machine

The general-purpose robot is the Holy Grail of artificial intelligence (AI). These robots would possess general intelligence, which means that they would have a lot of general knowledge that they would be able to use in any situation without switching systems. But we are not even remotely there yet, Geerdink thinks. And that is not just because we have not figured out how to do it. ‘The problem also lies in the fact that we simply lack the physical computing power. We’d need quantum computers and we just haven’t come far enough yet.’

This means he is not afraid that the world will be taken over by a machine army or crazed drones. He thinks that humans will retain control over robots in the future. Today’s supercomputers can only beat humans in a few games. Computers have been better at chess for a while now, and humans have recently started losing to computers in the Chinese board game Go. Players of the computer game StarCraft have been warned, though: ‘Google DeepMind, one of the most advanced companies in the field of artificial intelligence, is currently working on completely solving the game.’

What makes us human?

According to Hünneman, much research into AI and robotics – in addition to the practical side – has a larger, philosophical motivation. ‘Our history is full of stories of people trying to create life. You see it everywhere. In the Bible, we were created in God’s image. The story of Frankenstein is another good example.’ There is a development to these stories, though. While it used to be about capturing a life source, we are currently pondering the question of what makes us human.

‘That also leads to the question of what will ultimately separate human beings from robots. That difference will mainly lie in the on/off button. Human beings will always be the rulers. But other than that, I don’t think there’s any difference between alive and not alive. That gives rise to questions such as: can robots make art? Will they eventually have rights? I think they will.’

What life will look like in the distant future remains guesswork, even for experts. That is exactly what makes it so difficult and interesting at the same time, says Keijzer. ‘That is what science fiction is all about. It’s really more about the open question of what the future will look like. The philosophy of the future from past eras shows two things very clearly: that it’s extremely difficult to predict the future, and that it’s extremely difficult to predict how people in any given era will think about the future. They often magnify the present.’

Credits

This is a production by:

Text

Tim Bakker

Simone Harmsen

Leonie Sinnema

Translation

Sarah van Steenderen

Video

Johan Siemonsma

Beppie van der Sluis

Photography

Traci White

Animation and design

René Lapoutre

Voice-overs

Leonie Sinnema

Sjef Weller
