Only 93 km from the ancient Buddhist city of Koya-san lies a most fascinating place: the Intelligent Robotics Laboratory of Osaka University, directed by Prof. Hiroshi Ishiguro. If one had to illustrate the striking contrasts that characterise Japanese civilisation, those less than one hundred kilometres would be a fitting candidate.
We were lucky to be received by Professor Ishiguro and by Assistant Professor Kohei Ogawa, who both very kindly opened their laboratory to us.
The robots in this laboratory fall into two categories:
- Humanoid robots, whose general purpose is to reproduce human appearance and behaviour as closely as possible
- Connecting robots, whose purpose is to interact with humans using artificial-intelligence tools
As Professor Ishiguro pointed out, the goal of the humanoid robots is to understand human beings scientifically by reproducing their behaviour. As an example, the figure below shows the differences in brain activity when a user sees a human person, an android robot, and an “ordinary” robot.
The first robots were the Geminoids, designed to be “as close as possible” to a human person in look and feel. They can be operated remotely over the Internet. Professor Ishiguro often sends his “Doppelgänger” to a conference and talks to the audience remotely via Skype.
However, studies conducted by the lab have shown that in some cases such a close likeness is unnecessary, and can even be counterproductive, and not only because of the uncanny-valley effect. For this reason a much more minimalist version of the robot was developed: the Telenoid. Inspired by very old statuary (it reminded me of Cycladic statues), it consists of a head, a torso, and very small arms. The Telenoid can be seen in the picture at the top of this post and in the following video. As an example of its use, autistic children and elderly people with dementia seem to like it, and prefer communicating with (or through..) the Telenoid (cf. this paper).
The second category consists of robots that focus on human interaction using artificial-intelligence tools. M3-Neony was developed first, and M3-Synchy is the latest robot in this line. Here is a picture of a child interacting with a Synchy.
The goal here is to introduce the robot as a mediator of human communication, while still keeping some basic human gestures, like smiling or nodding.
So far the two types of robots have been developed independently. In the future, however, the technological platform will allow the interaction tools developed for the connecting robots to be integrated into the humanoid robots, leading to a much greater potential for interaction.
A list of the projects carried out at the lab can be found here.
It was a very short but highly fruitful visit. Many thanks again to Professor Ishiguro and his team for their warm welcome.