Figure 1 - uploaded by Jwu-E Chen
Ping: an Affective Robot 

Source publication
Article
Full-text available
This paper develops an emotional and sensational robotic system with which machine perception and the representation of affectivity can be studied. The electromechanical system can serve as a versatile educational kit (module) for researchers and students. An important feature of this new system is that three demonstration modes have b...

Contexts in source publication

Context 1
... In Japan, a 2007 survey found that seniors born after World War II, who have the ability to spend, form a market for electronic pets. Owing to the trend of a decreasing birth rate, on the other hand, younger people are more interested in computer and Internet products that can enhance learning and competitiveness. A survey in the U.S. indicates expected growth in the market for personal service robots [1]. Generally speaking, robots can not only help the elderly improve their quality of life, but also provide a learning platform for research and education. This paper aims to provide helpful educational aids for kids. Using the facial expression robot shown in Figure 1, kids can acquire knowledge, for example the multiplication table, through interaction with the robot. Kids not only can teach the robot the multiplication table, but can also be motivated to learn through play. We have developed the entire robot system and achieved the educational targets we set. The architecture of the proposed affective robot [2], shown in Figure 2, comprises a mechanical unit, a microcontroller and the ...
Context 2
... The Facial Expression Robot System (FERS) is designed to mimic the human head. It has several DoFs (Degrees of Freedom) corresponding to the eyes, eyebrows, eyelids, mouth and neck, as illustrated in Figure 3. The mechanical unit contains several elements, including levers, motors, gear wheels, slipping wheels, rotating wheels, bearings, steel wires and springs. These elements are combined so that the robot can act like a human face. In addition, the DoFs, the servo motors and the interconnections between them also belong to the mechanical parts. The RC servos are controlled by the MCS-51, a microprocessor widely used in industry and schools. This microcontroller operates at a 40 MHz clock and can independently generate 16 sets of Pulse Width Modulation (PWM) signals for the inputs of the RC servos; see Figure 4. The PC sends signals to the microcontroller via RS232, as displayed in Figure 5. In this mode, an interface known as the facial expression robot system, shown in Figure 1, is developed so that direct communication can be established between an external audience, especially kids, and the robot. The controller behind the robot can see and hear the outside world through a CCD camera and a microphone mounted on the robot. An API programmed in Visual C++ and Visual Basic is provided for the controller. On receiving operating instructions, the robot responds to the audience through the speaker, the DoFs or sound; note, however, that the robot must be controlled by a person. In this mode, songs such as "Two Tigers" and "Look Back, My Girl" are prerecorded in the robot. After receiving programmed input data, the robot can perform different facial expressions. In this way the robot not only can sing songs and broadcast news, but also has facial expressions to reveal its emotion.
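The PWM-driven servo control described above can be sketched as a simple mapping from a DoF angle to a pulse width. This is a minimal illustration, not the paper's implementation: the 1.0-2.0 ms pulse over a 20 ms frame is the common RC-servo convention, assumed here rather than taken from the paper, and the function names are hypothetical.

```python
# Hedged sketch: mapping a facial-expression DoF angle to the PWM pulse
# width an RC servo expects. The paper states the MCS-51 generates 16
# independent PWM channels; the 1.0-2.0 ms / 20 ms timing below is the
# standard RC-servo convention, assumed for illustration.

FRAME_MS = 20.0      # standard RC-servo refresh period (assumed)
MIN_PULSE_MS = 1.0   # pulse width at 0 degrees (assumed)
MAX_PULSE_MS = 2.0   # pulse width at 180 degrees (assumed)

def angle_to_pulse_ms(angle_deg: float) -> float:
    """Linearly map a joint angle in [0, 180] degrees to a pulse width."""
    angle = max(0.0, min(180.0, angle_deg))  # clamp to the servo's range
    return MIN_PULSE_MS + (MAX_PULSE_MS - MIN_PULSE_MS) * angle / 180.0

def pulse_to_duty(pulse_ms: float) -> float:
    """Duty cycle the PWM channel must hold within one 20 ms frame."""
    return pulse_ms / FRAME_MS

# Example: centring one DoF (say, the mouth) at its neutral position
print(angle_to_pulse_ms(90.0))   # 1.5 ms pulse
print(pulse_to_duty(1.5))        # 0.075 duty cycle
```

On the real system these duty cycles would be realized by the MCS-51's timers and sent per channel; the PC would only transmit target angles over RS232.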
For completion of the relevant works, we have developed a Sequential Emotional Behavior Scheduler (SEBS) interface [3], as shown in Figure 6. The eMuu project [4] reports that interacting with a screen character or a robot character alone produces little joy, but that the presence of a person motivates people to put more effort into a task. We use learning of the multiplication table to illustrate the HM mode. Initially, the robot does not know the multiplication table; kids can teach it to the robot, acting as the robot's teachers. Since the kids must be acquainted with the multiplication table in advance, they gain confidence and become more interested while interacting with the robot. Moreover, the robot's facial expressions are attractive to kids, which further encourages them to learn the multiplication table in the process of playing. Since the robot memorizes the input data sent by the users, the more the kids play, the smarter the robot becomes. In this mode, an Idle-Sound-Expression-Action (ISEA) structure that links the robot's actions to a potential player's responses is also embedded in the robot. Figure 6 shows the interface of the multiplication table learning ...
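The teach-then-quiz behavior described above, where the robot memorizes facts supplied by the kids and grows "smarter" with play, can be sketched as a small lookup structure. The paper does not give an API; every name below is illustrative.

```python
# Hedged sketch of the taught-multiplication-table idea: the robot
# stores facts supplied by the player and recalls them when quizzed.
# Class and method names are hypothetical, not from the paper.

class MultiplicationTutor:
    def __init__(self):
        self.memory = {}  # (a, b) -> product, as taught by the kids

    def teach(self, a: int, b: int, product: int) -> None:
        """A child teaches the robot one multiplication fact."""
        self.memory[(a, b)] = product

    def answer(self, a: int, b: int):
        """Return the learned answer, or None if never taught."""
        return self.memory.get((a, b))

robot = MultiplicationTutor()
robot.teach(7, 8, 56)
print(robot.answer(7, 8))  # 56
print(robot.answer(3, 4))  # None: this fact has not been taught yet
```

In the actual system, each `teach`/`answer` exchange would be wrapped in the ISEA cycle: the robot idles, emits a sound, shows a facial expression, and then acts on the player's response.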