Rhythmic pattern recognition of acoustic waves has been applied to control electronic devices such as earphones, displays, and mobile devices. In practice, however, previous methods for classifying and recognizing rhythmic patterns make the program structure difficult to maintain because of their complex comparative grammars and equations, and they are even harder to implement on a micro-controller with a low clock speed and a small memory. In this paper, a novel rhythmic pattern recognition technique based on a new polynomial equation is proposed to solve these problems. The proposed method can instantly classify diverse rhythmic patterns and can be implemented immediately, even for varying numbers of beats and intervals. The effectiveness of the proposed algorithm is demonstrated on a real headlamp system.
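To make the idea of interval-based rhythm classification concrete, a minimal sketch is given below. It does not reproduce the paper's polynomial-equation classifier; the template names, tolerance, and normalization are illustrative assumptions.

```python
# Illustrative sketch: classify a knock/tap rhythm by its inter-onset intervals.
# The paper's polynomial-equation classifier is NOT reproduced here; this uses
# a simple normalized-interval template match as a stand-in (all names assumed).

def classify_rhythm(onset_times_ms, templates, tolerance=0.15):
    """onset_times_ms: sorted timestamps of detected taps (milliseconds).
    templates: dict mapping pattern name -> list of relative interval ratios."""
    if len(onset_times_ms) < 2:
        return None
    intervals = [t1 - t0 for t0, t1 in zip(onset_times_ms, onset_times_ms[1:])]
    total = sum(intervals)
    ratios = [iv / total for iv in intervals]          # normalize out tempo
    for name, ref in templates.items():
        if len(ref) == len(ratios) and all(
            abs(r, ) <= tolerance for r in (abs(r - q) for r, q in zip(ratios, ref))
        ):
            return name
    return None
```

A simpler, corrected usage of the matching test:

```python
TEMPLATES = {"double_knock": [1.0],        # two taps, one interval
             "triplet": [0.5, 0.5]}        # three evenly spaced taps

def matches(ratios, ref, tol=0.15):
    return len(ref) == len(ratios) and all(abs(r - q) <= tol
                                           for r, q in zip(ratios, ref))

print(matches([1.0], TEMPLATES["double_knock"]))       # True for two taps
```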
When customers use a tailgate (or trunk), systems such as the power tailgate and the smart tailgate have been introduced to improve convenience. However, they still have problems in some use cases. Some people have to search for the outside button to open the tailgate, or they have to take out the key and push a button. In other cases, they have to move a leg or wait a few seconds, which can feel like a long time. In addition, to close the tailgate they have to push a small button located on the inner trim. This paper proposes a new tailgate control technology and system based on acoustic patterns to resolve these inconveniences. Acoustic user interaction (AUI) is a technology that responds to a person's rubbing and tapping on a specific part by analyzing the resulting acoustic patterns. AUI has recently drawn attention in the automotive industry as well as in home appliances, mobile devices, musical instruments, etc. It extends touch interaction from multi-touch toward rich touch, and it can easily be applied even to systems that need a large touch-recognition area or that have complex shapes and surfaces. This paper addresses how to recognize users' intentions and how to control the tailgate using acoustic sensors and patterns. If someone carrying the smart key wants to open the tailgate, he or she only needs to knock twice on the outer panel of the tailgate. To close the tailgate, touching anywhere on its inner trim is enough. Various digital filters and algorithms are used for acoustic signal processing, and the effectiveness of the proposed methods is demonstrated on a real tailgate system with a micro control unit. Finally, we suggest other vehicle applications that use AUI technology.
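As a rough illustration of the "knock twice to open" trigger, the sketch below band-limits the acoustic signal, takes its envelope, and checks that exactly two knock peaks occur within a plausible interval. The sample rate, filter band, and thresholds are assumed values for illustration, not the paper's MCU implementation.

```python
# Illustrative sketch of a double-knock trigger: band-limit the sensor signal,
# take a crude envelope, find peaks, and accept exactly two knocks with a
# plausible spacing. Sample rate, band, and thresholds are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 8000                                   # assumed sample rate [Hz]

def detect_double_knock(x, fs=FS):
    b, a = butter(4, [0.025, 0.25], btype="band")   # ~100-1000 Hz band (assumed)
    y = filtfilt(b, a, x)
    env = np.abs(y)                                 # crude envelope
    if env.max() < 1e-6:                            # reject near-silence
        return False
    peaks, _ = find_peaks(env, height=0.3 * env.max(), distance=int(0.05 * fs))
    if len(peaks) != 2:
        return False
    gap_s = (peaks[1] - peaks[0]) / fs
    return 0.1 <= gap_s <= 0.8                      # assumed valid knock spacing
```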
One factor that contributes to successful long-term human-human interaction is that humans can express their emotions appropriately depending on the situation. Unlike humans, robots often lack diversity in facial expressions and gestures, and long-term human-robot interaction (HRI) has consequently not been very successful thus far. In this paper, we propose a novel method for generating diverse and more realistic robot facial expressions to support long-term HRI. First, nine basic dynamics for robot facial expressions are determined, based on the dynamics of human facial expressions and principles of animation, to generate natural and diverse expression changes in a facial robot for identical emotions. In the second stage, facial actions are added to express more realistic expressions, such as sniffling or wailing loudly for sadness, and laughing aloud or smiling for happiness. To evaluate the effectiveness of our approach, we compared the facial expressions of the developed robot with and without the proposed method. The results of the survey show that the proposed method helps robots generate more realistic and diverse facial expressions.
In this paper, an emotional motion representation is proposed for human-robot interaction (HRI). The proposed representation is based on Laban Movement Analysis (LMA) and trajectories of 3-dimensional whole-body joint positions obtained from an RGB-D camera such as the Microsoft Kinect. The experimental results show that the proposed method distinguishes two types of human emotional motion well.
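A minimal sketch of LMA-inspired feature extraction from 3-D joint trajectories is given below; the specific descriptors (a velocity-based "weight/effort" proxy, an acceleration-based "time" proxy, and a bounding-volume "space" proxy) are illustrative assumptions rather than the paper's exact representation.

```python
# Illustrative sketch: simple Laban-inspired descriptors from a sequence of
# 3-D whole-body joint positions (e.g., from a Kinect skeleton stream).
# The chosen features and frame rate are assumptions for illustration.
import numpy as np

def lma_features(joints, dt=1.0 / 30.0):
    """joints: array of shape (T, J, 3) -- T frames, J joints, xyz in meters."""
    vel = np.diff(joints, axis=0) / dt                  # (T-1, J, 3) joint velocities
    speed = np.linalg.norm(vel, axis=2)                 # (T-1, J) joint speeds
    weight_effort = speed.mean()                        # proxy for Laban "weight/effort"
    accel = np.diff(vel, axis=0) / dt
    time_effort = np.linalg.norm(accel, axis=2).mean()  # proxy for suddenness ("time")
    spans = joints.max(axis=(0, 1)) - joints.min(axis=(0, 1))
    space = float(np.prod(spans))                       # bounding-box volume ("space")
    return {"weight": float(weight_effort),
            "time": float(time_effort),
            "space": space}
```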
Traffic situations in urban areas are becoming more complex. Various automotive safety systems have been developed, but fatal and serious accidents are still caused by drivers' faults and distractions. Systems that extend the driver's recognition area will therefore be an important part of future intelligent vehicles for preventing accidents. In this paper, we propose a sensor fusion system based on a digital map for driver assistance. Accurate localization of the host vehicle in an urban area is achieved with a stereo vision sensor and a digital map using a polygon matching algorithm. A single-row laser scanner is used to track multiple moving objects. A coordinate transformation from the sensor frame to the global frame is performed to visualize the moving objects on the digital map. An experiment was conducted in an urban canyon where GPS signals are frequently interrupted. Four cameras were installed on the left and right sides of the vehicle to capture images of landmarks, and a horizontal laser scanner was mounted to collect scan data.
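The sensor-to-global transformation mentioned above can be written as a chain of rigid transforms; the sketch below (with assumed variable names and a 2-D simplification) maps laser-scanner points from the sensor frame, through the map-matched vehicle pose, into global map coordinates.

```python
# Illustrative sketch of the sensor-frame -> vehicle-frame -> global-frame
# transformation used to plot tracked objects on the digital map. Variable
# names and the 2-D simplification are assumptions; vehicle_pose is the
# map-matched localization result (x, y, yaw).
import numpy as np

def rigid_2d(pts, tx, ty, yaw):
    """Apply a 2-D rotation by yaw followed by a translation (tx, ty)."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return pts @ R.T + np.array([tx, ty])

def sensor_to_global(points_sensor, sensor_offset, vehicle_pose):
    """points_sensor: (N, 2) laser points in the sensor frame [m].
    sensor_offset:  (dx, dy, dyaw) of the scanner w.r.t. the vehicle frame.
    vehicle_pose:   (x, y, yaw) of the vehicle in the global (map) frame."""
    pts_vehicle = rigid_2d(points_sensor, *sensor_offset)   # sensor -> vehicle
    return rigid_2d(pts_vehicle, *vehicle_pose)             # vehicle -> global
```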
It is expected that robots will be widely exposed to humans in the near future. Emotional communication is as important in human-robot interaction as it is in human-human interaction. Facial expression is a method of emotional expression between humans, and it also enables humans to recognize a robot's emotional state. Although there is a great deal of previous research on facial expressions, it is not easily applicable to robot faces of different shapes when the number and types of the robot's control points vary. In addition, natural transitions between emotions have not been considered, or have been implemented inefficiently, in previous research. In this paper, we propose a linear dynamic affect-expression model for continuous expression changes and for diversifying the characteristics of those changes. The proposed model allows a robot's facial expressions and expression changes to more closely resemble those of humans, and it is applicable to various mascot-type robots irrespective of the number and types of the robot's control points.
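One plausible form of such a linear dynamic affect-expression model is sketched below: a linear map sends an affect vector to target control-point positions, and first-order dynamics drive the current positions toward that target so expressions change continuously. The matrix, dimensions, and time constant are illustrative assumptions, not the paper's identified parameters.

```python
# Illustrative sketch of a linear dynamic affect-expression model: a linear
# map turns an affect vector (e.g., intensities of basic emotions) into
# target control-point positions, and first-order dynamics move the face
# continuously toward the target. M, x0, tau, and dt are assumed values.
import numpy as np

class LinearAffectExpression:
    def __init__(self, M, x0, tau=0.5, dt=0.02):
        self.M = np.asarray(M, float)        # (n_control_points, n_affects) linear map
        self.x = np.asarray(x0, float)       # current control-point positions
        self.alpha = dt / tau                # first-order update gain

    def step(self, affect):
        """affect: (n_affects,) vector, e.g., [happiness, sadness, ...]."""
        target = self.M @ np.asarray(affect, float)
        self.x += self.alpha * (target - self.x)   # smooth, continuous transition
        return self.x
```

Because the face state relaxes toward the mapped target rather than jumping to it, blends of emotions and gradual transitions fall out of the same update rule regardless of how many control points the particular robot face has.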
A robot's face is its symbolic feature, and its facial expressions are the best means of conveying emotional information when interacting with people. Moreover, a robot's facial expressions play an important role in human-robot emotional interaction. This paper proposes a general rule for the design and realization of expressions for mascot-type facial robots, which are developed to evoke friendly feelings in humans. The number and type of control points for the six basic expressions (emotions) were determined through a questionnaire. A linear affect-expression space model is provided to realize continuous and varied expressions effectively, and the effects of the proposed method are shown through experiments using a simulator and an actual robot system.
In the last decade, face analysis (e.g., face recognition, face detection, face tracking, and facial expression recognition) has been a lively and expanding research field. As computer-animated agents and robots bring a social dimension to human-computer interaction, interest in this field is increasing rapidly. In this paper, we introduce an artificial facial expression mimic system that can recognize human facial expressions and imitate them. We propose a classifier based on weak classifiers obtained using modified rectangular features to recognize human facial expressions in real time. Next, we introduce our robot, which is manipulated by a distributed control algorithm and can make artificial facial expressions. Finally, experimental results of facial expression recognition and facial expression generation are shown to validate our artificial facial expression imitator.
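A minimal sketch of how rectangular (Haar-like) features and boosted weak classifiers can be evaluated with an integral image is given below; the feature layout, thresholds, and weights are placeholders, not the modified rectangular features or the trained classifier from the paper.

```python
# Illustrative sketch: evaluate rectangular (Haar-like) features via an
# integral image and combine thresholded weak classifiers by weighted vote,
# in the spirit of boosted expression recognition. Feature coordinates,
# thresholds, and weights are placeholders, not the paper's trained model.
import numpy as np

def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle [x, x+w) x [y, y+h) using the integral image."""
    A = ii[y - 1, x - 1] if x > 0 and y > 0 else 0
    B = ii[y - 1, x + w - 1] if y > 0 else 0
    C = ii[y + h - 1, x - 1] if x > 0 else 0
    D = ii[y + h - 1, x + w - 1]
    return D - B - C + A

def two_rect_feature(ii, x, y, w, h):
    """Left-minus-right two-rectangle feature (one simple Haar-like feature)."""
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)

def strong_classifier(ii, weak_params):
    """weak_params: list of (x, y, w, h, threshold, polarity, weight) tuples."""
    score = sum(wgt * (1 if pol * two_rect_feature(ii, x, y, w, h) < pol * thr else -1)
                for (x, y, w, h, thr, pol, wgt) in weak_params)
    return score > 0
```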
Although many researchers have tried to identify the bases of emotion in the human mind, the question remains open. Nevertheless, it is necessary to find or establish such bases, since they could be used to control the facial expressions of robots effectively. This paper introduces a new linear model that defines a linear affective space based on the six basic emotional expressions of the face of a mascot-type robot. We examine which affective space is suitable for the facial robot and define a linear expressional space. A new linear model is then built that relates affects to facial expressions. This paper demonstrates efficient control of varied and continuous emotional expressions with a low-dimensional linear affective space. The results are shown through simulator experiments and a mascot-type facial robot system.
Humans want robots that can react and express emotions much as human beings do. However, there has been no definite model for expressing emotion on a robot's face. Previously developed robot faces only use target positions of the facial components, defined in advance for each expression, and cannot show continuous emotional expressions while the emotion changes. In this paper, we define an expressional space on the face of a mascot-type robot and apply it to the formation of an affect-expression space model. The proposed model is simple but more efficient than other methods, since it can express varied and continuous emotional expressions using only a small amount of facial information. The results are shown in experiments using a mascot-type facial robot system.
Socially intelligent robots are no longer merely a topic of science fiction. In the robotics community, there is growing interest in building personal robots, or robots that share a workspace with humans. Natural interaction between humans and robots is therefore essential. To interact socially with humans, a robot must be able to do more than simply gather information about its surroundings; it must be able to express its states or emotions so that humans believe it has beliefs, desires, and intentions of its own. In the last decade, many researchers have focused on generating emotive facial expressions, which are among the best cues for conveying a robot's state, intentions, feelings, and emotions. This paper gives a brief overview of current robotic systems with emotive facial expressions and introduces the basic models and hardware of two different types of facial robotic systems.