A Linear Rhythmic Pattern Recognition Technology for Recognizing Various Rhythmic Sound

Domestic Conference Paper
Hui Sung Lee, W. H. Lee
The 33rd ICROS Annual Conference (ICROS 2018), pp. 3~5, 2018.

Rhythmic pattern recognition of acoustic waves has been used to control electronic devices such as earphones, displays, and mobile devices. In practice, however, previous methods for classifying and recognizing rhythmic patterns rely on complex comparison rules and equations, which makes the program structure hard to maintain. Such methods are even more difficult to implement on a microcontroller with a low clock speed and a small memory. In this paper, a novel rhythmic pattern recognition technology based on a new polynomial equation is proposed to solve these problems. The proposed method can instantly classify diverse rhythm patterns and can be implemented directly for various numbers of rhythms and intervals. The effectiveness of the proposed algorithm is demonstrated on a real headlamp system.
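
The paper's polynomial equation is not reproduced in this abstract; the sketch below, with illustrative template names and tolerances, shows one minimal way interval-based rhythm classification of this kind can work.

```python
# Minimal sketch of interval-based rhythm classification (not the paper's
# polynomial, which is not given here). Tap timestamps are reduced to
# normalized inter-tap intervals and matched against stored templates.
TEMPLATES = {
    "double_tap": [1.0],          # two taps: one interval
    "triple_even": [0.5, 0.5],    # three evenly spaced taps
    "long_short": [0.7, 0.3],     # three taps, long gap then short gap
}

def classify(timestamps, tolerance=0.15):
    """Return the best-matching rhythm name, or None if nothing is close."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    total = sum(intervals)
    if total <= 0:
        return None
    normalized = [i / total for i in intervals]
    best_name, best_err = None, float("inf")
    for name, template in TEMPLATES.items():
        if len(template) != len(normalized):
            continue  # a template must have the same number of intervals
        err = max(abs(n - t) for n, t in zip(normalized, template))
        if err < best_err:
            best_name, best_err = name, err
    return best_name if best_err <= tolerance else None

print(classify([0.0, 0.35, 0.70]))  # -> 'triple_even'
```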

A Tailgate (Trunk) Control System Based on Acoustic Patterns

International Conference Paper
Hui Sung Lee
SAE Int. J. Passeng. Cars – Electron. Electr. Syst. 10(1):208-215, 2017, doi:10.4271/2017-01-1634.

To improve convenience when customers use a tailgate (or trunk), systems such as the power tailgate and the smart tailgate have been introduced, but they still have problems in some use cases. Some people have to search for the outside button to open the tailgate, or must take out the key and push a button. In other cases, they have to move a leg or wait a few seconds, which can feel like a long time, and to close the tailgate they must push a small button located on the inner trim. This paper proposes a new tailgate control technology and system based on acoustic patterns to resolve these inconveniences. Acoustic user interaction (AUI) is a technology that responds to a user's rubbing and tapping on a specific part by analyzing the resulting acoustic patterns. AUI has recently drawn attention in the automotive industry as well as in home appliances, mobile devices, and musical instruments, and it extends touch sensing to rich touch beyond multi-touch. AUI can be applied even to systems that need a large touch-recognition area or that have complex shapes and surfaces. This paper addresses how to recognize the user's intention and how to control the tailgate using acoustic sensors and patterns. Someone carrying the smart key only needs to knock twice on the outer panel of the tailgate to open it, and touching anywhere on the inner trim closes it. Various digital filters and algorithms are used for the acoustic signal processing, and the effectiveness of the proposed methods is demonstrated on a real tailgate system with a microcontroller unit. Finally, we suggest other vehicle applications of AUI technology.
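
The paper's digital filters and thresholds are not given in this abstract; a minimal sketch, assuming a single amplitude-envelope signal from an acoustic sensor and illustrative timing values, of detecting the double knock described above:

```python
# Minimal double-knock detector sketch (the paper's digital filters and
# thresholds are not reproduced; the values below are assumptions).
def detect_double_knock(envelope, fs, threshold=0.5,
                        min_gap=0.08, max_gap=0.60):
    """envelope: amplitude-envelope samples; fs: sampling rate in Hz.
    Returns True if exactly two knock onsets occur min_gap..max_gap apart."""
    onsets = []
    armed = True  # re-arm only after the envelope falls below threshold
    for i, v in enumerate(envelope):
        if armed and v >= threshold:
            onsets.append(i / fs)
            armed = False
        elif v < threshold:
            armed = True
    if len(onsets) != 2:
        return False
    gap = onsets[1] - onsets[0]
    return min_gap <= gap <= max_gap

# Two bursts about 0.3 s apart at fs = 100 Hz
sig = [0.0] * 100
sig[10] = sig[11] = 0.9
sig[40] = sig[41] = 0.8
print(detect_double_knock(sig, fs=100))  # -> True
```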

Robot’s Motivational Emotion Model with Value Effectiveness for Social Human and Robot Interaction

Domestic Journal Paper
W. H. Lee, J. W. Park, W. H. Kim, Hui Sung Lee, M. J. Chung
Journal of Institute of Control, Robotics and Systems, Vol. 20, No. 5, pp. 503~512, 2014.

Generation of Robot Facial Gestures based on Facial Actions and Animation Principles

Domestic Journal Paper
J. W. Park, W. H. Kim, W. H. Lee, Hui Sung Lee, M. J. Chung
Journal of Institute of Control, Robotics and Systems, Vol. 20, No. 5, pp. 495~502, 2014.

Generation of Realistic Robot Facial Expressions for Human Robot Interaction

International Journal Paper
J. W. Park, Hui Sung Lee, M. J. Chung
Journal of Intelligent & Robotic Systems, Vol. 78, pp. 443~462, 2015.

One factor that contributes to successful long-term human-human interaction is that humans can appropriately express their emotions depending on the situation. Unlike humans, robots often lack diversity in facial expressions and gestures, and long-term human-robot interaction (HRI) has consequently not been very successful thus far. In this paper, we propose a novel method to generate diverse and more realistic robot facial expressions to support long-term HRI. First, nine basic dynamics for robot facial expressions are determined based on the dynamics of human facial expressions and the principles of animation, so that a facial robot can generate natural and diverse expression changes for identical emotions. In the second stage, facial actions are added to express more realistic expressions, such as sniffling or wailing loudly for sadness, and laughing aloud or smiling for happiness. To evaluate the effectiveness of our approach, we compared the facial expressions of the developed robot with and without the proposed method. The survey results showed that the proposed method helps robots generate more realistic and diverse facial expressions.
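
The nine dynamics themselves are not enumerated in this abstract; as a sketch of the general idea, one classic animation principle ("slow in, slow out") can be applied to an expression transition with a simple easing curve. The control-point names and values below are purely illustrative.

```python
# Sketch of one animation-principle dynamic ("slow in, slow out") applied
# to facial control points; a single easing curve standing in for the
# paper's nine dynamics, which are not listed here.
def ease_in_out(t):
    """Smoothstep easing: zero velocity at both ends of the transition."""
    return t * t * (3.0 - 2.0 * t)

def transition(start, target, steps):
    """Yield intermediate control-point vectors from start to target."""
    for k in range(1, steps + 1):
        s = ease_in_out(k / steps)
        yield [a + s * (b - a) for a, b in zip(start, target)]

neutral = [0.0, 0.0, 0.0]   # e.g. brow, eyelid, mouth-corner positions
happy = [0.2, -0.1, 0.8]
for frame in transition(neutral, happy, steps=5):
    print([round(x, 3) for x in frame])
```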

LMA based Emotional Motion Representation using RGB-D Camera

International Conference Paper
W. H. Kim, J. W. Park, W. H. Lee, Hui Sung Lee, M. J. Chung
8th ACM/IEEE International Conference on Human-Robot Interaction (HRI2013), Tokyo, Japan, March 4-7, 2013

In this paper, an emotional motion representation is proposed for human-robot interaction (HRI). The proposed representation is based on Laban Movement Analysis (LMA) and on trajectories of three-dimensional whole-body joint positions captured with an RGB-D camera such as the Microsoft Kinect. The experimental results show that the proposed method distinguishes two types of human emotional motion well.
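
The paper's exact feature set is not given in this abstract; a minimal sketch, assuming two LMA-style Effort features (a speed-related "weight" feature and an acceleration-related "time" feature) computed from a single joint trajectory:

```python
# Sketch of two LMA-style Effort features from a joint trajectory; the
# feature definitions below are assumptions, not the paper's exact ones.
import math

def speed_series(traj, dt):
    """traj: list of (x, y, z) joint positions sampled every dt seconds."""
    return [math.dist(a, b) / dt for a, b in zip(traj, traj[1:])]

def effort_features(traj, dt):
    v = speed_series(traj, dt)
    weight = sum(s * s for s in v) / len(v)            # mean squared speed
    accel = [abs(b - a) / dt for a, b in zip(v, v[1:])]
    time_eff = sum(accel) / len(accel) if accel else 0.0
    return weight, time_eff

# A slow, even motion vs. a fast, jerky one along the x axis
calm = [(0.1 * k, 0.0, 0.0) for k in range(10)]
jerky = [(0.0, 0, 0), (0.5, 0, 0), (0.5, 0, 0), (1.2, 0, 0), (1.2, 0, 0)]
print(effort_features(calm, dt=0.1))   # low "time" effort
print(effort_features(jerky, dt=0.1))  # high "time" effort
```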

A Sensor Fusion Digital-Map System for Driver Assistance

International Conference Paper
K. H. Yoo, H. Y. Kim, H. Y. Woo, S. S. Kim, Hui Sung Lee
SAE 2013 World Congress & Exhibition, Detroit, USA, doi:10.4271/2013-01-0734, April, 2013.

Traffic situations are becoming more complex in urban areas. Various automotive safety systems have been developed, but fatal and serious accidents are still caused by drivers' faults or distractions. Systems that extend the driver's recognition area will therefore be an important part of future intelligent vehicles for preventing accidents. In this paper, we propose a sensor fusion system based on a digital map for driver assistance. Accurate localization of the host vehicle in an urban area is achieved with a stereo vision sensor and a digital map using a polygon matching algorithm. A single-row laser scanner is used for tracking multiple moving objects, and a coordinate transformation from the sensor frame to the global frame is performed to visualize the moving objects on the digital map. An experiment was conducted in an urban canyon where GPS signals are frequently interrupted. Four cameras were installed on the left and right of the vehicle to capture images of landmarks, and a horizontal laser scanner was mounted to collect scan data.
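
A minimal sketch of the sensor-frame to global-frame transformation mentioned above, assuming a planar rigid transform parameterized by the vehicle's global pose (x, y, heading); the paper's full pipeline (polygon matching, multi-object tracking) is not reproduced.

```python
# 2D rigid transform from the sensor frame into the global (map) frame.
import math

def sensor_to_global(pt, pose):
    """pt: (x, y) in the sensor frame; pose: (x, y, yaw) of the sensor in
    the global frame. Returns the point in global coordinates."""
    px, py, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    x, y = pt
    return (px + c * x - s * y, py + s * x + c * y)

# An object 10 m ahead of a vehicle at (100, 50) heading 90 degrees
print(sensor_to_global((10.0, 0.0), (100.0, 50.0, math.pi / 2)))
# -> approximately (100.0, 60.0)
```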

Color and Blinking Control to Support Facial Expression of Robot for Emotional Intensity

Domestic Conference Paper
M. G. Kim, Hui Sung Lee, J. W. Park, S. H. Jo, M. J. Chung
Proc. of Human Computer Interaction, pp. 547~552, 2008.

The Emotional Boundary Decision in a Linear Affect-Expression Space for Effective Robot Behavior Generation

Domestic Conference Paper
S. H. Jo, Hui Sung Lee, J. W. Park, M. G. Kim, M. J. Chung
Proc. of Human Computer Interaction, pp. 540~546, 2008.

Reactive Emotion Generation Model in Four Phases

Domestic Conference Paper
W. H. Lee, Hui Sung Lee, J. W. Park, M. G. Kim, M. J. Chung
Proc. of Conf. on Korea Intelligent Robot, pp. 454~455, June, 2008.

A Mascot-Type Facial Robot with a Linear Dynamic Affect-Expression Model

International Conference Paper
Hui Sung Lee, J. W. Park, S. H. Jo, M. G. Kim, W. H. Lee, M. J. Chung
Proc. of the 17th World Congress of the International Federation of Automatic Control (IFAC), Seoul, Korea, p. 14099, July, 2008.

Emotional Boundaries for Choosing Modalities according to the Intensity of Emotion in a Linear Affect-Expression Space

International Conference Paper
J. W. Park, Hui Sung Lee, S. H. Jo, M. G. Kim, M. J. Chung
17th IEEE Int. Symp. on Robot & Human Interactive Communication, Munich, Germany, pp. 225~230, August, 2008.

Determining Color and Blinking to Support Facial Expression of a Robot for Conveying Emotional Intensity

International Conference Paper
M. G. Kim, Hui Sung Lee, J. W. Park, S. H. Jo, M. J. Chung
17th IEEE Int. Symp. on Robot & Human Interactive Communication, Munich, Germany, pp. 219~224, August, 2008.

Dynamic Expression Based on Affect-Expression Space Model for a Mascot-Type Facial Robot

International Conference Paper
M. J. Chung, J. W. Park, Hui Sung Lee
Proc. of 4th Int. Conf. on Humanized Systems (ICHS08), Beijing, China, pp. 79~82, October, 2008.

Dynamic Emotion Model in 3D Affect Space for a Mascot-Type Facial Robot

Domestic Conference Paper
J. W. Park, Hui Sung Lee, M. J. Chung
Proc. of Conf. on Korea Intelligent Robot, pp. 173~180, June, 2007.

Digital Model of Vocal Tract and Vocal Folds for Articulatory Speech Synthesizer

Domestic Conference Paper
Hui Sung Lee
The 12th Int. Conf. on Multilingual Processing, Information, and Telecommunication Technology (ICMIT), Yanbian, China, pp. 402~408, July, 2007.

A Linear Dynamic Affect-Expression Model: Facial Expressions According to Perceived Emotions in Mascot-Type Facial Robots

International Conference Paper
Hui Sung Lee, J. W. Park, S. H. Jo, and M. J. Chung
16th IEEE Int. Symp. on Robot & Human Interactive Communication, Jeju, Korea, pp. 619~624, August, 2007.

Robots are expected to be widely exposed to humans in the near future. Emotional communication is very important in human-robot interaction, just as it is in human-human interaction. Facial expression is a method of emotional expression between humans, and it also enables humans to recognize a robot's emotional state. Although there is much previous research on facial expressions, it is not easily applicable to robot faces of different shapes when the number and types of the robot's control points vary. In addition, the natural connection between emotions has not been considered, or has been implemented inefficiently, in previous research. In this paper, we propose a linear dynamic affect-expression model for continuous changes of expression and for diversifying the characteristics of expressional changes. The proposed model allows a robot's facial expressions and expression changes to more closely resemble those of humans, and it is applicable to various mascot-type robots irrespective of the number and types of the robot's control points.
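
The model's actual parameters are not given in this abstract; a minimal sketch, assuming a second-order linear system drives each control point smoothly toward an affect-driven target value (the stiffness and damping constants below are illustrative):

```python
# Sketch of a linear dynamic expression change: a damped second-order
# update per control point; not the paper's actual model parameters.
def step(x, v, target, dt, omega=6.0, zeta=0.9):
    """One damped update: x'' = w^2 (target - x) - 2 z w x'."""
    a = omega * omega * (target - x) - 2.0 * zeta * omega * v
    v += a * dt
    x += v * dt
    return x, v

x, v = 0.0, 0.0            # one control point, at rest
for _ in range(60):        # 0.6 s at dt = 0.01
    x, v = step(x, v, target=1.0, dt=0.01)
print(round(x, 3))         # smoothly approaches 1.0 without jumping
```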

Dynamic Emotion Model in 3D Affect Space for a Mascot-Type Facial Robot

Domestic Journal Paper
J. W. Park, Hui Sung Lee, S. H. Jo, and M. J. Chung
Journal of Korea Robotics Society, Vol. 2, No. 3, pp. 282~287, 2007.

A Linear Affect-Expression Space Model and Control Points for Mascot-Type Facial Robot

International Journal Paper
Hui Sung Lee, J. W. Park, and M. J. Chung
IEEE Trans. on Robotics, Vol. 23, No. 5, pp. 863~873, Dec. 2007.

A robot's face is its symbolic feature, and facial expressions are the best method for conveying emotional information when interacting with people. Moreover, a robot's facial expressions play an important role in human-robot emotional interaction. This paper proposes a general rule for the design and realization of expressions in the development of mascot-type facial robots, which are designed to evoke friendly feelings in humans. The number and types of control points for the six basic expressions, or emotions, were determined through a questionnaire. A linear affect-expression space model is provided to realize continuous and various expressions effectively, and the effects of the proposed method are shown through experiments using a simulator and an actual robot system.
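
A minimal sketch of a linear affect-expression mapping of this kind, assuming each basic emotion contributes a control-point displacement that scales with its intensity; the basis vectors and control-point layout below are illustrative assumptions, not the paper's measured values.

```python
# Linear affect-expression mapping sketch: expression = neutral pose plus
# intensity-weighted displacements for each basic emotion.
import numpy as np

NEUTRAL = np.zeros(4)                       # 4 control points, neutral pose
BASIS = {                                   # displacement per unit intensity
    "happiness": np.array([0.0, 0.2, 0.8, 0.1]),
    "sadness": np.array([0.5, -0.3, -0.6, 0.0]),
    "surprise": np.array([0.9, 0.7, 0.2, 0.6]),
}

def expression(affect):
    """affect: dict of emotion name -> intensity in [0, 1]."""
    x = NEUTRAL.copy()
    for name, intensity in affect.items():
        x += intensity * BASIS[name]
    return x

# A blended state: mostly happy with a touch of surprise
print(expression({"happiness": 0.8, "surprise": 0.3}))
```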

The Conducting Motion Recognizing System Using Acceleration Sensors for the Visual Orchestra

Domestic Conference Paper
D. K. Son, Hui Sung Lee, Y. H. Noh, Y. K. Won, and B. C. Gu
Proc. of Human Computer Interaction, pp. 124~129, 2006.

Development of a Virtual Instrument System Using 3-Dimensional Acceleration Sensors and Digital Signal Processor

Domestic Conference Paper
Hui Sung Lee, D. K. Son, and Y. H. Noh
Proc. of Human Computer Interaction, pp. 982~987, 2006.

An Affect-Expression Space Model of Mascot-Type Facial Robot

Domestic Conference Paper
Hui Sung Lee, J. Y. Park, J. W. Park, and M. J. Chung
Control and Automation System Symposium, pp. 28~33, 2006.

Development of a Facial Expression Imitation System

International Conference Paper
D. H. Kim, S. U. Jung, K. H. An, Hui Sung Lee, and M. J. Chung
Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 3107~3112, Oct. 2006.

In the last decade, face analysis (e.g., face recognition, face detection, face tracking, and facial expression recognition) has been a lively and expanding research field. As computer-animated agents and robots bring a social dimension to human-computer interaction, interest in this field is increasing rapidly. In this paper, we introduce an artificial facial expression mimic system that can recognize human facial expressions and imitate the recognized expressions. We propose a classifier based on weak classifiers obtained from modified rectangular features to recognize human facial expressions in real time. Next, we introduce our robot, which is operated by a distributed control algorithm and can make artificial facial expressions. Finally, experimental results of facial expression recognition and generation are shown to validate our artificial facial expression imitator.
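
The paper's "modified rectangular features" are not specified in this abstract; a minimal sketch, assuming a standard two-rectangle (Haar-like) feature evaluated on an integral image, of how one such weak classifier works:

```python
# Weak classifier on a two-rectangle feature over an integral image;
# the modification used in the paper is not reproduced here.
def integral_image(img):
    """img: 2D list of pixel intensities. Returns the summed-area table."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w-by-h rectangle with top-left corner (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def weak_classify(ii, x, y, w, h, threshold):
    """Left-minus-right two-rectangle feature; returns +1 or -1."""
    half = w // 2
    f = rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
    return 1 if f >= threshold else -1

img = [[10, 10, 1, 1]] * 4          # bright left half, dark right half
print(weak_classify(integral_image(img), 0, 0, 4, 4, threshold=50))  # -> 1
```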

A Linear Affective Space Model based on the Facial Expressions for Mascot-Type Robots

International Conference Paper
Hui Sung Lee, J. W. Park, and M. J. Chung
Proc. SICE-ICASE Int. Joint Conf., pp. 5367~5372, Oct. 2006.

Although many researchers have tried to identify the bases of emotion in the human mind, there is still no definitive answer. Nevertheless, it is necessary to find or establish such bases, since they can be used to control the facial expressions of robots effectively. This paper introduces a new linear model that defines a linear affective space based on the six basic emotional expressions of the face of a mascot-type robot. We attempt to determine which affective space is suitable for the facial robot, define a linear expressional space, and build a new linear model relating affects to facial expressions. The paper demonstrates efficient control of various and continuous emotional expressions with a low-dimensional linear affective space. The results are shown through simulator experiments and a mascot-type facial robot system.

An Affect-Expression Space Model of the Face in a Mascot-Type Robot

International Conference Paper
Hui Sung Lee, J. W. Park, and M. J. Chung
Proc. of IEEE/RAS Int. Conf. on Humanoid Robots, pp. 412~417, Dec. 2006.

Humans want robots that can react to and express emotions much as human beings do. However, there has been no definite model for expressing emotion on a robot's face. Previously developed robot faces only move facial components to target positions defined in advance for each expression, so they cannot show continuous emotional expressions as the emotion changes. In this paper, we define an expressional space for the face of a mascot-type robot and apply it to the formation of an affect-expression space model. The proposed model is simple but more efficient than other methods, since it can express various and continuous emotional expressions using only a small amount of facial information. The results are shown in experiments using a mascot-type facial robot system.

Two Types of Facial Robots for Visual Attention and Emotional Expressions

International Conference Paper
Hui Sung Lee, D. H. Kim, K. H. An, J. W. Park, Y. G. Ryu, and M. J. Chung
2nd Asia Int. Symp. on Mechatronics (AISM 2006), Dec. 2006.

Emotion and Personality of Robots: from Recognition to Expressions of Facial Expressions

Domestic Journal Paper
Hui Sung Lee, D. H. Kim, and M. J. Chung
Korea Robotics Society Review, Vol. 3, No. 4, pp. 36~49, 2006.

A Study on a New Sound Design Tool and Its Application using Paintings

Domestic Conference Paper
Hui Sung Lee, S. E. Kim, and Y. H. Noh
Proc. of Human Computer Interaction, pp. 94~100, 2005.

Biologically Inspired Models and Hardware for Emotive Facial Expressions

International Conference Paper
D. H. Kim, Hui Sung Lee, and M. J. Chung
Proc. of IEEE Int. Workshop on Robots and Human Interactive Communication, pp. 679~685, 2005.

Socially intelligent robots are no longer merely a topic of science fiction. In the robotics community, there is growing interest in building personal robots, or robots that share a workspace with humans, so natural interaction between humans and robots is essential. To interact socially with humans, a robot must be able to do more than simply gather information about its surroundings; it must be able to express its states or emotions so that humans believe it has beliefs, desires, and intentions of its own. In the last decade, many researchers have focused on generating emotive facial expressions, which are known to be the best cues for conveying a robot's state, intentions, feelings, and emotions. This paper gives a brief overview of current robotic systems with emotive facial expressions and introduces the basic models and hardware of two different types of facial robotic systems.

Emotional Expression using the Face of Robot

Domestic Journal Paper
M. J. Chung, Hui Sung Lee, and D. H. Kim
Korea Robotics Society Review, Vol. 2, No. 3, 2005.

A Study on the Facial Expressions of Robots using a 3-Dimensional Linear Emotional Model

Domestic Conference Paper
Hui Sung Lee, J. Y. Park, and M. J. Chung
Proc. of Korea Automatic Control Conf., pp. 586~591, 2005.

Development of Ink-Jet Head Controller for Electro-Luminescence Display

Domestic Conference Paper
S. U. Jung, Hui Sung Lee, J. R. Ryoo, J. S. Park, and M. J. Chung
Proc. of Conf. on Information and Control System, pp. 623~625, 2004.

Implementation of Nonlinear Two-Mass Vocal Folds Digital Model

Domestic Conference Paper
Hui Sung Lee and M. J. Chung
Proc. of Conf. on Information and Control System, pp. 9~11, 2004.

Implementation of Continuous Utterance using Buffer Rearrangement for Articulatory Speech Synthesizer

Domestic Conference Paper
Hui Sung Lee and M. J. Chung
Proc. of KIEE Summer Annual Conf., pp. 2454~2456, 2002.