Abstract:
Gesture is a form of communication that involves body language and can occur with or without spoken words. A gesture is specifically an expression of thought or feeling in which a degree of voluntarism on the part of the body is always implied. If someone startles at a sudden explosion, bursts out laughing at something said, or tears up on being told bad news, these expressions are not usually regarded as gestures. The primary goal of gesture interaction applied to Human-Computer Interaction (HCI) is to create systems that can identify specific human gestures and use them to convey information or to control devices. Gesture interaction is already a promising input modality in smart computing environments such as modern gaming, augmented reality, and virtual reality. Platforms such as the Microsoft Kinect and Google Assistant have made natural input modalities, including full-body gesture interaction, affordable and practical. The goal of gesture interaction is to computationally analyze body movements and associate each gesture with a predefined label. Most approaches are designed to recognize gesture events one at a time, processing each given gesture until all are complete. Unlike touch input, users usually have little experience with gesture interaction. Technologists frequently claim that gestural input makes computing more "natural" by enabling communication with a computer in the same way we communicate with one another. In practice, computer users face several challenges when learning and performing gesture interactions: the complex coordination that gestures regularly demand, mastering the different gestures available, muscle fatigue, and the lack of a data-input method that integrates seamlessly with gesture interaction. Combined with little feedback on tracking success or failure, end users often struggle to execute gestures correctly. To compensate for this, users require an efficient design thinking framework that supports them during gesture execution. Design thinking is a technological process, guided by principles, that provides a simple solution to a complex problem. An experimental research design using simulation tested the design thinking framework on a hand-palm gesture interaction dataset. The framework classified hand-palm gesture input data into its feedback, analytics, and output entities in two steps: a learning (training) step, in which a classifier is built to describe a predetermined set of input-data classes drawn from the three entities, and a classification step, in which the model built during training is applied, with a known classification accuracy on known data, to provide new alternative gesture-interaction input data. The results show that the design thinking framework improves gestural interaction: it provides real-time alternative gesture input sets that let users quickly learn how to interact with gesture systems, yielding better gestural interaction performance.
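The two-step process described above can be sketched in code. What follows is a minimal illustrative example, not the framework's actual implementation: it uses scikit-learn with a synthetic stand-in for the hand-palm gesture dataset, and the feature dimensions and entity labels are assumptions, but it follows the same structure of a training step followed by a classification step whose accuracy on known data is measured before the model labels new gesture input.

# Minimal sketch of the two-step pipeline described in the abstract:
# (1) a learning (training) step that builds a classifier over the
#     feedback, analytics, and output entities, and
# (2) a classification step that reports the classifier's accuracy on
#     known data, then labels a new gesture input sample.
# The dataset is synthetic; feature layout and labels are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)

# Hypothetical hand-palm gesture samples: each row is a feature vector
# (e.g., palm orientation, fingertip distances) with one of three
# entity labels: 0 = feedback, 1 = analytics, 2 = output.
X = rng.normal(size=(300, 8))
y = rng.integers(0, 3, size=300)

# Step 1: learning (training) step -- build the classifier on a
# predetermined set of labeled input-data classes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

# Step 2: classification step -- measure accuracy on known (held-out)
# data, then classify an unseen gesture to suggest an alternative input.
print(f"classifier accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")

new_gesture = rng.normal(size=(1, 8))  # unseen hand-palm sample
entity = clf.predict(new_gesture)[0]
print(f"predicted entity for new gesture: {entity}")

In this sketch the held-out accuracy plays the role of the "classifier accuracy of known data" mentioned above: only a model whose accuracy is acceptable would be used to propose alternative gesture inputs to the user.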