The 11th ACM SIGCHI Symposium on Engineering
Interactive Computing Systems
18-21 June 2019, Valencia, Spain
An Ontology for Reasoning on Body-based Gestures
Mehdi Ousmer, Jean Vanderdonckt,
Université catholique de Louvain,
Belgium
Sabin Buraga,
Alexandru Ioan Cuza University of Iasi,
Romania
The Kinect sensor is widely used across different users, environments, and applications. The purpose of this work is to design tools that structure body-based gestures acquired by a Kinect sensor for automated reasoning. We therefore propose a sensor-independent ontology, organized into several parts with intrinsic and extrinsic properties. To ground this proposal, a gesture elicitation study was conducted, feeding the ontology with 456 elicited gestures.
The ontology
• Represents body-based gestures in their context of use
• Structured around the user, the sensor, and the physical environment
• Expressed in OWL (Web Ontology Language), a W3C standard, as <subject, predicate, object> triples, as sketched below
• The RDF schema defines 3 main classes:
  • User: information about the user and anatomical body parts and joints
  • Sensor: raw data provided by a device
  • Detected gestures and poses: includes Gesture, GestureSegment, HandState, and Pose
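As a hypothetical illustration of such triples, here is a minimal sketch using the Python rdflib library; the namespace URI and the hasSegment/performs property names are assumptions for illustration, not the authors' actual schema.

    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    # Hypothetical namespace, not the authors' actual ontology URI.
    GO = Namespace("http://example.org/gesture-ontology#")

    g = Graph()
    g.bind("go", GO)

    # Declare the classes named on the poster as OWL classes.
    for cls in (GO.User, GO.Sensor, GO.Gesture, GO.GestureSegment,
                GO.HandState, GO.Pose):
        g.add((cls, RDF.type, OWL.Class))

    # Illustrative property: a gesture is composed of gesture segments.
    g.add((GO.hasSegment, RDF.type, OWL.ObjectProperty))
    g.add((GO.hasSegment, RDFS.domain, GO.Gesture))
    g.add((GO.hasSegment, RDFS.range, GO.GestureSegment))

    # Example instance: a user performing a pose.
    g.add((GO.user1, RDF.type, GO.User))
    g.add((GO.raiseRightHand, RDF.type, GO.Pose))
    g.add((GO.user1, GO.performs, GO.raiseRightHand))

    print(g.serialize(format="turtle"))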
• Joints: objects that can be compared with respect to their positions in Cartesian space
• Segment: merged joints that specify a body pose
• Pose: the basic part of a gesture, which can be combined using logical constructs (And/Or/Not), as sketched below
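The And/Or/Not composition over poses can be pictured as predicates on joint coordinates. A minimal sketch, assuming hypothetical joint names, a y-up coordinate system, and made-up thresholds (not the authors' implementation):

    from typing import Callable, Dict, Tuple

    # A skeleton frame maps joint names to (x, y, z) coordinates, as a
    # Kinect-like sensor would deliver; joint names are illustrative.
    Frame = Dict[str, Tuple[float, float, float]]
    PosePredicate = Callable[[Frame], bool]

    def hand_above_head(frame: Frame) -> bool:
        # y grows upward in this hypothetical coordinate system.
        return frame["hand_right"][1] > frame["head"][1]

    def hands_together(frame: Frame, tol: float = 0.1) -> bool:
        hr, hl = frame["hand_right"], frame["hand_left"]
        return sum((a - b) ** 2 for a, b in zip(hr, hl)) ** 0.5 < tol

    # Logical constructs combining basic poses, mirroring And/Or/Not.
    def p_and(*ps: PosePredicate) -> PosePredicate:
        return lambda f: all(p(f) for p in ps)

    def p_or(*ps: PosePredicate) -> PosePredicate:
        return lambda f: any(p(f) for p in ps)

    def p_not(p: PosePredicate) -> PosePredicate:
        return lambda f: not p(f)

    # E.g. "right hand raised while hands stay apart":
    raise_only = p_and(hand_above_head, p_not(hands_together))

    frame = {"head": (0.0, 1.7, 0.0),
             "hand_right": (0.2, 1.9, 0.1),
             "hand_left": (-0.2, 1.0, 0.1)}
    print(raise_only(frame))  # True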
Feeding the ontology
Participants:
• Twenty-four participants (11 female, 13 male)
• Average age of 34.5 years (range: 23 to 53)
• Volunteers with various occupations
• Most were not familiar with the Kinect sensor
[Bar chart: participants' frequency of usage (1-7) of computer, smartphone, tablet, game, and Kinect devices]
Apparatus:
• The experiment took place in a controlled environment (usability laboratory)
• Each referent was shown to the participant
• The time taken to think of a gesture was measured
• Gestures were recorded by a camera and a Kinect sensor
• Participants rated the goodness-of-fit of their proposed gesture
Experiment:
• Gesture elicitation study based on the original methodology [1][2]
• Nineteen referents drawn from IoT actions
• Referents divided into three categories (unary/binary/linear)
Results
[Bar charts: thinking time and goodness-of-fit rating per referent, rendered here as a table]

Referent                        Thinking time (sec.)   Goodness of fit (1-10)
Turn TV On                      11.54                   7.54
Turn TV Off                     11.17                   7.21
Start Player                    17.14                   7.25
Increase Volume                  7.90                   8.04
Decrease Volume                  8.13                   7.50
Go to Next Item in a List        8.31                   7.71
Go to Previous Item in a List    8.36                   7.46
Turn Air Conditioning On         9.27                   6.92
Turn Air Conditioning Off       10.34                   6.79
Turn Light On                    8.40                   7.58
Turn Light Off                   6.74                   7.75
Brighten Lights                 10.63                   6.83
Dim Lights                      10.83                   7.04
Turn Heating System On           7.63                   7.75
Turn Heating System Off          9.99                   7.25
Turn Alarm On                   17.31                   6.63
Turn Alarm Off                  11.96                   6.88
Answer Phone Call                5.85                   7.75
End Phone Call                   7.45                   8.08
Agreement rate:
• Agreement rates computed with the AGATe toolkit [1] (see the sketch below)
• On average, agreement rates are of medium magnitude
• Most referents fall within the medium range
• Similar to agreement rates reported in the literature
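For a referent r, [1] defines the agreement rate as AR(r) = Σ |Pi|·(|Pi|−1) / (|P|·(|P|−1)), where P is the set of elicited proposals and the Pi are its groups of identical proposals. A minimal sketch of this computation (the gesture labels are invented for illustration):

    from collections import Counter

    def agreement_rate(proposals):
        # AR(r) from Vatavu & Wobbrock [1]: the fraction of participant
        # pairs that proposed the same gesture for a given referent.
        n = len(proposals)
        if n < 2:
            return 1.0
        groups = Counter(proposals)  # group identical gesture proposals
        pairs_agreeing = sum(k * (k - 1) for k in groups.values())
        return pairs_agreeing / (n * (n - 1))

    # Illustrative proposals from 6 participants for one referent.
    proposals = ["push", "push", "wave", "push", "clap", "wave"]
    print(round(agreement_rate(proposals), 3))  # 0.267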
Classification:
• Half of the gestures were performed in a medium range
• The dominant hand was used most often
• The study resulted in 53 unique gestures falling into 23 gesture categories
• No particular correlation between thinking times for pairs of complementary referents (see the sketch below)
• Goodness-of-fit values are above average, indicating that participants were satisfied with the gestures they chose
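That absence of correlation can be checked against the thinking times in the results table, pairing each "On" referent with its "Off" counterpart (the pairing below follows the table order and is our reading of the data); a minimal sketch using Python's statistics module (Python 3.10+):

    import statistics

    # Thinking times (sec.) for complementary referent pairs,
    # taken from the results table above.
    pairs = {
        "TV":               (11.54, 11.17),
        "Air Conditioning": (9.27, 10.34),
        "Light":            (8.40, 6.74),
        "Heating System":   (7.63, 9.99),
        "Alarm":            (17.31, 11.96),
        "Phone Call":       (5.85, 7.45),  # answer / end
    }

    on_times = [on for on, _ in pairs.values()]
    off_times = [off for _, off in pairs.values()]

    # Pearson correlation between "On" and "Off" thinking times.
    r = statistics.correlation(on_times, off_times)
    print(f"r = {r:.2f}")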
User satisfaction:
• Subjective user satisfaction measured with the IBM PSSUQ questionnaire
• Participants were subjectively satisfied with the body-based gestures
• All measured values exceed 5 on a scale of 1 to 7
Conclusion:
• A basis for describing body-based gestures for reasoning tasks (e.g., beyond querying)
• Potential benefits of the ontology: reusability, coherence, extensibility, …
References:
[1] Radu-Daniel Vatavu and Jacob O. Wobbrock. 2015. Formalizing Agreement Analysis for Elicitation Studies: New Measures, Significance Test, and Toolkit. In Proc. of CHI '15. 1325–1334.
[2] Jacob O. Wobbrock, Meredith Ringel Morris, and Andrew D. Wilson. 2009. User-defined Gestures for Surface Computing. In Proc. of CHI '09. 1083–1092.
