The document summarizes Jean Vanderdonckt's upcoming lecture on gestural interaction. It will cover the psychological, hardware, software, usage, social, and user experience dimensions of gestural interaction. On the psychological dimension, it discusses definitions of gestures and theories of gesture types. On the hardware dimension, it outlines paradigms of contact-based and contact-less gesture interaction. On the software dimension, it provides an overview of gesture recognition algorithms such as Rubine, SiGeR, LVS, and nearest-neighbor classification.
This document provides an introduction and overview of eXtreme Programming (XP), an agile software development methodology. It discusses what XP is, its history and origins, core values and principles, practices, and components like the whole XP team. Key aspects of XP covered include pair programming, short development cycles, test-first development, simple design, frequent integration and feedback. The document aims to explain the philosophy and mechanics of the XP methodology.
The document provides an overview of the waterfall model and agile methodologies for software development projects. It discusses:
- The linear sequential phases of the waterfall model and when it is suitable.
- Issues with the waterfall model like inability to handle changes and lack of testing throughout.
- Benefits of agile like ability to adapt to changes, early delivery of working software, and improved success rates.
- Key aspects of the Scrum agile framework like sprints, daily stand-ups, and product backlogs.
- Differences in how development costs are treated as capital expenditures or operating expenses between waterfall, agile, and cloud-based models.
Publish Android Application on Google Play Store - Sandip Kalola
The document outlines the steps to publish an Android application on the Google Play Store. It includes registering for a Google Play Publisher Account, setting up a Google Payments Merchant Account if the app is paid, testing the app through alpha and beta channels, uploading the Android app to Google Play, providing store listing details, and setting pricing and distribution information.
This document provides an overview of Agile software development principles and practices. It discusses:
- The problems with traditional waterfall software development approaches
- The evolution and principles of Agile development as outlined in the Agile Manifesto
- Key Agile practices like Scrum, product backlogs, sprints, and sprint planning meetings
- Tips for writing good user stories and splitting stories into smaller tasks
- The typical lifecycle of activities in a Scrum project including release planning, iterations (sprints), daily stand-ups, sprint reviews and retrospectives
Soli is a gesture sensing technology developed by Google that uses millimeter-wave radar to detect fine hand motions and gestures without the need for physical contact or devices. It allows for touchless interaction with electronics and has applications in areas like smart devices, VR/AR, IoT, gaming, and medicine. Soli works by emitting and receiving radio waves that are scattered by the hand, with the time and signal changes used to track hand position and motion. It has advantages like replacing buttons, wireless operation, and precision, though it also has limitations such as a small range and potential security issues.
This document discusses various agile software development methodologies including eXtreme Programming (XP), Scrum, Evolutionary Project Management (EVO), Unified Process (UP), Crystal, Lean Development (LD), Adaptive Software Development (ASD), Dynamic System Development Method (DSDM), and Feature Driven Development (FDD). It emphasizes that different methodologies may suit different clients and that the key is selecting the approach that best meets a client's requirements rather than taking a single approach for all. Communication is also highlighted as important for software project success.
Gesture recognition is a rapidly growing technology. This PPT describes how gesture recognition works, its subfields, its applications, and the challenges it faces.
Project Soli is a sensor developed by Google that uses radar technology to detect finger movements and gestures. It is small, about 5x5mm, and can be integrated into wearables. The sensor captures submillimeter motions of fingers at a high rate of 10,000 frames per second. It determines hand properties using machine learning to translate gestures into commands. Potential applications include medical devices, gaming, and controlling gadgets through free-hand gestures without touching them.
The goal of this project is to provide a platform that allows for communication between able-bodied and disabled people, or between computers and human beings. There has been great emphasis in Human-Computer Interaction research on creating easy-to-use interfaces by directly employing the natural communication and manipulation skills of humans. Since the hand is an important part of the body, recognizing hand gestures is very important for Human-Computer Interaction. In recent years, there has been a tremendous amount of research on hand gesture recognition.
A fair analysis of the Agile Methodology. A quick comparison of Agile and Waterfall to clear up misconceptions about the two. Scalability is a major issue with Agile and is worth considering if you're not a large software company.
Haptic technology interfaces users with virtual environments through the sense of touch by applying forces, vibrations, or motions. It works by using haptic devices like Phantom, a robotic arm that provides mechanical stimulation, or CyberGlove, which tracks hand gestures. Applications of haptics include surgical simulation, medical training, and graphical user interfaces. The technology provides advantages like easy access and use as well as conservation during development, but also has disadvantages such as limited magnitude, expense, and complexity.
Introduction to the scrum framework: roles, activities and artifacts.
Scrum is an agile project-management methodology for creating a high-quality product.
www.nieldeckx.be
Haptics is a technology that uses touch sensations to allow users to interact with virtual objects. It works by linking sensors in the body to actuators that provide resistance and movement to simulate the sense of touch. Common haptic devices include Phantom interfaces and Cyber Grasp systems which provide force feedback to users handling virtual objects. Haptics has applications in areas like medical training, military simulations, and entertainment like gaming.
The training offers an overview of Agile development and Scrum practices, focusing on how the Scrum framework follows the Agile Manifesto principles. ... The Scrum framework uses simple iterative practices for team collaboration on complex projects.
LEAN SOFTWARE DEVELOPMENT: A CASE STUDY IN A MEDIUM-SIZED COMPANY IN BRAZILI... - Mehran Misaghi
This article presents a literature review whose purpose is to identify the key characteristics of lean software development and its similarities and differences with agile methodologies. A case study conducted in a team of software developers is presented, where lean concepts were applied within the current process, previously based on agile methodologies. It was found at the end of this work that the indicator used by the team, the percentage of time spent on improvements and new features, increased significantly, enabling the team to add more value to the product and to raise its level of quality.
Building an Agile framework that fits your organisation - Kurt Solarte
The document discusses strategies for scaling agile practices within large organizations. It provides an overview of IBM's transformation to agility at scale, including challenges faced and key principles learned. The presentation emphasizes adopting an incremental approach, addressing people, processes, and tools, and establishing governance to manage uncertainty and variance as an organization's agile adoption matures. It also provides examples of metrics that can be used to measure agile project and program performance.
The document discusses Scrum, an agile framework for project management. It describes some issues with traditional waterfall models like high risks and uncertainty. Scrum aims to address these issues by allowing for frequent delivery of working software, adapting to changes, and welcoming late changes. The document then outlines the key aspects of Scrum like product and sprint backlogs, daily stand-ups, sprint reviews, and retrospectives. It discusses how Scrum has been used successfully in various domains like software, games, websites, and more. Finally, it covers some benefits of Scrum from different stakeholder perspectives.
This document discusses the implications of neuroscience research on metaphor for e-learning. It finds that mirror neurons activate both motor and language areas of the brain, allowing metaphors to embody meaning through physical experience. Effective e-learning may incorporate movement, hands-on activities, and physical manipulation to more fully engage both brain hemispheres. Work-based learning is given as an example that mirrors the brain's use of metaphor through detailed projects and reflective thinking.
This document discusses the concept of mobile learning in context. It describes how computers and mobile devices are becoming ubiquitous and context-aware. Sensors in environments and on mobile devices can provide contextual information to enhance learning experiences. However, mobile phones are still often seen only as toys in classrooms rather than learning tools. The document advocates for leveraging context through ubiquitous computing to design new approaches to mobile and ambient learning.
Participatory design fieldwork. Dealing with emotions - Mariana Salgado
This document discusses dealing with emotions in participatory design research. It describes a case study of workshops with immigrant women in Finland where strong emotions were triggered discussing migration stories. The workshops identified needs for language courses and community spaces. Emotions are unavoidable in this type of research and time for reflection, debriefing, and defusing is important. Design researchers take on multiple roles as confidants, translators, and facilitators, so being empathetic while not exacerbating emotions is key. When emotions run high, reserving time for discussion with colleagues or trained personnel is important.
Smiljana Antonijevic - Second Life, Second Body - guest92ff15
The document summarizes a study on nonverbal communication in the virtual world of Second Life. It discusses how users employ both predefined animations and self-directed movements to convey social cues like gestures and interpersonal distance. The study observed over 800 natural interactions over six months to analyze how both user-defined and system-generated nonverbal acts take on cultural and communicative meanings, creating a tension between how users and designers envision body language in virtual spaces.
Language is much more than the external expression and communication of internal thoughts formulated independently of their verbalization. In demonstrating the inadequacy and inappropriateness of such a view of language, attention has already been drawn to the ways in which one’s native language is intimately and in all sorts of details related to the rest of one’s life in a community and to smaller groups within that community. This is true of all peoples and all languages; it is a universal fact about language.
Designing with Immigrants. When emotions run high - Mariana Salgado
This was a presentation of a paper with the same title at the European Academy of Design, 21.04.2015, Paris, France. The paper was written with Helena Sustar and Michail Galanakis.
Designing with Immigrants. When emotions run high - Mariana Salgado
Presentation given at the European Academy of Design (2015) of the paper with the same title. Paris, France. The paper can be found at: https://www.academia.edu/12261966/Designing_with_Immigrants._When_emotions_run_high
The document discusses new interfaces for embodied interaction that focus on designing user actions and movement before products. It proposes hands-only scenarios and video action walls as novel methods to design tangible user interaction by focusing on the choreography of interaction between users, objects, and their environment. The goal is to design for expressive and rich movement-based interaction that addresses human values like benevolence and universalism.
This document discusses designing quality in tangible, intuitive, interactive interfaces (TIII). It provides background on the physical and digital world and how tangible interaction combines the best of both. It describes three views of tangible interaction: data-centered, perceptual motor/expressive movement-centered, and space-centered. It details several example projects and frameworks for tangible interaction design. The document advocates designing for experiences over products and beauty in interaction over appearance. It proposes several phases for growing an inspirational test-bed of smart textile services, including incubation, nursery, and adoption phases.
Chapter 10: Symbolic Interactionism and Social Constructionism - Toby Zhu
This document summarizes key concepts from symbolic interactionism and related theories. It discusses how symbolic interactionism views symbols as representations that give meaning and structure to our experience. Symbols are learned through social interaction and mediate further interaction. People develop stocks of social knowledge and typifications that allow them to quickly interpret and respond to their social environment. Related theories discussed include social constructionism, pragmatism, and phenomenology.
Designing with Immigrants. When emotions run high.pptx - Mariana Salgado
This presentation took place at the 11th European Academy of Design conference, themed "The Value of Design Research". The paper was: Designing with Immigrants. When emotions run high. Paris, France, 2015.
This document discusses the theoretical underpinnings of Learning Design from socio-cultural and ecological perspectives. It describes how Learning Design draws on socio-cultural thinking from Vygotsky, focusing on mediated activity through tools and signs. An ecological perspective views learning through the concept of affordances - how aspects of the environment enable certain actions. Learning Design aims to establish mediating artifacts that guide the design process and represent learning activities.
The document discusses finding a unified definition of immersion by examining competing definitions from different fields like virtual reality, video games, film, literature, art, and theater. It analyzes how presence, attention, goals, challenge, reward and flow loops contribute to immersive experiences. The author outlines current work examining links between presence, virtual environments, and flow loops in video games. Participants are being observed playing games and providing feedback to analyze relationships between elements and develop an immersion framework.
This is an introduction workshop to Designing Interactions / Experiences module I’m teaching at Köln International School of Design of the Cologne University of Applied Sciences, which I’m honored to give by invitation of Professor Philipp Heidkamp.
The document provides an overview of a communication course. It discusses key concepts like the definition of communication, its process, goals, types (verbal, nonverbal, written, listening), and factors that affect it like globalization and the new normal. It also outlines the course grading system and provides tips for effective listening. The document serves to introduce students to fundamental communication concepts.
The document discusses the future of e-learning and the need to humanize the "E" in learning. It suggests converging media, psychology and communication and energizing contributions to redesign higher education. Key points are that distance is dead due to new technologies, and understanding learning, behavior and technology is changing education. The future is presented as human-centered and screen-based, requiring a new professional language and understanding of how media impacts psychology.
Similar to Gestural Interaction, Is it Really Natural? (20)
To the end of our possibilities with Adaptive User Interfaces - Jean Vanderdonckt
Slides of the keynote presented at the 1st International Workshop on Human-in-the-Loop Applied Machine Learning (HITLAML '23)
September 04 - 06, 2023 - Belval, Luxembourg.
This presentation summarizes the evolution of techniques used to adapt the user interfaces to the context of use, which is composed of the user, the platform, and the environment.
Engineering the Transition of Interactive Collaborative Software from Cloud C... - Jean Vanderdonckt
Paper presented at EICS '22: https://dl.acm.org/doi/10.1145/3532210
The "Software as a Service" (SaaS) model of cloud computing popularized online multiuser collaborative software. Two famous examples of this class of software are Office 365 from Microsoft and Google Workspace. Cloud technology removes the need to install and update the software on end users' computers and provides the necessary underlying infrastructure for online collaboration. However, to provide a good end-user experience, cloud services require an infrastructure able to scale up to the task and allow low-latency interactions with a variety of users worldwide. This is a limiting factor for actors that do not possess such infrastructure. Unlike cloud computing, which ignores the computational and interactional capabilities of end users' devices, the edge computing paradigm promises to exploit them as much as possible. To investigate the potential of edge computing over cloud computing, this paper presents a method for engineering interactive collaborative software supported by edge devices for the replacement of cloud computing resources. Our method is able to handle user interface aspects such as connection, execution, migration, and disconnection differently depending on the available technology. We exemplify our approach by developing a distributed Pictionary game deployed in two scenarios: a nonshared scenario where each participant interacts only with their own device and a shared scenario where participants also share a common device, including a TV. After a theoretical comparative study of edge vs. cloud computing, an experiment compares the two implementations to determine their effect on the end user's perceived experience and on perceived vs. real latency.
UsyBus: A Communication Framework among Reusable Agents integrating Eye-Track... - Jean Vanderdonckt
Presentation of ACM EICS '22 paper: https://dl.acm.org/doi/10.1145/3532207
Eye movement analysis is a popular method to evaluate whether a user interface meets the users' requirements and abilities. However, with current tools, setting up a usability evaluation with an eye-tracker is resource-consuming, since the areas of interest are defined manually, exhaustively and redefined each time the user interface changes. This process is also error-prone, since eye movement data must be finely synchronised with user interface changes. These issues become more serious when the user interface layout changes dynamically in response to user actions. In addition, current tools do not allow easy integration into interactive applications, and opportunistic code must be written to link these tools to user interfaces. To address these shortcomings and to leverage the capabilities of eye-tracking, we present UsyBus, a communication framework for autonomous, tight coupling among reusable agents. These agents are responsible for collecting data from eye-trackers, analyzing eye movements, and managing communication with other modules of an interactive application. UsyBus allows multiple heterogeneous eye-trackers as input, provides multiple configurable outputs depending on the data to be exploited. Modules exchange data based on the UsyBus communication framework, thus creating a customizable multi-agent architecture. UsyBus application domains range from usability evaluation to gaze interaction applications design. Two case studies, composed of reusable modules from our portfolio, exemplify the implementation of the UsyBus framework.
µV: An Articulation, Rotation, Scaling, and Translation Invariant (ARST) Mult... - Jean Vanderdonckt
Paper presented at ACM EICS '22
Finger-based gesture input has become a major interaction modality for surface computing. Due to the low precision of the finger and the variation in gesture production, multistroke gestures are still challenging to recognize in various setups. In this paper, we present µV, a multistroke gesture recognizer that addresses the properties of articulation, rotation, scaling, and translation invariance by combining $P+'s cloud-matching for articulation invariance with !FTL's local shape distance for RST-invariance. We evaluate µV against five competitive recognizers on MMG, an existing gesture set, and on two new versions for smartphones and tablets, MMG+ and RMMG+, a randomly rotated version on both platforms. µV is significantly more accurate than its predecessors when rotation invariance is required and not significantly inferior when it is not. µV is also significantly faster than others with many samples and not significantly slower with few samples.
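The cloud-matching idea that µV borrows from $P+ can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the published algorithm: the real $P+ also resamples strokes to a fixed number of points and matches clouds in both directions.

```python
import math

def normalize(cloud):
    """Translate a point cloud to its centroid and scale it to a unit box."""
    cx = sum(x for x, _ in cloud) / len(cloud)
    cy = sum(y for _, y in cloud) / len(cloud)
    shifted = [(x - cx, y - cy) for x, y in cloud]
    scale = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]

def greedy_cloud_distance(a, b):
    """Greedy point-to-point matching between two equal-size clouds;
    earlier matches are weighted more heavily, as in the $P family."""
    unmatched = set(range(len(b)))
    total = 0.0
    for i, p in enumerate(a):
        j = min(unmatched, key=lambda k: math.dist(p, b[k]))
        unmatched.remove(j)
        total += (1 - i / len(a)) * math.dist(p, b[j])
    return total

def classify(candidate, templates):
    """Nearest-neighbour classification over (label, points) templates."""
    c = normalize(candidate)
    return min(templates,
               key=lambda t: greedy_cloud_distance(c, normalize(t[1])))[0]
```

Because the clouds are unordered, such matching is indifferent to stroke order and direction (articulation invariance), which is the property µV keeps while delegating rotation invariance to !FTL's shape distance.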
RepliGES and GEStory: Visual Tools for Systematizing and Consolidating Knowle... - Jean Vanderdonckt
The body of knowledge accumulated by gesture elicitation studies (GES), although useful, large, and extensive, is also heterogeneous, scattered in the scientific literature across different venues and fields of research, and difficult to generalize to other contexts of use represented by different gesture types, sensing devices, applications, and user categories. To address such aspects, we introduce RepliGES, a conceptual space that supports (1) replications of gesture elicitation studies to confirm, extend, and complete previous findings, (2) reuse of previously elicited gesture sets to enable new discoveries, and (3) extension and generalization of previous findings with new methods of analysis and for new user populations towards consolidated knowledge of user-defined gestures. Based on RepliGES, we introduce GEStory, an interactive design space and visual tool, to structure, visualize, and identify user-defined gestures from 216 published gesture elicitation studies.
Gesture-based information systems: from DesignOps to DevOps - Jean Vanderdonckt
Keynote address for the 29th International Conference on Information Systems Development ISD'2021 (Valencia, Spain, September 8-10, 2021). See https://isd2021.webs.upv.es/program.php#keynotes
This talk promotes the Seven I's:
Implementation continuity
Inclusion of end-users
Interaction first
Integration among stakeholders
Iteration short
Incremental progress
Innovation openness
Intra-platform plasticity regularly assumes that the display of a computing platform remains fixed and rigid during interactions with the platform in contrast to reconfigurable displays, which can change form depending on the context of use. In this paper, we present a model-based approach for designing and deploying graphical user interfaces that support intra-platform plasticity for reconfigurable displays. We instantiate the model for E3Screen, a new device that expands a conventional laptop with two slidable, rotatable, and foldable lateral displays, enabling slidable user interfaces. Based on a UML class diagram as a domain model and a SCRUD list as a task model, we define an abstract user interface as interaction units with a corresponding master-detail design pattern. We then map the abstract user interface to a concrete user interface by applying rules for the reconfiguration, concrete interaction, unit allocation, and widget selection and implement it in JavaScript. In a first experiment, we determine display configurations most preferred by users, which we organize in the form of a state-transition diagram. In a second experiment, we address reconfiguration rules and widget selection rules. A third experiment provides insights into the impact of the lateral displays on a visual search task.
Evaluating Gestural Interaction: Models, Methods, and Measures - Jean Vanderdonckt
The document discusses various methods for evaluating gestural user interfaces (UIs), including comparing a UI to a reference model, collecting evaluation data on usability criteria, and using standardized scales and metrics. Common dimensions for evaluation are goals, utility, usability, and factors like system acceptance, ease of use, and cost. Methods mentioned include observations, questionnaires, heuristic evaluations, and measuring task performance and preferences using standardized scales. Guidelines are provided for designing and assessing the usability of different gestures.
Conducting a Gesture Elicitation Study: How to Get the Best Gestures From Peo... - Jean Vanderdonckt
Lecture 3: Conducting a Gesture Elicitation Study: How to Get the Best Gestures From People?
Francqui Chair in Computer Science 2020 VUB, Jean Vanderdonckt, 27 April 2021
This document provides an overview of gestural interaction and various gesture recognition techniques. It begins with definitions of gestures and how they can vary based on factors like the body part used, number of dimensions, whether they are contact-based or not. It then discusses benefits of gestures and examples of gesture recognizers like xStroke and techniques like Rubine, SiGeR, LVS, hidden Markov models, and the $-family of recognizers. The document provides details on properties like stroke, direction, and rotation invariance as well as training and recognition phases for different recognizers.
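The feature-based approach attributed to Rubine above can be sketched as follows: extract geometric features from a stroke, then score each gesture class with a linear function whose weights come from training. The features and hand-set weights below are illustrative assumptions, not Rubine's full thirteen-feature set.

```python
import math

def stroke_features(stroke):
    """A small, illustrative subset of Rubine-style geometric features."""
    xs = [x for x, _ in stroke]
    ys = [y for _, y in stroke]
    length = sum(math.dist(stroke[i - 1], stroke[i])
                 for i in range(1, len(stroke)))                  # total path length
    diagonal = math.dist((min(xs), min(ys)), (max(xs), max(ys)))  # bounding-box diagonal
    dx, dy = stroke[1][0] - stroke[0][0], stroke[1][1] - stroke[0][1]
    h = math.hypot(dx, dy) or 1.0
    return [length, diagonal, dx / h, dy / h]                     # cos/sin of initial angle

def classify_linear(stroke, classes):
    """Rubine scores each class c with v_c = w_c0 + sum_i(w_ci * f_i)
    and picks the maximum; weights normally come from training data."""
    f = stroke_features(stroke)
    def score(label):
        w0, weights = classes[label]
        return w0 + sum(w * x for w, x in zip(weights, f))
    return max(classes, key=score)
```

A template-matching recognizer like the $-family would instead compare the raw resampled stroke to stored examples; the linear-classifier route trades template storage for a one-off training step.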
User-centred Development of a Clinical Decision-support System for Breast Can... - Jean Vanderdonckt
See the paper at https://www.scitepress.org/Link.aspx?doi=10.5220/0010258900600071
We conducted a user-centered design of a clinical decision-support system for breast cancer screening, diagnosis, and reporting based on stroke gestures. We combined knowledge elicitation interviews, scenario-focused questionnaires, and paper mock-ups to understand user needs. Multi-fidelity (low and high) prototypes were designed and compared first in vitro in a usability laboratory, then in vivo in the real world. The resulting user interface provides radiologists with a platform that integrates domain-oriented tools for the visualization of mammograms, the manual, and the semi-automatic annotation of breast cancer findings based on stroke gestures. The contribution of this work lies in that, to the best of our knowledge, stroke gestures have not yet been applied to the annotation of mammograms. On the one hand, although there is a substantial amount of research done in stroke-based interaction, none focuses especially on the domain of breast cancer annotation. On the other hand, typical gestures in breast cancer annotation tools are those with a keyboard and a mouse
Simplifying the Development of Cross-Platform Web User Interfaces by Collabo... - Jean Vanderdonckt
Ensuring responsive design of web applications requires their user interfaces to be able to adapt according to different contexts of use, which subsume the end users, the devices and platforms used to carry out the interactive tasks, and also the environment in which they occur. To address the challenges posed by responsive design, aiming to simplify their development by factoring out the common parts from the specific ones, this paper presents Quill, a web-based development environment that enables various stakeholders of a web application to collaboratively adopt a model-based design of the user interface for cross-platform deployment. The paper establishes a series of requirements for collaborative model-based design of cross-platform web user interfaces motivated by the literature, observational and situational design. It then elaborates on potential solutions that satisfy these requirements and explains the solution selected for Quill. A user survey has been conducted to determine how stakeholders appreciate model-based design user interface and how they estimate the importance of the requirements that lead to Quill
Detachable user interfaces consist of graphical user interfaces whose parts or whole can be detached at run-time from their host, migrated onto another computing platform while carrying out the task, possibly adapted to the new platform, and attached to the target platform in a peer-to-peer fashion. Detaching is the property of splitting a part of a UI for transferring it onto another platform. Attaching is the reciprocal property: a part of an existing interface can be attached to the currently used interface so as to recompose another one on-demand, according to users' needs and task requirements. Assembling interface parts by detaching and attaching allows dynamically composing, decomposing, and re-composing new interfaces on demand. To support this interaction paradigm, a development infrastructure has been developed based on a series of primitives such as display, undisplay, copy, expose, return, transfer, delegate, and switch. We exemplify it with QTkDraw, a painting application with attaching and detaching based on the development infrastructure.
The Impact of Comfortable Viewing Positions on Smart TV Gestures - Jean Vanderdonckt
Whereas gesture elicitation studies for TV interaction assume that participants adopt an upright, frontal viewing position, we asked 21 participants to hold a natural, comfortable viewing position, the posture they adopt when watching TV at home. By involving a broad selection of users regarding age and profession, our study targets a higher ecological validity than existing studies. Agreement rates were lower than in existing studies using an upright, frontal viewing position. Participants experienced problems due to (1) having to use their non-dominant hand instead of their dominant hand, (2) being in a head orientation that made some physical movements more difficult to perform, and (3) being hindered in their movement by the sofa they lay on. Since each person may hold a different position inducing different gestures due to the aforementioned problems, the effect of a comfortable viewing position is analyzed by comparison to gestures for a frontal position.
Head and Shoulders Gestures: Exploring User-Defined Gestures with Upper Body - Jean Vanderdonckt
This paper presents empirical results about user-defined gestures for head and shoulders by analyzing 308 gestures elicited from 22 participants for 14 referents materializing 14 different types of tasks in an IoT context of use. We report an overall medium consensus, but with medium variance (mean: .263, min: .138, max: .390 on the unit scale) between participants' gesture proposals, while their thinking times were less similar (min: 2.45 sec, max: 22.50 sec), which suggests that head and shoulders gestures are not all equally easy to imagine and to produce. We point to the challenges of deciding which head and shoulders gestures will become the consensus set based on four criteria: the agreement rate, their individual frequency, their associative frequency, and their unicity.
Paper accessible at https://dial.uclouvain.be/pr/boreal/en/object/boreal%3A213794
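The agreement rate used in elicitation studies like this one is commonly computed with Vatavu and Wobbrock's formula, AR(r) = Σ |P_i|(|P_i| - 1) / (|P|(|P| - 1)), summed over groups of identical proposals for a referent r. A small sketch, assuming that formula is the measure the paper refers to:

```python
from collections import Counter

def agreement_rate(proposals):
    """AR(r) over one referent's proposals: the probability that two
    randomly drawn participants proposed the same gesture (without
    replacement), following Vatavu & Wobbrock's formula."""
    n = len(proposals)
    if n < 2:
        return 1.0 if n else 0.0
    groups = Counter(proposals)  # group identical gesture proposals
    return sum(k * (k - 1) for k in groups.values()) / (n * (n - 1))
```

For example, 10 participants proposing "nod", 10 "shrug", and 2 "tilt" for the same referent would land in the medium-consensus range reported above.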
G-Menu: A Keyword-by-Gesture based Dynamic Menu Interface for Smartphones - Jean Vanderdonckt
Instead of relying on graphical or vocal modalities for searching an item by keyword (called the K-Menu), this paper presents the G-Menu, which exploits gesture interaction and gesture recognition: when a user sketches a keyword by gesturing the first letters of its label, a menu with items related to the recognized letters is constructed dynamically and presented to the user for selection and auto-completion. The selection can be completed either gesturally by an appropriate gesture (the G-Menu) or by touch only (the T-Menu). This paper compares the three types of menu, i.e., by keyword, by gesture, and by touch, in a user study with twenty participants on their item selection time (for measuring task efficiency), their error rate (for measuring task effectiveness), and their subjective satisfaction (for measuring user satisfaction).
Paper accessible at https://dial.uclouvain.be/pr/boreal/en/object/boreal%3A213790
Unistroke and multistroke gesture recognizers have always striven to reach some robustness with respect to
all variations encountered when people issue gestures by hand
on touch surfaces or with sensing devices. For this purpose,
successful stroke recognizers rely on a gesture recognition
algorithm that satisfies a series of invariance properties such
as: stroke-order invariance, stroke-number invariance, stroke-direction invariance, and position, scale, and rotation invariance.
Before initiating any recognition activity, these algorithms
ensure these properties by performing several pre-processing
operations. These operations induce an additional computational
cost to the recognition process, as well as a potential error
bias. To cope with this problem, we introduce an algorithm that
ensures all these properties analytically, based on vector algebra,
instead of statistically. Instead of points, the recognition
algorithm works on vectors between vectors. We demonstrate
that this approach not only eliminates the need for these pre-processing
operations but also satisfies an entire structure-preserving
transformation.
Paper available at https://dial.uclouvain.be/pr/boreal/en/object/boreal%3A217006
Body-based gestures, such as those acquired by the Kinect sensor, today benefit from efficient tools for their recognition and development, but less so for automated reasoning. To facilitate this activity, an ontology for structuring body-based gestures, based on user, body and body parts, gestures, and environment, is designed and encoded in the Web Ontology Language (OWL) according to modelling triples (subject, predicate, object). As a proof-of-concept and to feed this ontology, a gesture elicitation study collected 24 participants × 19 referents for IoT tasks = 456 elicited body-based gestures, which were classified and expressed according to the ontology.
See paper at https://dl.acm.org/citation.cfm?id=3328238
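How such (subject, predicate, object) triples might look when an elicited gesture feeds the ontology can be sketched with plain tuples; every name below is invented for illustration and is not the ontology's actual vocabulary:

```python
# Hypothetical (subject, predicate, object) triples describing one
# elicited body-based gesture; names are illustrative only.
triples = [
    ("gesture_001", "rdf:type", "BodyBasedGesture"),
    ("gesture_001", "performedBy", "participant_07"),
    ("gesture_001", "usesBodyPart", "RightHand"),
    ("gesture_001", "mapsToReferent", "Turn light on"),
]

def objects(triples, subject, predicate):
    """Tiny query helper: all objects matching (subject, predicate, ?)."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects(triples, "gesture_001", "usesBodyPart"))
```

In practice an OWL toolchain would store and query these triples; the point here is only the triple structure the abstract describes.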
AB4Web: An On-Line A/B Tester for Comparing User Interface Design Alternatives, Jean Vanderdonckt
This document describes AB4Web, a web-based tool for conducting randomized A/B tests of user interface designs. The tool allows researchers to collect preference data from online participants on pairs of UI variants. It summarizes the results with measures like preference percentage, latent score of preference, and a preference matrix. The document demonstrates the tool by analyzing preferences across 49 existing graphical adaptive menu designs. Results showed which designs were most and least preferred overall. The tool provides a low-cost way to compare UI alternatives and study design preferences over time without technical expertise.
Gestural Interaction, Is it Really Natural?
1. Francqui Chair 2020, Inaugural Lesson:
Gestural Interaction, Is it Really Natural?
Jean Vanderdonckt, UCLouvain
Vrije Universiteit Brussel, February 20, 2020, 4 pm-6 pm
Location: Room I.0.02, Pleinlaan 2, B-1050 Brussels
Presented by
Prof. Dr. Beat Signer
3. 3
Jean Vanderdonckt
Université catholique de Louvain (UCLouvain)
Louvain School of Management (LSM)
Louvain Research Institute in Management and Organizations
(LouRIM)
Institute of Information and Communication Technologies,
Electronics and Applied Mathematics (ICTEAM)
Director of Louvain Interaction Lab
Place des Doyens, 1 – B-1348 Louvain-la-Neuve,
Belgium
4. Gestural Interaction (Francqui Chair, VUB, Brussels, February 20, 2020) 4
Some links
• Web site
https://wise.vub.ac.be/news/francqui-chair-2020-prof-jean-
vanderdonckt
• Join me on
• SlideShare: https://www.slideshare.net/jeanvdd
• LinkedIn: https://www.linkedin.com/in/jeanvdd/
• YouTube: https://www.youtube.com/user/jeanvdd
• Amazon: https://www.amazon.com/Jean-
Vanderdonckt/e/B01640UKYK
7. 7
• The intersection between the 5 human senses gives rise to
many possible interaction modalities
Psychological dimension: What is a gesture?
Gesture
8. 8
• The intersection between the 5 human senses gives rise to
many possible interaction modalities
• Gesture communication emerges in young children even
before development of language
Psychological dimension: What is a gesture?
Source: S.W. Goodwyn, L.P. Acredolo, C. Brown. (2000). Impact of Symbolic Gesturing on Early Language Development. Journal of
Nonverbal Behavior 24, 81-103
9. 9
• Innate gestures (natural?)
• Gestures that the user intuitively knows or that make sense,
based on the person’s understanding of the world
• Examples
• Pointing to aim a target
• Grabbing to pick an object
(MS Kinect)
• Pushing to select something
• Learned gestures (less natural => memorability?)
• Gestures the user needs to learn before
• Examples
• Waving to engage
• Making a specific pose
to cancel an action
Psychological dimension: What is a gesture?
10. 10
• The intersection between the 5 human senses gives rise to
many possible interaction modalities
• Gesture communication emerges in young children even
before development of language
• Blind people gesture as they speak just as much as
sighted individuals do, even when they know their listener
is also blind.
Psychological dimension: What is a gesture?
Source: S.W. Goodwyn, L.P. Acredolo, C. Brown. (2000). Impact of Symbolic Gesturing on Early Language Development. Journal of
Nonverbal Behavior 24, 81-103
11. 11
• The intersection between the 5 human senses gives rise to
many possible interaction modalities
• Gesture communication emerges in young children even
before development of language
• Blind people gesture as they speak just as much as
sighted individuals do, even when they know their listener
is also blind
• People gesture without a visual model
• Gestures therefore require neither a model nor an
observant partner
Psychological dimension: What is a gesture?
Source: J. M. Iverson, S. Goldin-Meadow. (1998). Why people gesture when they speak. Nature, 396:228
12. 12
• Kendon’s classification of gestures (1972)
Psychological dimension: What is a gesture?
Gesticulation
Spontaneous movements of the hands
and arms that accompany speech
Speech-framed gestures
Gesticulation that is integrated into a spoken
utterance, replacing a particular word
Pantomimes
Gestures that depict objects or actions, with or
without an accompanying speech
Emblems Familiar gestures accepted as a standard
Signs Complete linguistic system
13. • McNeill’s interpretation of Kendon’s continuum (1994)
Psychological dimension: What is a gesture?
Gesticulation
Speech-framed gestures
Pantomimes
Emblems
Signs
“Adam Kendon once
distinguished gestures
of different kinds along
a continuum that I
named “Kendon's
Continuum”, in his
honor.” [McNeill, 1992]
Source: D. McNeill. Hand and Mind: What Gesture Reveals about Thought. University Chicago Press, 1992
Kendon, A., Do gestures communicate? A review. Research on Language and Social Interaction 27, 1994, 175-200
Mandatory presence of speech
Optional presence of speech
Mandatory presence of speech frames
Optional absence of speech
Mandatory absence of speech
14. • McNeill’s interpretation of Kendon’s continuum (1994)
Psychological dimension: What is a gesture?
Gesticulation
Speech-framed gestures
Pantomimes
Emblems
Signs
Source: D. McNeill. Hand and Mind: What Gesture Reveals about Thought. University Chicago Press, 1992
Kendon, A., Do gestures communicate? A review. Research on Language and Social Interaction 27, 1994, 175-200
Mandatory presence of speech
Optional presence of speech
Mandatory presence of speech frames
Optional absence of speech
Mandatory absence of speech
“As one moves along Kendon’s
Continuum, two kinds of
reciprocal changes occur. First,
the degree to which speech is an
obligatory accompaniment of
gesture decreases from
gesticulation to signs. Second, the
degree to which a gesture shows
the properties of a language
increases.”
[McNeill, 1992]
15. • McNeill’s interpretation of Kendon’s continuum (1994)
Psychological dimension: What is a gesture?
Gesticulation
Speech-framed gestures
Pantomimes
Emblems
Signs
Source: D. McNeill. Hand and Mind: What Gesture Reveals about Thought. University Chicago Press, 1992.
“Gestures enhance,
complement, and
sometimes even
replace speech.”
16. • “Gestures (…) are communicative movements of the hands and arms which express — just as language — speakers’ attitudes, ideas, feelings and intentions…” (Müller, 1998)
Psychological dimension: What is a gesture?
17. • Saffer’s definition: “a gesture (…) is any physical movement that a digital system can sense and respond to without the aid of a traditional pointing device, such as a mouse or stylus”
Psychological dimension: What is a gesture?
Source: Saffer, D., Designing Gestural Interfaces, O'Reilly Media, November 2008.
Sensor
Gesture
recognizer
Actuator
Context of use =
(User, Platform/device, Environment)
Disturbances
Feedback
feeds drives
operates on
is sensed by
produces
18. • Turk’s definition in Human-Computer Interaction
(2002)
• ”…expressive, meaningful body motions –i.e. physical
movements of the fingers, hands, arms, head, face or
body with the intent to convey information or interact
with the environment.”
Psychological dimension: What is a gesture?
Sources: Turk, M. (2002). Gesture Recognition. In K. M. Stanney (Ed.), Handbook of Virtual Environments (pp. 223–237). London:
Lawrence Erlbaum Associates, Publishers.
Isabel Benavente Rodriguez, Nicolai Marquardt, Gesture Elicitation Study on How to Opt-in & Opt-out from Interactions
with Public Displays, Proc. of ISS ‘17, pp. 32-41.
19. • Aigner et al.’s taxonomy of mid-air gestures
• P= Pointing gestures (= deictic
gestures) indicate people,
objects, directions
Psychological dimension: What is a gesture?
Source: Aigner, R., Wigdor, D., Benko, H., Haller, M., Lindlbauer, D., Ion, A., Zhao, S., et al. (2012). Understanding Mid-Air Hand
Gestures: A Study of Human Preferences in Usage of Gesture Types for HCI.
20. • Aigner et al.’s taxonomy of mid-air gestures
• Semaphoric gestures are hand
postures and movements
conveying specific meanings
• T= Static semaphorics are identified by a
specific hand posture. Example: a flat palm
facing from the actor means “stop”.
• D= Dynamic semaphorics convey information
through their temporal aspects. Example: a
circular hand motion means “rotate”
• S= Semaphoric strokes are single, stroke-like movements such as hand flicks. Example: a left flick of the hand means “dismiss this object”
Psychological dimension: What is a gesture?
Source: Aigner, R., Wigdor, D., Benko, H., Haller, M., Lindlbauer, D., Ion, A., Zhao, S., et al. (2012). Understanding Mid-Air Hand
Gestures: A Study of Human Preferences in Usage of Gesture Types for HCI.
21. • Aigner et al.’s taxonomy of mid-air gestures
• M= Manipulation gestures guide a movement in a short feedback loop. Thus, they feature
a tight relationship between the
movements of the actor and the
movements of the object to be
manipulated. The actor waits for
the entity to “follow” before
continuing
Psychological dimension: What is a gesture?
Source: https://www.microsoft.com/en-us/research/publication/understanding-mid-air-hand-gestures-a-study-of-human-
preferences-in-usage-of-gesture-types-for-hci/
22. • A gesture is any particular type of body
movement performed in 1D, 2D, or 3D.
• e.g., Hand movement (supination, pronation, etc.)
• e.g., Head movement (lips, eyes, face, etc.)
• e.g., Full body movement (silhouette, posture, etc.)
Psychological dimension: What is a gesture?
Source: Kendon, A. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press, 2004.
23. • A gesture is any particular type of body
movement performed in 1D, 2D, or 3D
Psychological dimension: What is a gesture?
Individual body part Combined body parts
25. 25
Hardware dimension: How to gesture?
Paradigms of gesture interaction
Contact-based interaction
(surface limitation?)
26. 26
Hardware dimension: How to gesture?
Paradigms of gesture interaction
Contact-based interaction
(surface limitation?)
Contact-less interaction
With wearable
27. 27
Hardware dimension: How to gesture?
Paradigms of gesture interaction
Contact-based interaction
(surface limitation?)
Contact-less interaction
With wearable Without wearable
Close Far
29. 29
Software dimension: Which algorithm?
In Window mode In full screen mode
Source:
https://www.usenix.org/legacy/publications/library/proceedings/usenix03/tech/freenix03/full_papers/worth/worth_html/xstroke.html
XStroke
31. 31
Software dimension: Which algorithm?
Siger (2005)
Training phase
Vector string:
Stroke → Vector direction: Left → L, Right → R, Up → U, Down → D
Regular expression:
(NE|E|SE)+(NW|N|NE)+(SW|W|NW)+(SE|S|SW)+.
LU,U,U,U,U,U,RU,RU,RU,RU,RU,RU,R,R,R,R,R,R,R,R,RD,RD,RD, RD,D,D,D,D,LD,LD,LD,LD,L,L,LD,LD,LD,D,D,D,D,D,D,D,D,D,L
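The Siger idea above (quantize each stroke segment into a compass direction, then match gesture classes as regular expressions over the resulting token string) can be sketched as follows. The token set and the circle pattern are our assumptions for illustration, not Siger's actual implementation:

```python
import math
import re

# Eight compass directions, counter-clockwise from East.
DIRS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def to_direction_string(points):
    """Quantize each segment of a stroke into one of 8 directions."""
    tokens = []
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        angle = math.atan2(-(y2 - y1), x2 - x1)  # screen y grows downward
        idx = int(round(angle / (math.pi / 4))) % 8
        tokens.append(DIRS[idx])
    return ",".join(tokens)

# An assumed pattern for a rough circle drawn clockwise from the top-left.
CIRCLE = re.compile(r"^((NE|E|SE),?)+((SE|S|SW),?)+((SW|W|NW),?)+((NW|N|NE),?)+$")

# A small clockwise octagon standing in for a drawn circle.
octagon_points = [(0, 0), (1, -1), (2, -1), (3, 0), (3, 1),
                  (2, 2), (1, 2), (0, 1), (0, 0)]
s = to_direction_string(octagon_points)
print(s)
print(bool(CIRCLE.match(s)))
```

A gesture class is thus just a regular expression over the direction alphabet, which is why Siger needs no numeric template matching at recognition time.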
33. 33
Software dimension: Which algorithm?
Source: Beat Signer, U. Kurmann, Moira C. Norrie: iGesture: A General Gesture Recognition Framework. ICDAR 2007: 954-958
35. 35
Nearest-Neighbor-Classification (NNC) for 2D strokes
[Figure: left, candidate points (dots) and reference points (x) plotted in the unit square [0..1]×[0..1], showing the k nearest neighbors and the single nearest-neighbor distance; right, nearest-neighbor classification applied to gesture recognition, where a candidate gesture (points p1…p4) is matched against reference gestures (points q1…q4) from a training set using k-NN (k nearest neighbors) or 1-NN (single nearest neighbor)]
Software dimension: Which algorithm?
36. 36
Nearest-Neighbor-Classification (NNC)
• Pre-processing steps to ensure invariance
• Re-sampling
• Points with same space between: isometricity
• Points with same timestamp between: isochronicity
• Same number of points: isoparameterization
• Re-Scaling
• Normalisation of the bounding box into [0..1]x[0..1] square
• Rotation to reference angle
• Rotate to 0°
• Re-rotating and distance computation
• Distance computed between candidate gesture and
reference gestures (1-NN)
Software dimension: Which algorithm?
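The three pre-processing steps listed above (isometric re-sampling, re-scaling into the unit box, rotation to a reference angle) can be sketched in Python. This is a minimal illustration in the spirit of the $1-family recognizers, not a published implementation; strokes are lists of (x, y) tuples:

```python
import math

def path_length(pts):
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def resample(pts, n=64):
    """Isometric re-sampling: n points with equal spacing along the path."""
    interval = path_length(pts) / (n - 1)
    pts = list(pts)
    out = [pts[0]]
    d, i = 0.0, 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if seg > 0 and d + seg >= interval:
            t = (interval - d) / seg
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the new point
            d = 0.0
        else:
            d += seg
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out

def rescale(pts):
    """Normalize the bounding box into the [0..1]x[0..1] square."""
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    w = (max(xs) - min(xs)) or 1.0
    h = (max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in pts]

def rotate_to_zero(pts):
    """Rotate so the centroid-to-first-point angle becomes 0 degrees."""
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    theta = math.atan2(cy - pts[0][1], cx - pts[0][0])
    c, s = math.cos(-theta), math.sin(-theta)
    return [((x - cx) * c - (y - cy) * s + cx,
             (x - cx) * s + (y - cy) * c + cy) for x, y in pts]

print(resample([(0, 0), (10, 0)], 5))
```

Every candidate and every stored template goes through the same three steps before the 1-NN distance is computed, which is exactly the computational overhead the !FTL abstract above aims to eliminate.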
37. 37
Nearest-Neighbor-Classification (NNC)
• Two families of approaches
• “Between points” distance
• $-Family recognizers: $1, $3, $N, $P, $P+,
$V, $Q,…
• Variants and optimizations: ProTractor,
Protractor3D,…
• “Vector between points” distance
• PennyPincher, JackKnife,…
[Vatavu R.-D. et al, ICMI ’12]
[Taranta E.M. et al, C&G ’16]
Software dimension: Which algorithm?
38. 38
Nearest-Neighbor-Classification (NNC)
• Two families of approaches
• “Between points” distance
• $-Family recognizers: $1, $3, $N, $P, $P+,
$V, $Q,…
• Variants and optimizations: ProTractor,
Protractor3D,…
• “Vector between points” distance
• PennyPincher, JackKnife,…
• A third new family of approaches
• “Vector between vectors” distance:
our approach
Software dimension: Which algorithm?
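A hedged sketch of the "between points" family combined with 1-NN classification: after pre-processing, the candidate is compared point by point to each stored template, and the label of the nearest template wins. All names and sample gestures below are ours; templates are assumed to be resampled to the same number of points:

```python
import math

def path_distance(a, b):
    """Mean point-to-point Euclidean distance between two strokes."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(candidate, templates):
    """templates: list of (label, points). Returns (label, distance)."""
    return min(((label, path_distance(candidate, pts))
                for label, pts in templates), key=lambda t: t[1])

# Two hypothetical templates sampled with 4 points each.
line = [(i / 3, 0.0) for i in range(4)]
diag = [(i / 3, i / 3) for i in range(4)]

# A nearly horizontal stroke should be classified as "line".
label, dist = recognize([(i / 3, 0.05) for i in range(4)],
                        [("line", line), ("diagonal", diag)])
print(label)
```

The $-family recognizers refine this core loop (e.g., with candidate rotation search or point-cloud matching), but the 1-NN skeleton is the same.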
39. 39
• Local Shape Distance between 2 triangles based on
similarity (Roselli’s distance)
[Figure: two triangles, one built on side vectors 𝑎 and 𝑏 with third side 𝑎 + 𝑏, the other on side vectors 𝑢 and 𝑣 with third side 𝑢 + 𝑣]
Paolo Roselli
Università degli Studi di Roma, Italy
Software dimension: Which algorithm?
Source: Lorenzo Luzzi & Paolo Roselli, The shape of planar smooth gestures and the convergence of a gesture recognizer, Aequationes
mathematicae volume 94, 219–233(2020).
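One way to realize a similarity-invariant comparison between two such triangles is to summarize each triangle built on side vectors (𝑎, 𝑏) or (𝑢, 𝑣) by a complex ratio, which translation, uniform scaling, and rotation leave unchanged, and then take the distance between the two ratios. This is an illustration in the spirit of Roselli's distance, not the exact published formula:

```python
def shape(p1, p2, p3):
    """Complex ratio b/a of the two side vectors of triangle p1p2p3.
    Invariant under translation, uniform scaling, and rotation.
    Assumes no two consecutive points coincide (a != 0)."""
    a = complex(*p2) - complex(*p1)  # side vector a
    b = complex(*p3) - complex(*p2)  # side vector b
    return b / a

def local_shape_distance(tri1, tri2):
    """Distance between the shapes of two triangles (0 when similar)."""
    return abs(shape(*tri1) - shape(*tri2))

# Two similar triangles (second one scaled by 2 and translated): distance 0.
t1 = ((0, 0), (1, 0), (1, 1))
t2 = ((5, 5), (7, 5), (7, 7))
print(local_shape_distance(t1, t2))
```

Because the ratio is already similarity-invariant, no re-sampling, re-scaling, or rotation pre-processing is needed before comparing triangles, which is the analytical route the earlier abstract describes.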
40. 40
• Step 1. Vectorization for each pair of vectors between
three consecutive points
[Figure: training gesture sampled as points p1…p6 and candidate gesture sampled as points q1…q6]
Software dimension: Which algorithm?
Source: Jean Vanderdonckt, Paolo Roselli, Jorge Luis Pérez-Medina, !FTL, an Articulation-Invariant Stroke Gesture Recognizer with
Controllable Position, Scale, and Rotation Invariances. ICMI 2018: 125-134
41. 41
• Step 1. Vectorization for each pair of vectors between
three consecutive points
[Figure: the same training gesture (p1…p6) and candidate gesture (q1…q6), now with vectors drawn between consecutive points]
Software dimension: Which algorithm?
Source: Jean Vanderdonckt, Paolo Roselli, Jorge Luis Pérez-Medina, !FTL, an Articulation-Invariant Stroke Gesture Recognizer with
Controllable Position, Scale, and Rotation Invariances. ICMI 2018: 125-134
42. 42
• Step 2. Mapping candidate’s triangles onto training
gesture’s triangles
[Figure: triangles formed over three consecutive points of the candidate gesture (q1…q6) mapped onto the training gesture’s triangles p1p2p3, p2p3p4, p3p4p5, p4p5p6]
Software dimension: Which algorithm?
Source: Jean Vanderdonckt, Paolo Roselli, Jorge Luis Pérez-Medina, !FTL, an Articulation-Invariant Stroke Gesture Recognizer with
Controllable Position, Scale, and Rotation Invariances. ICMI 2018: 125-134
43. 43
• Step 2. Mapping candidate’s triangles onto training
gesture’s triangles
[Figure: the same mapping of candidate triangles onto training triangles, completed for all point triples]
Software dimension: Which algorithm?
Source: Jean Vanderdonckt, Paolo Roselli, Jorge Luis Pérez-Medina, !FTL, an Articulation-Invariant Stroke Gesture Recognizer with
Controllable Position, Scale, and Rotation Invariances. ICMI 2018: 125-134
44. 44
• Step 3. Computation of Local Shape Distance between
pairs of triangles
[Figure: training gesture (p1…p6) and candidate gesture (q1…q6) decomposed into triangles]
(N)LSD(p1p2p3, q1q2q3) = 0.02
(N)LSD(p2p3p4, q2q3q4) = 0.04
(N)LSD(p3p4p5, q3q4q5) = 0.0001
(N)LSD(p4p5p6, q4q5q6) = 0.03
Software dimension: Which algorithm?
Source: Jean Vanderdonckt, Paolo Roselli, Jorge Luis Pérez-Medina, !FTL, an Articulation-Invariant Stroke Gesture Recognizer with
Controllable Position, Scale, and Rotation Invariances. ICMI 2018: 125-134
45. 45
• Step 4. Summing all individual figures into final one
• Step 5. Iterate for every training gesture
[Figure: the same training and candidate gestures decomposed into triangles]
(N)LSD(p1p2p3, q1q2q3) = 0.02
(N)LSD(p2p3p4, q2q3q4) = 0.04
(N)LSD(p3p4p5, q3q4q5) = 0.0001
(N)LSD(p4p5p6, q4q5q6) = 0.03
Total = 0.02 + 0.04 + 0.0001 + 0.03 = 0.0901
(indicative figures)
Software dimension: Which algorithm?
Source: Jean Vanderdonckt, Paolo Roselli, Jorge Luis Pérez-Medina, !FTL, an Articulation-Invariant Stroke Gesture Recognizer with
Controllable Position, Scale, and Rotation Invariances. ICMI 2018: 125-134
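Steps 1 to 5 above can be combined into a short end-to-end sketch. It approximates the !FTL idea, using a complex-ratio shape comparison as a stand-in for the published Local Shape Distance; gestures are assumed to be sampled with the same number of points, with no two consecutive points coinciding:

```python
def shape(p1, p2, p3):
    """Similarity-invariant complex ratio of a triangle's side vectors."""
    return (complex(*p3) - complex(*p2)) / (complex(*p2) - complex(*p1))

def total_distance(candidate, training):
    # Steps 1-4: triangles over three consecutive points, distances summed.
    tris = zip(candidate, candidate[1:], candidate[2:])
    ref = zip(training, training[1:], training[2:])
    return sum(abs(shape(*t) - shape(*r)) for t, r in zip(tris, ref))

def recognize(candidate, training_set):
    # Step 5: iterate over every training gesture, keep the closest label.
    return min(training_set,
               key=lambda tg: total_distance(candidate, tg[1]))[0]

# Hypothetical gestures sampled with the same number of points.
L_shape = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]
line = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
candidate = [(5, 5), (5, 7), (5, 9), (7, 9), (9, 9)]  # a bigger, shifted L
print(recognize(candidate, [("L", L_shape), ("line", line)]))
```

Note that the candidate is recognized even though it is translated and scaled relative to the template, with no pre-processing pass at all; that absence of pre-processing is the point of the approach.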
46. 3D Hand gesture recognition
Software dimension: Which algorithm?
47. Full body gesture recognition
Software dimension: Which algorithm?
See video at
https://youtu.be/RTEGMlDRDL0
50. • Smart Home: TV, fridge, coffee machine,…
• Example: Samsung Smart TV
Usage dimension: Which application domains?
51. • Ring device: gesture elicitation study
Usage dimension: Which application domains?
Source: Bogdan-Florin Gheran, Jean Vanderdonckt, Radu-Daniel Vatavu, Gestures for Smart Rings: Empirical Results, Insights, and
Design Implications. Conference on Designing Interactive Systems 2018: 623-635
52. • Ring device at Home (Family management)
Usage dimension: Which application domains?
53. • Ring device at Home (Family management)
Usage dimension: Which application domains?
55. • History: chironomia
Social dimension: critical factors
Source: Gilbert Austin, Chironomia, or a Treatise on Rhetorical Delivery (1806). Ed. Mary Margaret Robb and Lester Thonssen.
Carbondale, IL: Southern Illinois UP, 1966.
56. Social dimension: critical factors
56
Gestures in Movies
• 12 Angry Men
(dir.: S. Lumet)
The defense and the prosecution have rested and the
jury is filing into the jury room to decide if a young man
is guilty or innocent of murdering his father. What
begins as an open-and-shut case of murder soon
becomes a detective story that presents a succession
of clues creating doubt, and a mini-drama of each of the
jurors' prejudices and preconceptions about the trial,
the accused, and each other. Based on the play, all of
the action takes place on the stage of the jury room.
58. Range of motion
• Range of motion
• Relates to the distance between the position of the human
body producing the gesture and the location of the
gesture
• Possible values are:
• C= Close intimate, I= Intimate, P= Personal, S=Social,
U= Public, R= Remote
Social dimension: critical factors
59. • Cultural influence and interpretation
• The gesture ”The ring” has four major meanings:
OK/Good, Orifice, Zero, Threat
Social dimension: critical factors
Source: Morris, Collett, Marsh, & O’Shaghnessy (1979)
• “Very tasty”: how to gesture that?
61. • Cultural influence and interpretation
Social dimension: critical factors
• “Select”: how to gesture that?
Italy: pointing Sweden: open hand Turkey: two open hands
62. • Engagement by propagation:
only by careful consideration of social gestures
Social dimension: critical factors
Source: Jean-Yves Lionel Lawson, Jean Vanderdonckt, Radu-Daniel Vatavu: Mass-Computer Interaction for Thousands of Users and
Beyond. CHI Extended Abstracts 2018
www.skemmi.com
See video at https://www.youtube.com/watch?v=IZaAl59AUk8
63. • Social acceptance or reluctance?
Social dimension: critical factors
What do you think of the Itchy Nose?
See video at https://www.youtube.com/watch?v=IQ_LkPM_GHs
64. • Social acceptance or reluctance?
Social dimension: critical factors
[Bar chart: agreement rates (ranging from 0.103 to 0.509) for nose-based gestures — e.g., center tap, left/right push, left-to-right and right-to-left flicks, both-hands tap, double tap, continuous rubbing, top push, top-to-bottom flick — across the referents Start Player, Go to Previous Item, Turn Alarm Off, Increase Volume, Turn Light On, Decrease Volume, Turn TV Off, Turn TV On, Dim Light, and Turn Alarm On]
Source: Jorge Luis Pérez-Medina, Santiago Villarreal, Jean Vanderdonckt: A Gesture Elicitation Study of Nose-Based Gestures.
Sensors 20(24): 7118 (2020)
67. • Compatibility
• Imposition: each OS imposes its own set of gestures
• Some are natural to use
• Others are not natural at all and remain unused
• Lack of acceptability:
• Some gestures are simple for the system to recognize, yet hard for end users to remember and reproduce
• Some gestures are accepted by end users, but harder for the system to recognize
• Gestures are often system-defined, sometimes designer-defined, rarely user-defined
• Need for a Gesture Elicitation Study (GES)
• A study in which end users are asked to elicit their own gestures for a set of predefined functions (referents) and to reach a consensus
User experience dimension: evaluation criteria
68. • Compatibility
• Need for a natural conceptual model: Virtual Library
User experience dimension: evaluation criteria
Source: https://www.youtube.com/watch?v=ls5kj7oVwto
69. • Consistency
• Imposition: each OS imposes its own set of gestures
• A few gestures are common (hopefully, natural)
• Most other gestures are inconsistent
User experience dimension: evaluation criteria
Source: Ryan Lee, www.gesturecons.com
71. • Consistency: standardization?
• Gesture example “Shake”: Wake up, Update, Reset, Next
track, Shuffle, Unlock, Enter a comment (SnappView)
User experience dimension: evaluation criteria
72. • Discoverability
• GUI interaction is based on
• action exploration (e.g., by menu)
• recognition (best)
• Gestures are not easy to discover
• Solutions are appearing: feedforward
User experience dimension: evaluation criteria
Sources: Donald A. Norman, J. Nielsen, Gestural interfaces: a step backward in usability, Interactions 17(5), Sept. 2010
Olivier Bau, Wendy E. Mackay, OctoPocus: a dynamic guide for learning gesture-based command sets. UIST 2008: 37-46
73. • Control: explicit, mixed, not implicit
• “When users think they did one thing but actually did
something else, they lose their sense of controlling the
system because they don't understand the connection
between actions and results.”
User experience dimension: evaluation criteria
Sources: Donald A. Norman, J. Nielsen, Gestural interfaces: a step backward in usability, Interactions 17(5), Sept. 2010.
Bert Schiettecatte, Jean Vanderdonckt, AudioCubes: a distributed cube tangible interface based on interaction range for
sound design. Tangible and Embedded Interaction 2008: 3-10
Commands → AudioCube action(s):
• DOF=0 (discrete) → Grab and set
• DOF=1 (linearly correlated) → Move cube in 2D, almost 3D, 3D
75. • Control: explicit, mixed, not implicit
User experience dimension: evaluation criteria
Commands → AudioCube action(s):
• DOF=1 (rotationally correlated) → Rotate in 2D, 3D
• DOF=2 (freeform) → 2D, 3D gestures
76. • Physical demand depends on variables
• Gesture form: specifies which form of gesture is elicited. Possible values are:
• S= stroke, when the gesture only consists of taps and flicks
• T= static, when the gesture is performed in only one location
• M= static with motion, when the gesture is performed with a static pose while the rest of the body is moving
• D= dynamic, when the gesture captures change or motion
User experience dimension: evaluation criteria
77. • Physical demand depends on variables
• Laterality: characterizes how the two hands are
employed to produce gestures, with two categories, as
done in many studies. Possible values are:
• D= dominant unimanual, N= non-dominant unimanual,
S= symmetric bimanual, A= asymmetric bimanual
User experience dimension: evaluation criteria
Source: https://www.tandfonline.com/doi/abs/10.1080/00222895.1987.10735426
[Photos: D, N, S, and A laterality values as performed by a right-handed user]
78. • Agreement among end users
• Agreement Rate = the number of pairs of participants in
agreement with each other divided by the total number
of pairs of participants that could be in agreement
• Co-agreement can be computed for pairs, groups (e.g., male vs. female), and categories of referents (e.g., basic vs. advanced)
User experience dimension: evaluation criteria
[Formulas: agreement rate, disagreement rate, co-agreement rate]
Source: Radu-Daniel Vatavu, Jacob O. Wobbrock, Between-Subjects Elicitation Studies: Formalization and Tool Support. CHI 2016:
3390-3402.
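The pairwise definition above can be sketched in code. This is a minimal illustration of the agreement-rate computation described on the slide, not the tooling from the cited paper; the function name and the sample proposals are made up for the example:

```python
from collections import Counter

def agreement_rate(proposals):
    """Pairwise agreement rate for one referent: the number of pairs of
    participants in agreement with each other, divided by the total
    number of pairs of participants that could be in agreement."""
    n = len(proposals)
    if n < 2:
        return 0.0
    # Group identical proposals, then count the agreeing pairs
    # within each group of participants who proposed the same gesture.
    agreeing_pairs = sum(c * (c - 1) // 2 for c in Counter(proposals).values())
    total_pairs = n * (n - 1) // 2
    return agreeing_pairs / total_pairs

# Hypothetical proposals from 4 participants for the referent "Turn TV On":
print(agreement_rate(["swipe", "swipe", "tap", "swipe"]))  # 3 of 6 pairs agree -> 0.5
```

Co-agreement between groups (e.g., male vs. female) can then be obtained by running the same computation on each group's proposals and comparing the rates.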
79. • FUN!
• In games, all (body) gestures are permitted
• In professional contexts, a gesture could be considered awkward or inappropriate
User experience dimension: evaluation criteria
Example: MiniEurope (Alterface)
81. 81
• Gesture interaction is suitable for
• Natural interactions: interact directly with objects in a physical way
• Less cumbersome or visible hardware
• Flexibility in hardware
• Fun
• Gesture interaction is NOT suitable for
• Heavy data input (use keyboards instead)
• Absence of visual feedback (e.g., a system without a screen or
targeting users with visual impairments)
• Unmet physical demands (e.g., swipe to receive a phone call in
winter)
• Constrained contexts of use (e.g., privacy, embarrassment)
• User and task
• Platform/device
• Environment
Conclusion: is it really natural?
Kendon [12] added that an important part of ‘kinesics’ research shows that gesture phrases can be organized in relation to speech phrases. We can draw a parallel between his arguments and pen-based interaction, with pen gestures playing the role of natural human gestures and the instantiated sketch-objects playing the role of speech contents. He also stated that there is a consistent patterning in how gesture phrases are formed in relation to the phrases of speech: just as, in continuous discourse, speakers group tone units into higher-order groupings resembling a hierarchy, so gesture phrases may be similarly organized.
Gestures that are put together to form phrases of bodily actions have the characteristics that permit them to be ‘recognized’ as components of willing communicative action
Designed for speech-related gestures
Not completely relevant for interaction design
Gesture-based interaction without the support of speech input
Tailor-made for interaction design
Chironomia is the art of using gesticulations or hand gestures to good effect in traditional rhetoric or oratory. Effective use of the hands, with or without the use of the voice, is a practice of great antiquity, which was developed and systematized by the Greeks and the Romans. Various gestures had conventionalized meanings which were commonly understood, either within certain class or professional groups, or broadly among dramatic and oratorical audiences.