
New Technologies and Applications of Visual-Based Human–Computer Interactions

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 May 2025

Special Issue Editor


Prof. Dr. Kai Essig
Guest Editor
Faculty of Communication and Environment, Rhine-Waal University of Applied Sciences, Kamp-Lintfort, Germany
Interests: eye tracking; human computer interaction; cognitive assistive systems; human factors; usability engineering

Special Issue Information

Dear Colleagues,

Currently, human–computer interactions (HCIs) are still mostly carried out using established input devices (e.g., mouse, touchscreen, keyboard, joystick) or, more recently, spatially tracked controllers and gloves, which have emerged primarily with the advent of virtual reality. These input devices provide limited input capabilities (e.g., pointing, selecting, dragging, aiming), require a familiarization period, and do not provide inputs with sufficiently high degrees of freedom and accuracy.

The combination of artificial intelligence, computer vision and visual sensors (such as cameras and depth sensors) has enabled the implementation of new, more intuitive and natural HCIs, for example, controller-free hand tracking, as found in recent VR headsets. Analogously, modern natural language processing (NLP) using deep learning and large language models (LLMs) has enabled systems to understand human speech across varied speech patterns and dialects, and to infer user intentions, e.g., using OpenAI Whisper for speech recognition and LLMs such as ChatGPT or Llama for semantic processing.

With regard to visual information, these developments have also led to new and promising improvements. For example, user biometrics (such as facial expressions, uncertainty, posture, gestures, and general activity) can be extracted from the video streams of several cameras in real time and serve as a passive, non-intrusive, non-contact and natural input alternative for HCI. Eye gaze, emotions, and facial expressions can be used to estimate what the user is focusing on and whether they are currently unsure or understand the current situation. Detected hand movements and gestures can be applied to control systems or, in combination with gaze, to anticipate which object the user intends to grasp (intention prediction, expertise estimation using the eye–hand span). They can also be used to complement other senses, such as hearing, smell or touch. Visual-based HCI is therefore probably the most widespread research area in HCI in the investigation of natural and intuitive interfaces between humans and technical systems.

Although there has been much progress in the field of visual-based HCIs over the last decade, these methods are not yet fully established and mature, leaving many opportunities for future research and development. Interesting open questions include the following: which methods and techniques should be used individually or in combination, in a time- or context-dependent manner; how can they be made faster and more robust; how can systems be made more adaptive and natural; and how should visual information in general be applied more prominently in HCIs? A further question is how auditory, olfactory and haptic inputs and outputs can complement visual input and output.

To provide an overview of these recent developments in this fascinating and rapidly developing area, we invite submissions on a range of topics to a Special Issue titled “New Technologies and Applications of Visual-Based Human–Computer Interactions”.

Congruent with the overall aim of the journal, we hope to stimulate a useful and interdisciplinary interchange between individuals working primarily on fundamental and theoretical issues and those working on applied aspects.

We are particularly interested in papers that propose the application of visual-based HCI techniques, alone as well as in combination with other modalities (such as speech, eye tracking, EEG), within this broad field. Such papers can be explorative, empirical, meta-analytic, or technical. Papers that seek to bridge the gap between theoretical and applied aspects, as well as papers that study visual-based HCI integration in natural and intuitive forms of HCI, are especially welcome.

Some examples of possible topics for this Special Issue are provided below. These should be viewed as an indicative list rather than an exhaustive catalog. Individuals unsure about whether a proposed submission would be appropriate are invited to contact the Special Issue Guest Editor, Kai Essig.

  1. Techniques of visual-based HCI applied on visual sensor data:
    • Head and face detection and tracking
    • Tracking and analysis of (large-scale) body movements
    • Hand tracking and gesture recognition for natural machine control
    • Recognition of postures, gestures, and general activities
    • Gaze-based interaction
    • Understanding users’ attention and intention from visual data, and predicting user behavior
    • Recognizing and analyzing emotions, feelings and intention through visual data
    • Cognitive load reduction, biofeedback and stress reduction
  2. Visual-based HCI for Personalized HCI
    • AI-based learning from extracted visual-based data sources for personalization/adaption in HCI
    • Enhancing accessibility for users with disabilities
  3. Visual-based HCI: storage, privacy and security aspects
    • Storage and processing of visual data in repositories and databases
    • Privacy and data security issues
    • Visual formalisms and mechanisms supporting both the presentation and the interaction with data
    • Biometric authentication, such as iris and face recognition
    • Protection, encryption and anonymization of visual biometric data
    • Special application areas of visual-based HCI in security and surveillance
  4. General Aspects of Visual-based HCI
    • Visual-based HCI techniques alone as well as in combination with other modalities (such as speech, eye tracking, EEG)
    • Visual-based HCI to complement auditory, olfactory and haptic senses
    • Technical specifications and performance analysis (e.g., latency analysis and reduction) of visual-based HCI
    • Usability evaluation of visual-based HCI
    • New frontiers and application areas in visual-based HCI
    • Summary or outline paper on current status and future developments

Prof. Dr. Kai Essig
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • visual-based HCI
  • AI
  • camera stream processing
  • multi-modal interfaces
  • movement
  • intention
  • emotion and face recognition
  • activity
  • intention and gesture recognition
  • presentation of visual data
  • interactive data visualizations
  • multi-modal databases

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers

This special issue is now open for submission.