CHI Conference Proceedings. Research article, open access.
DOI: 10.1145/3544548.3580962

A Framework and Call to Action for the Future Development of EMG-Based Input in HCI

Published: 19 April 2023

Abstract

Electromyography (EMG) has been explored as an HCI input modality following a long history of success for prosthesis control. While EMG has the potential to address a range of hands-free interaction needs, it has yet to be widely accepted outside of prosthetics due to a perceived lack of robustness and intuitiveness. To understand how EMG input systems can be better designed, we sampled the ACM digital library to identify limitations in the approaches taken. Leveraging these works in combination with our research group’s extensive interdisciplinary experience in this field, four themes emerged: (1) interaction design, (2) model design, (3) system evaluation, and (4) reproducibility. Using these themes, we provide a step-by-step framework for designing EMG-based input systems to strengthen the foundation on which EMG-based interactions are built. Additionally, we provide a call to action for researchers to unlock the hidden potential of EMG as a widely applicable and highly usable input modality.

1 Introduction

The HCI community is constantly searching for new hands-free input modalities to enable more intuitive, convenient, and efficient input for interactive systems. One such method is through the use of electromyography (EMG) signals – the electrical impulses produced during muscular contractions [167]. EMG is a particularly attractive solution as it provides a compact and low-powered means for detecting muscle-based inputs while avoiding many of the limitations imposed by other technologies. For example, it eliminates the need for cumbersome equipment such as data gloves [21] and can be less invasive and computationally complex than computer vision approaches [35, 160]. Additionally, depending on the desired interaction, users may elicit low-level muscle contractions that are faster and more subtle than the dynamic gestures or body postures required for other input modalities. While different sensing methods exist for measuring EMG signals [16, 105], surface EMG is by far the most common and attractive method used today as it is non-invasive and relatively robust.
Surface EMG (sEMG) uses electrodes placed directly on the surface of the skin to measure the electrical activity over a muscle site. The EMG signals acquired from these sensors are then converted into control signals that can be used to control a device or interface, effectively leveraging natural human contractions as an intuitive control input. sEMG technology has been around since the early 1900s [15] but has remained largely inaccessible for general-purpose use due to a lack of inexpensive, usable, and commercially available equipment.
The evolution of myoelectric control — the control of a device using EMG signals [11] — has primarily been driven by its niche application in prosthetics, enabling amputees to operate a powered prosthesis using the muscles of their residual limb. Early myoelectric control systems took a one-muscle one-function approach, where antagonistic muscles — muscles whose actions oppose each other (e.g., flexion/extension of the wrist) — were used as inputs for device control [135]. Each muscle was the functional equivalent of a momentary switch, with outputs being turned on when the muscle contraction exceeded some threshold. While this resulted in a simple control scheme, it offered a limited number of viable sites for control inputs and thus required unintuitive mode switching to increase the input space and enable effective device control [55]. To improve the control of more degrees of freedom, the synergistic behaviour between multiple muscle sites was introduced using pattern recognition [53]. Pattern recognition leverages machine learning algorithms to learn and detect repeatable patterns in EMG signals. Over the past two decades, the prosthetics community has slowly improved pattern recognition-based myoelectric control, reaching near-perfect offline classification accuracies (in controlled lab-based settings) [17, 97, 172], and pattern recognition-based myoelectric prosthesis control is now commercially available from several vendors [1, 2, 6]. Nevertheless, the research community continues to explore improvements to its intuitiveness, robustness, and dexterity, such as through algorithms that enable simultaneous independent proportional control (control over multiple degrees of freedom at once) [9, 65, 85]. Despite this rich history of success in prosthesis control, however, it wasn’t until the mid-2000s that EMG began to gain popularity within the HCI community [37, 38, 111, 132, 133, 134].
With the release of an inexpensive commercially available sEMG device in 2014 [130], EMG became a viable option for broad use by HCI researchers. Since then, EMG has been leveraged for a wide range of general-purpose applications, such as: controlling drones [147], enhanced piano control [88], sign language recognition [116], RC car control [92], spelling systems [158], interactions with games [173], and augmenting gym experiences [72], to name a few. However, while EMG-based control has become accessible to the HCI community and is continuing to grow in popularity, it has yet to make significant strides toward being used in real-world general-purpose applications. By ‘general-purpose’ we mean applications that are outside the most common use of controlling a prosthetic limb.

1.1 Scope and Contribution

To increase success in research and development practices for using EMG-based inputs in general-purpose applications, we make three specific contributions. First, we synthesize the main limitations of EMG-based input that exist in a sampling of HCI work. Second, we outline a framework for designing and building EMG-based input control in interactive systems. Third, we provide a call to action identifying the key areas of research that need to be addressed for the continued and improved success of EMG-based input. We elaborate on these contributions below.
The lack of widespread adoption of EMG-based control may be attributed, at least in part, to the fact that designing and implementing robust EMG-based input systems is inherently complex and requires careful treatment of the stochastic EMG signal. As a result, the successful adoption and application of EMG-based control for HCI researchers as part of a larger interactive system is extremely challenging. Inevitably, this has led to systems lacking the performance, robustness, and intuitiveness required for everyday, real-world use in general-purpose applications. These observations corroborate those made by members of our larger research team and through our longstanding collaborations with partners in both the prosthetics community and HCI fields. Our research team includes members of an HCI research lab, a biomedical engineering research lab, and a prosthetics clinic. We have published EMG research in HCI and biomedical engineering venues, in addition to working with industry collaborators who are developing commodity-level general-purpose EMG wearables. These experiences have reinforced the notion that understanding the exact source and nature of the difficulties that are currently hindering the adoption of EMG-based control is crucial for its successful uptake as an enabling technology, beyond its use in powered prostheses.
To characterize and synthesize the current challenges and limitations hindering the reliable use of EMG as an input modality outside of prosthesis control, we sampled the ACM library for HCI applications leveraging EMG for control. Through this process, we found an overarching theme of direct, and sometimes naive, adoption of the work done in the prosthetics community. Because this work has evolved for a highly specific — and different — use case, the reliance on methods designed specifically for prosthesis control may be leading to sub-optimal solutions within HCI and in applications to different contexts. Our findings suggest that for EMG systems to reliably perform at the levels that would be expected for real-world commercial applications, it is imperative that a solid basis and understanding of EMG and its use is available. From capturing EMG signals to the use of static contraction or dynamic gesture recognition, to algorithm selection, our findings suggest that HCI researchers could benefit from a common starting point that bridges the gap from prosthesis control to general-purpose use. While HCI researchers can leverage the expertise developed within prosthetics research, we must consider the unique challenges, opportunities, and use cases for EMG-based interactions with general-purpose applications in mind.
Having identified potential limitations to the general-purpose use of EMG input and drawing on our experience in building EMG-based input from the ground up, we propose a design framework for developing EMG-based control systems, specifically with HCI researchers in mind. Further, we provide a roadmap and call to action for research into EMG-based input. In doing so, we hope to facilitate the design process and success of these systems, and in turn, challenge researchers to unlock the hidden potential of EMG for general-purpose applications.

1.2 Layout

The rest of the paper is divided into five main sections. Section 2 (Background) provides a brief overview of the physiology of EMG and the history of its use for prosthesis control and general-purpose applications. This section sets the stage for exploring the use of EMG later in the paper. Section 3 (Exploring EMG Applications in HCI) highlights the process that we followed to extract the current challenges and limitations hindering EMG-based development from an HCI perspective. We also discuss the four main themes from our findings: (1) interaction design, (2) model design, (3) system evaluation, and (4) reproducibility. Sections 4 and 5 (A Framework for Building EMG-Based Interactions and Further Challenges and Considerations) lay out a new design framework for designing and adopting these interactions. Finally, in section 6, we discuss this work and its limitations, and evaluate the future of EMG research, providing a call to action for the HCI community.

2 Background

2.1 The EMG Signal

The electromyography (EMG) signal is a manifestation of the electrical activity that is created by a muscle when it contracts [117]. The initial discovery of these biological signals can be traced back to the mid-1600s [39]; however, it was not until the early 1900s that advancements in technologies such as the galvanometer and the cathode ray oscilloscope enabled a more detailed analysis of the EMG signal [39]. Since then, EMG has been used for many different purposes, including identifying neuromuscular diseases [52, 149, 150], general human-machine interaction [12, 88, 90, 116], and powered prosthesis control [114, 135]. Physiologically, when a muscle contracts, action potentials propagate along the membrane of each muscle fiber. The combination of these action potentials from the different muscle fibers of each motor unit is known as a motor unit action potential (MUAP) [129]. The EMG signal is the summation of all of these MUAPs and is dependent on the physiological properties of the particular muscle and the relative recording location [129]. These action potentials occur at pseudo-random intervals and at varying distances from the sensors, causing the stochastic, or random, behaviour of the EMG signal [117]. For general-purpose applications, sEMG is predominantly used (as opposed to intramuscular EMG, which involves the insertion of wires, needles, or sensors into the muscle transcutaneously) because it is non-invasive, easy to use, and relatively robust [135]. Although preferable, sEMG remains susceptible to signal corruption due to motion artifact, improper skin-electrode contact, and other transient factors [28, 135, 165]. Nevertheless, sEMG remains a compelling option for hands-free computer interaction.
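To make the summation picture concrete, the following toy Python sketch (ours, not drawn from any cited work) generates an EMG-like signal as a sum of pseudo-randomly timed MUAP templates with distance-dependent attenuation. The biphasic waveform and all parameter values are illustrative assumptions, not a physiological model.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_emg(n_samples=1000, n_motor_units=20):
    """Toy illustration of EMG as a summation of motor unit action
    potentials (MUAPs) firing at pseudo-random intervals. A crude
    biphasic template stands in for each MUAP; a per-unit gain mimics
    attenuation with varying distance from the electrode."""
    t = np.arange(-10, 10)
    muap = t * np.exp(-t**2 / 8.0)  # crude biphasic waveform (assumption)
    signal = np.zeros(n_samples)
    for _ in range(n_motor_units):
        gain = rng.uniform(0.2, 1.0)  # stand-in for electrode distance
        firing_times = rng.integers(0, n_samples - len(muap),
                                    size=rng.integers(5, 30))
        for ft in firing_times:
            signal[ft:ft + len(muap)] += gain * muap
    return signal
```

Because each unit fires at pseudo-random times with a different gain, repeated runs produce signals that share gross amplitude behaviour but differ sample-by-sample, mirroring the stochastic character described above.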

2.2 EMG for Prosthesis Control

With increasing technological capabilities for sEMG came the exploration of its potential use for prosthesis control in the early 1940s [53]. As a direct result, the evolution of control schemes was guided by the clinical needs of amputees and their desire to impart continuous control over their prostheses. The initial systems (i.e., conventional/direct control) took a one-muscle one-function approach for device control [135]. In these control schemes, the EMG signal is typically measured from two main muscle sites (e.g., for transradial amputees, over the forearm flexor and extensor muscles) and their amplitudes are compared to predefined thresholds to determine whether to activate a prosthesis function in one direction or the other (e.g., close/open hand). The speed of the device may be controlled based on the intensity of contraction (proportional control), or a fixed speed may be adopted (constant control) [59]. When using conventional control, mode switching — activated by the simultaneous contraction of antagonist muscle sites, such as the aforementioned flexor and extensor sites — is required to control more than a single degree of freedom, or function. In this way, mode switching is used to cycle through a list of prosthesis functions (e.g., hand open/close, wrist rotation, etc.) to select which is activated by the direct control. Inadvertent activation of this mode switch is common, resulting in erroneous control inputs and unintended device selections or actions [153]. The unintuitive and sequential nature of these switches is also cumbersome. For example, imagine trying to control a cursor while having to co-contract each time to change control between the horizontal and vertical axes. This problem is exacerbated by every additional degree of freedom to be controlled within a prosthesis. As a result, prosthetics researchers turned to machine learning techniques to extract additional information and enable the control of more functions [56].
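A conventional direct-control scheme of this kind can be sketched in a few lines of Python. This is a hedged illustration, not a clinical implementation: the threshold value, the function labels, and the co-contraction mode-switch logic are simplified assumptions.

```python
def direct_control(flexor_rms, extensor_rms, threshold=0.1, proportional=True):
    """Map two antagonist EMG amplitude estimates (e.g., RMS over a short
    window) to a single signed output, as in one-muscle one-function
    direct control.

    Co-contraction (both sites above threshold) is treated as a
    mode-switch event rather than a movement command.
    """
    flexor_on = flexor_rms > threshold
    extensor_on = extensor_rms > threshold
    if flexor_on and extensor_on:
        return "mode_switch", 0.0
    if flexor_on:
        # e.g., close the hand; with proportional control, speed scales
        # with contraction intensity, otherwise a fixed speed is used
        speed = flexor_rms if proportional else 1.0
        return "close", min(speed, 1.0)
    if extensor_on:
        speed = extensor_rms if proportional else 1.0
        return "open", min(speed, 1.0)
    return "rest", 0.0
```

Even in this sketch, the fragility discussed above is visible: any unintended co-contraction above threshold fires the mode switch, and each additional degree of freedom must share the same two-site input through further switching.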
Initial pattern recognition-based myoelectric control systems were developed out of the necessity for more intuitive and robust device control in the mid to late 1900s [53]. Unlike conventional control, pattern recognition approaches use multiple EMG sites, feature extraction techniques (see section 4.2.5), and robust algorithms (see section 4.2.3) to leverage the added information in the way muscles work together [135]. Doing so removes the theoretical ceiling on the number of potential control outputs, as they are no longer tied to the number of electrode sites but, instead, the number of distinct patterns of contractions [78]. In 2003, Englehart et al. set the standard for continuous pattern recognition-based myoelectric control still used today in commercial systems [53]. A continuous stream of class decisions is sent to the prosthetic device and converted to a prosthesis function. This decision stream consists of N known classes to which the current pattern of EMG corresponds and when combined with proportional control, is used to control the velocity of the associated prosthesis function. As a result, the prosthesis is constantly reacting to EMG activity, and thus device control requires dedicated cognitive effort [63]. Nevertheless, with a lot of training and practice (sometimes years), amputees can become extremely efficient using this control scheme [70].
As exemplified in [135], pattern recognition-based EMG control systems used today for prosthesis control consist of the main stages shown in Figure 1. In the data preprocessing stage, unwanted sources of noise are filtered from the raw EMG signal. Filtering noise is important as it can negatively impact the ability of algorithms to differentiate between patterns of contractions. Next, the filtered data are divided into fixed-length overlapping windows (or frames) from which descriptive features are extracted. These features are necessary to overcome the stochastic behaviour of EMG and to increase the information density of the signal prior to classification. Based on the extracted features, a machine learning model classifies each window of EMG data as belonging to one of the specified muscle-based inputs (e.g., hand open). The corresponding decisions are constantly output as a series of class labels, each corresponding to a particular muscle-based input, and ultimately converted into prosthesis control commands. These windows are kept very short (on the order of milliseconds) to enable quick and responsive control of the prosthesis. Finally, if using proportional control, the decision stream can be combined with the amplitude of the EMG-based signal to control the velocity of the device [138]. As these classification-based pattern recognition approaches have become commercialized and clinically adopted, researchers have increasingly turned their attention to more complex control schemes.
Figure 1: The stages of continuous EMG-based pattern recognition for prosthesis control.
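The windowing and feature-extraction stages of this pipeline can be sketched with a minimal NumPy example. The window/increment lengths and the two time-domain features shown (mean absolute value and zero crossings) are illustrative choices; deployed systems use carefully tuned parameters and richer feature sets.

```python
import numpy as np

def extract_windows(emg, window_len=40, increment=20):
    """Slice a (samples, channels) EMG recording into fixed-length,
    overlapping windows. At a 200 Hz device, window_len=40 and
    increment=20 correspond to a 200 ms window updated every 100 ms
    (illustrative values only)."""
    starts = range(0, len(emg) - window_len + 1, increment)
    return np.stack([emg[s:s + window_len] for s in starts])

def features(windows):
    """Two classic time-domain features per channel: mean absolute
    value (MAV) and zero-crossing count (ZC)."""
    mav = np.mean(np.abs(windows), axis=1)
    zc = np.sum(np.diff(np.sign(windows), axis=1) != 0, axis=1)
    return np.concatenate([mav, zc], axis=1)

# Each per-window feature vector would then be passed to a classifier
# (e.g., LDA), whose stream of class decisions drives the prosthesis.
```

Note how the short windows keep the decision stream responsive, at the cost of forcing every input to be interpreted as a momentary static contraction.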
A major limitation of classification-based approaches is that they require the sequential activation of motion classes (e.g., first wrist flexion, then wrist rotation, then hand open). Although combined classes can be incorporated (e.g., a class for both wrist flexion and wrist pronation at the same time) [166], the combinations of classes rapidly increase the training requirements for the user and the classifier. Consequently, alternative approaches, such as regression [9, 65], have been explored that continuously map the input EMG to output activations in multiple degrees of freedom, enabling the simultaneous and independent proportional control of multiple functions [8]. Although such approaches offer the potential of added dexterity for prosthesis control, they fall within the same paradigm of continuous control, which may not always apply in real-world HCI applications.

2.3 EMG in HCI

While EMG-based control has been adopted clinically for prosthesis control for decades, it only began to garner attention from the HCI community for general-purpose use in the early 2000s [37, 38, 111, 132, 133, 134]. This timing coincides with many prosthesis-related EMG control achievements, including increased attention due to an influx of funding via the US Defense Advanced Research Projects Agency’s (DARPA) Revolutionizing Prosthetics program [109]. Then, in 2008, Saponas et al. articulated the need for a commercially available wearable device to enable practical and inexpensive use of EMG for everyday use in HCI [132]. It wasn’t until 2014, though, that the Myo Armband became arguably the first notable commercially available sEMG device [130].
The Myo Armband (Figure 2) was a low-cost commercially available sEMG wearable band. The advantage of the Myo was that it was an inexpensive, convenient, and relatively robust method for obtaining sEMG data. The armband consisted of eight bipolar pairs of surface electrodes (8 channels) that measured EMG activity at 200 Hz [93]. While this was below the typical 1000-2000 Hz sampling rates of medical-grade electrodes, it was adequate for the reliable detection of many gestures and kept cost low [124]. Additionally, the armband had a built-in 9-axis inertial measurement unit (IMU), potentially enabling the recognition of more dynamic inputs [29, 104]. The data were streamed wirelessly using Bluetooth Low Energy (BLE) to any BLE-enabled computing device. Although the Myo product has since been discontinued [126, 156], it set the stage for enabling EMG as a new input modality for general-purpose applications.
Figure 2: Myo Armband – Each S corresponds to an individual bipolar differential electrode sensor.
With the rapid evolution of virtual and augmented reality, there is growing interest in novel interaction techniques that do not rely on physical devices, like mice or keyboards, as controllers. As such, there has been a renewed interest in the adoption of EMG technology in research (e.g.,[43, 74, 171]) and commercial applications (e.g., [83]). However, we believe that for the successful adoption of EMG in HCI, we must deepen our understanding of its use and define new directions for improved systems that are not explored meaningfully in the body of prosthetics research. Leveraging EMG for general-purpose applications has its own unique challenges and intricacies that must be considered and explored to ensure its future success.

3 Exploring EMG Applications in HCI

In this section, we analyze previous EMG-based control work in the HCI literature to identify the main challenges and limitations. Despite the individual contributions of many works, four main themes emerged that may be limiting the advancement of the field: (1) interaction design: the improper or sub-optimal selection of a control scheme for the desired interaction; (2) model design: improper design or optimization of the components of the control scheme; (3) system evaluation: evaluation of system performance in isolation or without sufficient interaction context; and (4) reproducibility: insufficient experimental control or detail provided to enable the replication, corroboration, or extension of presented results.

3.1 Paper Selection Process

Over the past two decades, the HCI community has contributed creative and valuable research in the area of EMG-based control. Even with the growing academic popularity of EMG, however, it has yet to make significant strides toward reliable commercial general-purpose use. Consequently, the goal of this sampling was to highlight the main barriers and challenges to EMG-based control for use in HCI research and consumer applications. We correspondingly sampled papers exclusively from the Association for Computing Machinery (ACM) library to focus on the HCI community and shift the focus away from the comparatively much larger and older EMG-based prosthesis control literature (predominantly found in IEEE or clinical journals). From the ACM body of literature, we selected approximately 50 papers (see references 174-220) on the use of EMG for general-purpose applications. The paper selection process was deliberately lenient but focused on papers related to using EMG as an input for the control of an interactive system. This was therefore not meant to be a formal systematic review, but instead, a sampling to characterize the main challenges and limitations that HCI researchers are currently facing, drawing on a combination of the available literature and our collective experiences in both prosthetics and HCI. From each of the selected papers, we documented the following categories of information: application/context, inputs, algorithm(s), features used, windowing technique, training methods, user/context dependence, evaluation technique(s), evaluation metric(s), reported accuracy, number of participants, and technological setup. The data resulting from this categorization process can be found in the supplementary material for this paper. These data, along with the research knowledge and clinical experience of our extended team and the additional context from the prosthetics literature, ultimately informed the outcomes of this process.

3.2 Emerging Themes

Adopting EMG-based control systems is inherently complex due to the stochastic EMG signal and subtle changes in user behaviours and environments, so special attention is needed when applying them to novel use cases — such as those emerging in HCI. While researchers can lean on the work done by the prosthetics community, HCI has distinct challenges in successfully adopting EMG as an input modality. As a result, taking the successes from prosthesis control and applying them directly in HCI for different use cases has been challenging. We briefly expand, below, on each of the four themes that emerged from this process.

3.2.1 Interaction Design.

Humans regularly use their hands and arms to communicate and express themselves in their everyday lives. Extending their use as an intuitive input for interaction between humans and computers is therefore a logical progression. As a result, many HCI researchers have attempted to leverage various input modalities, including EMG-based inputs, to enable these interactions. Despite this, we found a recurring theme of insufficient focus on interaction design for use in EMG-based interfaces. One of the main limitations in this regard is the insufficient consideration of the physiological properties of the EMG signal. At the core of any EMG-based interaction is some form of muscular activation that can be categorized (or mapped) and converted into a control input. Researchers must select muscle sites from which appropriate and adequately distinguishable EMG activity can be recorded in response to specific contractions, or the interaction will be fundamentally flawed. For example, if electrodes are placed around the forearm, certain finger-based inputs (such as movements of the thumb activated by the thenar eminence muscles, which are intrinsic to the hand) may not generate sufficient EMG activity for robust input detection. It is also imperative for researchers to select muscle-based inputs that coincide with the desired interaction. For example, discrete event inputs (e.g., a button click) should ideally be associated with dynamic gesture-based inputs (e.g., a finger tap) (see section 4.1.1) that have a definitive start and end. Many of the surveyed papers, however, naïvely designed interactions based on control scheme precedents set by the prosthetics literature without sufficiently considering the design implications associated with different interactions.
Currently, the predominant use case for EMG centers around enabling amputees to control their prostheses. The interaction between the amputee and the computer during prosthesis control is particular as it requires coordinated and continuous inputs. This continuous control (see section 4.2.2) enables amputees to continuously make micro-adjustments to the positioning of their prosthesis (e.g., close the hand a bit more). Enabling this control requires that users sustain predominantly static contractions (see section 4.1.1) since input detection is constantly re-occurring. Physiologically appropriate contractions — those where the user contracts in a manner consistent with the prosthetic degree of freedom they are controlling — are commonly adopted with pattern recognition-based myoelectric control to improve intuitiveness (e.g., think “squeeze your hand” to close the prosthetic terminal device). While this may work for some general-purpose applications like telerobotics, it may be unintuitive and not ideal for others. For example, in the case of smart garments [18], hand-open/close and pinch grip may not be the best inputs as these contractions are commonly elicited during everyday activities and will inevitably trigger false activations. In HCI research we must also consider the intuitiveness of the chosen mappings from input contractions to interaction commands. For example, in the case of drone control [147], we must consider whether wrist flexion and extension are intuitive mappings for forward and backward movement. Additionally, HCI leverages many discrete events as inputs — those with a definitive start and end — like clicks, taps, and finger swipes. Unlike prosthesis control, these inputs are not continuous and they are often intended as time-limited event triggers. These command-type inputs are often used as an infrequent supplement to other input modes instead of a dedicated “always on” continuous control. 
For example, gestures could be used to quickly change the song playing through an individual’s headphones as they are exercising. In this scenario, continuously recognizing muscular activity would be prone to frequent inadvertent commands, so the interaction design should move from questioning “Which of the N known classes was that? (i.e., continuous control)” to “Did a particular event just occur? (i.e., discrete control)”.
Despite acknowledging this need for discrete event-based inputs, many have endeavored to adopt continuous input schemes to recognize discrete tasks. While this can be made to work, it is an awkward combination, arguably resulting from an over-reliance on the prosthetics literature. For example, consider the design of an EMG-based interaction using finger flexion to activate a button press. Using continuous control, the decision stream would likely output a sequence of “rest” (or inactive) decisions, followed by some number of finger flexion decisions (with length governed by the classifier update rate and the speed and length of the finger contraction), followed again by more rest decisions. This decision stream would then have to be mapped to the discrete event through some sequence of filtering or state logic. For example, is the button press registered when the EMG returns to rest, after a fixed number of finger flexion decisions, or after a certain amount of time has elapsed? Additionally, how many consecutive active decisions must occur to register a press (to avoid physiologically impossible activations, much like switch debouncing in electrical circuits)? Furthermore, the rejection of any contraction other than one of the N trained to use the system (which is inevitable during everyday use) can be problematic [131]. Although some HCI researchers have recognized the benefit of leveraging inherently discrete gesture commands as inputs [50, 75, 116, 173], many more have defaulted to using the continuous control approach. For example, to recognize different finger taps in previous work [132], the authors opted for a continuous approach by splitting each input into a series of windows and making a prediction for each window. By leveraging no-movement thresholding and applying a majority vote for the active windows of data, the specific discrete finger tap was selected. With this approach, the temporal structure/envelope of the EMG signal is not fully leveraged, as each finger tap is treated as a subset of individual and unrelated static contractions. Ultimately, this could have negative impacts on the recognition capabilities of this control scheme during real-time use.
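The kind of state logic described above, counting consecutive active decisions and waiting for a return to rest before registering an event, can be sketched as follows. The class labels and the min_active threshold are hypothetical choices for illustration.

```python
def detect_discrete_event(decision_stream, target="flexion", min_active=3):
    """Register discrete events (e.g., button presses) from a continuous
    classifier decision stream.

    An event fires only when at least `min_active` consecutive `target`
    decisions are followed by a return to "rest", analogous to switch
    debouncing in electrical circuits. A run interrupted by any other
    class is discarded, and a run still active when the stream ends is
    not counted.
    """
    events = 0
    run = 0
    for decision in decision_stream:
        if decision == target:
            run += 1
        else:
            if run >= min_active and decision == "rest":
                events += 1
            run = 0
    return events
```

Every constant in this sketch (how many decisions, how the run must end, what interrupts it) is an ad hoc design decision layered on top of the classifier, which is precisely the awkwardness of retrofitting discrete events onto a continuous control scheme.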

3.2.2 Model Design.

After selecting the appropriate interaction, a control scheme must be developed to accurately and reliably recognize the selected muscle-based inputs. The selection and implementation of the correct control system are imperative to the overall design process and to user experience. An incorrectly designed system will inevitably result in poor recognition, and as a result, the user experience will suffer. We found that many works proposed or used control schemes that heavily borrowed from the prosthetics literature without sufficient consideration for differences in model design and hyper-parameter requirements.
In addition to the widespread adoption of the continuous classification framework by the prosthetics community, its implementation has been heavily guided by its use case. Decades of research have explored the impact of different design parameters, largely building on the early work of Englehart et al. [53]. These design considerations include, but are not limited to, the appropriate preprocessing of the EMG signal, the selection of discriminative features [121, 122], specific ranges of window/increment lengths to ensure the optimal controller delay [145], the selection of appropriate classification schemes, the number and placement of EMG sensors [54, 69], and the use of physiologically appropriate contractions as inputs. These parameters have been carefully iterated over time to be optimized for the particular use case of prosthesis control. Although much can be learned from these findings, naïvely applying them to other use cases in HCI may result in sub-optimal control systems. For example, due to the often unique residual musculature of each amputee, these systems have evolved as being user-dependent (with a new model trained for each new user), with only recent consideration of cross-user models [30]. While these cross-user models are key to the future success of EMG-based input in HCI, they were of little focus in the sampled work. Furthermore, the assumed stationarity of EMG within the short windows used in the continuous control paradigm has largely trivialized the use of temporally aware classification schemes [31]. For general HCI, however, the selection of interaction design may result in substantially different control scheme requirements. Consequently, although a valuable starting point, HCI researchers should be cautious and discerning in how they leverage knowledge from the prosthetics field for use in HCI.

3.2.3 System Evaluation.

After selecting, designing, and implementing an EMG-based control scheme, it must be evaluated to determine how well it performs compared to previous works. For EMG-based control schemes, there are generally two agreed-upon forms of evaluation: offline evaluation and online evaluation (see section 4.3.1). Offline evaluation refers to the performance assessment of algorithms on a pre-recorded set of data. Although this can be preferable when comparing many different control schemes or hyper-parameters, the user is not part of the control loop and therefore cannot correct for errors. As a result, metrics such as classification accuracy often correlate poorly with the online usability of the resulting control systems [68, 100, 112]. The substantial challenges associated with measuring online usability in prosthesis research, such as expensive prosthesis fittings and limited access to amputee participants, have led to the widespread use of such offline metrics. These metrics have translated into HCI research as well, where approximately half of the sampled papers relied solely on offline metrics (i.e., classification accuracy) as their evaluation criteria. This reliance stems from the previous notion that “accuracy is undoubtedly one of the fundamental performance metrics for any recognition system” [116]. The focus on offline over online evaluations aligns with the classic tension between evaluations that prioritize internal validity and those that prioritize external and ecological validity [101]. However, a focus on offline evaluation, which overly prioritizes internal validity, provides little insight into how successful a control scheme would be in real-world use. Previous work has established that data recorded in offline settings differ from those seen during real-time use [33]. Therefore, user studies should be designed to maximize external and ecological validity.
Similarly, the challenges with recruiting amputees may have normalized the use of small numbers of participants in EMG research. HCI EMG research has tended towards smaller studies with very few participants (often less than 10), with some testing on as few as one pilot participant [10, 99, 156]. The issues around the lack of external and ecological validity and small sample sizes in these studies drastically limit the conclusions that can be drawn from previous work. While, arguably, HCI EMG work is still new, evaluations need to mature to the standards expected of the broader HCI community.

3.2.4 Reproducibility.

To grow and evolve the field of EMG-based control in HCI, we must be able to reproduce the work done by others. This is especially true in HCI, where reproducibility has become a popular topic in recent years [49, 161]. Reproducibility, however, is especially complicated when dealing with EMG and has been a struggle for prosthesis and HCI researchers alike: EMG patterns are unique to each participant, and comparisons are further confounded by differing technologies and evaluation protocols. While some have called for better standardization [123], we found that many studies used inconsistent terminologies, failed to define or quantify parameters, used custom hardware, and lacked sufficient detail for replication. This ultimately hinders the impact of such works and, correspondingly, the growth of the field. Furthermore, whether due to research ethics restrictions, to maintain control over one's own research, or otherwise, many datasets and source code have not been made publicly available. Without access to, and sufficient context for, these data and code, it is often impossible to discern why a control system achieved the performance it did. We correspondingly suggest that all datasets, hardware specifications, source code, and anonymized participant information be published alongside the associated work to further encourage collaboration and validation. The public NinaPro dataset [13] is a strong example to build on and includes all of these elements. Additionally, EMBody [89] is another strong example that released all source code and hardware schematics. Making work publicly available will lead to better accessibility, validity, and ultimately better research in this space. In general, we believe that the adoption of standardized approaches to EMG-based HCI research could help to propel the field forward.
Figure 3: Each individual step of the proposed framework for designing EMG based control systems.

4 A Framework for Building EMG-based Interactions

To facilitate the design of EMG-based control systems, we propose a design framework (outlined in Figure 3 and summarized in Table 4) consisting of three main categories — each aligning with one of the extracted themes above: (1) Interaction Design, (2) Model Design, and (3) System Evaluation. The fourth theme of reproducibility is addressed indirectly through the creation of this framework. The framework begins with an interaction opportunity and follows eight steps along the path to deployment, with connections to promote iteration, as is common for robust design processes. This framework was refined through conversations with members of our larger research team (including experienced HCI and prosthetics researchers). It is important to emphasize that this framework is built on best practices and does not propose new steps. Instead, it is the first time, to the best of our knowledge, that these steps have been documented and organized in one place with a focus on EMG-based HCI research.

4.1 Interaction Design

4.1.1 Step 1: Input Type.

The initial step when designing any EMG-based interaction is to consider the muscle-based input that will enable the desired interaction. Ideally, these inputs should reflect the context of their eventual use (including situational, environmental, and other task-related contexts), as this will affect their efficacy for a given interaction. The literature contains a plethora of terminologies used to describe muscle-based inputs such as poses, contractions, static, and dynamic gestures. To eliminate confusion, we categorize all EMG-based inputs into two categories: static contractions and dynamic gestures. Static contractions are sustained over a period of time, and thus have no meaningful temporal change in the EMG envelope (i.e., a sequence of EMG activity) over that period (e.g., squeeze your hand closed). Conversely, dynamic gestures inherently involve a changing pattern of EMG consistent with the associated gesture (e.g., a finger snap). Figure 4 below highlights the differences between the two.
Figure 4: The mean absolute value (MAV) of two separate EMG recordings — a static contraction (hand closed) and a dynamic gesture (finger snap). In this example, the MAV is a representation of the EMG envelope over a five-second period. Each line of the graph corresponds to an individual channel of EMG data.
Because static EMG contractions are sustained and relatively constant, there is little added benefit in analyzing their temporal envelopes (i.e., how the EMG evolves over time). This enables the extraction of features from short windows in which the data are assumed to be stationary (see section 4.2.5). As a result, static contraction recognition can occur at any point during elicitation such as in continuous prosthesis control (i.e., “always on” repeated classifications). Another added benefit of static contractions is that they often require less computationally complex algorithms and may be easier to recognize due to their reduced variability as compared to dynamic gestures. A caveat, however, is that even though the EMG patterns are considered static, variations in the intensity of the contraction have been used to enable proportional control over the velocity of prosthetic devices [28, 59, 138]. In EMG-based control, the concept of a static contraction must be differentiated from the term posture, which implies a given position of the limb. While a useful kinematic input for modalities such as computer vision, maintaining a posture does not necessarily require the contraction of muscles to generate EMG. For example, depending on gravity, a user’s hand could be resting in a hand-open posture without eliciting a corresponding hand-open EMG contraction. Conversely, their hand could be constrained in a closed posture (e.g., tucked in a pocket), but the user could be intentionally activating a “finger extension” EMG contraction. This difference between contraction and position can confound novice EMG-based interaction users without sufficient training but presents interesting design opportunities for the use of EMG in HCI.
We define dynamic gestures as a sequence of contractions elicited as part of a discrete event (e.g., a finger snap). This implies that dynamic gestures comprise at least a beginning and end state, with an evolving sequence of EMG in between. As such, gestures have a temporal envelope that must be considered for accurate recognition, allowing for natural variations in EMG and completion rates. Because of this temporal structure, the recognition of gestures typically occurs after the completion of the gesture. While there are techniques for predicting gestures midway through completion [79, 110], waiting until the end is usually the most robust detection method. For many HCI applications, gesture-based inputs may arguably be more natural (i.e., more similar to the hand/arm movements we use to communicate) and map intuitively to the control of discrete outputs. Potential examples of dynamic gestures include some sign language words, wrist flicks, finger swipes, and finger snaps.
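The static/dynamic distinction illustrated in Figure 4 can be made concrete with a small sketch. The following illustrative Python snippet (function names and the synthetic signals are ours, not from the paper) computes a windowed MAV envelope and a crude coefficient-of-variation measure that stays small for a sustained contraction but grows for an evolving gesture:

```python
import numpy as np

def mav_envelope(emg, window_len, increment):
    """Mean absolute value (MAV) of successive windows: a simple EMG envelope."""
    starts = range(0, len(emg) - window_len + 1, increment)
    return np.array([np.mean(np.abs(emg[s:s + window_len])) for s in starts])

def envelope_variation(envelope):
    """Coefficient of variation of the envelope: near zero for a static
    contraction, larger for a dynamic gesture."""
    return np.std(envelope) / (np.mean(envelope) + 1e-12)

# Synthetic stand-ins for EMG (illustration only, not real recordings):
rng = np.random.default_rng(0)
static = rng.normal(0.0, 1.0, 5000)                                  # constant-intensity contraction
dynamic = rng.normal(0.0, 1.0, 5000) * np.linspace(0.1, 2.0, 5000)   # evolving gesture
```

On real EMG such a measure would only be a rough heuristic; the point is simply that static contractions have a flat envelope while dynamic gestures do not.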

4.2 Model Design

After selecting the appropriate interaction, a control system must be designed to reliably recognize the desired EMG-based input. In prosthesis control, the model design is an important part of the broader design, but factors such as the socket fit, device mechanics, component weight, and hardware limitations also play significant roles in overall performance. For general-purpose HCI applications, these same factors may not apply, potentially increasing the relative impact of model design on the overall system. We categorize the main considerations for model design into six steps: Technology Selection, Control Scheme, Algorithm Selection, Recognition Type, Parameter Selection, and Model Training.

4.2.1 Step 2: Input Technology Selection.

When designing any EMG-based input system, the appropriate input technology must be selected to enable the desired interaction. Incorrect technology selection from the beginning will inevitably hinder the ability of a system to recognize inputs reliably. Four factors that should be considered during this stage include (1) Sampling Rate, (2) Number of Electrodes, (3) Electrode Location, and (4) Multi-Sensor Fusion (such as with the inclusion of an Inertial Measurement Unit (IMU)). These factors are summarized in Table 1.
Sampling Rate: The sampling rate of an sEMG device, measured in hertz (Hz), is the number of times per second that a reading is taken from the electrodes. Sampling theory suggests that EMG should be sampled above 1000 Hz; however, pattern recognition performance plateaus around 500 Hz [124]. Note that most EMG energy is between 50-150 Hz, with the usable energy being between 0-500 Hz [123].
Number of Electrodes: Increasing the number of electrodes can improve recognition through better spatial distribution of the EMG sensors, but performance is dependent on placement location and target gestures/contractions. For cuffs mounted circumferentially around the forearm, diminishing returns have been found above 6 to 8 electrodes [69].
Electrode Location: The location on the arm where the electrodes will be placed (e.g., forearm or wrist) affects which inputs can be accurately detected [24] and the type of electrodes that can be used (e.g., electrode cuffs or individual electrodes).
Sensor Fusion: The inclusion of other sensing modalities can be beneficial for providing complementary information to EMG. For example, an IMU can provide gross arm movement information not captured by wrist- or forearm-mounted EMG [137, 173]. As we move toward recognizing muscular inputs during everyday activities, optimizing the use, and even the inclusion, of these additional sensors based on situational context may eventually become a core component of the framework. Currently, however, sensor fusion remains an ongoing area of research.
Table 1: Factors Influencing Technology Selection

4.2.2 Step 3: Control Scheme.

A fundamental aspect of any EMG-based control scheme is the set of predefined static contractions or dynamic gestures that generate an event. These events are based on the underlying EMG and occur after a model has made an output decision. Once this happens, the event is processed by a controller and converted into a control input. How and when these events are generated drastically influences the behaviour of the control scheme, particularly whether they are irregular (based on discrete inputs) or continuous (based on a predefined fixed interval). As such, we split the event generation process for EMG control into two distinct control schemes: continuous control and discrete control. Figure 5 exemplifies the difference in mapping structures between the two.
Figure 5: The two main control scheme mappings from windowed EMG inputs to output decisions; one-to-one mappings for continuous control outputs (left), and many-to-one mappings for the discrete recognition of dynamic gestures (right).
A continuous control scheme generates a sequence of regular periodic events based on defined parameters (e.g., window length and increment). Each event is computed from a single short window of EMG data, in which the signal is assumed to be stationary, and is continuously passed to the application, where it is converted to a device function. In this way, the continuous control scheme can be largely considered as a one-to-one mapping (one window of EMG to one decision event). Applications leveraging this control scheme use static contractions as inputs and require a focused effort from the user due to the repeated mapping of EMG to control input. While this is useful when an application requires a continuous stream of events (e.g., controlling the movement of a cursor or a prosthetic limb), it is not universally advisable for many HCI applications.
The discrete control approach is ideal for dynamic gesture-based inputs as it generates a single event after the completion of a pre-defined set of actions with a definitive start and end. A temporal model is used to learn patterns in the sequence of transitions between internal states (contractions) and to generate a particular event when such a pattern is recognized. Because these systems only respond after the completion of a sequence of states (a dynamic gesture), the outputs are discrete and sporadic and can be considered as employing a many-to-one mapping (many windows of EMG to one decision event). This imposition of temporal structure can improve the robustness of EMG pattern recognition against spurious activations and can reduce the focused effort required of the user (inputs are only activated when a specific condition is met).
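The difference between the two mappings can be sketched in a few lines of Python. This is an illustrative skeleton, not the authors' implementation: `classify` and `classify_state` stand in for trained models, and the exact-sequence matching rule for discrete events is deliberately simplistic.

```python
from collections import deque

def continuous_events(windows, classify):
    """Continuous control: a one-to-one mapping, one decision per window."""
    return [classify(w) for w in windows]

def discrete_events(windows, classify_state, gesture_patterns, buffer_len=5):
    """Discrete control: a many-to-one mapping that emits a single event
    only when the recent sequence of per-window states matches a gesture."""
    events, buffer = [], deque(maxlen=buffer_len)
    for w in windows:
        buffer.append(classify_state(w))
        for name, pattern in gesture_patterns.items():
            if len(buffer) >= len(pattern) and list(buffer)[-len(pattern):] == pattern:
                events.append(name)
                buffer.clear()  # consume the matched sequence
                break
    return events
```

Here each "window state" stands in for a classifier decision on one window; a real system would use a trained temporal model rather than exact sequence matching.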

4.2.3 Step 4: Algorithm Selection.

To enable multi-channel EMG-based control systems, machine learning models learn to discriminate between patterns of muscular inputs. Prosthetics researchers have tested almost every available machine learning algorithm to classify EMG activity with mixed results [27, 31, 40, 113]. Therefore, algorithm selection for a new system can be daunting because there is no perfect or superior candidate for all use cases. To simplify the process, we group popular machine learning models into two categories: stationary models and temporal models. Selecting the category of algorithms useful for the designed interaction is important for reliable recognition.
The windowing of static contractions in continuous control lends itself to classification using stationary models. Because these windows are short, the contractions are assumed to be constant, alleviating the need for treatment of any temporal structure in the EMG envelope. Examples of stationary models are Linear Discriminant Analysis (LDA) [27], Quadratic Discriminant Analysis (QDA) [62], Support Vector Machines (SVM) [32], K-Nearest Neighbour (KNN) [86], Artificial Neural Networks (ANN) [82], Convolutional Neural Networks (CNN) [96, 169], and Random Forest [22].
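As an illustration of a stationary classifier, the following minimal Gaussian LDA (shared covariance, equal priors) sketches how windowed EMG feature vectors might be classified. This is a from-scratch numpy sketch for exposition under synthetic data; in practice a library implementation (e.g., scikit-learn's LinearDiscriminantAnalysis) would typically be used.

```python
import numpy as np

class SimpleLDA:
    """Minimal Gaussian LDA with a shared covariance matrix and equal priors,
    the classic stationary classifier for windowed EMG features."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        centered = np.vstack([X[y == c] - m
                              for c, m in zip(self.classes_, self.means_)])
        cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        self.prec_ = np.linalg.inv(cov)  # shared precision matrix
        return self

    def predict(self, X):
        # Linear discriminant score: x^T P m - 0.5 m^T P m (equal priors).
        quad = np.einsum("ij,jk,ik->i", self.means_, self.prec_, self.means_)
        scores = X @ self.prec_ @ self.means_.T - 0.5 * quad
        return self.classes_[np.argmax(scores, axis=1)]

# Two well-separated synthetic feature clusters (illustration only):
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (50, 4)), rng.normal(1.0, 0.1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
model = SimpleLDA().fit(X, y)
```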
The temporal structure inherent in repeatable dynamic gestures is best recognized using temporal models. Temporal models, therefore, rely on more than one window of data to understand the evolution of the EMG signal over time, and may require the optimization of additional hyperparameters related to the dynamics of the EMG gesture. Examples of temporal models include Long Short-Term Memory (LSTM) models [64, 80], Temporal Convolutional Networks (TCN) [14, 168], Hidden Markov Models (HMM) [74, 173], and Dynamic Time Warping (DTW) [20, 76].
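Of the temporal models listed, dynamic time warping is simple enough to sketch directly. The following illustrative snippet (names and templates are hypothetical) matches a gesture's envelope against stored templates while tolerating differences in completion speed:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences (e.g., EMG
    envelopes), tolerating gestures performed at different speeds."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify_gesture(sequence, templates):
    """Nearest-template gesture classification by DTW distance."""
    return min(templates, key=lambda name: dtw_distance(sequence, templates[name]))
```

For example, a "flick" performed at half speed still matches its template because the warping path absorbs the repeated samples.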

4.2.4 Step 5: Recognition.

In classification-based machine learning, each input/interaction recognized by the system is called a class. The goal of the machine learning model is then to match an input to its correct class, assuming that the patterns of EMG are repeatable and separable. In both prosthesis control and HCI, subtle variations in how users elicit contractions [152] and differences introduced by confounding factors such as electrode shift or limb position [28] have been shown to reduce the repeatability of patterns. Training with data that represent all possible situations that may occur during online use can improve the robustness of recognition models [135] but is tedious and unrealistic for many use cases. Recognizing that different, and potentially unwanted, patterns may be generated motivates the distinction between closed-set and open-set recognition.
In closed-set systems, a classifier always outputs a decision consisting of one of a predefined set of class labels (i.e., 1 of N classes) with the assumption that the user will only generate contractions consistent with one of the trained classes. While these models have a limited worldview, they are simpler to implement since the possibility of external classes does not need to be considered. However, this also means that these systems are prone to false activations, particularly if users are distracted by other tasks. While this form of recognition is used heavily in “always on” continuous control, it falls apart when users elicit contractions outside of the closed set of classes. Some forms of post-processing, such as the rejection of low confidence decisions to an inactive class, can alleviate some of these limitations [140].
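Rejection of low-confidence decisions can be sketched as a simple post-processing rule over a model's class probabilities. The threshold of 0.9 and the class names below are illustrative, not values prescribed by the paper:

```python
import numpy as np

def predict_with_rejection(probs, class_names, threshold=0.9, inactive="no_action"):
    """Route low-confidence decisions to an inactive class instead of
    forcing one of the N trained labels (threshold is illustrative)."""
    decisions = []
    for p in probs:
        best = int(np.argmax(p))
        decisions.append(class_names[best] if p[best] >= threshold else inactive)
    return decisions
```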
Conversely, open-set recognition considers the possibility that the N known classes are only a subset of a larger set of possible actions. As a result, these models must be aware that current contractions or gestures may not belong to one of the trained classes, giving them a more robust worldview. The implementation of such models, however, can be more complicated, as it is difficult to differentiate between a small target set of contractions/gestures and all possible (and therefore unknown) EMG patterns. There has correspondingly been relatively little work on open-set recognition of EMG; however, solving this problem could yield more robust models for general use in HCI [33].

4.2.5 Step 6: Control Parameter Selection.

The selection of appropriate parameters is critical to the performance and robustness of any control system. Two important parameters in EMG-based control are the window length and increment. Because EMG is stochastic, features must be extracted that describe the signal characteristics. To be usable in a control interface, these features must be computed from a relatively short segment of EMG data (150-250 ms in prosthesis control [57]). Using longer windows can improve recognition rates through more robust feature estimation, but can also decrease controller responsiveness as decisions are influenced by older data. The window increment dictates how much the algorithm steps forward in time before extracting a new window, and thus how frequently classification decisions are computed (see Figure 6). Shorter increments result in more responsive systems (and a denser event decision stream in continuous control) but higher processing requirements. Balancing the tradeoff between controller delay, responsiveness, and computational requirements is important for researchers to consider and justify when designing EMG-based control systems.
Figure 6: Windowing of raw EMG data.
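The windowing scheme in Figure 6 might be implemented as follows. This is an illustrative sketch; the parameter defaults follow the literature norms cited above:

```python
import numpy as np

def window_emg(emg, fs, window_ms=200, increment_ms=50):
    """Segment multi-channel EMG (samples x channels) into overlapping
    fixed-length windows. Defaults follow common prosthetics norms."""
    win = int(fs * window_ms / 1000)
    inc = int(fs * increment_ms / 1000)
    starts = range(0, emg.shape[0] - win + 1, inc)
    return np.stack([emg[s:s + win] for s in starts])
```

At 1000 Hz, one second of data yields 17 overlapping 200 ms windows with a 50 ms increment.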
The extraction of features from EMG windows serves to increase the information density before classification. For decades, the prosthetics community has explored and evaluated new feature sets to improve prosthesis control. Phinyomark et al. have contributed robust guides on feature selection by exploring and explaining ideal features for prosthesis control [118, 121] and lower sampling rate EMG wearables [120, 125]. Many other studies have also explored the role of features and reducing feature redundancy [78, 84, 119]. These validated feature sets should be leveraged by researchers based on the selected input devices and chosen control parameters.
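As an example of such a feature set, a common time-domain combination (MAV, waveform length, zero crossings, and slope sign changes, often called the Hudgins set) could be computed per window as follows. The noise threshold value is illustrative:

```python
import numpy as np

def td_features(window, thresh=0.01):
    """Per-channel time-domain features for one EMG window: mean absolute
    value (MAV), waveform length (WL), zero crossings (ZC), and slope sign
    changes (SSC). The noise threshold is illustrative."""
    d = np.diff(window, axis=0)
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(d), axis=0)
    zc = np.sum((window[:-1] * window[1:] < 0) & (np.abs(d) >= thresh), axis=0)
    ssc = np.sum((d[:-1] * d[1:] < 0) &
                 ((np.abs(d[:-1]) >= thresh) | (np.abs(d[1:]) >= thresh)), axis=0)
    return np.concatenate([mav, wl, zc, ssc])
```

For a 200-sample, 4-channel window this yields a 16-dimensional feature vector (four features per channel), which would then be fed to the classifier.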
It is important to note that many of these parameters have been optimized by the prosthetics community for their specific use case. Therefore, while these parameters should work in general-purpose applications that employ continuous control, additional consideration may be required when implementing discrete control for gesture recognition. For example, delays induced by slightly longer windows may not be as noticeable in the context of a longer dynamic gesture, or perhaps shorter frame increments may improve recognition of temporal patterns. Furthermore, it is not yet clear whether the features identified for continuous control will capture, or be robust to, the dynamics required for discrete inputs. Understanding and exploring these intricacies of parameter selection may therefore be important for the continued advancement of discrete control.

4.2.6 Step 7: Model Training.

Conventional model training involves recording representative EMG data for each different input contraction or gesture and then training the model to discriminate between the various classes. Several considerations must be made before training, including a determination of the acceptable user- and context-dependence of the model. The majority of EMG models in the literature and in practice have been user-dependent, meaning that they are trained using one user’s data and are thus specific to that user. This is required in prosthetics due to differences in the residual musculature in amputees, but also remains an outstanding challenge for able-bodied EMG users [26]. The quality of the training data can also greatly affect the context-dependence of a model even when user dependent. The inclusion of a variety of factors such as arm position, contraction intensity, and electrode positioning has been found to reduce the impact of these common confounding factors at the cost of increased training time. Continued research in user-independent models [30], particularly for general-purpose HCI applications, could therefore improve the end-user experience by reducing the training time required and facilitate context-independence by leveraging pre-recorded datasets from other users.
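User-independent models of the kind discussed above are commonly evaluated with a leave-one-subject-out protocol, which can be sketched as follows (an illustrative helper, not from the paper):

```python
def leave_one_subject_out(data_by_user):
    """Yield (held_out_user, train_data, test_data) splits for evaluating
    user-independent models: each user's data is held out once while all
    other users' data form the training set."""
    users = sorted(data_by_user)
    for held_out in users:
        train = [x for u in users if u != held_out for x in data_by_user[u]]
        yield held_out, train, data_by_user[held_out]
```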

4.3 System Evaluation

After the successful design and implementation of an EMG-based control system, the final step should always be system evaluation. The goal of this stage is to understand how the newly designed system performs and compares to previous works. Although there are many disparate methods for evaluation, they can largely be grouped into offline or online evaluation techniques. Both techniques are important tools when evaluating the viability and overall impact of new control systems, but each has its limitations.

4.3.1 Step 8: Evaluation.

The majority of EMG-based control system literature has used offline analysis techniques to evaluate the performance of control schemes. Classification accuracy (the percentage of correct output decisions) has been widely adopted in continuous control research, but while it gives some insight into a system's performance, it does not fully explain online usability [100, 112]. In discrete control, the temporal nature and relative infrequency of gesture-based inputs may also contribute to the inadequacy of accuracy as a metric; thus, additional metrics such as false negatives (missed gestures), false positives (accidental activations), and response time (the time between gesture initiation and recognition) should be considered. Nevertheless, while offline metrics are useful when developing initial models (in part because the same data can be used to evaluate many different models), they are not a replacement for evaluating online usability. As such, HCI researchers developing EMG-based interfaces should focus on online evaluation techniques whenever possible.
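The discrete-control metrics above could be computed offline by matching predicted gesture events to ground-truth events within a time tolerance. The following sketch assumes a (label, time) representation for events, which is our illustrative choice rather than anything prescribed by the paper:

```python
def discrete_metrics(events, truth, tolerance=1.0):
    """Match predicted gesture events to ground-truth events (both as
    (label, time) pairs) within a time tolerance, then report false
    negatives (missed gestures), false positives (accidental activations),
    and mean response time of the matched detections."""
    matched_ids, response_times = set(), []
    for label, t in truth:
        for i, (pred_label, pred_t) in enumerate(events):
            if i not in matched_ids and pred_label == label and 0 <= pred_t - t <= tolerance:
                matched_ids.add(i)
                response_times.append(pred_t - t)
                break
    false_negatives = len(truth) - len(matched_ids)
    false_positives = len(events) - len(matched_ids)
    mean_rt = sum(response_times) / len(response_times) if response_times else None
    return false_negatives, false_positives, mean_rt
```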
Online evaluation focuses on user “in-the-loop” interactions to simultaneously assess the performance and usability of an EMG-based control system. Metrics may vary by application but should include controlled objective comparisons against state-of-the-art and baseline systems such as recognition rates during real-time use [45] or throughput as computed using Fitts’ Law frameworks [139, 153]. Questionnaires (e.g., NASA TLX or SUS) [91] and qualitative responses via interviews [88] can provide important and complementary added context. Although the actual evaluation tasks are dependent on the interaction and control system being tested, the user should be allowed to interact with the designed control system. Their interaction with the control scheme is critical not only for overall performance evaluation but in understanding how the control scheme responds to natural variations in user inputs and their responses to recognition errors. Given our focus as a community on the interaction between humans and computers, offline analyses should be used sparingly, and online evaluation should be strongly prioritized.
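The Fitts' Law throughput mentioned above is standard and can be sketched directly (this uses the common Shannon formulation and is not specific to the paper):

```python
import math

def fitts_throughput(distance, width, movement_time):
    """Throughput (bits/s) from the Shannon formulation of Fitts' law:
    ID = log2(D / W + 1); TP = ID / MT."""
    index_of_difficulty = math.log2(distance / width + 1)
    return index_of_difficulty / movement_time
```

A target 3 widths away (ID = 2 bits) acquired in 2 s gives a throughput of 1 bit/s; averaging such values across trials and participants enables controlled comparisons between input techniques.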

4.4 Example Applications of the Framework

To exemplify how this framework can guide the design of EMG-based input, in this section, we provide a brief walkthrough of its application in the design of two EMG-based control systems. We have selected two examples from previous work. The first system controls a remote-control (RC) car [92] in four directions (up, down, left, and right) (see Table 2). The second is for music control during exercise [133] (see Table 3). The goal of this system is to enable the hands-free control of a music application (i.e., volume control, play/pause control, and song skipping) while jogging. These examples were selected as they follow distinct paths in our framework.
Input Type: Static contractions are selected to enable the continuous control of direction commands. Based on a right-handed user, control mappings might include: wrist flexion (Left), wrist extension (Right), hand close (Down), and hand open (Up).
Technology: An inexpensive armband sampling at 200-500 Hz would likely be sufficient, as this captures most of the usable EMG energy and is therefore adequate for this relatively simple five-class problem (the four directions plus an inactive class). For the chosen inputs, the band would probably be placed around the participant's forearm. No IMU inputs are required for recognizing the chosen controls.
Control Scheme: Continuous control is chosen because RC control requires the constant adjustment of the direction of the vehicle.
Algorithm: A stationary model, such as an LDA, could be selected based on the use of static contractions and due to its robustness and simplicity.
Recognition: Closed-set recognition could be selected, assuming that the user is focused and unlikely to be performing other activities while controlling the RC car.
Parameter Selection: Because this system requires regular and fast response times, using literature norms of a window size of 200 ms and an increment of 50 ms might be a good place to start. Phinyomark's TD4 feature set [124], which is optimized for wearable devices, could be adopted.
Model Training: For better performance, user- and context-dependent models could be trained, unless convenience is a priority. Additionally, the collection of variable training data [136] could improve robustness.
Evaluation: Online evaluation should be adopted, as the goal is the usable control of the vehicle. An example would be to compare the user's ability to drive the car through an obstacle course using the EMG-based control system versus an alternate controller (e.g., joystick).
Table 2: RC Car Controller Example
Input Type: The interactions with a music interface (e.g., pressing a pause button) are infrequent and discrete, motivating the use of discrete dynamic gesture-based inputs (i.e., a button click). Examples of mappings include hand wave (start/stop), wrist flick up/down (volume up/down), and double tap (next song).
Technology: To facilitate use during exercising, the technology should be low profile, ideally in the form of a wearable, and have a sufficient sampling rate to capture dynamics for reliable recognition (e.g., 500-1000 Hz).
Control Scheme: Discrete control enables the recognition of discrete actions for controlling the music player.
Algorithm: A temporal model should be selected to recognize the temporal patterns of the dynamic gestures.
Recognition: Open-set recognition is crucial for this use case because the user is actively engaged in other, non-target, muscular activities. A unique wake word could be adopted to improve robustness to false activations (see section 5.2).
Parameter Selection: Given its complexity and relative novelty, further experimentation would be required to determine optimal model parameters.
Model Training: The demanding and changing environment of physical exercise, and the potential for electrode shift, sweat, and muscle fatigue, suggest that the model should be context-independent.
Evaluation: Although initial model validation could be done offline, robustness should be tested in an online setting where users interact with the control system while exercising.
Table 3: Music Control While Exercising Example
Interaction Design
Input Type (Static Contraction): A fixed muscle contraction that is sustained (i.e., EMG does not intentionally change over time).
Input Type (Dynamic Gesture): A muscle contraction that varies dynamically over time (i.e., EMG intentionally changes in a specific way over time).

Model Design
Technology Selection (Factors to Consider): Factors to consider include: (1) Sampling Rate, (2) Number of Electrodes, (3) Electrode Location, and (4) Multi-Sensor Fusion.
Control Scheme (Continuous): Output events are regular and periodic based on defined parameters such as window length and increment.
Control Scheme (Discrete): Output events are irregular and occur after the completion of a discrete sequence of actions with a definitive start and end.
Algorithm Selection (Stationary Models): Conventional static machine learning models that do not explicitly consider the temporal structure of inputs.
Algorithm Selection (Temporal Models): Temporal machine learning models that explicitly model the temporal structure of dynamic inputs.
Recognition (Closed Set): Outputs are assumed to always belong to the set of N known classes (one of which may be a “do nothing” class).
Recognition (Open Set): Outputs are assumed to be part of a larger set of M possible actions, of which the N known actions are a subset.
Control Parameters (Windowing): The process of subdividing the EMG time series into regular, fixed-length segments from which features can be extracted. Windows may be non-overlapping, or overlapping (using some increment smaller than the window length) to increase the temporal resolution of output decisions.
Control Parameters (Features): Feature extraction increases the information density of the underlying EMG by computing descriptive properties from a window of data. The selection of features used for a control system greatly influences its performance.
Model Training (User Dependence): User dependent models are specific to a particular user and may not generalize to other users, whereas user independent models work across multiple users.
Model Training (Context Dependence): A model that is sensitive to confounding factors known to degrade EMG patterns (such as changes in electrode placement or limb position) is considered context dependent. Models that are robust to such confounding factors are termed context independent.

System Evaluation
Evaluation (Offline): Model evaluation is performed using EMG data that were previously recorded without explicit user feedback. Enables the direct comparison of multiple models on the same dataset.
Evaluation (Online): System evaluation is conducted with the user “in-the-loop”, responding to feedback from the control scheme under test. The resulting data cannot be used for further offline model development, because they are impacted by the user's responses to that control scheme.
Table 4: Framework Summary
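To make the Windowing and Features entries above concrete, the following sketch shows one common implementation in NumPy: overlapping windows are extracted from a multi-channel recording, and the classic Hudgins time-domain feature set (mean absolute value, zero crossings, slope sign changes, and waveform length) is computed per window. The window length, increment, and zero-crossing threshold are illustrative values only, not recommendations from this framework.

```python
import numpy as np

def extract_windows(emg, window_len=200, increment=100):
    """Slice a (samples, channels) EMG array into overlapping windows.

    An increment smaller than window_len yields overlapping windows,
    increasing the temporal resolution of output decisions.
    """
    starts = range(0, emg.shape[0] - window_len + 1, increment)
    return np.stack([emg[s:s + window_len] for s in starts])

def hudgins_features(window, zc_threshold=0.01):
    """Compute the Hudgins time-domain feature set for one window.

    Per channel: mean absolute value (MAV), zero crossings (ZC),
    slope sign changes (SSC), and waveform length (WL).
    """
    mav = np.mean(np.abs(window), axis=0)
    diffs = np.diff(window, axis=0)
    wl = np.sum(np.abs(diffs), axis=0)
    # a zero crossing is a sign change with a large-enough amplitude step
    zc = np.sum((window[:-1] * window[1:] < 0)
                & (np.abs(diffs) > zc_threshold), axis=0)
    # a slope sign change is a sign change in the first difference
    ssc = np.sum(diffs[:-1] * diffs[1:] < 0, axis=0)
    return np.concatenate([mav, zc, ssc, wl])
```

For example, a 1000-sample, 8-channel recording with a 200-sample window and 100-sample increment yields 9 windows, each summarized by 32 features (4 features per channel).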

5 Further Challenges and Considerations

We acknowledge that the design framework outlined in the previous section does not comprehensively cover all of the intricacies and complexities of EMG-based control. Instead, we aim to provide a simple framework to guide the design thought process and serve as a starting point for researchers getting into this field. In this section, we explore further challenges and considerations, drawing on current concepts and challenges in prosthetics. We group these considerations into five categories: Signal Quality, False Activations, User Training, Adaptation, and Activity Detection. As the field evolves and expands, and with increased research in HCI, so too will this list.
Motion Artifact: Low-frequency noise added to the EMG due to the movement of the surface electrodes relative to the skin [103]. Solution: ensuring secure electrode-skin contact and high-pass filtering can help to reduce the impact of this noise [44].
Power Line Interference: Band-limited noise added to the EMG due to electromagnetic power line interference on the human body [155]. Solution: the simplest method for eliminating power line interference is a notch filter at the power line frequency of 50 or 60 Hz [155].
Equipment Noise: As with all electronic equipment, electrical noise can arise when measuring signals. Solution: selecting high-quality surface electrodes can help alleviate equipment noise [129].
Table 5: Common Noise Sources to Consider When Acquiring EMG

5.1 Signal Quality

The quality of the input signal is critical for machine learning models, as they rely on repeatable and separable patterns for accurate recognition [34]. As an electrophysiological signal, however, EMG is prone to noise corruption from sources such as power line interference and motion artifact (movement of the sensors relative to the skin). Signal quality has consequently been a priority for the prosthetics community, where electrodes are embedded in sockets that may shift with movement, and many strategies have been employed to (1) detect signal noise [146, 154] and (2) remove noise from the signal [44, 107]. These techniques should similarly be employed in HCI research to ensure the most robust interactions possible. Common noise sources and methods to alleviate them are listed in Table 5.

5.2 False Activations

An undesirable characteristic of most control schemes is the presence of false activations, which occur when the system erroneously outputs an unintended action. In EMG-based control schemes, accidental muscle contractions or those generated during regular movement may result in false activations and can detract from the user experience and embodiment [71]. This is similar to the “Midas Touch” problem experienced with other input modalities such as eye tracking [81]. In prosthesis control, false activations can lead to catastrophic results such as dropping or crushing objects (e.g., spilling a hot beverage or breaking an egg). In general-purpose HCI applications, false activations may cause unexpected input commands, frustrating the user and detracting from a sense of agency. Because of these implications, extensive research has been done to help alleviate the issues caused by misclassifications.
Rejection [140] is a technique wherein classifier outputs are overridden to a default or inactive state when the output decision is uncertain. This concept stems from the notion that it is often better (less costly) to incorrectly do nothing than to erroneously activate an output. This is especially beneficial in continuous prosthesis control, where the next decision is assured to occur momentarily. Over-rejection, however, can be frustrating and leave the user with a sense of ‘frozen’ control. In discrete control systems, this tradeoff is also important because decisions are relatively infrequent, and both false activations and over-rejection can frustrate users by forcing repeated attempts to activate an output. Nevertheless, rejection is a simple and important tool that can reduce false activations in general-purpose HCI applications.
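A minimal sketch of confidence-based rejection follows: the classifier's decision is overridden to an inactive state whenever its top-class probability falls below a threshold. The 0.9 threshold and the label names are hypothetical; in practice the threshold is tuned to balance false activations against over-rejection.

```python
def classify_with_rejection(probabilities, labels, threshold=0.9,
                            inactive="no_action"):
    """Return the most probable label, unless the classifier's
    confidence is below `threshold`, in which case fall back to an
    inactive 'do nothing' state (rejection)."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    if probabilities[best] < threshold:
        return inactive
    return labels[best]
```

Raising the threshold trades more false activations away at the cost of more 'frozen' (rejected) decisions.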
Post Processing is a group of strategies that leverage additional context, such as previous outputs, to inform the current output. Majority voting [61, 159] is a popular method that overrides the current output with the label of the class that occurred most frequently over the past N decisions. Acting as a simple low-pass filter, this introduces a delay into the system but reduces the likelihood of spurious false activations. While majority voting is only practical for continuous control, other forms of post processing could be used for discrete control applications. For one, a timeout could be introduced to avoid unwanted event detections in rapid succession. Alternatively, knowledge of previous events could be used to improve the prediction of future events, such as in an EMG-based spelling or sign-language interface [143, 157].
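Majority voting over the last N decisions can be sketched as a small stateful filter; the queue length of 5 is illustrative, with larger values trading responsiveness for stability.

```python
from collections import Counter, deque

class MajorityVoteFilter:
    """Override the current decision with the most frequent label
    among the last n decisions, acting as a simple low-pass filter
    on the classifier's output stream."""

    def __init__(self, n=5):
        self.history = deque(maxlen=n)  # keeps only the last n decisions

    def update(self, decision):
        self.history.append(decision)
        # most_common(1) returns [(label, count)] for the top label
        return Counter(self.history).most_common(1)[0][0]
```

A single spurious decision in a stream of otherwise stable outputs is suppressed, while a sustained change eventually passes through after a short delay.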
Wake words [95] are another method that could greatly decrease false activations by letting users signal when they intend to interact with the system. Similar to wake words for voice command interfaces, this wake word would be some form of highly recognizable EMG input (e.g., two quick finger snaps in succession), or even a voice- or button-activated command. It is important to note, however, that wake words add an additional step to the system, reducing responsiveness and convenience, and do not explicitly solve the issue of misclassifications. Nevertheless, the use of wake words could alleviate the algorithmic challenge of distinguishing explicit inputs from general non-input muscle activity. This could be particularly beneficial in the early stages of developing robust EMG-based controls for general-purpose applications.

5.3 User Training

Although EMG-based interactions should arguably be intuitive and easy to learn, muscle-based control is still a novel input modality for most users. Generating separable and repeatable contractions that a controller can easily classify can be challenging at first. As a result, some level of user training is crucial to ensuring that a user becomes capable of interacting through the given control system. Prosthetics researchers have investigated the effects of various training tools on learning EMG-based control [48, 151, 163]. It has been shown that even amputees, who are most often highly motivated to improve, can find learning and adherence to training challenging [162]. This has led to a recent surge in the development of game-based and gamified EMG training tools to help motivate training [127, 152, 162]. Unlike amputees, if general users cannot become competent with the given control system quickly, they may not be sufficiently motivated to use it. Consequently, HCI researchers must understand the challenges around training, how it might impact overall control, and how to incentivize sufficient training within their applications. Whether through fun tutorials [58] or serious games [152], the training process should be an extension of the application. Furthermore, unlike traditional input modalities (e.g., keyboards), the cause of errors in pattern recognition-based myoelectric control can be difficult to understand, since most users view these systems as black boxes (inputs go in and decisions come out). Therefore, user training becomes critical to enabling experimentation in a controlled environment. Ultimately, EMG-based control is a motor skill and, as such, requires training to develop proficiency [162].

5.4 Adaptation

One of the limitations of EMG-based control systems is that their performance can degrade over time due to user fatigue, electrode shift, changes in user behaviours, and other transient factors. Static models, trained once on an initial training set and never updated, cannot accommodate these changes and are often at fault. This lack of robustness can lead to frustration and, inevitably, the abandonment of a device [23]. Adaptation can alleviate these issues by updating a model over time to maintain and improve system performance. Both supervised [41, 141, 170] and unsupervised [42, 77, 141] strategies have been explored for prosthesis control. Supervised adaptation requires additional oversight to label new ground-truth training data, which is cumbersome for the user [141]; because of this supervision, however, its classification accuracies are often higher. Gamified calibration techniques, as proposed by [58], could reduce the perceived effort. Unsupervised adaptation requires no user input, and data are tagged with pseudo-labels (i.e., predictions) through different automated strategies [141]. Due to its unsupervised nature, data mislabeling may occur and introduce errors into the model, possibly adding to the degradation over time. If properly constrained, such as through semi-supervision based on added application context (inferring what the user should have been doing), unsupervised adaptation would be the preferred approach for general-purpose applications. Interestingly, adaptation is not a new concept in HCI and has been explored for use cases such as adaptive interfaces [51, 108], but it has not been widely leveraged in EMG works.
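One simple way to realize constrained unsupervised adaptation is to let only high-confidence predictions nudge per-class feature prototypes, as in the nearest-centroid sketch below. The class names, adaptation rate, and distance-ratio confidence gate are illustrative assumptions for this sketch, not a specific method from the cited literature.

```python
import numpy as np

class AdaptiveCentroidClassifier:
    """Nearest-centroid classifier with unsupervised adaptation:
    windows classified with high confidence are fed back as
    pseudo-labels that nudge the class centroids, tracking slow
    signal drift (e.g., from fatigue or electrode shift)."""

    def __init__(self, centroids, adapt_rate=0.05, confidence=2.0):
        self.centroids = {k: np.asarray(v, float) for k, v in centroids.items()}
        self.adapt_rate = adapt_rate
        # minimum ratio of second-best to best distance before adapting
        self.confidence = confidence

    def predict(self, x, adapt=True):
        x = np.asarray(x, float)
        dists = {k: np.linalg.norm(x - c) for k, c in self.centroids.items()}
        ranked = sorted(dists, key=dists.get)
        best = ranked[0]
        # adapt only when the decision is unambiguous (confidence gate),
        # reducing the risk of mislabeled pseudo-labels corrupting the model
        if adapt and dists[ranked[1]] > self.confidence * max(dists[best], 1e-9):
            c = self.centroids[best]
            self.centroids[best] = (1 - self.adapt_rate) * c + self.adapt_rate * x
        return best
```

Feeding the classifier feature vectors that slowly drift away from a class's original centroid shows the adapted centroid following the drift while untouched classes remain unchanged.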

5.5 Activity Detection

An arguably critical aspect of discrete control is activity detection: the extraction of a segment of EMG corresponding to a dynamic gesture from a continuous stream of data. When classifying pre-segmented gestures in offline analyses, the relevant EMG activity has already been extracted. During online use, however, data are continuously streamed to the system, and it must be determined which segments of the stream to pass to a recognition model (i.e., where each gesture starts and ends). Different techniques have been proposed to solve this issue. When a period of inactivity is assumed to occur between gestures, onset and offset detection can be used to identify the beginning and ending of EMG activity, denoting the boundaries of a gesture [106, 115, 164]. Another method employs sliding windows, in a process similar to the continuous windowing process but at a larger time scale. In this scenario, larger windows (with a fixed length determined by assumed gesture durations) are extracted, examined, and compared to target gestures. Although many such windows may not correspond to known gestures, thresholding techniques such as rejection may be used to avoid false activations. Nevertheless, more exploration is needed to determine the optimal strategies for extracting dynamic gestures associated with discrete events from a continuous stream of EMG.
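A minimal amplitude-based onset/offset detector for segmenting candidate gestures from a continuous stream might look as follows; the threshold and minimum segment length are hypothetical and would need tuning per user and device.

```python
import numpy as np

def detect_activity_segments(emg, threshold, min_len=10):
    """Segment a 1-D EMG stream into candidate gestures using simple
    onset/offset detection: a segment starts when the rectified signal
    exceeds `threshold` and ends when it falls back below. Segments
    shorter than `min_len` samples are discarded as spurious noise."""
    active = np.abs(emg) > threshold
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                      # onset detected
        elif not a and start is not None:
            if i - start >= min_len:       # offset: keep long-enough bursts
                segments.append((start, i))
            start = None
    # handle a burst still active at the end of the stream
    if start is not None and len(active) - start >= min_len:
        segments.append((start, len(active)))
    return segments
```

Each returned (start, end) pair delimits a candidate gesture that can then be passed to a dynamic-gesture recognizer, optionally with rejection applied to unknown patterns.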

6 Discussion

This work has provided a new examination of EMG in HCI that recognizes the important influence of research and engineering in the field of prosthetics. Here we have characterized “general-purpose applications” as the use of EMG sensors as input to interactive systems other than prosthetics. Our presentation thus far has demonstrated that while EMG work in prosthetics provides a good foundation, a deeper understanding and a different focus are needed for EMG to be successful in HCI. Below, we critique our own work to better characterize its contribution.
Our exploration of the general-purpose use of EMG was intentionally focused. First, we concentrated primarily on HCI venues by restricting our search to the ACM DL rather than conducting broader searches or including other digital libraries. Second, while we followed a general procedure for collecting papers, the process was largely informed by the experience of our larger research group, which consists of two closely collaborating labs: one HCI lab in a computer science department; the other, a biomedical engineering lab collocated with a prosthetics clinic. Our groups have collaborated closely for several years, so although our paper selection procedure was not a systematic treatment of the literature, we drew on the experiences of our larger research team, which has straddled two distinct areas of research and made contributions to both HCI and prosthetics. From this perspective, we believe we are uniquely positioned to characterize our observations and experiences. Nevertheless, our sampling of the literature was the source and motivation of this characterization, rather than simply an organization of convenience. The exploration of the EMG literature in HCI, contrasted with that of the prosthetics field, enabled us to clearly characterize why HCI work and general-purpose applications have been falling short of their potential. It was through our organization and consideration of a larger body of work that we were able to identify common limiting factors.
Our framework proposes a step-by-step process for the design and development of new EMG-based inputs for interactive systems. The framework is predominantly an expression and translation of the steps that prosthetics researchers employ in the development of a new EMG system, and these steps were thus intuitive for us to identify. That said, we have not found any source that has previously fully documented these steps in prosthetics or elsewhere. While this is a contribution in itself, we believe the novelty of this framework lies in the succinct communication of the processes used in prosthetics for an HCI audience, allowing us to communicate how previous pitfalls can be avoided. Thus, we have provided a new and important starting point for any work in HCI that will employ EMG as an input modality.
Interestingly, it was only after we had both fully described the themes from our exploration of previous work and enumerated the steps in our framework that we recognized the perfect alignment (the groupings of steps in our framework aligned with the themes identified from our exploration). This provides further confidence that the results of the exploration and the framework provide good coverage of the main concerns in developing EMG-based input.
In this paper, we have necessarily focused on how the field of prosthetics has and can inform work in HCI, but we have largely ignored how HCI can inform and play a role in prosthetics research and practice. From improved prosthetics training [152] to empowering amputees to create their own assistive tools [73], to a better understanding of how people value their prosthetic devices [19], the breadth of HCI research has already made an important impact and still has much to offer the field of prosthetics. We believe these two fields offer an ideal area for continued cross-fertilization that we have only begun to explore. In the next section, we make a call to researchers in both prosthetics and HCI to combine their expertise to unlock the potential of EMG as an important enabling technology that can be used by all as part of their everyday interactive systems.
As emphasized throughout this work, EMG-based control research has primarily occurred within the prosthetics community, heavily influencing its development in HCI for general-purpose use. While this has been a good place to start, based on an understanding of the types of EMG control systems we need in this field, we now need to move in new directions. Current EMG control systems are designed for mechanically complicated multi-articulated prosthetic devices and for a user base with individual and potentially very different abilities from one another. In many general-purpose applications, the context is very different and the motivations, usage contexts, hardware, and technical challenges are unique. Instead of forcing our interactions into the prosthesis-control framework, we must explore other opportunities and avenues. For example, what algorithms and device form factors allow people to most effectively interact with mobile computing devices while exercising? Or, what gesture set provides the most expressive vocabulary for a virtual reality system? Answering these questions requires an evolution in how HCI researchers have been approaching EMG-based control. In fact, it is quite possible that the research field surrounding EMG-based control for general-purpose applications diverges from prosthetic applications. And, like in biomedical engineering, HCI researchers could dedicate substantial time to understanding and improving EMG control, including the development of conferences or publication venues focusing exclusively on the topic (e.g., [3]).

6.1 A New Era of EMG Wearables and Beyond

When the idea of EMG wearables was explored in the early 2000s [132], sEMG technology was only beginning to become sufficiently inexpensive and accessible for general-purpose use. The 2014 release of the Myo Armband by Thalmic Labs revolutionized the space of EMG wearables [130], albeit not without limitations. First, its sampling rate was capped at 200 Hz, limiting its overall performance and robustness. Second, users did not always find it comfortable: over long periods, people found it sweaty and heavy, and for others it could not be made to fit tightly enough for the electrodes to make consistent contact [152]. Despite these limitations, the subsequent discontinuation of the Myo Armband has likely slowed the exploration of EMG for general-purpose applications because no comparably robust, low-cost device is currently as readily available. However, ongoing commercial initiatives are trying to improve on past limitations (e.g., [2, 4, 5, 83]) to provide more robust, low-cost wearable EMG devices.
EMG lends itself to wearable form factors because of its small size and low power requirements. For an EMG wearable to be successful, it should be accessible, comfortable, convenient, and easy to use. Wearables should therefore be available in a range of forms (e.g., armbands, watches, or bracelets) to enable the most convenient and effective format for a particular context. They should also have sufficient electrodes (i.e., 6-10) and a sampling rate of at least 500 Hz to allow for the robust detection of complicated inputs [69, 98]. With that said, each electrode increases the required circumference of a wearable device; as such, armbands should be adjustable or come in different sizes. It is also important that these wearable devices have built-in IMUs (Inertial Measurement Units), both to allow for improved control through movements combined with EMG (e.g., for controlling a cursor [67]) and to improve the accuracy of EMG-based input (e.g., arm position affects the production of EMG signals [28, 128]). Exploring this combination of EMG and IMU as complementary sensing modalities is an exciting direction for future research. Additionally, the continued research and development of pre-built user-independent and context-independent models (for both static contractions and dynamic gestures) could further facilitate the integration of EMG in new applications.
Beyond what we traditionally think of as EMG wearables (e.g., cuffs or wristbands), an increasing body of prosthetics research is dedicated to other EMG measurement modalities (as opposed to sEMG). Invasive techniques, such as implanted and intramuscular electrodes, can vastly improve the quality of EMG signals [66] and thus improve the usability of such systems [36, 87, 144]. Alongside other surgical augmentation procedures such as osseointegration [7, 102] and targeted muscle reinnervation [94], these techniques can vastly improve the quality and robustness of prosthesis control. Given the research and public interest in body augmentation [25], body implants [148], and other neurological links [60], future work could explore the reception and improved performance of interfaces other than generic sEMG in general-purpose applications.
Similarly, EMG inputs could be combined with EMS (electrical muscle stimulation; sometimes referred to as functional electrical stimulation, or FES, in biomedical engineering [142]). EMS has been widely explored in recent HCI work as a form of output (e.g., [46, 47]) and is effectively the opposite of EMG recording (instead of measuring the electrical activity of contractions, EMS stimulates muscle contractions using electrical activity). The notion of a wearable device that enables both input and output through a muscle interface is compelling and has been successfully demonstrated by Duente et al. [46]. Future work could explore how such technologies could enable richer forms of input/output built into a single wearable device.

6.2 Facilitating EMG-Based Development and Prototyping

One of the major barriers limiting the development of EMG-based control is its inherent complexity. Ideally, a toolkit would exist that facilitates the design and development of EMG-based interactions using state-of-the-art control systems. Such a toolkit would have the goal of adding EMG-based control to an interactive system with as close to a plug-and-play experience as possible. This would enable HCI researchers to focus on the interaction rather than on the design of the control system. For the toolkit to succeed, however, it would be imperative that it be validated by domain experts (e.g., EMG researchers from both HCI and prosthetics). Additionally, it should be well-documented, with examples and datasets (including both static contractions and dynamic gestures) to facilitate the iterative design process. One challenge for such a toolkit is that EMG-based technology and control systems are continuously evolving. As such, this toolkit should be open-sourced and ideally adopted by a community of researchers to allow it to evolve. EMBody [89] is an excellent initial effort toward such a toolkit. However, EMBody is limited by many of the issues explored in this paper. For example, it does not support discrete control systems, thus unintentionally influencing researchers to follow a continuous control framework when designing their systems. Ultimately, this is one of the important barriers to adoption we discussed in the emerging themes section (see section 3.2.1). From a data perspective, the NinaPro database [13] highlights the importance of having a robust open-sourced dataset. A fully featured toolkit and corresponding dataset would not only enable more researchers to experiment with the technology but could be crucial to the future success of EMG.

6.3 Ease of Use and Convenience

EMG has become the dominant interface for prosthesis use as it enables device control by relying solely on input from the residual muscles (those remaining above the amputation). In most cases, amputees do not have access to other control approaches, such as computer vision, typical controllers, or data gloves, to control their prostheses. Since EMG is currently the main viable option, it has inevitably been adopted by amputees. However, users interacting with general-purpose applications will likely have the option of using these other technologies. As such, EMG should provide benefits that the others cannot, including increased convenience and ease of use. Individuals should want to use EMG as an input modality not because it is novel, but because it is discreet, robust, reliable, and easy to use. One crucial step toward making this a reality is focusing on robust user- and context-independent models. Users will not be keen if they have to tediously re-train a model multiple times per day or every time the specific context changes. Ideally, these systems will be plug-and-play, with the option of additional calibration and customization if desired for more efficient interactions, as with modern speech recognition engines. Finally, these systems should be as intuitive as possible to reduce the training time required to achieve control proficiency. Addressing these concerns will drastically improve usability and, hopefully in turn, the adoption of this technology.

6.4 Discrete and Continuous Input Discovery

One of the major contributions of this work is differentiating discrete (i.e., dynamic gestures) and continuous (i.e., static contractions) inputs for EMG-based interactions. In doing so, we have highlighted a crucial need to evaluate these two groups of inputs separately. Static contractions have dominated the research space due to their prevalence in prosthesis control. As such, physiologically appropriate contractions (those consistent with the prosthetic components they are meant to control) have been the primary focus. For similar reasons, dynamic gestures have been largely neglected in EMG-based prosthesis control. From an HCI perspective, however, this motivation may be reduced, and other contractions and gestures may be preferred. As such, work must explore which gestures can be recognized and the best methods for recognizing them. In this exploration, the IMU becomes appealing as it adds information for classifying dynamic movements; understanding these impacts is crucial. For example, one research goal could be to understand which gestures can be recognized robustly with and without an IMU when combined with EMG. Researchers could also explore how EMG could be leveraged alongside traditional computer vision or data glove approaches. We must also explore gesture sets that are less likely to cause false activations: gestures should be distinct enough from common muscular contractions to avoid being triggered as users participate in other activities [33]. Another interesting consideration arises in contexts where subtle inputs are desirable and large dynamic gestures are inappropriate (e.g., dismissing a call during a meeting). Is there a set of gestures that is distinguishable, yet discreet enough for these contexts? Future work should explore the identification of standardized inputs (including both static contractions and dynamic gestures) that are intuitive, robust, and ideal for general-purpose use.

7 Conclusion

With this paper, we aim to deepen the understanding and research practices around a frequently explored input technology, EMG-based sensing. As computing continues to move away from desktop settings, EMG offers potentially low-cost, small, and wearable sensors that can fit into a wide range of scenarios where other technologies often do not work well. Despite its long history of use in prosthetics, and despite notable past and ongoing efforts, we are still without an application or consumer-grade technology that has revolutionized the use and uptake of EMG for general use. We believe this is not because it is impossible, but because more coordinated research and development is needed to realize truly robust, reliable, and intuitive EMG input. Toward this end, we demonstrate fundamental limitations of previous applications of EMG-based input and, through our design framework, provide a starting point for HCI researchers to better design successful EMG-based interactions. The framework draws on the long history of EMG-based prosthetics research and distinguishes the main concepts that can be applied to general-use applications. Finally, we call the HCI community to action by providing a research agenda for HCI researchers interested in EMG-based control. The new directions for research that we identify are somewhat distinct from those in prosthetics. While HCI and the prosthetics field can learn from one another, there will inevitably be a need for some divergence as the HCI community’s understanding and practices around EMG continue to mature, and we unlock the hidden potential of EMG as an input technology for a wide range of interactive systems.

Acknowledgments

We want to thank all of the experienced staff, researchers and clinicians, past and present, at the Institute of Biomedical Engineering and the Human Computer Interaction Lab at the University of New Brunswick who provided their guidance and expertise in making this project possible. This research was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Atlantic Canada Opportunities Agency.

Supplementary Material

Supplemental Materials (3544548.3580962-supplemental-materials.zip)
MP4 File (3544548.3580962-talk-video.mp4)
Pre-recorded Video Presentation

References

[1]
2022. Coapt - Myoelectric Pattern Recognition for Upper Limb Prostheses. https://coaptengineering.com
[2]
2022. Infinite Biomedical Technologies. http://www.i-biomed.com/index.html
[3]
2022. MEC Symposium | Institute of Biomedical Engineering | UNB. https://www.unb.ca/ibme/mec/index.html
[4]
2022. Mudra Inspire - Wearable Devices. https://getmudra.com
[5]
2022. Sifi Labs. https://sifilabs.com/
[6]
2022. Welcome to Ottobock. https://www.ottobock.com/en-us/Home
[7]
T. Albrektsson and C. Johansson. 2001. Osteoinduction, osteoconduction and osseointegration. European Spine Journal 10, 2 (Oct. 2001), S96–S101. https://doi.org/10.1007/s005860100282
[8]
Ali Ameri, Ernest N. Kamavuako, Erik J. Scheme, Kevin B. Englehart, and Philip A. Parker. 2014. Real-time, simultaneous myoelectric control using visual target-based training paradigm. Biomedical Signal Processing and Control 13 (Sept. 2014), 8–14. https://doi.org/10.1016/j.bspc.2014.03.006
[9]
Ali Ameri, Ernest N. Kamavuako, Erik J. Scheme, Kevin B. Englehart, and Philip A. Parker. 2014. Support Vector Regression for Improved Real-Time, Simultaneous Myoelectric Control. IEEE Transactions on Neural Systems and Rehabilitation Engineering 22, 6 (Nov. 2014), 1198–1209. https://doi.org/10.1109/TNSRE.2014.2323576 Conference Name: IEEE Transactions on Neural Systems and Rehabilitation Engineering.
[10]
Kikuo Asai and Norio Takase. 2019. Finger Motion Estimation Based on Sparse Multi-Channel Surface Electromyography Signals Using Convolutional Neural Network. In Proceedings of the 2019 3rd International Conference on Digital Signal Processing(ICDSP 2019). Association for Computing Machinery, New York, NY, USA, 55–59. https://doi.org/10.1145/3316551.3316572
[11]
Mohammadreza Asghari Oskoei and Huosheng Hu. 2007. Myoelectric control systems—A survey. Biomedical Signal Processing and Control 2, 4 (Oct. 2007), 275–294. https://doi.org/10.1016/j.bspc.2007.07.009
[12]
Christopher Assad, Michael Wolf, Theodoros Theodoridis, Kyrre Glette, and Adrian Stoica. 2013. BioSleeve: a natural EMG-based interface for HRI. In Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction(HRI ’13). IEEE Press, Tokyo, Japan, 69–70.
[13]
Manfredo Atzori, Arjan Gijsberts, Simone Heynen, Anne-Gabrielle Mittaz Hager, Olivier Deriaz, Patrick van der Smagt, Claudio Castellini, Barbara Caputo, and Henning Müller. 2012. Building the Ninapro database: A resource for the biorobotics community. In 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob). 1258–1265. https://doi.org/10.1109/BioRob.2012.6290287 ISSN: 2155-1782.
[14]
Shaojie Bai, J. Zico Kolter, and Vladlen Koltun. 2018. An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling. arXiv:1803.01271 [cs]. http://arxiv.org/abs/1803.01271
[15]
JC Baits, RW Todd, and JM Nightingale. 1968. Paper 10: The Feasibility of an Adaptive Control Scheme for Artificial Prehension. In Proceedings of the Institution of Mechanical Engineers, Conference Proceedings, Vol. 183. SAGE Publications Sage UK: London, England, 54–59.
[16]
A Bakiya and K Kamalanand. 2018. Information analysis on electromyograms acquired using monopolar needle, concentric needle and surface electrodes. In 2018 International Conference on Recent Trends in Electrical, Control and Communication (RTECC). 240–244. https://doi.org/10.1109/RTECC.2018.8625631
[17]
Marco E. Benalcázar, Carlos E. Anchundia, Jonathan A. Zea, Patricio Zambrano, Andrés G. Jaramillo, and Marco Segura. 2018. Real-Time Hand Gesture Recognition Based on Artificial Feed-Forward Neural Networks and EMG. In 2018 26th European Signal Processing Conference (EUSIPCO). 1492–1496. https://doi.org/10.23919/EUSIPCO.2018.8553126
[18]
Simone Benatti, Elisabetta Farella, and Luca Benini. 2014. Towards EMG control interface for smart garments. In Proceedings of the 2014 ACM International Symposium on Wearable Computers: Adjunct Program (ISWC ’14 Adjunct). Association for Computing Machinery, New York, NY, USA, 163–170. https://doi.org/10.1145/2641248.2641352
[19]
Cynthia L. Bennett, Keting Cen, Katherine M. Steele, and Daniela K. Rosner. 2016. An Intimate Laboratory? Prostheses as a Tool for Experimenting with Identity and Normalcy. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (San Jose, California, USA) (CHI ’16). Association for Computing Machinery, New York, NY, USA, 1745–1756. https://doi.org/10.1145/2858036.2858564
[20]
Donald J. Berndt and James Clifford. 1994. Using dynamic time warping to find patterns in time series. In Proceedings of the 3rd International Conference on Knowledge Discovery and Data Mining (AAAIWS’94). AAAI Press, Seattle, WA, 359–370.
[21]
K Abhijith Bhaskaran, Anoop G. Nair, K Deepak Ram, Krishnan Ananthanarayanan, and H.R. Nandi Vardhan. 2016. Smart gloves for hand gesture recognition: Sign language to speech conversion system. In 2016 International Conference on Robotics and Automation for Humanitarian Applications (RAHA). 1–6. https://doi.org/10.1109/RAHA.2016.7931887
[22]
Gérard Biau and Erwan Scornet. 2016. A random forest guided tour. TEST 25, 2 (June 2016), 197–227. https://doi.org/10.1007/s11749-016-0481-7
[23]
Elaine A. Biddiss and Tom T. Chau. 2007. Upper limb prosthesis use and abandonment: a survey of the last 25 years. Prosthetics and Orthotics International 31, 3 (Sept. 2007), 236–257. https://doi.org/10.1080/03093640600994581
[24]
Fady S. Botros, Angkoon Phinyomark, and Erik J. Scheme. 2022. Electromyography-Based Gesture Recognition: Is It Time to Change Focus From the Forearm to the Wrist? IEEE Transactions on Industrial Informatics 18, 1 (Jan. 2022), 174–184. https://doi.org/10.1109/TII.2020.3041618
[25]
Lauren M. Britton and Bryan Semaan. 2017. Manifesting the Cyborg through Techno-Body Modification: From Human-Computer Interaction to Integration. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI ’17). Association for Computing Machinery, New York, NY, USA, 2499–2510. https://doi.org/10.1145/3025453.3025629
[26]
Evan Campbell, Jason Chang, Angkoon Phinyomark, and Erik Scheme. 2020. A Comparison of Amputee and Able-Bodied Inter-Subject Variability in Myoelectric Control. MEC20 Symposium (July 2020). https://conferences.lib.unb.ca/index.php/mec/article/view/45
[27]
Evan Campbell, Angkoon Phinyomark, and Erik Scheme. 2019. Linear Discriminant Analysis with Bayesian Risk Parameters for Myoelectric Control. In 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP). 1–5. https://doi.org/10.1109/GlobalSIP45357.2019.8969237
[28]
Evan Campbell, Angkoon Phinyomark, and Erik Scheme. 2020. Current Trends and Confounding Factors in Myoelectric Control: Limb Position and Contraction Intensity. Sensors (Basel, Switzerland) 20, 6 (March 2020), 1613. https://doi.org/10.3390/s20061613
[29]
Evan Campbell, Angkoon Phinyomark, and Erik Scheme. 2020. Differences in Perspective on Inertial Measurement Unit Sensor Integration in Myoelectric Control. arXiv:2003.03424 [cs, eess] (March 2020). https://doi.org/10.48550/arXiv.2003.03424
[30]
Evan Campbell, Angkoon Phinyomark, and Erik Scheme. 2021. Deep Cross-User Models Reduce the Training Burden in Myoelectric Control. Frontiers in Neuroscience 15 (2021). https://www.frontiersin.org/articles/10.3389/fnins.2021.657958
[31]
A.D.C. Chan and K.B. Englehart. 2005. Continuous myoelectric control for powered prostheses using hidden Markov models. IEEE Transactions on Biomedical Engineering 52, 1 (Jan. 2005), 121–124. https://doi.org/10.1109/TBME.2004.836492
[32]
Chih-Chung Chang and Chih-Jen Lin. 2011. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology 2, 3 (May 2011), 27:1–27:27. https://doi.org/10.1145/1961189.1961199
[33]
Jason Chang, Angkoon Phinyomark, Scott Bateman, and Erik Scheme. 2020. Wearable EMG-Based Gesture Recognition Systems During Activities of Daily Living: An Exploratory Study. In 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). 3448–3451. https://doi.org/10.1109/EMBC44109.2020.9176615
[34]
Jason Chang, Angkoon Phinyomark, and Erik Scheme. 2020. Assessment of EMG Benchmark Data for Gesture Recognition Using the NinaPro Database. Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference 2020 (July 2020), 3339–3342. https://doi.org/10.1109/EMBC44109.2020.9175260
[35]
Hong Cheng, Lu Yang, and Zicheng Liu. 2016. Survey on 3D Hand Gesture Recognition. IEEE Transactions on Circuits and Systems for Video Technology 26, 9 (Sept. 2016), 1659–1673. https://doi.org/10.1109/TCSVT.2015.2469551
[36]
Christian Cipriani, Jacob L. Segil, J. Alex Birdwell, and Richard F. ff Weir. 2014. Dexterous Control of a Prosthetic Hand Using Fine-Wire Intramuscular Electrodes in Targeted Extrinsic Muscles. IEEE Transactions on Neural Systems and Rehabilitation Engineering 22, 4 (July 2014), 828–836. https://doi.org/10.1109/TNSRE.2014.2301234
[37]
Enrico Costanza, Samuel A. Inverso, and Rebecca Allen. 2005. Toward subtle intimate interfaces for mobile devices using an EMG controller. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’05). Association for Computing Machinery, New York, NY, USA, 481–489. https://doi.org/10.1145/1054972.1055039
[38]
Enrico Costanza, Samuel A. Inverso, Rebecca Allen, and Pattie Maes. 2007. Intimate interfaces in action: assessing the usability and subtlety of EMG-based motionless gestures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’07). Association for Computing Machinery, New York, NY, USA, 819–828. https://doi.org/10.1145/1240624.1240747
[39]
Eleanor Criswell. 2010. Cram’s Introduction to Surface Electromyography. Jones & Bartlett Publishers.
[40]
Ulysse Côté-Allard, Evan Campbell, Angkoon Phinyomark, François Laviolette, Benoit Gosselin, and Erik Scheme. 2020. Interpreting Deep Learning Features for Myoelectric Control: A Comparison With Handcrafted Features. Frontiers in Bioengineering and Biotechnology 8 (2020). https://www.frontiersin.org/articles/10.3389/fbioe.2020.00158
[41]
Ulysse Côté-Allard, Cheikh Latyr Fall, Alexandre Drouin, Alexandre Campeau-Lecours, Clément Gosselin, Kyrre Glette, François Laviolette, and Benoit Gosselin. 2019. Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning. IEEE Transactions on Neural Systems and Rehabilitation Engineering 27, 4 (April 2019), 760–771. https://doi.org/10.1109/TNSRE.2019.2896269 Conference Name: IEEE Transactions on Neural Systems and Rehabilitation Engineering.
[42]
Ulysse Côté-Allard, Gabriel Gagnon-Turcotte, Angkoon Phinyomark, Kyrre Glette, Erik J. Scheme, François Laviolette, and Benoit Gosselin. 2020. Unsupervised Domain Adversarial Self-Calibration for Electromyography-Based Gesture Recognition. IEEE Access 8(2020), 177941–177955. https://doi.org/10.1109/ACCESS.2020.3027497 Conference Name: IEEE Access.
[43]
Qingfeng Dai, Xiangdong Li, Weidong Geng, Wenguang Jin, and Xiubo Liang. 2021. CAPG-MYO: A Muscle-Computer Interface Supporting User-defined Gesture Recognition. In The 2021 9th International Conference on Computer and Communications Management (ICCCM ’21). Association for Computing Machinery, New York, NY, USA, 52–58. https://doi.org/10.1145/3479162.3479170
[44]
Carlo J. De Luca, L. Donald Gilmore, Mikhail Kuznetsov, and Serge H. Roy. 2010. Filtering the surface EMG signal: Movement artifact and baseline noise contamination. Journal of Biomechanics 43, 8 (May 2010), 1573–1579. https://doi.org/10.1016/j.jbiomech.2010.01.027
[45]
Joseph DelPreto and Daniela Rus. 2020. Plug-and-Play Gesture Control Using Muscle and Motion Sensors. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction(HRI ’20). Association for Computing Machinery, New York, NY, USA, 439–448. https://doi.org/10.1145/3319502.3374823
[46]
Tim Duente, Justin Schulte, Max Pfeiffer, and Michael Rohs. 2018. MuscleIO: Muscle-Based Input and Output for Casual Notifications. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2, 2, Article 64 (July 2018), 21 pages. https://doi.org/10.1145/3214267
[47]
Tim Duente, Justin Schulte, Max Pfeiffer, and Michael Rohs. 2018. MuscleIO: Muscle-Based Input and Output for Casual Notifications. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2, 2, Article 64 (July 2018), 21 pages. https://doi.org/10.1145/3214267
[48]
A.-C. Dupont and E.L. Morin. 1994. A myoelectric control evaluation and trainer system. IEEE Transactions on Rehabilitation Engineering 2, 2 (June 1994), 100–107. https://doi.org/10.1109/86.313151
[49]
Florian Echtler and Maximilian Häußler. 2018. Open Source, Open Science, and the Replication Crisis in HCI. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA ’18). Association for Computing Machinery, New York, NY, USA, 1–8. https://doi.org/10.1145/3170427.3188395
[50]
Chloe Eghtebas, Sandro Weber, and Gudrun Klinker. 2018. Investigation into Natural Gestures Using EMG for "SuperNatural" Interaction in VR. In The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings (UIST ’18 Adjunct). Association for Computing Machinery, New York, NY, USA, 102–104. https://doi.org/10.1145/3266037.3266115
[51]
Jacob Eisenstein and Angel Puerta. 2000. Adaptation in automated user-interface design. In Proceedings of the 5th International Conference on Intelligent User Interfaces (IUI ’00). Association for Computing Machinery, New York, NY, USA, 74–81. https://doi.org/10.1145/325737.325787
[52]
I. Elamvazuthi, N. H. X. Duy, Zulfiqar Ali, S. W. Su, M. K. A. Ahamed Khan, and S. Parasuraman. 2015. Electromyography (EMG) based Classification of Neuromuscular Disorders using Multi-Layer Perceptron. Procedia Computer Science 76 (Jan. 2015), 223–228. https://doi.org/10.1016/j.procs.2015.12.346
[53]
K. Englehart and B. Hudgins. 2003. A robust, real-time control scheme for multifunction myoelectric control. IEEE Transactions on Biomedical Engineering 50, 7 (July 2003), 848–854. https://doi.org/10.1109/TBME.2003.813539
[54]
Yinfeng Fang and Honghai Liu. 2014. Robust sEMG electrodes configuration for pattern recognition based prosthesis control. In 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC). 2210–2215. https://doi.org/10.1109/SMC.2014.6974252
[55]
Dario Farina, Ning Jiang, Hubertus Rehbaum, Aleš Holobar, Bernhard Graimann, Hans Dietl, and Oskar C. Aszmann. 2014. The Extraction of Neural Information from the Surface EMG for the Control of Upper-Limb Prostheses: Emerging Avenues and Challenges. IEEE Transactions on Neural Systems and Rehabilitation Engineering 22, 4 (July 2014), 797–809. https://doi.org/10.1109/TNSRE.2014.2305111
[56]
Dario Farina, Roberto Merletti, and Roger M. Enoka. 2004. The extraction of neural strategies from the surface EMG. Journal of Applied Physiology 96, 4 (April 2004), 1486–1495. https://doi.org/10.1152/japplphysiol.01070.2003
[57]
Todd R. Farrell and Richard F. Weir. 2007. The Optimal Controller Delay for Myoelectric Prostheses. IEEE Transactions on Neural Systems and Rehabilitation Engineering 15, 1 (March 2007), 111–118. https://doi.org/10.1109/TNSRE.2007.891391
[58]
David R. Flatla, Carl Gutwin, Lennart E. Nacke, Scott Bateman, and Regan L. Mandryk. 2011. Calibration games: making calibration tasks enjoyable by adding motivating game elements. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (UIST ’11). Association for Computing Machinery, New York, NY, USA, 403–412. https://doi.org/10.1145/2047196.2047248
[59]
Anders Fougner, Øyvind Stavdahl, Peter J. Kyberd, Yves G. Losier, and Philip A. Parker. 2012. Control of Upper Limb Prostheses: Terminology and Proportional Myoelectric Control—A Review. IEEE Transactions on Neural Systems and Rehabilitation Engineering 20, 5 (Sept. 2012), 663–677. https://doi.org/10.1109/TNSRE.2012.2196711
[60]
Christopher Frauenberger. 2019. Entanglement HCI The Next Wave? ACM Trans. Comput.-Hum. Interact. 27, 1, Article 2 (Nov. 2019), 27 pages. https://doi.org/10.1145/3364998
[61]
Weidong Geng, Yu Du, Wenguang Jin, Wentao Wei, Yu Hu, and Jiajun Li. 2016. Gesture recognition by instantaneous surface EMG images. Scientific Reports 6, 1 (Nov. 2016), 36571. https://doi.org/10.1038/srep36571
[62]
Benyamin Ghojogh and Mark Crowley. 2019. Linear and Quadratic Discriminant Analysis: Tutorial. arXiv:1906.02590 [cs, stat]. http://arxiv.org/abs/1906.02590
[63]
S. B. Godfrey, A. Ajoudani, M. Catalano, G. Grioli, and A. Bicchi. 2013. A synergy-driven approach to a myoelectric hand. In 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR). 1–6. https://doi.org/10.1109/ICORR.2013.6650377
[64]
Klaus Greff, Rupesh K. Srivastava, Jan Koutník, Bas R. Steunebrink, and Jürgen Schmidhuber. 2017. LSTM: A Search Space Odyssey. IEEE Transactions on Neural Networks and Learning Systems 28, 10 (Oct. 2017), 2222–2232. https://doi.org/10.1109/TNNLS.2016.2582924
[65]
J. M. Hahne, F. Bießmann, N. Jiang, H. Rehbaum, D. Farina, F. C. Meinecke, K.-R. Müller, and L. C. Parra. 2014. Linear and Nonlinear Regression Techniques for Simultaneous and Proportional Myoelectric Control. IEEE Transactions on Neural Systems and Rehabilitation Engineering 22, 2 (March 2014), 269–279. https://doi.org/10.1109/TNSRE.2014.2305520
[66]
Janne M. Hahne, Dario Farina, Ning Jiang, and David Liebetanz. 2016. A Novel Percutaneous Electrode Implant for Improving Robustness in Advanced Myoelectric Control. Frontiers in Neuroscience 10 (March 2016), 114. https://doi.org/10.3389/fnins.2016.00114
[67]
Faizan Haque, Mathieu Nancel, and Daniel Vogel. 2015. Myopoint: Pointing and Clicking Using Forearm Mounted Electromyography and Inertial Motion Sensors. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). Association for Computing Machinery, New York, NY, USA, 3653–3656. https://doi.org/10.1145/2702123.2702133
[68]
L. Hargrove, Y. Losier, B. Lock, K. Englehart, and B. Hudgins. 2007. A real-time pattern recognition based myoelectric control usability study implemented in a virtual environment. Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference 2007(2007), 4842–4845. https://doi.org/10.1109/IEMBS.2007.4353424
[69]
Levi J. Hargrove, Kevin Englehart, and Bernard Hudgins. 2007. A comparison of surface and intramuscular myoelectric signal classification. IEEE transactions on bio-medical engineering 54, 5 (May 2007), 847–853. https://doi.org/10.1109/TBME.2006.889192
[70]
Levi J. Hargrove, Laura A. Miller, Kristi Turner, and Todd A. Kuiken. 2017. Myoelectric Pattern Recognition Outperforms Direct Control for Transhumeral Amputees with Targeted Muscle Reinnervation: A Randomized Clinical Trial. Scientific Reports 7 (Oct. 2017), 13840. https://doi.org/10.1038/s41598-017-14386-w
[71]
Levi J. Hargrove, Erik J. Scheme, Kevin B. Englehart, and Bernard S. Hudgins. 2010. Multiple Binary Classifications via Linear Discriminant Analysis for Improved Controllability of a Powered Prosthesis. IEEE Transactions on Neural Systems and Rehabilitation Engineering 18, 1 (Feb. 2010), 49–57. https://doi.org/10.1109/TNSRE.2009.2039590
[72]
Bo-Jhang Ho, Renju Liu, Hsiao-Yun Tseng, and Mani Srivastava. 2017. MyoBuddy: Detecting Barbell Weight Using Electromyogram Sensors. In Proceedings of the 1st Workshop on Digital Biomarkers (DigitalBiomarkers ’17). Association for Computing Machinery, New York, NY, USA, 27–32. https://doi.org/10.1145/3089341.3089346
[73]
Megan Hofmann, Jeffrey Harris, Scott E. Hudson, and Jennifer Mankoff. 2016. Helping Hands: Requirements for a Prototyping Methodology for Upper-Limb Prosthetics Users. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (San Jose, California, USA) (CHI ’16). Association for Computing Machinery, New York, NY, USA, 1769–1780. https://doi.org/10.1145/2858036.2858340
[74]
Yu Hu and Qiao Wang. 2020. A Comprehensive Evaluation of Hidden Markov Model for Hand Movement Recognition with Surface Electromyography. In Proceedings of the 2020 2nd International Conference on Robotics, Intelligent Control and Artificial Intelligence (RICAI 2020). Association for Computing Machinery, New York, NY, USA, 85–91. https://doi.org/10.1145/3438872.3439060
[75]
Donny Huang, Xiaoyi Zhang, T. Scott Saponas, James Fogarty, and Shyamnath Gollakota. 2015. Leveraging Dual-Observable Input for Fine-Grained Thumb Interaction Using Forearm EMG. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST ’15). Association for Computing Machinery, New York, NY, USA, 523–528. https://doi.org/10.1145/2807442.2807506
[76]
Gan Huang, Dingguo Zhang, Xidian Zheng, and Xiangyang Zhu. 2010. An EMG-based handwriting recognition through dynamic time warping. In 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology. 4902–4905. https://doi.org/10.1109/IEMBS.2010.5627246
[77]
Qi Huang, Dapeng Yang, Li Jiang, Huajie Zhang, Hong Liu, and Kiyoshi Kotani. 2017. A Novel Unsupervised Adaptive Learning Method for Long-Term Electromyography (EMG) Pattern Recognition. Sensors 17, 6 (June 2017), 1370. https://doi.org/10.3390/s17061370
[78]
B. Hudgins, P. Parker, and R.N. Scott. 1993. A new strategy for multifunction myoelectric control. IEEE Transactions on Biomedical Engineering 40, 1 (Jan. 1993), 82–94. https://doi.org/10.1109/10.204774
[79]
Ryo Izuta, Kazuya Murao, Tsutomu Terada, and Masahiko Tsukamoto. 2014. Early gesture recognition method with an accelerometer. In Proceedings of the 5th Augmented Human International Conference (AH ’14). Association for Computing Machinery, New York, NY, USA, 1–2. https://doi.org/10.1145/2582051.2582105
[80]
Milad Jabbari, Rami N. Khushaba, and Kianoush Nazarpour. 2020. EMG-Based Hand Gesture Classification with Long Short-Term Memory Deep Recurrent Neural Networks. In 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). 3302–3305. https://doi.org/10.1109/EMBC44109.2020.9175279
[81]
Robert J. K. Jacob. 1990. What You Look at is What You Get: Eye Movement-Based Interaction Techniques. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Seattle, Washington, USA) (CHI ’90). Association for Computing Machinery, New York, NY, USA, 11–18. https://doi.org/10.1145/97243.97246
[82]
A.K. Jain, Jianchang Mao, and K.M. Mohiuddin. 1996. Artificial neural networks: a tutorial. Computer 29, 3 (March 1996), 31–44. https://doi.org/10.1109/2.485891
[83]
Lisa Brown Jaloza. 2021. Inside Facebook Reality Labs: Wrist-based interaction for the next computing platform. Tech at Meta (March 2021). https://tech.fb.com/ar-vr/2021/03/inside-facebook-reality-labs-wrist-based-interaction-for-the-next-computing-platform/
[84]
Haider Ali Javaid, Nasir Rashid, Mohsin Islam Tiwana, and Muhammad Waseem Anwar. 2018. Comparative Analysis of EMG Signal Features in Time-domain and Frequency-domain using MYO Gesture Control. In Proceedings of the 2018 4th International Conference on Mechatronics and Robotics Engineering (ICMRE 2018). Association for Computing Machinery, New York, NY, USA, 157–162. https://doi.org/10.1145/3191477.3191495
[85]
Ning Jiang, Kevin B. Englehart, and Philip A. Parker. 2009. Extracting Simultaneous and Proportional Neural Control Information for Multiple-DOF Prostheses From the Surface Electromyographic Signal. IEEE Transactions on Biomedical Engineering 56, 4 (April 2009), 1070–1080. https://doi.org/10.1109/TBME.2008.2007967
[86]
Liangxiao Jiang, Zhihua Cai, Dianhong Wang, and Siwei Jiang. 2007. Survey of Improving K-Nearest-Neighbor for Classification. In Fourth International Conference on Fuzzy Systems and Knowledge Discovery (FSKD 2007), Vol. 1. 679–683. https://doi.org/10.1109/FSKD.2007.552
[87]
Ernest N. Kamavuako, Erik J. Scheme, and Kevin B. Englehart. 2014. On the usability of intramuscular EMG for prosthetic control: A Fitts’ Law approach. Journal of Electromyography and Kinesiology 24, 5 (Oct. 2014), 770–777. https://doi.org/10.1016/j.jelekin.2014.06.009
[88]
Jakob Karolus, Annika Kilian, Thomas Kosch, Albrecht Schmidt, and Paweł W. Wozniak. 2020. Hit the Thumb Jack! Using Electromyography to Augment the Piano Keyboard. In Proceedings of the 2020 ACM Designing Interactive Systems Conference. Association for Computing Machinery, New York, NY, USA, 429–440. https://doi.org/10.1145/3357236.3395500
[89]
Jakob Karolus, Francisco Kiss, Caroline Eckerth, Nicolas Viot, Felix Bachmann, Albrecht Schmidt, and Pawel W. Wozniak. 2021. EMBody: A Data-Centric Toolkit for EMG-Based Interface Prototyping and Experimentation. Proceedings of the ACM on Human-Computer Interaction 5, EICS (May 2021), 195:1–195:29. https://doi.org/10.1145/3457142
[90]
Jakob Karolus, Hendrik Schuff, Thomas Kosch, Paweł W. Wozniak, and Albrecht Schmidt. 2018. EMGuitar: Assisting Guitar Playing with Electromyography. In Proceedings of the 2018 Designing Interactive Systems Conference (DIS ’18). Association for Computing Machinery, New York, NY, USA, 651–655. https://doi.org/10.1145/3196709.3196803
[91]
Frederic Kerber, Pascal Lessel, and Antonio Krüger. 2015. Same-side Hand Interactions with Arm-placed Devices Using EMG. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15). Association for Computing Machinery, New York, NY, USA, 1367–1372. https://doi.org/10.1145/2702613.2732895
[92]
Jonghwa Kim, Stephan Mastnik, and Elisabeth André. 2008. EMG-based hand gesture recognition for realtime biosignal interfacing. In Proceedings of the 13th International Conference on Intelligent User Interfaces (IUI ’08). Association for Computing Machinery, New York, NY, USA, 30–39. https://doi.org/10.1145/1378773.1378778
[93]
Heli Koskimäki, Pekka Siirtola, and Juha Röning. 2017. MyoGym: introducing an open gym data set for activity recognition collected using myo armband. In Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers (UbiComp ’17). Association for Computing Machinery, New York, NY, USA, 537–546. https://doi.org/10.1145/3123024.3124400
[94]
Todd A. Kuiken, Guanglin Li, Blair A. Lock, Robert D. Lipschutz, Laura A. Miller, Kathy A. Stubblefield, and Kevin Englehart. 2009. Targeted Muscle Reinnervation for Real-Time Myoelectric Control of Multifunction Artificial Arms. JAMA : the journal of the American Medical Association 301, 6 (Feb. 2009), 619. https://doi.org/10.1001/jama.2009.116 Publisher: NIH Public Access.
[95]
Pradeep Kumar, Angkoon Phinyomark, and Erik Scheme. 2021. Verification-Based Design of a Robust EMG Wake Word. In 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). 638–642. https://doi.org/10.1109/EMBC46164.2021.9630922
[96]
Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. 1998. Gradient-based learning applied to document recognition. Proc. IEEE 86, 11 (Nov. 1998), 2278–2324. https://doi.org/10.1109/5.726791
[97]
M. León, J. M. Gutiérrez, L. Leija, and R. Muñoz. 2011. EMG pattern recognition using Support Vector Machines classifier for myoelectric control purposes. In 2011 Pan American Health Care Exchanges. 175–178. https://doi.org/10.1109/PAHCE.2011.5871873
[98]
Guanglin Li, Yaonan Li, Zhiyong Zhang, Yanjuan Geng, and Rui Zhou. 2010. Selection of sampling rate for EMG pattern recognition based prosthesis control. In 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology. 5058–5061. https://doi.org/10.1109/IEMBS.2010.5626224
[99]
Yun Li, Xiang Chen, Jianxun Tian, Xu Zhang, Kongqiao Wang, and Jihai Yang. 2010. Automatic recognition of sign language subwords based on portable accelerometer and EMG sensors. In International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI ’10). Association for Computing Machinery, New York, NY, USA, 1–7. https://doi.org/10.1145/1891903.1891926
[100]
B. A. Lock, K. Englehart, and B. Hudgins. 2005. Real-Time Myoelectric Control in a Virtual Environment to Relate Usability vs. Accuracy. In Myoelectric Symposium (2005). https://dukespace.lib.duke.edu/dspace/handle/10161/2721
[101]
I. Scott MacKenzie. 2012. Human-Computer Interaction: An Empirical Research Perspective. Morgan Kaufmann.
[102]
Enzo Mastinu, Pascal Doguet, Yohan Botquin, Bo Håkansson, and Max Ortiz-Catalan. 2017. Embedded System for Prosthetic Control Using Implanted Neuromuscular Interfaces Accessed Via an Osseointegrated Implant. IEEE Transactions on Biomedical Circuits and Systems 11, 4 (Aug. 2017), 867–877. https://doi.org/10.1109/TBCAS.2017.2694710
[103]
Paul McCool, Graham D. Fraser, Adrian D. C. Chan, Lykourgos Petropoulakis, and John J. Soraghan. 2014. Identification of Contaminant Type in Surface Electromyography (EMG) Signals. IEEE Transactions on Neural Systems and Rehabilitation Engineering 22, 4 (July 2014), 774–783. https://doi.org/10.1109/TNSRE.2014.2299573
[104]
Morgan McCullough, Hong Xu, Joel Michelson, Matthew Jackoski, Wyatt Pease, William Cobb, William Kalescky, Joshua Ladd, and Betsy Williams. 2015. Myo Arm: Swinging to Explore a VE. In Proceedings of the ACM SIGGRAPH Symposium on Applied Perception (Tübingen, Germany) (SAP ’15). Association for Computing Machinery, New York, NY, USA, 107–113. https://doi.org/10.1145/2804408.2804416
[105]
R. Merletti, A. Rainoldi, and D. Farina. 2001. Surface electromyography for noninvasive characterization of muscle. Exercise and Sport Sciences Reviews 29, 1 (2001), 20–25. https://doi.org/10.1097/00003677-200101000-00005
[106]
A. Merlo, D. Farina, and R. Merletti. 2003. A fast and reliable technique for muscle activity detection from surface EMG signals. IEEE Transactions on Biomedical Engineering 50, 3 (March 2003), 316–323. https://doi.org/10.1109/TBME.2003.808829
[107]
D.T. Mewett, H. Nazeran, and K.J. Reynolds. 2001. Removing power line noise from recorded EMG. In 2001 Conference Proceedings of the 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vol. 3. 2190–2193. https://doi.org/10.1109/IEMBS.2001.1017205
[108]
Nesrine Mezhoudi. 2013. User interface adaptation based on user feedback and machine learning. In Proceedings of the Companion Publication of the 2013 International Conference on Intelligent User Interfaces Companion (IUI ’13 Companion). Association for Computing Machinery, New York, NY, USA, 25–28. https://doi.org/10.1145/2451176.2451184
[109]
Robbin A. Miranda, William D. Casebeer, Amy M. Hein, Jack W. Judy, Eric P. Krotkov, Tracy L. Laabs, Justin E. Manzo, Kent G. Pankratz, Gill A. Pratt, Justin C. Sanchez, Douglas J. Weber, Tracey L. Wheeler, and Geoffrey S. F. Ling. 2015. DARPA-funded efforts in the development of novel brain–computer interface technologies. Journal of Neuroscience Methods 244 (April 2015), 52–67. https://doi.org/10.1016/j.jneumeth.2014.07.019
[110]
A. Mori, S. Uchida, R. Kurazume, R. Taniguchi, T. Hasegawa, and H. Sakoe. 2006. Early Recognition and Prediction of Gestures. In 18th International Conference on Pattern Recognition (ICPR’06), Vol. 3. 560–563. https://doi.org/10.1109/ICPR.2006.467
[111]
Ganesh R. Naik, Dinesh Kant Kumar, Vijay Pal Singh, and Marimuthu Palaniswami. 2006. Hand gestures for HCI using ICA of EMG. In Proceedings of the HCSNet Workshop on Use of Vision in Human-Computer Interaction - Volume 56 (VisHCI ’06). Australian Computer Society, Inc., AUS, 67–72.
[112]
Jena L. Nawfel, Kevin B. Englehart, and Erik J. Scheme. 2021. A Multi-Variate Approach to Predicting Myoelectric Control Usability. IEEE Transactions on Neural Systems and Rehabilitation Engineering 29 (2021), 1312–1327. https://doi.org/10.1109/TNSRE.2021.3094324
[113]
Mohammadreza Asghari Oskoei and Huosheng Hu. 2008. Support Vector Machine-Based Classification Scheme for Myoelectric Control Applied to Upper Limb. IEEE Transactions on Biomedical Engineering 55, 8 (Aug. 2008), 1956–1965. https://doi.org/10.1109/TBME.2008.919734
[114]
P. Parker, K. Englehart, and B. Hudgins. 2006. Myoelectric signal processing for control of powered limb prostheses. Journal of Electromyography and Kinesiology 16, 6 (Dec. 2006), 541–548. https://doi.org/10.1016/j.jelekin.2006.08.006
[115]
Simone Pasinetti, Matteo Lancini, Ileana Bodini, and Franco Docchio. 2015. A Novel Algorithm for EMG Signal Processing and Muscle Timing Measurement. IEEE Transactions on Instrumentation and Measurement 64, 11 (Nov. 2015), 2995–3004. https://doi.org/10.1109/TIM.2015.2434097
[116]
Prajwal Paudyal, Ayan Banerjee, and Sandeep K.S. Gupta. 2016. SCEPTRE: A Pervasive, Non-Invasive, and Programmable Gesture Recognition Technology. In Proceedings of the 21st International Conference on Intelligent User Interfaces (IUI ’16). Association for Computing Machinery, New York, NY, USA, 282–293. https://doi.org/10.1145/2856767.2856794
[117]
Angkoon Phinyomark, Evan Campbell, and Erik Scheme. 2020. Surface Electromyography (EMG) Signal Processing, Classification, and Practical Considerations. In Biomedical Signal Processing: Advances in Theory, Algorithms and Applications, Ganesh Naik (Ed.). Springer, Singapore, 3–29. https://doi.org/10.1007/978-981-13-9097-5_1
[118]
Angkoon Phinyomark, Rami N. Khushaba, Esther Ibáñez-Marcelo, Alice Patania, Erik Scheme, and Giovanni Petri. 2017. Navigating features: a topologically informed chart of electromyographic features space. Journal of the Royal Society Interface 14, 137 (Dec. 2017), 20170734. https://doi.org/10.1098/rsif.2017.0734
[119]
Angkoon Phinyomark, Chusak Limsakul, and Pornchai Phukpattaranont. 2009. A Novel Feature Extraction for Robust EMG Pattern Recognition. arXiv:0912.3973 [cs]. https://doi.org/10.48550/arXiv.0912.3973
[120]
Angkoon Phinyomark, Rami N. Khushaba, and Erik Scheme. 2018. Feature Extraction and Selection for Myoelectric Control Based on Wearable EMG Sensors. Sensors 18, 5 (May 2018), 1615. https://doi.org/10.3390/s18051615
[121]
Angkoon Phinyomark, Pornchai Phukpattaranont, and Chusak Limsakul. 2012. Feature reduction and selection for EMG signal classification. Expert Systems with Applications 39, 8 (June 2012), 7420–7431. https://doi.org/10.1016/j.eswa.2012.01.102
[122]
Angkoon Phinyomark, Franck Quaine, Sylvie Charbonnier, Christine Serviere, Franck Tarpin-Bernard, and Yann Laurillau. 2013. EMG feature evaluation for improving myoelectric pattern recognition robustness. Expert Systems with Applications 40, 12 (Sept. 2013), 4832–4840. https://doi.org/10.1016/j.eswa.2013.02.023
[123]
Angkoon Phinyomark and Erik Scheme. 2018. EMG Pattern Recognition in the Era of Big Data and Deep Learning. Big Data and Cognitive Computing 2, 3 (Sept. 2018), 21. https://doi.org/10.3390/bdcc2030021
[124]
Angkoon Phinyomark and Erik Scheme. 2018. A feature extraction issue for myoelectric control based on wearable EMG sensors. In 2018 IEEE Sensors Applications Symposium (SAS). 1–6. https://doi.org/10.1109/SAS.2018.8336753
[125]
Angkoon Phinyomark and Erik Scheme. 2018. A feature extraction issue for myoelectric control based on wearable EMG sensors. In 2018 IEEE Sensors Applications Symposium (SAS). 1–6. https://doi.org/10.1109/SAS.2018.8336753
[126]
Cosima Prahm, Michael Bressler, Korbinian Eckstein, Hideaki Kuzuoka, Adrien Daigeler, and Jonas Kolbenschlag. 2022. Developing a wearable Augmented Reality for treating phantom limb pain using the Microsoft Hololens 2. In Augmented Humans 2022 (AHs 2022). Association for Computing Machinery, New York, NY, USA, 309–312. https://doi.org/10.1145/3519391.3524031
[127]
Cosima Prahm, Fares Kayali, Ivan Vujaklija, Agnes Sturma, and Oskar Aszmann. 2017. Increasing motivation, effort and performance through game-based rehabilitation for upper limb myoelectric prosthesis control. In 2017 International Conference on Virtual Rehabilitation (ICVR). 1–6. https://doi.org/10.1109/ICVR.2017.8007517
[128]
Ashkan Radmand, Erik Scheme, and Kevin Englehart. 2014. On the Suitability of Integrating Accelerometry Data with Electromyography Signals for Resolving the Effect of Changes in Limb Position during Dynamic Limb Movement. JPO: Journal of Prosthetics and Orthotics 26, 4 (Oct. 2014), 185–193. https://doi.org/10.1097/JPO.0000000000000041
[129]
M.B.I. Raez, M.S. Hussain, and F. Mohd-Yasin. 2006. Techniques of EMG signal analysis: detection, processing, classification and applications. Biological Procedures Online 8 (March 2006), 11–35. https://doi.org/10.1251/bpo115
[130]
Seema Rawat, Somya Vats, and Praveen Kumar. 2016. Evaluating and exploring the MYO ARMBAND. In 2016 International Conference System Modeling Advancement in Research Trends (SMART). 115–120. https://doi.org/10.1109/SYSMART.2016.7894501
[131]
Jason W. Robertson, Kevin B. Englehart, and Erik J. Scheme. 2019. Effects of Confidence-Based Rejection on Usability and Error in Pattern Recognition-Based Myoelectric Control. IEEE Journal of Biomedical and Health Informatics 23, 5 (Sept. 2019), 2002–2008. https://doi.org/10.1109/JBHI.2018.2878907
[132]
T. Scott Saponas, Desney S. Tan, Dan Morris, and Ravin Balakrishnan. 2008. Demonstrating the feasibility of using forearm electromyography for muscle-computer interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’08). Association for Computing Machinery, New York, NY, USA, 515–524. https://doi.org/10.1145/1357054.1357138
[133]
T. Scott Saponas, Desney S. Tan, Dan Morris, Ravin Balakrishnan, Jim Turner, and James A. Landay. 2009. Enabling always-available input with muscle-computer interfaces. In Proceedings of the 22nd annual ACM symposium on User interface software and technology (UIST ’09). Association for Computing Machinery, New York, NY, USA, 167–176. https://doi.org/10.1145/1622176.1622208
[134]
T. Scott Saponas, Desney S. Tan, Dan Morris, Jim Turner, and James A. Landay. 2010. Making muscle-computer interfaces more practical. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’10). Association for Computing Machinery, New York, NY, USA, 851–854. https://doi.org/10.1145/1753326.1753451
[135]
Erik Scheme and Kevin Englehart. 2011. Electromyogram pattern recognition for control of powered upper-limb prostheses: state of the art and challenges for clinical use. Journal of Rehabilitation Research and Development 48, 6 (2011), 643–659. https://doi.org/10.1682/jrrd.2010.09.0177
[136]
Erik Scheme and Kevin Englehart. 2013. Training Strategies for Mitigating the Effect of Proportional Control on Classification in Pattern Recognition Based Myoelectric Control. JPO: Journal of Prosthetics and Orthotics 25, 2 (April 2013), 76–83. https://doi.org/10.1097/JPO.0b013e318289950b
[137]
E. Scheme, A. Fougner, Ø. Stavdahl, A. C. Chan, and K. Englehart. 2010. Examining the adverse effects of limb position on pattern recognition based myoelectric control. In 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). 6337–6340. https://doi.org/10.1109/IEMBS.2010.5627638
[138]
Erik Scheme, Blair Lock, Levi Hargrove, Wendy Hill, Usha Kuruganti, and Kevin Englehart. 2014. Motion Normalized Proportional Control for Improved Pattern Recognition-Based Myoelectric Control. IEEE Transactions on Neural Systems and Rehabilitation Engineering 22, 1 (Jan. 2014), 149–157. https://doi.org/10.1109/TNSRE.2013.2247421
[139]
Erik J. Scheme and Kevin B. Englehart. 2013. Validation of a Selective Ensemble-Based Classification Scheme for Myoelectric Control Using a Three-Dimensional Fitts’ Law Test. IEEE Transactions on Neural Systems and Rehabilitation Engineering 21, 4 (July 2013), 616–623. https://doi.org/10.1109/TNSRE.2012.2226189
[140]
Erik J. Scheme, Bernard S. Hudgins, and Kevin B. Englehart. 2013. Confidence-based rejection for improved pattern recognition myoelectric control. IEEE Transactions on Biomedical Engineering 60, 6 (June 2013), 1563–1570. https://doi.org/10.1109/TBME.2013.2238939
[141]
Jonathon W. Sensinger, Blair A. Lock, and Todd A. Kuiken. 2009. Adaptive Pattern Recognition of Myoelectric Signals: Exploration of Conceptual Framework and Practical Algorithms. IEEE Transactions on Neural Systems and Rehabilitation Engineering 17, 3 (June 2009), 270–278. https://doi.org/10.1109/TNSRE.2009.2023282
[142]
Nitin Seth, Rafaela C. de Freitas, Mitchell Chaulk, Colleen O’Connell, Kevin Englehart, and Erik Scheme. 2019. EMG Pattern Recognition for Persons with Cervical Spinal Cord Injury. In 2019 IEEE 16th International Conference on Rehabilitation Robotics (ICORR). 1055–1060. https://doi.org/10.1109/ICORR.2019.8779450
[143]
Manoj Kumar Sharma and Debasis Samanta. 2014. Word Prediction System for Text Entry in Hindi. ACM Transactions on Asian Language Information Processing 13, 2 (June 2014), 8:1–8:29. https://doi.org/10.1145/2617590
[144]
Lauren H. Smith and Levi J. Hargrove. 2013. Comparison of surface and intramuscular EMG pattern recognition for simultaneous wrist/hand motion classification. In 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). 4223–4226. https://doi.org/10.1109/EMBC.2013.6610477
[145]
Lauren H. Smith, Levi J. Hargrove, Blair A. Lock, and Todd A. Kuiken. 2011. Determining the Optimal Window Length for Pattern Recognition-Based Myoelectric Control: Balancing the Competing Effects of Classification Error and Controller Delay. IEEE Transactions on Neural Systems and Rehabilitation Engineering 19, 2 (April 2011), 186–192. https://doi.org/10.1109/TNSRE.2010.2100828
[146]
John A. Spanias, Eric J. Perreault, and Levi J. Hargrove. 2016. Detection of and Compensation for EMG Disturbances for Powered Lower Limb Prosthesis Control. IEEE Transactions on Neural Systems and Rehabilitation Engineering 24, 2 (Feb. 2016), 226–234. https://doi.org/10.1109/TNSRE.2015.2413393
[147]
Adrian Stoica, Federico Salvioli, and Caitlin Flowers. 2014. Remote control of quadrotor teams, using hand gestures. In Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction (HRI ’14). Association for Computing Machinery, New York, NY, USA, 296–297. https://doi.org/10.1145/2559636.2559853
[148]
Paul Strohmeier and Jess McIntosh. 2020. Novel Input and Output Opportunities Using an Implanted Magnet. In Proceedings of the Augmented Humans International Conference (Kaiserslautern, Germany) (AHs ’20). Association for Computing Machinery, New York, NY, USA, Article 10, 5 pages. https://doi.org/10.1145/3384657.3384785
[149]
Erik Stålberg and Peter Dioszeghy. 1991. Scanning EMG in normal muscle and in neuromuscular disorders. Electroencephalography and Clinical Neurophysiology/Evoked Potentials Section 81, 6 (Dec. 1991), 403–416. https://doi.org/10.1016/0168-5597(91)90048-3
[150]
Abdulhamit Subasi. 2013. Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders. Computers in Biology and Medicine 43, 5 (June 2013), 576–586. https://doi.org/10.1016/j.compbiomed.2013.01.020
[151]
Aaron Tabor, Scott Bateman, and Erik Scheme. 2018. Evaluation of Myoelectric Control Learning Using Multi-Session Game-Based Training. IEEE Transactions on Neural Systems and Rehabilitation Engineering 26, 9 (Sept. 2018), 1680–1689. https://doi.org/10.1109/TNSRE.2018.2855561
[152]
Aaron Tabor, Scott Bateman, Erik Scheme, David R. Flatla, and Kathrin Gerling. 2017. Designing Game-Based Myoelectric Prosthesis Training. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). Association for Computing Machinery, New York, NY, USA, 1352–1363. https://doi.org/10.1145/3025453.3025676
[153]
Aaron Tabor, Wendy Hill, Scott Bateman, and Erik Scheme. 2017. Quantifying muscle control in myoelectric training games. In Proceedings of the Myoelectric Controls Symposium (MEC ’17). https://www.unb.ca/ibme/_assets/documents/past-docs/MEC17-papers/tabor-quantifying-muscle-control.pdf
[154]
Muhammad Tanweer and Kari A. I. Halonen. 2019. Development of wearable hardware platform to measure the ECG and EMG with IMU to detect motion artifacts. In 2019 IEEE 22nd International Symposium on Design and Diagnostics of Electronic Circuits & Systems (DDECS). 1–4. https://doi.org/10.1109/DDECS.2019.8724639
[155]
Marco Tomasini, Simone Benatti, Bojan Milosevic, Elisabetta Farella, and Luca Benini. 2016. Power Line Interference Removal for High-Quality Continuous Biosignal Monitoring With Low-Power Wearable Devices. IEEE Sensors Journal 16, 10 (May 2016), 3887–3895. https://doi.org/10.1109/JSEN.2016.2536363
[156]
Stefano Tortora, Michele Moro, and Emanuele Menegatti. 2019. Dual-Myo real-time control of a humanoid arm for teleoperation. In Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI ’19). IEEE Press, Daegu, Republic of Korea, 624–625.
[157]
Keith Trnka, John McCaw, Debra Yarrington, Kathleen F. McCoy, and Christopher Pennington. 2009. User Interaction with Word Prediction: The Effects of Prediction Quality. ACM Transactions on Accessible Computing 1, 3 (Feb. 2009), 17:1–17:34. https://doi.org/10.1145/1497302.1497307
[158]
Mindaugas Vasiljevas, Rūtenis Turčinas, and Robertas Damaševičius. 2014. Development of EMG-Based Speller. In Proceedings of the XV International Conference on Human Computer Interaction (Interacción ’14). Association for Computing Machinery, New York, NY, USA, 1–5. https://doi.org/10.1145/2662253.2662260
[159]
Md Ferdous Wahid, Reza Tafreshi, and Reza Langari. 2020. A Multi-Window Majority Voting Strategy to Improve Hand Gesture Recognition Accuracies Using Electromyography Signal. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28, 2 (Feb. 2020), 427–436. https://doi.org/10.1109/TNSRE.2019.2961706
[160]
Chong Wang, Zhong Liu, and Shing-Chow Chan. 2015. Superpixel-Based Hand Gesture Recognition With Kinect Depth Camera. IEEE Transactions on Multimedia 17, 1 (Jan. 2015), 29–39. https://doi.org/10.1109/TMM.2014.2374357
[161]
Max L. Wilson, Wendy Mackay, Ed Chi, Michael Bernstein, Dan Russell, and Harold Thimbleby. 2011. RepliCHI - CHI should be replicating and validating results more: discuss. In CHI ’11 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’11). Association for Computing Machinery, New York, NY, USA, 463–466. https://doi.org/10.1145/1979742.1979491
[162]
Brent D. Winslow, Mitchell Ruble, and Zachary Huber. 2018. Mobile, Game-Based Training for Myoelectric Prosthesis Control. Frontiers in Bioengineering and Biotechnology 6 (2018). https://www.frontiersin.org/articles/10.3389/fbioe.2018.00094
[163]
Richard B. Woodward and Levi J. Hargrove. 2019. Adapting myoelectric control in real-time using a virtual environment. Journal of NeuroEngineering and Rehabilitation 16, 1 (Jan. 2019), 11. https://doi.org/10.1186/s12984-019-0480-5
[164]
Qi Xu, Yazhi Quan, Lei Yang, and Jiping He. 2013. An Adaptive Algorithm for the Determination of the Onset and Offset of Muscle Contraction by EMG Signal Processing. IEEE Transactions on Neural Systems and Rehabilitation Engineering 21, 1 (Jan. 2013), 65–73. https://doi.org/10.1109/TNSRE.2012.2226916
[165]
Aaron J. Young, Levi J. Hargrove, and Todd A. Kuiken. 2011. The Effects of Electrode Size and Orientation on the Sensitivity of Myoelectric Pattern Recognition Systems to Electrode Shift. IEEE Transactions on Biomedical Engineering 58, 9 (Sept. 2011), 2537–2544. https://doi.org/10.1109/TBME.2011.2159216
[166]
Aaron J. Young, Lauren H. Smith, Elliott J. Rouse, and Levi J. Hargrove. 2013. Classification of simultaneous movements using surface EMG pattern recognition. IEEE Transactions on Biomedical Engineering 60, 5 (May 2013), 1250–1258. https://doi.org/10.1109/TBME.2012.2232293
[167]
Jamileh Yousefi and Andrew Hamilton-Wright. 2014. Characterizing EMG data using machine-learning tools. Computers in Biology and Medicine 51 (Aug. 2014), 1–13. https://doi.org/10.1016/j.compbiomed.2014.04.018
[168]
Marcello Zanghieri, Simone Benatti, Alessio Burrello, Victor Kartsch, Francesco Conti, and Luca Benini. 2020. Robust Real-Time Embedded EMG Recognition Framework Using Temporal Convolutional Networks on a Multicore IoT Processor. IEEE Transactions on Biomedical Circuits and Systems 14, 2 (April 2020), 244–256. https://doi.org/10.1109/TBCAS.2019.2959160
[169]
Xiaolong Zhai, Beth Jelfs, Rosa H. M. Chan, and Chung Tin. 2017. Self-Recalibrating Surface EMG Pattern Recognition for Neuroprosthesis Control Based on Convolutional Neural Network. Frontiers in Neuroscience 11 (2017). https://www.frontiersin.org/articles/10.3389/fnins.2017.00379
[170]
Haoshi Zhang, Yaonan Zhao, Fuan Yao, Lisheng Xu, Peng Shang, and Guanglin Li. 2013. An adaptation strategy of using LDA classifier for EMG pattern recognition. In 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). 4267–4270. https://doi.org/10.1109/EMBC.2013.6610488
[171]
Mingchuan Zhang, Zuhao Wang, and Guannan Meng. 2022. Intelligent Perception Recognition of Multi-modal EMG Signals Based on Machine Learning. In 2022 2nd International Conference on Bioinformatics and Intelligent Computing (BIC 2022). Association for Computing Machinery, New York, NY, USA, 389–396. https://doi.org/10.1145/3523286.3524576
[172]
Xu Zhang, Xiang Chen, Yun Li, Vuokko Lantz, Kongqiao Wang, and Jihai Yang. 2011. A Framework for Hand Gesture Recognition Based on Accelerometer and EMG Sensors. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 41, 6 (Nov. 2011), 1064–1076. https://doi.org/10.1109/TSMCA.2011.2116004
[173]
Xu Zhang, Xiang Chen, Wen-hui Wang, Ji-hai Yang, Vuokko Lantz, and Kong-qiao Wang. 2009. Hand gesture recognition and virtual game control based on 3D accelerometer and EMG sensors. In Proceedings of the 14th international conference on Intelligent user interfaces (IUI ’09). Association for Computing Machinery, New York, NY, USA, 401–406. https://doi.org/10.1145/1502650.1502708
[174]
Adel Al-Jumaily and Ricardo A. Olivares. 2009. Electromyogram (EMG) driven system based virtual reality for prosthetic and rehabilitation devices. In Proceedings of the 11th International Conference on Information Integration and Web-based Applications & Services (iiWAS ’09). Association for Computing Machinery, New York, NY, USA, 582–586. https://doi.org/10.1145/1806338.1806448
[175]
Christoph Amma, Thomas Krings, Jonas Böer, and Tanja Schultz. 2015. Advancing Muscle-Computer Interfaces with High-Density Electromyography. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). Association for Computing Machinery, New York, NY, USA, 929–938. https://doi.org/10.1145/2702123.2702501
[176]
Kikuo Asai and Norio Takase. 2019. Finger Motion Estimation Based on Sparse Multi-Channel Surface Electromyography Signals Using Convolutional Neural Network. In Proceedings of the 2019 3rd International Conference on Digital Signal Processing (ICDSP 2019). Association for Computing Machinery, New York, NY, USA, 55–59. https://doi.org/10.1145/3316551.3316572
[177]
Christopher Assad, Michael Wolf, Theodoros Theodoridis, Kyrre Glette, and Adrian Stoica. 2013. BioSleeve: a natural EMG-based interface for HRI. In Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction (HRI ’13). IEEE Press, Tokyo, Japan, 69–70.
[178]
Vincent Becker, Pietro Oldrati, Liliana Barrios, and Gábor Sörös. 2018. Touchsense: classifying finger touches and measuring their force with an electromyography armband. In Proceedings of the 2018 ACM International Symposium on Wearable Computers (ISWC ’18). Association for Computing Machinery, New York, NY, USA, 1–8. https://doi.org/10.1145/3267242.3267250
[179]
Simone Benatti, Elisabetta Farella, and Luca Benini. 2014. Towards EMG control interface for smart garments. In Proceedings of the 2014 ACM International Symposium on Wearable Computers: Adjunct Program (ISWC ’14 Adjunct). Association for Computing Machinery, New York, NY, USA, 163–170. https://doi.org/10.1145/2641248.2641352
[180]
Baptiste Caramiaux, Marco Donnarumma, and Atau Tanaka. 2015. Understanding Gesture Expressivity through Muscle Sensing. ACM Transactions on Computer-Human Interaction 21, 6 (Jan. 2015), 31:1–31:26. https://doi.org/10.1145/2687922
[181]
Changmok Choi and Sang Joon Kim. 2019. Preliminary studies of SEMG-based finger gesture classification for smart watch application using deep learning. In Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI ’19). IEEE Press, Daegu, Republic of Korea, 562–563.
[182]
Enrico Costanza, Samuel A. Inverso, and Rebecca Allen. 2005. Toward subtle intimate interfaces for mobile devices using an EMG controller. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’05). Association for Computing Machinery, New York, NY, USA, 481–489. https://doi.org/10.1145/1054972.1055039
[183]
Enrico Costanza, Samuel A. Inverso, Rebecca Allen, and Pattie Maes. 2007. Intimate interfaces in action: assessing the usability and subtlety of EMG-based motionless gestures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’07). Association for Computing Machinery, New York, NY, USA, 819–828. https://doi.org/10.1145/1240624.1240747
[184]
Qingfeng Dai, Xiangdong Li, Weidong Geng, Wenguang Jin, and Xiubo Liang. 2021. CAPG-MYO: A Muscle-Computer Interface Supporting User-defined Gesture Recognition. In The 2021 9th International Conference on Computer and Communications Management (ICCCM ’21). Association for Computing Machinery, New York, NY, USA, 52–58. https://doi.org/10.1145/3479162.3479170
[185]
Joseph DelPreto and Daniela Rus. 2020. Plug-and-Play Gesture Control Using Muscle and Motion Sensors. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’20). Association for Computing Machinery, New York, NY, USA, 439–448. https://doi.org/10.1145/3319502.3374823
[186]
Chloe Eghtebas, Sandro Weber, and Gudrun Klinker. 2018. Investigation into Natural Gestures Using EMG for "SuperNatural" Interaction in VR. In The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings (UIST ’18 Adjunct). Association for Computing Machinery, New York, NY, USA, 102–104. https://doi.org/10.1145/3266037.3266115
[187]
Faizan Haque, Mathieu Nancel, and Daniel Vogel. 2015. Myopoint: Pointing and Clicking Using Forearm Mounted Electromyography and Inertial Motion Sensors. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). Association for Computing Machinery, New York, NY, USA, 3653–3656. https://doi.org/10.1145/2702123.2702133
[188]
Bo-Jhang Ho, Renju Liu, Hsiao-Yun Tseng, and Mani Srivastava. 2017. MyoBuddy: Detecting Barbell Weight Using Electromyogram Sensors. In Proceedings of the 1st Workshop on Digital Biomarkers (DigitalBiomarkers ’17). Association for Computing Machinery, New York, NY, USA, 27–32. https://doi.org/10.1145/3089341.3089346
[189]
Yu Hu and Qiao Wang. 2020. A Comprehensive Evaluation of Hidden Markov Model for Hand Movement Recognition with Surface Electromyography. In Proceedings of the 2020 2nd International Conference on Robotics, Intelligent Control and Artificial Intelligence (RICAI 2020). Association for Computing Machinery, New York, NY, USA, 85–91. https://doi.org/10.1145/3438872.3439060
[190]
Donny Huang, Xiaoyi Zhang, T. Scott Saponas, James Fogarty, and Shyamnath Gollakota. 2015. Leveraging Dual-Observable Input for Fine-Grained Thumb Interaction Using Forearm EMG. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST ’15). Association for Computing Machinery, New York, NY, USA, 523–528. https://doi.org/10.1145/2807442.2807506
[191]
Haider Ali Javaid, Nasir Rashid, Mohsin Islam Tiwana, and Muhammad Waseem Anwar. 2018. Comparative Analysis of EMG Signal Features in Time-domain and Frequency-domain using MYO Gesture Control. In Proceedings of the 2018 4th International Conference on Mechatronics and Robotics Engineering (ICMRE 2018). Association for Computing Machinery, New York, NY, USA, 157–162. https://doi.org/10.1145/3191477.3191495
[192]
Jakob Karolus, Annika Kilian, Thomas Kosch, Albrecht Schmidt, and Paweł W. Wozniak. 2020. Hit the Thumb Jack! Using Electromyography to Augment the Piano Keyboard. In Proceedings of the 2020 ACM Designing Interactive Systems Conference (DIS ’20). Association for Computing Machinery, New York, NY, USA, 429–440. https://doi.org/10.1145/3357236.3395500
[193]
Jakob Karolus, Francisco Kiss, Caroline Eckerth, Nicolas Viot, Felix Bachmann, Albrecht Schmidt, and Pawel W. Wozniak. 2021. EMBody: A Data-Centric Toolkit for EMG-Based Interface Prototyping and Experimentation. Proceedings of the ACM on Human-Computer Interaction 5, EICS (May 2021), 195:1–195:29. https://doi.org/10.1145/3457142
[194]
Jakob Karolus, Hendrik Schuff, Thomas Kosch, Paweł W. Wozniak, and Albrecht Schmidt. 2018. EMGuitar: Assisting Guitar Playing with Electromyography. In Proceedings of the 2018 Designing Interactive Systems Conference (DIS ’18). Association for Computing Machinery, New York, NY, USA, 651–655. https://doi.org/10.1145/3196709.3196803
[195]
Weijie Ke, Yannan Xing, Gaetano Di Caterina, Lykourgos Petropoulakis, and John Soraghan. 2020. Intersected EMG Heatmaps and Deep Learning Based Gesture Recognition. In Proceedings of the 2020 12th International Conference on Machine Learning and Computing (ICMLC 2020). Association for Computing Machinery, New York, NY, USA, 73–78. https://doi.org/10.1145/3383972.3383982
[196]
Frederic Kerber, Pascal Lessel, and Antonio Krüger. 2015. Same-side Hand Interactions with Arm-placed Devices Using EMG. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15). Association for Computing Machinery, New York, NY, USA, 1367–1372. https://doi.org/10.1145/2702613.2732895
[197]
Frederic Kerber, Michael Puhl, and Antonio Krüger. 2017. User-independent real-time hand gesture recognition based on surface electromyography. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’17). Association for Computing Machinery, New York, NY, USA, 1–7. https://doi.org/10.1145/3098279.3098553
[198]
Jonghwa Kim, Stephan Mastnik, and Elisabeth André. 2008. EMG-based hand gesture recognition for realtime biosignal interfacing. In Proceedings of the 13th international conference on Intelligent user interfaces (IUI ’08). Association for Computing Machinery, New York, NY, USA, 30–39. https://doi.org/10.1145/1378773.1378778
[199]
Heli Koskimäki, Pekka Siirtola, and Juha Röning. 2017. MyoGym: introducing an open gym data set for activity recognition collected using myo armband. In Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers (UbiComp ’17). Association for Computing Machinery, New York, NY, USA, 537–546. https://doi.org/10.1145/3123024.3124400
[200]
Yun Li, Xiang Chen, Jianxun Tian, Xu Zhang, Kongqiao Wang, and Jihai Yang. 2010. Automatic recognition of sign language subwords based on portable accelerometer and EMG sensors. In International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI ’10). Association for Computing Machinery, New York, NY, USA, 1–7. https://doi.org/10.1145/1891903.1891926
[201]
Jess McIntosh, Charlie McNeill, Mike Fraser, Frederic Kerber, Markus Löchtefeld, and Antonio Krüger. 2016. EMPress: Practical Hand Gesture Classification with Wrist-Mounted EMG and Pressure Sensing. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). Association for Computing Machinery, New York, NY, USA, 2332–2342. https://doi.org/10.1145/2858036.2858093
[202]
Tobias Mulling and Mithileysh Sathiyanarayanan. 2015. Characteristics of hand gesture navigation: a case study using a wearable device (MYO). In Proceedings of the 2015 British HCI Conference (British HCI ’15). Association for Computing Machinery, New York, NY, USA, 283–284. https://doi.org/10.1145/2783446.2783612
[203]
Ganesh R. Naik, Dinesh Kant Kumar, Vijay Pal Singh, and Marimuthu Palaniswami. 2006. Hand gestures for HCI using ICA of EMG. In Proceedings of the HCSNet workshop on Use of vision in human-computer interaction - Volume 56 (VisHCI ’06). Australian Computer Society, Inc., AUS, 67–72.
[204]
Prajwal Paudyal, Ayan Banerjee, and Sandeep K.S. Gupta. 2016. SCEPTRE: A Pervasive, Non-Invasive, and Programmable Gesture Recognition Technology. In Proceedings of the 21st International Conference on Intelligent User Interfaces (IUI ’16). Association for Computing Machinery, New York, NY, USA, 282–293. https://doi.org/10.1145/2856767.2856794
[205]
Carl Peter Robinson, Baihua Li, Qinggang Meng, and Matthew Pain. 2018. Effectiveness of Surface Electromyography in Pattern Classification for Upper Limb Amputees. In Proceedings of the 2018 International Conference on Artificial Intelligence and Pattern Recognition (AIPR 2018). Association for Computing Machinery, New York, NY, USA, 107–112. https://doi.org/10.1145/3268866.3268889
[206]
Carl Peter Robinson, Baihua Li, Qinggang Meng, and Matthew T.G. Pain. 2017. Pattern Classification of Hand Movements using Time Domain Features of Electromyography. In Proceedings of the 4th International Conference on Movement Computing (MOCO ’17). Association for Computing Machinery, New York, NY, USA, 1–6. https://doi.org/10.1145/3077981.3078031
[207]
T. Scott Saponas, Desney S. Tan, Dan Morris, and Ravin Balakrishnan. 2008. Demonstrating the feasibility of using forearm electromyography for muscle-computer interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’08). Association for Computing Machinery, New York, NY, USA, 515–524. https://doi.org/10.1145/1357054.1357138
[208]
T. Scott Saponas, Desney S. Tan, Dan Morris, Ravin Balakrishnan, Jim Turner, and James A. Landay. 2009. Enabling always-available input with muscle-computer interfaces. In Proceedings of the 22nd annual ACM symposium on User interface software and technology (UIST ’09). Association for Computing Machinery, New York, NY, USA, 167–176. https://doi.org/10.1145/1622176.1622208
[209]
T. Scott Saponas, Desney S. Tan, Dan Morris, Jim Turner, and James A. Landay. 2010. Making muscle-computer interfaces more practical. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’10). Association for Computing Machinery, New York, NY, USA, 851–854. https://doi.org/10.1145/1753326.1753451
[210]
Rajneesh Sharma and Amit Kukker. 2017. Neural Reinforcement Learning based Identifier for Typing Keys using Forearm EMG Signals. In Proceedings of the 9th International Conference on Signal Processing Systems (ICSPS 2017). Association for Computing Machinery, New York, NY, USA, 225–229. https://doi.org/10.1145/3163080.3163117
[211]
Adrian Stoica, Federico Salvioli, and Caitlin Flowers. 2014. Remote control of quadrotor teams, using hand gestures. In Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction (HRI ’14). Association for Computing Machinery, New York, NY, USA, 296–297. https://doi.org/10.1145/2559636.2559853
[212]
Direk Sueaseenak, Thunchanok Uburi, and Paphawarin Tirasuwannarat. 2017. Optimal Placement of Multi-Channels sEMG Electrod for Finger Movement Classification. In Proceedings of the 2017 4th International Conference on Biomedical and Bioinformatics Engineering(ICBBE 2017). Association for Computing Machinery, New York, NY, USA, 78–83. https://doi.org/10.1145/3168776.3168802
[213]
Stefano Tortora, Michele Moro, and Emanuele Menegatti. 2019. Dual-Myo real-time control of a humanoid arm for teleoperation. In Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction(HRI ’19). IEEE Press, Daegu, Republic of Korea, 624–625.
[214]
Hideaki Touyama, Koichi Hirota, and Michitaka Hirose. 2005. Prototype application with electromyogram interface in immersive virtual environment. In Proceedings of the 2005 international conference on Augmented tele-existence(ICAT ’05). Association for Computing Machinery, New York, NY, USA, 271. https://doi.org/10.1145/1152399.1152461
[215]
Ayumu Tsuboi, Mamoru Hirota, Junki Sato, Masayuki Yokoyama, and Masao Yanagisawa. 2017. A proposal for wearable controller device and finger gesture recognition using surface electromyography. In SIGGRAPH Asia 2017 Posters(SA ’17). Association for Computing Machinery, New York, NY, USA, 1–2. https://doi.org/10.1145/3145690.3145731
[216]
Mindaugas Vasiljevas, Rūtenis Turčinas, and Robertas Damaševičius. 2014. Development of EMG-Based Speller. In Proceedings of the XV International Conference on Human Computer Interaction(Interacción ’14). Association for Computing Machinery, New York, NY, USA, 1–5. https://doi.org/10.1145/2662253.2662260
[217]
A. Saleh Zadeh, A. P. Calitz, and J. H. Greyling. 2018. Evaluating a biosensor-based interface to recognize hand-finger gestures using a Myo armband. In Proceedings of the Annual Conference of the South African Institute of Computer Scientists and Information Technologists(SAICSIT ’18). Association for Computing Machinery, New York, NY, USA, 229–238. https://doi.org/10.1145/3278681.3278709
[218]
Mingchuan Zhang, Zuhao Wang, and Guannan Meng. 2022. Intelligent Perception Recognition of Multi-modal EMG Signals Based on Machine Learning. In 2022 2nd International Conference on Bioinformatics and Intelligent Computing(BIC 2022). Association for Computing Machinery, New York, NY, USA, 389–396. https://doi.org/10.1145/3523286.3524576
[219]
Qiwu Zhang and Junru Zhu. 2022. The Application of EMG and Machine Learning in Human Machine Interface. In 2022 2nd International Conference on Bioinformatics and Intelligent Computing(BIC 2022). Association for Computing Machinery, New York, NY, USA, 465–469. https://doi.org/10.1145/3523286.3524588
[220]
Xu Zhang, Xiang Chen, Wen-hui Wang, Ji-hai Yang, Vuokko Lantz, and Kong-qiao Wang. 2009. Hand gesture recognition and virtual game control based on 3D accelerometer and EMG sensors. In Proceedings of the 14th international conference on Intelligent user interfaces(IUI ’09). Association for Computing Machinery, New York, NY, USA, 401–406. https://doi.org/10.1145/1502650.1502708



Published In

CHI ’23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. April 2023, 14911 pages. ISBN 9781450394215. DOI: 10.1145/3544548. This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 19 April 2023

      Author Tags

      1. design framework
      2. dynamic gestures
      3. electromyography
      4. emg
      5. emg control
      6. static contractions

Qualifiers

• Research-article
• Refereed limited

Conference

CHI ’23

Acceptance Rates

Overall acceptance rate: 6,199 of 26,314 submissions, 24%

Article Metrics

• Downloads (last 12 months): 1,400
• Downloads (last 6 weeks): 182

Reflects downloads up to 16 Oct 2024.

Cited By

• (2024) Context-informed incremental learning improves both the performance and resilience of myoelectric control. Journal of NeuroEngineering and Rehabilitation 21:1. DOI: 10.1186/s12984-024-01355-4. Online: 3 May 2024.
• (2024) Enabling Advanced Interactions through Closed-loop Control of Motor Unit Activity After Tetraplegia. Adjunct Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1–3. DOI: 10.1145/3672539.3686325. Online: 13 Oct 2024.
• (2024) A Touch of Gold – Spraying and Electroplating 3D Prints to Create Biocompatible On-Skin Wearables. Adjunct Proceedings of the 26th International Conference on Mobile Human-Computer Interaction, 1–7. DOI: 10.1145/3640471.3680227. Online: 21 Sep 2024.
• (2024) Mitigating the Concurrent Interference of Electrode Shift and Loosening in Myoelectric Pattern Recognition Using Siamese Autoencoder Network. IEEE Transactions on Neural Systems and Rehabilitation Engineering 32, 3388–3398. DOI: 10.1109/TNSRE.2024.3450854. Online: 2024.
• (2024) Discrete Gesture Recognition Using Multimodal PPG, IMU, and Single-Channel EMG Recorded at the Wrist. IEEE Sensors Letters 8:9, 1–4. DOI: 10.1109/LSENS.2024.3447240. Online: Sep 2024.
• (2024) Comparing online wrist and forearm EMG-based control using a rhythm game-inspired evaluation environment. Journal of Neural Engineering. DOI: 10.1088/1741-2552/ad692e. Online: 30 Jul 2024.
• (2024) Understanding the influence of confounding factors in myoelectric control for discrete gesture recognition. Journal of Neural Engineering 21:3, 036015. DOI: 10.1088/1741-2552/ad4915. Online: 17 May 2024.
• (2023) Development of an Integrated Protocol for XR EEG Authentication and BCI Illiteracy Classification Based on 2D CNN. Journal of Broadcast Engineering 28:5, 589–602. DOI: 10.5909/JBE.2023.28.5.589. Online: 30 Sep 2023.
• (2023) Seeing the Wind: An Interactive Mist Interface for Airflow Input. Proceedings of the ACM on Human-Computer Interaction 7:ISS, 398–419. DOI: 10.1145/3626480. Online: 1 Nov 2023.
• (2023) Leveraging Task-Specific Context to Improve Unsupervised Adaptation for Myoelectric Control. 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 4661–4666. DOI: 10.1109/SMC53992.2023.10394393. Online: 1 Oct 2023.
