DOI: 10.1145/3544548.3581037 · CHI Conference Proceedings · Research Article · Open Access

Surface I/O: Creating Devices with Functional Surface Geometry for Haptics and User Input

Published: 19 April 2023

Editorial Notes

A corrigendum was issued for this paper on June 19, 2023. You can download the corrigendum from the supplemental material section of this citation page.

Abstract

Surface I/O is a novel interface approach that functionalizes the exterior surface of devices to provide haptic and touch sensing without dedicated mechanical components. Achieving this requires a unique combination of surface features spanning the macro-scale (5cm ∼ 1mm), meso-scale (1mm ∼ 200μm), and micro-scale (<200μm). This approach simplifies interface creation, allowing designers to iterate on form geometry, haptic feeling, and sensing functionality without the limitations of mechanical mechanisms. We believe this can contribute to the concept of "invisible ubiquitous interactivity at scale", where the simplicity and ease of implementation of the technique allow it to blend with objects around us. While we prototyped our designs using 3D printers and laser cutters, our technique is applicable to mass production methods, including injection molding and stamping, enabling passive goods with new levels of interactivity.
Figure 1:
Figure 1: Surface I/O combines macro-, meso- and micro-scale surface features (A – C) to create passive interface widgets with haptic affordances and touch sensing. This example one-piece plastic widget (D and E), installed on the side of a VR headset, includes a macro-scale "valley" to horizontally constrain the finger (blue), meso-scale detents and rough textures to demarcate the ends (teal), and micro-scale textures to enable input tracking (pink). An inexpensive vibration sensor is attached to the back of the widget and can capture the signal of a swiping finger (F) to enable absolute position tracking.

1 Introduction

As computing devices have become smaller, less expensive, more powerful, and more connected, they have proliferated in our everyday lives, and some would say we have reached the era of ubiquitous computing predicted by Mark Weiser and his colleagues at Xerox PARC over three decades ago [64]. Interacting with our devices, however, has remained largely constrained to two dominant types of interfaces. First are seamless, continuous surfaces, most often flat, with some form of integrated touch sensing (e.g., membrane keypads on microwaves, touchscreens on smartphones). Second are interfaces composed of mechanical inputs, typified by mechanisms like buttons, sliders, and dials. An example of this dichotomy exists in automotive interface design, where flat touchscreen surfaces co-exist with buttons and knobs, sometimes even offering adjustment of the same parameter (e.g., speaker volume or environmental controls). This has led some in the automotive space to rethink how best to implement a given function when a variety of operations need to be performed in a tactile, intuitive manner [12, 30].
Touchscreens offer a seamless, robust, and re-configurable platform for interface designers to tweak, tune, and test, but, being entirely flat, they lack tangibility and rely on visual intuition and feedback to operate. Conversely, buttons, knobs, and other discrete, physical mechanisms offer great tactile affordances and direct interaction methods that have stood the test of time, but they can be difficult to design, implement, and adjust at scale. To achieve high reliability, these moving mechanisms are built by a specialty manufacturer, and then enclosures and other mechanical linkages are tailor-made to fit around these discrete components, protecting them from the environment and defining other aesthetic qualities of the experience (the "look and feel"). All of this adds to the cost and complexity of a given implementation and introduces new reliability concerns, leading some OEMs to remove these components altogether (e.g., the removal of the home button from smartphones, or of most physical controls from the dashboard of the Tesla Model 3).
Instead of the wholesale removal of tactile components from interfaces, we explore an intermediary interface approach which we call Surface I/O. This approach functionalizes the exterior surface of devices, allowing them to offer haptic affordances without dedicated mechanical components, while simultaneously enabling user input tracking. There are no moving parts in Surface I/O, and interfaces can be rapidly prototyped using 3D printing and laser etching techniques. This simplifies interface creation, allowing designers to iterate on form geometry, haptic feeling, and sensing functionality without the limitations of mechanical mechanisms. We believe this can contribute to the concept of "invisible ubiquitous interactivity at scale," an idea put forth in the realm of digital textiles [54], where the simplicity and ease of implementation of an interface technique can allow it to proliferate and blend into objects around us. In this way, we envision Surface I/O not necessarily replacing more traditional touchscreens and buttons, but rather being designed into existing surfaces (perhaps even as part of touchscreens and button surfaces) as a way of imbuing additional functionality with minimal additional cost.
Our system utilizes three scales of surface geometry working together (Figure 1). First are macro-scale surface features, between roughly 5cm and 1mm, which create discernible static deformation on fingertips (e.g., shapes, edges). At our meso-scale — between roughly 1mm and 200μm — repeated features can create textures that can be felt when translating a finger across the surface. Finally, micro-scale features (below 200μm) cannot be readily felt, but create vibrations that can be sensed and used to drive interactive functions. These surface features can be mixed-and-matched, as well as overlaid with one another, in order to create functional objects (Figures 1, 8 and 9). Importantly, once prototyped, these interfaces can be manufactured at scale using methods such as injection molding or stamping. These methods are extremely well understood and cost-effective, and they are already how the enclosure surfaces of most consumer goods are produced today.
In this paper, we detail our explorations of this design space, drawing on prior work in psychophysics, industrial design, and HCI. We developed sets of exemplar designs, which we presented to users in a series of studies to investigate the affordances and aesthetics of our multiscale features. To assess input accuracy, we built and tested a machine learning classifier in a separate evaluation. We also performed a baseline wear test of our micro-scale input features. We conclude the paper with several illustrative example interfaces, demonstrating the potential and feasibility of the approach.

2 Related Work

Surface I/O draws on many different disciplines and fields of practice, including industrial design, manufacturing, mechanical engineering, vibro-acoustics, psychology, and human factors. In this section, however, we chiefly focus on prior work in the HCI and haptics literature.

2.1 Active Haptic Surfaces

Surface displays that are able to programmably change their tactile qualities generally fall into two categories: surface haptic displays [4], which remain nominally flat but couple energy to the user's finger in some way, and shape displays, which physically render different geometry or physical properties to the user [3, 55].
Surface haptic devices transmit haptic information to the user using either vibration or by varying the friction between the user’s finger and the device. Vibrations can be coupled into the surface and designed to converge to a single point [33], or simply be transmitted through the surface [67]. Variable friction devices typically rely on ultrasonic lubrication [44, 66] or electroadhesion [6, 58] to either reduce friction or increase friction dynamically with finger position.
Shape-changing interfaces are much broader in definition and can change a number of tactile surface qualities, such as form, volume, texture, addition/subtraction, and permeability [55]. Notable examples include Project FEELEX [35], Relief [43], and inForm [21], which actuate entire surfaces; Surflex [18], which bends the surface; Sprout I/O [19], which adjusts texture properties; BubbleWrap [5], which adjusts both passive and active haptics using magnets and coils; and pneumatically actuated shape-changing interfaces by Harrison and Hudson [27].

2.2 Passive Haptic Surfaces

Tactile feedback built into the surface and geometry of an object is nothing new to industry. Small, passive tactile affordances pervade physical product design. The knurling on a mechanical dial informs users how to grip it, while subtle home-key "bumps" help users locate their fingers on a keyboard. This is how many products communicate their function to users, a concept discussed in detail by Don Norman in his book on product design [51]. Automotive interfaces have incorporated passive haptic feedback nearly since their inception [37], and the design space for automotive interiors includes a range of buttons, knobs, sliders, and other controls that offer passive haptic feedback [38]. Only recently, as the touchscreen has supplanted many of these passive tactile interfaces, have designers begun to rethink how purely tactile forms may be designed to influence and inform interface functions, for instance, in car interiors [1, 11] or on textile surfaces [2, 47, 48, 52, 54]. As larger passive haptic surfaces can also provide tactile affordances, researchers have investigated non-planar surfaces for touch sensing [7, 50, 54, 56]. While most of these large curvatures fall under object-scale designs, rather than surface-level designs, such non-planar interfaces can provide tactile affordance when multiple fingers explore together. Our work, in contrast, focuses on surface-level designs explorable by a single finger.

2.3 Surface Features for Input Sensing

For passive surfaces to operate as an interface, they must also include a method to sense a user's input. Traditional knobs and buttons did this with switch contacts, while more sophisticated methods, such as textile interfaces [54], commonly use projected-capacitance sensing. Surface I/O, however, uses acoustic sensing of touch interaction with the surface. One example system using acoustic data for touch sensing was Scratch Input [26], which used natural surface textures to enable touch gesture classification. Another sensing interface, Rubbinput [36], relied on the different squeaking sounds fingers make as they stick-slip across wet surfaces. Continuous touch tracking was also showcased by Pham et al. [53] using time-difference-of-arrival techniques. Acoustic Barcodes [29] embedded binary codes for acoustic information transfer. Recently, TriboTouch [57] showed how micron-scale textures from diffraction gratings can be applied to commercial touchscreens to improve touch latency.
Perhaps the most closely related work, Stane [49], built a proof-of-concept handheld tactile controller which sensed user movement against large tactile features by listening to the sounds produced with a contact microphone. Similar to Acoustic Barcodes and Scratch Input, the authors showed that three scratch-based input classes (and one miscellaneous noise class) could be classified using an ML model, though only 75% accuracy was achieved. We use the same general configuration as Stane, using a contact microphone and 3D printed surfaces, but expand the range and diversity of printed tactile and sensing textures. Critically, we directly print meso- and micro-scale sensing textures (described in the next section), which allow acoustic interaction sounds to occur at much higher frequencies than those in Stane. Our textures generate up to 20kHz signals as opposed to the 2kHz limit of Stane. This allows sensing signals that are spectrally separated from other incidental tactile contact sounds.

2.4 Tactile Graphics

Tactile graphics make information more accessible to people who are blind or have low vision through tactile information such as raised lines, textures, shapes, and elevations. The applications of tactile graphics span multiple domains, including educational models [14, 22, 23], picture books [39, 60], appliance overlays [24], and tactile maps [25, 31, 32, 62]. Tactile graphics are most commonly created by embossing a special paper called "swell paper," or by vacuum-forming a plastic sheet over a master with different textures [20]. More recently, 3D printing has been explored to fabricate tactile graphics [20, 31, 32]. One major challenge for tactile graphics is limited information density [13]. Creating tactile graphics that combine rich information without being too cognitively challenging is hard [61]. Tactile spatial acuity also remains a complex topic [15]. While we were not directly influenced by this literature, it is notable that we independently converged on many similar salient tactile structures, such as raised edges, dots, and textures. We believe that this group of "power users" provides proof that much information can be communicated through structured, tactile interaction with surfaces.

3 Surface I/O

Surface I/O is a technique and design approach that incorporates different scales of surface geometry into functional objects. We do not discuss object-scale design, which can be thought of as the gross contours and shape of the object or device itself (e.g., the shape of a computer mouse, TV remote control, or steering wheel), and generally operates at physical scales above 5cm. Instead, Surface I/O focuses on geometry below 5cm. We break the physical scale continuum into three functional regions (Figure 1, D). The first two, macro- and meso- scales, align well with common ways in which humans are spontaneously known to explore objects [41]. We call out the relevant types of exploratory procedures (lateral motion, contour following, function test, and part test) when appropriate. A final micro-scale is used for input sensing. We developed a representative set of 35 stimuli that explored different aspects of these three design scales (Figure 2 and Table 3). We now describe each of these design scales, along with motivating prior work.
Figure 2:
Figure 2: Set of 35 stimuli developed to explore different aspects of macro- and meso-scale surface features, which we used in our later user studies. Each stimulus is centered on an 8 × 8cm panel and 3D-printed. Stimuli with a brown overline were included in Study 1; stimuli with a blue overline were included in Study 2. We include a closeup of our meso-scale stimuli (A – E, common scale).

3.1 Macro-Scale: Shape Features (5cm ∼ 1mm)

Surface features between roughly 5cm and 1mm create discernible static deformation on and around the fingertip as humans probe those features. Such indentations are sensed primarily by SAI, FAI, and SAII tactile mechanoreceptors that respond directly to static and dynamic skin deformation and skin stretch, respectively. The surface shape is a unique property that we sense by moving our fingers around the different contours of an object, integrating this deformation information into a perception of exact shape [42].
The possible set of macro-scale shapes is essentially infinite. As a starting point, we identified several geometric primitives that are well-suited for use in physical interfaces. The first major category we explored was discrete locating features, including bumps, holes, rings, and detents (see examples in Figure 2, F – M). These can be approximately finger-sized, or smaller, creating different tactile affordance effects. For instance, a finger-sized "hole" naturally "captures" a finger that moves into it, whereas a smaller hole feels more like a locating feature, and offers little to no resistance translating on or off of the feature. This allows "function test" type exploration between the finger and the surface, which is common behavior during haptic object recognition.
In a different manner, we can create features that guide fingers along predefined paths, another tactile affordance for movement. This affordance plays off our natural exploratory desire to follow surface contours wherever they lead. For this, we create finger-sized valleys that "capture" the finger once it has "fallen in", but permit lateral movement in the valley (example in Figure 2, N). Another possibility is one or more raised ridges that act like a knife’s edge, or more functionally speaking, railway tracks (Figure 2, O and P). These afford low-friction translation in one axis, but resist movement perpendicular to the rail. The natural "path" afforded by valleys and rails can be straight (Figure 2, N, O and P), circular (Figure 2, Γ, Δ and Θ), spiral (Figure 2, Λ), or any other arbitrary shape. It is also straightforward to create wall-like features that arrest a finger translating across a surface, such as a protruding square (Figure 2, L) and raised edge (Figure 2, O and P, going vertically against the rails). Further, such features can be asymmetric (e.g., sawtooth-like features), so they more strongly resist finger movement in one direction than the other, creating something akin to a one-way effect (example in Figure 2, Q). All of these movement-restrictive features are a type of "part motion" test, where the finger is the part that is moving around the surface.
Finally, we note that macro-scale features can be superimposed to create more complex geometries. This design space is extremely large, but we include some illustrative examples in Figure 2. For example, a rail feature (Figure 2, O) can be replicated to increase the grip force (Figure 2, P), and the rails themselves can have features, such as hills (Figure 2, V and W), which feel much like detents as the finger translates. Also, instead of rails being parallel, they can narrow, which creates a pinching effect on skin that is proportional to the relative linear movement (Figure 2, X). We can create similar geometric compositions with valleys. For example, a valley (basic version in Figure 2, N) can have hills (Figure 2, T), or raised lines (Figure 2, R) acting as detents. We can also taper the width of the valley such that the finger no longer comfortably fits, yet the geometry naturally expresses a direction the finger can move to reduce tactile entropy, creating a basic widget with two resting states (Figure 2, Φ).

3.2 Meso-Scale: Textural Features (1mm ∼ 200µm)

Features between roughly 1mm and 200μm operate more in the textural realm. The features are small enough not to dramatically hinder the movement of the finger, but still offer salient tactile information. Tactile texture is generally separated into two categories, depending on the spatial size [16]. Larger "coarse" texture features (closer to 1mm) are generally mediated by SAI and FAI mechanoreceptors, which preserve spatial information [9]. Conversely, smaller "fine" texture features (closer to 200μm), operate more on the vibrotactile level, mediated by FAI and FAII mechanoreceptors [8], and feel more like subtle vibrations traveling into the finger. The dominant exploratory procedure associated with our meso-scale is lateral scanning.
Similar to macro-scale features, the possible set of meso-scale textures is quite large. For example, surfaces can be so smooth as to be sticky, like glass (Figure 2, A), or smooth and low-friction, like fine velvet (Figure 2, B). Conversely, textures can range from rough, like sandpaper (Figure 2, C), to grippy, creating a velcro-like effect on the skin (Figure 2, D). Instead of a random field of meso-scale geometry (as in the previous four examples), structured fine features can generate guiding effects when sufficient force is applied to the finger (Figure 2, E).

3.3 Micro-Scale: Input Features (<200µm)

Patterned features with pitches below approximately 200μm cannot be discerned with a static finger, and are even difficult to detect when translating the soft tissues of the finger over the surface. Prior work has found that few tactile textures have geometric features below this size [46]. This is because humans must laterally scan a surface to feel the fine texture, and scanning features this small at even moderate scanning speeds leads to vibrations that are outside of the range of human tactile perception (>500Hz) [10]. Even when these ultra-fine textures are detectable (at low scanning speeds), they typically manifest as a more subtle velvet-like texture, and not e.g., the rough, sandpaper-like texture described in the previous section. When fingers are translated across such micro-patterned surfaces, they produce distinctive vibro-acoustic spectra (example spectrogram from a finger swipe in Figure 1). This is caused by the spatial spectra of the skin’s roughness interacting with the spatial spectra of the textured surface, leading to vibrations that are characteristic of the two surfaces’ spatial profiles and the scanning speed [57, 65]. We utilize this signal for sensing touch movement input.
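The scale argument above follows from the simple relation between spatial pitch and the temporal frequency of the induced vibration, f = v / λ. A minimal numerical sketch of this back-of-the-envelope check (the 0.1 m/s scanning speed is an illustrative assumption, not a measured value from our studies):

```python
def vibration_frequency_hz(scan_speed_m_s: float, pitch_m: float) -> float:
    """Fundamental vibration frequency excited when a finger scans a
    periodic texture: f = v / pitch (spatial period of the texture)."""
    return scan_speed_m_s / pitch_m

# A 200 um pitch scanned at a moderate 0.1 m/s already produces 500 Hz,
# roughly the upper edge of human vibrotactile perception.
print(vibration_frequency_hz(0.1, 200e-6))  # 500.0
```

Finer pitches push the fundamental well beyond the tactile range while remaining within the acoustic band a vibration sensor can capture, which is exactly the regime Surface I/O exploits for sensing.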

4 Implementation

In this section, we first document the software workflow and physical production methods we used to explore the Surface I/O design space. However, as a general technique, Surface I/O is not tightly coupled to specific software or hardware. We also document our sensing and machine learning implementation used for detecting user input.

4.1 Software Workflow

We created all of our designs in Cinema 4D, which can export to a range of formats. For repeating surface textures, we use Cinema 4D’s cloner functionality rather than modeling all of the features manually. The result is a monolithic 3D model, which we export as an STL. We use the popular Lychee Slicer and no special preparation is needed to handle our files (just a sufficiently accurate 3D printer).

4.2 Rapid Prototyping

To fabricate our prototype designs, we primarily relied on an ELEGOO Mars 3 3D Printer. This masked stereolithography apparatus (MSLA) printer cost around $250 USD at the time of research. Using a 6.6" 4098 × 2560 mono LCD screen, it can produce details as fine as 35μm (Figure 4, D). Higher-end 3D printers are capable of even finer details. To give us more design flexibility at our micro-scale, we also experimented with a custom laser cutter capable of creating features as small as 20μm (Figure 4, A). We note that it is also possible to create macro- and meso-scale features using laser cutters, but we chose to focus most of our efforts on a 3D printer workflow.

4.3 Mass Production Methods

3D printers and laser cutters worked well for rapidly prototyping our designs, but they are less well-suited to mass production of consumer goods. Fortunately, Surface I/O is compatible with many existing and popular methods of product manufacturing. Chief among these is injection molding — macro-to-micro-scale features could be incorporated into the mold design, such that it is a one-shot process. Such features could be CNC milled, which is most common, or laser etched into the mold walls [17, 59]. There are even injection molding machines on the market that can produce holographic effects, which have feature sizes on the order of a few microns [40, 63]. Stamping methods can also scale from macro-scale shapes (e.g., car parts) down to micro-scale features (e.g., vinyl records, which have grooves 40 ∼ 80μm wide). In fact, stamping is how Blu-ray discs, which have pit sizes of just 150nm, are mass-produced from a master die. In general, methods that apply less pressure to materials (e.g., casting, blow molding) are less likely to emboss ultra-fine details.

4.4 Production Cost

Surface I/O-augmented interfaces can be produced at very low cost, with essentially no additional component cost for consumer goods. Instead of adding components, Surface I/O designs can be incorporated into the existing surfaces of goods using single-shot mass-production methods such as injection molding or stamping, as mentioned in Section 4.3. We note that the costs involved in the production of Surface I/O-augmented devices will mainly be material costs, rather than component costs, as the primary electronic component is simply a small piezoelectric disc. Such components can be very inexpensive, as demonstrated by the recent proliferation of microphones into a variety of voice interfaces.

4.5 Input Sensing

For sensing user interactions, a high-bandwidth vibration sensor must be acoustically coupled to the Surface I/O object. For this, we use a 7mm diameter piezo element (PUI Audio, Inc. AB1070B-LW100-R) affixed to the underside of our stimuli with cyanoacrylate glue. We shielded the piezo in a grounded layer of copper tape. The signal is read differentially by a custom pre-amplifier (identical to that used in TriboTouch [57]), which buffers, integrates, and conditions the signal from the piezo before it is digitized by a ZOOM UAC-2 USB audio interface at 192kHz (Figure 4). We use the PyAudio API to receive and process 2048-sample audio buffers (at 192kHz, this results in a classification frame rate of ∼ 94Hz). If the signal exceeds a background noise threshold, we begin to classify audio frames.
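The frame-based gating described above can be sketched as follows. This is a minimal sketch: the RMS threshold value is a placeholder assumption, and the real system pulls frames from a PyAudio stream rather than a NumPy array.

```python
import numpy as np

SAMPLE_RATE = 192_000   # Hz, matching the ZOOM UAC-2 configuration
FRAME_SIZE = 2048       # samples per buffer -> ~94 classification frames/s

def exceeds_noise_floor(frame: np.ndarray, threshold_rms: float = 0.01) -> bool:
    """Gate the classifier: only process frames whose RMS energy exceeds
    a background-noise threshold (threshold value is illustrative)."""
    rms = float(np.sqrt(np.mean(np.square(frame, dtype=np.float64))))
    return rms > threshold_rms

print(SAMPLE_RATE / FRAME_SIZE)  # 93.75
```

Frames that pass this gate would then be handed to the feature extraction and classification stage.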
For classification, we use sklearn's ExtraTrees implementation (100 estimators with the Gini impurity criterion for splitting). As input features, we compute the FFT of the signal (128 bins), along with the FFT's standard deviation, center of mass, skew, median, percentiles (in 10-percentile steps), number of peaks (scipy.signal.find_peaks; default parameters other than min distance 5, prominence 0.23, plateau size 1, relative height 0.5), mean distance between peaks, and index of the first peak. Additionally, we calculate a 3 × 20 MFCC spectrogram, and on these 20 spectral bands we compute standard deviation, min, max, median, and kurtosis. In total, this yields an input vector of size 246. To help stabilize our classification output, we use a five-frame voting window.
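A simplified sketch of this feature pipeline is shown below. It covers only the FFT-derived features; the MFCC statistics, peak-distance features, and the exact peak-finding parameters are omitted, so this vector is 142-dimensional rather than the full 246 described above.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import skew

def spectral_features(frame: np.ndarray, n_bins: int = 128) -> np.ndarray:
    """Compute a subset of the hand-crafted FFT features used as
    classifier input (simplified; see text for the full feature set)."""
    spectrum = np.abs(np.fft.rfft(frame))[:n_bins]
    idx = np.arange(n_bins)
    center_of_mass = np.sum(idx * spectrum) / (np.sum(spectrum) + 1e-12)
    peaks, _ = find_peaks(spectrum, distance=5)
    summary = [spectrum.std(), center_of_mass, skew(spectrum),
               np.median(spectrum), float(len(peaks))]
    percentiles = np.percentile(spectrum, np.arange(10, 100, 10))
    # 128 spectrum bins + 5 summary statistics + 9 percentiles = 142
    return np.concatenate([spectrum, summary, percentiles])

vec = spectral_features(np.random.default_rng(0).normal(size=2048))
print(vec.shape)  # (142,)
```

Vectors like this would feed sklearn's ExtraTreesClassifier (n_estimators=100, criterion="gini"), with a five-frame majority vote smoothing the per-frame predictions.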

5 Evaluation

We designed a series of studies to investigate a core set of research questions. Our first study focuses on design primitives (shape and texture) and whether they can be discerned by a user, express specific tactile qualities, and offer any intrinsic affordances. Our second study combines primitives into basic widgets, which were presented to participants in thematic sets. This allowed us to study the efficacy of conveying function (e.g., complexity, intuitiveness). Study three quantifies the classification accuracy of four micro-scale features used for input tracking. Finally, study four evaluates the baseline durability of Surface I/O micro-scale features over time and wear. We now describe these four studies in greater detail.

5.1 Study 1: Efficacy of Primitives

To understand the haptic efficacy of the surface primitives we designed, we ran a semi-structured interview study. Specifically, the research questions we had were: 1) Are the surfaces tactilely discernible? 2) Are they tactilely interesting? 3) Can users pick up on basic tactile signifiers? 4) Can Surface I/O features communicate to users how to move or interact with their fingers on the surface?
We created a representative stimuli set containing 17 surface primitives, ranging from macro- to meso-scale (seen in Figure 2). We included 5 meso-scale stimuli (A – E); all were textural, with one conveying directional guidance (E). There were 12 macro-scale samples, of which 8 (F – M) conveyed a point of interest and the rest (N – Q) conveyed directional guidance. Stimuli were presented in a box that prevented participants from seeing them, while allowing participants to comfortably reach inside and explore. An opening in the back of the box allowed the experimenter to place and change stimuli easily.
We recruited 9 participants for Study 1, ranging in age from 19 to 32 years (mean=24.78, sd=3.87); 5 participants were male. Participants were compensated $10 for the 40-60 minute study. All participants had some experience interacting with computer interfaces. Before the study began, participants were asked to wash and dry their hands. They were then seated in front of the box and asked to feel different tactile stimuli, exploring each with the index finger of their dominant hand at varying speeds, pressures, and in all directions. The experimenter first presented all 17 stimuli, one at a time and in random order, for the participants to familiarize themselves with. The experimenter then presented the same set of stimuli, again one at a time in random order, and asked participants to imagine that the particular stimulus was part of a designed interface, such as a car interior, and to consider what the designer might be communicating to them through the stimulus.
Participants were then given a set of open-ended questions (Table 1, Q1 – Q3) asking them to describe their preferences and tactile impressions. Subsequently, participants were asked to rate the textures on eight aspects using a five-point Likert scale (1-5). Four of the Likert-scale questions evaluate tactile form (Q6 – Q9), two evaluate tactile signifiers (Q10 and Q11), and two evaluate the overall tactile impression (Q4 and Q5). Participants were allowed to feel the samples as they answered the questions. We took the average Likert-scale ratings and standard deviations of each stimulus for analysis. For our semi-structured interviews, we analyzed participants' quotes and synthesized key features of each stimulus based on the percentage of participants who mentioned a particular feature. We also regressed tactile pleasantness against the tactile form ratings and computed the R².
Table 1:
Table 1: Questions from Study 1 (Q1 – Q3) and 5-point Likert scale rating questions (Q4 – Q11). Q4 evaluates intensity and Q5 evaluates pleasantness. Q6 – Q9 evaluate specific tactile forms. Q10 – Q11 evaluate tactile signifiers.

5.2 Study 2: Functional Associations

To evaluate the efficacy of widgets built upon Surface I/O primitives in conveying functional affordances, we ran a second user study focusing on functional associations. Our high-level questions were: 1) Can users recognize more complex tactual forms? 2) Can they associate these forms with widget functions? 3) Do they have preferences about complexity?
To study more complex, widget-like examples, we included 18 new stimuli designs. We included some stimuli from our Study 1 for a total of 25 distinct stimuli (Figure 2). These were presented to participants in 18 thematic sets, exploring different design facets (Figure 3). Each set had 2-5 stimuli, and some stimuli appeared in several sets. The presentation order of the sets was randomized, as were the stimuli inside of a given set. The type of design variations included: geometry of locating features, size of locating features, detents vs. hills, effectiveness of detents, geometry of guiding features, macro vs. meso guiding features, number of rails, rail features and variations, linear vs. rotational input, geometry transition, and spiral vs. concentric dial. All stimuli in Study 2 were macro-scale.
We recruited eight participants for Study 2, all of whom had previously taken part in Study 1. The experimental setup was the same as in Study 1, and participants were compensated $10 for the 40-60 minute study. As in Study 1, they were first asked to wash and dry their hands. The participants were then asked to feel different tactile stimuli and to explore them with the index finger of their dominant hand at different speeds, pressures, and in all directions. The experimenter presented the participants with 18 sets of stimuli, one set at a time and in random order, and asked them to familiarize themselves with all the stimuli in the set. The participants were then asked two questions regarding the particular set of stimuli (Q12 and Q13), as seen in Table 2. They then rated each stimulus in the set individually on a five-point Likert scale (1-5) for perceived intuitiveness and complexity (Q14 and Q15). Participants were allowed to explore the stimuli as they answered questions in Study 2. We computed the average Likert-scale ratings and standard deviations. For our semi-structured interviews, we analyzed participants' quotes and synthesized key features of each stimulus based on the percentage of participants who mentioned a particular feature.
Table 2: Questions from Study 2 regarding stimuli sets (Q12 – Q13) and 5-point Likert scale questions evaluating intuitiveness and complexity (Q14 – Q15).
Figure 3: In Study 2, participants were presented with 18 thematic stimuli sets. Here, we document which stimuli appeared in what sets, along with the design parameter of interest.

5.3 Study 3: Input Accuracy

Figure 4: Four example micro-scale textures mounted to an acrylic plate (copper-shielded piezo element is affixed to the center of the underside). For each texture, we provide a microscopic view (common scale) along with an example spectrogram created when swiping the texture (note the distinctive harmonics).
As a proof of concept, we created four example micro-scale textures for input sensing using machine learning classification (Figure 4). No meso- or macro-scale geometry was included in these designs. All samples were 2.5 × 10cm in size, with a surface of repeating ridges of varying pitch (which in turn create distinctive vibro-acoustic spectra, as discussed in Section 2.3). Two (B and D) were created via 3D printing (with pitches of 35μm and 175μm) and two (A and C) were created using a laser cutter (with pitches of 20μm and 80μm). All four stimuli were affixed to a common sheet of 3mm acrylic, with a single piezo sensor located in the middle of the underside (see Figure 4). See Section 4.5 for hardware and machine learning implementation details.
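The sensing principle behind these pitches can be made concrete with a little arithmetic: a periodic ridge grating of pitch p, swiped at velocity v, excites a fundamental vibration near f = v/p, plus harmonics. The sketch below is an illustration only, not the paper's actual pipeline (see its Section 4.5); the sample rate, swipe speed, noise level, and spectrogram-based feature choice are all assumptions, and the "recording" is synthetic.

```python
import numpy as np
from scipy.signal import spectrogram

# Hypothetical parameters, not from the paper.
FS = 44_100            # assumed piezo sampling rate (Hz)
SWIPE_SPEED = 0.1      # assumed typical finger speed (m/s)

def fundamental_hz(pitch_um, speed=SWIPE_SPEED):
    """A grating of a given pitch excites a fundamental at v / pitch."""
    return speed / (pitch_um * 1e-6)

def synth_swipe(pitch_um, dur=0.5, noise=0.05):
    """Toy stand-in for a piezo recording: fundamental plus two harmonics."""
    t = np.arange(int(FS * dur)) / FS
    f0 = fundamental_hz(pitch_um)
    sig = sum((1.0 / k) * np.sin(2 * np.pi * k * f0 * t) for k in (1, 2, 3))
    return sig + noise * np.random.randn(t.size)

def feature_vector(sig):
    """Mean log-magnitude spectrum over time: a simple per-trial feature."""
    _, _, sxx = spectrogram(sig, fs=FS, nperseg=1024)
    return np.log(sxx + 1e-12).mean(axis=1)
```

At a 0.1 m/s swipe, the 175μm pitch would place its fundamental near 571Hz, while the 20μm pitch lands near 5kHz, which is why textures of well-separated pitch produce easily distinguishable spectra.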
We recruited 10 participants for this study (mean age 25.4), which lasted approximately 15 minutes. Participants were instructed to swipe downwards in a single, continuous motion with one finger. To encourage our participants to employ a typical swiping speed and pressure, our script included the prompt: "swipe normally as if you were scrolling on your phone". Note that participants still accelerated and decelerated during their swiping motions, leading to variable velocities across the surfaces. The experimenter requested trials verbally (labeled A through D), in a random order, with each micro-texture being repeated 30 times, for a total of 120 trials collected per participant. The experimenter used a laptop to segment each trial, which was written to disk.

5.4 Study 4: Durability Test

In this study, our goal was to evaluate the baseline resistance to wear of our micro-scale Surface I/O features (our larger meso- and macro-scale features should only be more durable and reliable). We selected seven common product materials that could adopt Surface I/O designs in the future: photopolymer resin (same as our 3D printed samples), polycarbonate (PC), acrylic, polyethylene terephthalate (PET), acetal copolymer (delrin), anodized aluminum, and nylon. The photopolymer resin sample was produced via 3D printing with a texture pitch of 175μm, while the rest of the samples were produced via surface laser etching with a pitch of 200μm.
To simulate user wear, we produced artificial finger probes according to the ASTM F1597-02 standard for testing membrane switches [34]. Each finger probe consists of a metal nail in the center of a soft cylinder (diameter 1.5cm) with a hemispherical tip. The soft part of the finger probe was made with Ecoflex 00-50 and coated with a layer of soft paint (Behr Marquee Ultra Pure White Satin Enamel Paint & Primer) to better mimic the stiffness of the skin. Each finger probe was loaded with a spring to touch the samples at a typical human touching force, and connected to a reciprocating cycle linear actuator (JGB37-520 high-torque DC motor). We connected this actuator to a DC power supply at 3.8V and the actuator moved at approximately 300 rpm when connected to our spring-loaded finger probes.
For each material sample, we recorded the vibroacoustic spectra of finger swipes using piezo sensors, both at the beginning of our experiment and every 40,000 cycles thereafter (1 cycle = 2 swipes). We also photographed surface wear under a microscope at the same intervals. In total, we ran all of our samples to 120,000 cycles (240,000 swipes). We found that the soft primer paint on the finger probes wore down and had to be re-applied at every 40,000-cycle interval. The photopolymer resin and PET samples were tinted with black marker for better surface feature visibility under the microscope.
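One simple way to quantify whether the harmonic signature survives wear (the paper compares spectrograms and microscope photographs visually) is the cosine similarity between window-averaged magnitude spectra of a baseline and a worn-surface recording. This is a hedged sketch under assumed parameters, not the authors' analysis; `mean_spectrum` and `spectral_similarity` are hypothetical helpers.

```python
import numpy as np

def mean_spectrum(sig, nfft=2048):
    """Average magnitude spectrum of a swipe recording (window-averaged FFT)."""
    frames = sig[: len(sig) // nfft * nfft].reshape(-1, nfft)
    return np.abs(np.fft.rfft(frames * np.hanning(nfft), axis=1)).mean(axis=0)

def spectral_similarity(baseline, worn):
    """Cosine similarity between two spectra; values near 1.0 suggest that
    wear has not shifted or erased the harmonic structure."""
    a, b = mean_spectrum(baseline), mean_spectrum(worn)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A recording whose spectrum still matches its pre-wear baseline would score near 1.0, while a surface whose ridges had been abraded away would drift toward the similarity of broadband noise.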

6 Results & Discussion

Figure 5: Average Likert scale ratings for each stimulus presented in Study 1 for questions Q4 – Q11 (see Table 1). Error bars are standard deviation.
Figure 6: Likert scale ratings from Study 1 plotted against one another (A – E). Confusion matrix of ML results from Study 3 (F).

6.1 Study 1: Efficacy of Primitives

6.1.1 Tactile Intensity and Pleasantness.

When plotting average ratings of tactile intensity (Q4) against pleasantness (Q5) for primitives in Study 1, we found that, generally, the more intense a tactile stimulus, the less pleasant participants found it (R2 = 0.558; Figure 6). Of all the specific tactile form metrics (smooth vs. rough (Q6), slippery vs. sticky (Q7), flat vs. bumpy (Q8), and dull vs. sharp (Q9)), pleasantness was most correlated with the sharpness of texture (R2 = 0.595).

6.1.2 Textures (Meso-Scale).

Meso-scale stimuli (A – E; see Figure 2) showed a multitude of interesting tactile associations, summarized in Figure 5. Both stimuli "smooth" (A) and "velvet" (B) were rated as the least intense, yet A was reported smoother, more slippery, flatter, and duller than B. The average ratings were consistent with findings from our semi-structured interviews. For stimulus A, 87.5% of the participants mentioned that it was smooth (see Table 4 in the Appendix). Participants reported that stimulus A: "feels like a panel made out of premium plastic" (P1); "very smooth surface, [...] like a glass" (P3); "it feels like a very clean and clear floor cladding" (P7). 75% of the participants described stimulus A as a basic texture conveying no information. In particular, participants mentioned that A "is building the basic surface for the product" (P1); "conveying nothing important here, [...] like a negative space" (P5); and "not trying to communicate anything, and is a filler surface" (P8). While B was also on the smoother and less intense side, it had more texture than A. Participants tended to associate stimulus B with surfaces that were textured, but still very smooth to move around on (mentioned by 87.5% of the participants): "texture on playing cards" (P1), "wall cladding but with slightly more friction" (P4), "some cloth [...] canvas sofa" (P6), "silk" (P7), "exterior of things like vinyl plastics, [...] on toys" (P8).
Both stimuli "rough" (C) and "grippy" (D) had similar tactile intensity and tactile form ratings (Figure 5). Participants also described both C and D as frictional and rough. C was rough, but the finger could still move across it (mentioned by 75% of the participants) whereas D signified applying pressure to it rather than moving (mentioned by 50% of the participants). Participants felt that C: "does not feel as grippy [...] as the one [D] before, yet not as smooth, very in between" (P1); "like a piece of wood and is very rough [...] and bumpy" (P4); "rough but uniformly rough, texture pattern is uniform and close enough [...] I can definitely move my finger" (P8). On the other hand, for stimulus D, participants reported: "I feel like pressuring it to see how grippy it is and how much resistance is there [...] it feels like sandpaper that does not encourage people to slide" (P1); "it feels like velcro tape [...] it tells me where I should hold" (P2); "I need to apply more force and should not move around" (P3); "it feels really rough and difficult to navigate, and sticky [...] moving creates joint pain [...] it is not supposed to be swiped on very much on" (P5).
All stimuli in the meso-scale category, except for "subtle guide" (E), were reported by most, if not all participants, to have no movement affordance, which was consistent with the finding that E had the highest "no direction vs. strong direction" Likert scale rating (Figure 5). While all the stimuli within the meso-scale category felt more like textures rather than the functional parts of interfaces, E conveyed some directional affordance by creating different tactile feedback upon moving horizontally vs. vertically (mentioned by 87.5% of the participants).

6.1.3 Points of Interest (Macro-Scale).

Almost all participants perceived the stimuli in this category (F – M; see Figure 2) as conveying a point of interest (Table 4 in the Appendix). Among the circular stimuli set (F – K), large and simple shape stimuli (F and H) were more salient and were mentioned to be button-like by most participants (87.5% and 75%, respectively).
Within all the circular stimuli (F – K), larger circular stimuli (F "large hole", H "large bump", and J "large ring") were reported to be more pleasant than their smaller counterparts (G "small hole", I "small bump", and K "small ring"; see Figure 5 and Table 4). Our qualitative findings from semi-structured user interviews suggested that users felt more positive emotions as their entire finger fit into/onto the larger circular stimuli: "It’s (F) keeping my finger in there aesthetically and comfortably" (P1); "I just want to put my finger in the dent (F) as it almost perfectly fits the finger" (P3). Larger circular stimuli also implied directions more strongly and implied more frequent interaction (Table 4). Smaller stimuli, on the other hand, were deemed more subtle, and not as intentional: "this (G) is made without any thought, error in the manufacture" (P1); "the indentation (G) is very small and difficult to understand what is going on [...]; it is a very subtle interaction, if I wasn’t looking at it, I would not be able to tell what’s going on" (P5).
The types of interaction commonly mentioned were "press" and "move around" for all macro-scale point-of-interest stimuli. Participants seemed to associate circular stimuli (F – K) with pressing movement the most, especially for F, H, J, and K (62.5%, 50%, 87.5%, 75% of the participants mentioned, respectively). For ring-shaped and smaller circular stimuli (I – K), participants still associated them with pressing motions, yet more with moving around (62.5%, 62.5%, 25% of participants mentioned, respectively). Specifically, participants mentioned: "(I) feels like the computer stick on ThinkPad, I need to press it in some direction" (P6); "(J) feels like a dial, [...] and I would rotate in clockwise to increase some value and counterclockwise to decrease some value, [...] feels like continuous control" (P6); "(K) I prefer pressing from above and control in a circular motion" (P2).
Stimuli "barrier" (L) and "detent" (M) were also associated with buttons, yet by fewer participants. While some participants still had a preference for moving around on stimulus L, it seemed to be more due to the avoidance of the sharp extrusion: "As I move my finger, I immediately encounter the sharp edge, and I circled my finger around it try not to get my finger cut on the surface" (P5). 50% of the participants mentioned that they found L as a barrier to avoid or stop moving over it: "I need to stop in the middle, [...] some blocking effect" (P6); "it feels abrupt and very sharp, I will be in a lot of pain if I accidentally reach over [...] I don’t think people are supposed to interact with it" (P5). The quotes were consistent with the ratings where L was rated the most intense, the least pleasant, the bumpiest, the sharpest, and the most discrete stimulus of all the macro point of interest stimuli (Figure 5). Stimulus M, on the other hand, was perceived as a more subtle stimulus with a preference to be rubbed horizontally or vertically (62.5%). Several participants mentioned that M felt like the locator bumps on the home row of a keyboard (25%).

6.1.4 Guidance (Macro-Scale).

All of the stimuli in this category (N – Q) had a higher average rating for directionality compared to other stimuli in Study 1 (Figure 5). These stimuli were also on average perceived as more continuous than the macro-scale point of interest stimuli discussed in Section 6.1.3.
Among all the stimuli within this category, pleasantness for "valley" (N) was the most positive, which coincided with findings for macro-scale point-of-interest stimuli — users generally had more positive emotion towards surface features that fit their fingers: "I really like to swipe in the valley because it has the same size with my finger [...] fits right in" (P7). N had the greatest horizontal movement affordance, with 87.5% of the participants mentioning that they felt it was trying to communicate to them to move horizontally. In general, horizontal movement guidance was more commonly perceived than vertical movement guidance in almost all stimuli within this category, according to the percentage of participants who mentioned each (Table 4 in the Appendix), with the exception of "one-way" (Q), which had an asymmetric "one-way" texture. Participants reported that the strong haptic feedback when swiping across these stimuli (vertically) made them confused about which direction to move: "For comfort, I would prefer swiping (P "multiple rails") horizontally, yet for more information, I would swipe vertically" (P5). In general, the stronger the perceived horizontal movement affordance, the more participants associated the stimuli with sliders; the stronger the perceived vertical movement affordance, the more participants associated the stimuli with a division element separating areas of an interface.
For Q, which was designed as a one-way asymmetric stimulus, an equal percentage of users mentioned that the texture was trying to convey movement in horizontal as well as vertical directions (62.5%). 62.5% of the participants also reported that they felt the stimuli to be asymmetric and easier to navigate from left to right.

6.2 Study 2: Functional Associations

6.2.1 Geometry of Locating Features.

We tested stimuli "large hole" (F), "large bump" (H), and "large ring" (J) in this thematic set (Figure 3). All participants were able to distinguish between the stimuli in terms of concaveness and shapes. All participants mentioned that these stimuli felt like buttons for controlling important or frequent functions on a computer interface: "the first (F) and the second (J) texture are either power button or touch ID. [...] (H) is just a multipurpose button" (P1); "they felt like important buttons that the user has to distinguish very easily [...] for important decisions" (P2); "they all could be on/off button for a monitor" (P3). Among all the stimuli, F scored the highest in intuitiveness (mean=4.25, sd=0.89) and had the lowest standard deviation. H was deemed the simplest stimulus with the lowest standard deviation (mean=1.13, sd=0.35).

6.2.2 Size of Locating Features.

In Sets 2 – 4 (see Figure 3), which included this design variation, all participants were able to distinguish the stimuli by size. We found that the larger stimuli (F, H, and J) were rated as more intuitive and less complex than their smaller counterparts (G, I, and K), and as generally more salient. This was consistent with the qualitative data from our semi-structured user interviews. There was more consensus that the bigger stimuli were functional buttons than there was for their smaller counterparts. From our participants’ comments, we found that the bigger sizes were perceived as more accessible and were associated with more frequent actions: "the big one is really accessible, easy to press like a power button; whereas the small one asks the user not to touch it unless the user really mean it, such as the reset button. Feels like something that you have to do it maybe once a year, never even" (P1); "the big one feels like an on/off button for a computer; needs to be touched pretty often [...] the small one would control some minor thing or is used under limited space; like the button at the edge of the monitor to control the brightness of the screen" (P3); "the big one can be for important notifications [...] the small one since I can miss it because it is smaller than the finger. Can be for unimportant notifications" (P6).

6.2.3 Detents vs. Hills.

We tested this design variation on both our valley (R "valley detents" and T "hilly valley") and rails (U "rails detents" and V "hilly rails"; see Figure 3). All participants were able to tell that the only difference within each pair of stimuli was that one had detents and the other a sinusoidal, hilly profile. For Sets 5 and 6, some participants explicitly associated all of the stimuli with slider-type controls. More participants (62.5%) mentioned the slider association for the valley set (R and T) than for the rails set (U and V). This suggests the valley set (R and T) was somewhat more salient, which was supported by its slightly higher mean intuitiveness Likert scale ratings (3.88 and 3.53, respectively) compared to the rails set (U and V; 3.0 and 3.38, respectively; Figure 5). Stimuli with detents (R and U) were perceived more as discrete controls whereas stimuli with hills (T and V) were perceived more as continuous controls (Table 4): "(T) is very smooth and reminds me of a color palette and feels like I can control the color value. The wavy part reminds me of a color palette like how it flows" (P1); "(T) there were wiggly dents that feel smooth and more comfortable. It gives me a sense of flow and direction. Each concave can be a signifier for value, say you can use it to control brightness and this concave signifies 25% of brightness" (P3); "(T) can be a continuous 4-state thing where you drag between states 1 to 4 back and forth" (P8); "both of them (R and T) induce me to drag in the horizontal direction. (T) can control the youtube progress bar, the lower part would induce my finger, and says that I need to see this part in this video. [...] (R) says something like ’adjust integer value like date or year’" (P6).

6.2.4 Effectiveness of Detents.

All of our participants were able to tell the geometrical differences (Figure 5) between the stimuli in Set 7 (Figure 3), reporting that the two stimuli (R "valley detents" and S "valley intense detents") were identical except that S had taller and sharper detents. R, with its lower detent z-height, was perceived as less sharp and as useful for more continuous control (mentioned by 50% of our participants), and as more slider-like (37.5%; Table 5 in the Appendix). S, on the other hand, was described as more discrete by 62.5% of our participants, and as more button-like (62.5%). Interesting quotes on R and S include: "(R) has more subtle bumps [...] if interact as buttons, I would prefer the right one (S), if interact as sliders, I would prefer the left one (R)" (P5). "(S) feels more for mode switching because the stops are too sharp to glide over" (P7); "they can all be 4-state systems. The left one (S) can be more discrete. [...] Can be the pressing one. The right one (R) can be the dragging one because of its subtlety. [...] (S) has sharper and more discrete ridges. (R) has shallow and subtle ridges so you can easily move your finger back and forth but you can feel the ridges" (P8). Both stimuli received the same intuitiveness rating, although the sharper stimulus (S) had a slightly larger mean complexity rating (Table 6 in the Appendix).

6.2.5 Geometry of Guiding Features.

All of our participants were able to tell the geometrical differences between the three stimuli in this set (P "multiple rails", Z "rough strip", and N "valley"; Table 5 in the Appendix). When asked to imagine them as controls for a computer interface, some participants (37.5%) described all of them as swipe-related interaction widgets with a specific direction. Among all stimuli in this set, N had the highest mean intuitiveness rating (mean=3.75, sd=1.28) and the lowest mean complexity rating across participants (mean=1.88, sd=0.64; Table 6). When comparing and contrasting the stimuli open-endedly, participants also explicitly described N as continuous and as having clear directions more often (25%) than the other stimuli in this category (12.5% for P and 12.5% for Z). P, on the other hand, was perceived as a division or barrier by 37.5% of our participants. Specifically, participants found it too sharp for a slider, though with a guiding function: "I understand immediately that it is telling me to move along rails, [...] yet it was sharp" (P5); "it is sharp and not encouraging people to touch on, almost like dividing lines" (P4). Z was perceived more as an ambiguous rough texture than as a widget (mentioned by 62.5% of participants): "I cannot think of what it would control [...] feels like somewhere to grip on" (P2); "it does not feel quite for continuous control, there is low contrast between this area and the smooth part" (P3); "it is not that clear that I should move horizontally, but I can feel a strip of texture" (P6).

6.2.6 Macro vs. Meso Guiding Features.

The geometrical differences in Set 9 were clear to all participants and all mentioned that stimulus "multiple rails" (P) felt more intense and sharper than stimulus "subtle rails" (Y). 62.5% of participants mentioned that P felt more functional and intense, whereas Y felt less functional and more subtle (Table 5 in the Appendix): "(P) says don’t touch or cross the two regions very often, as if they are two countries with a clear border, [...] (Y) feels like a dividing line that I can cross often [...] like friendly countries" (P4); "the most natural interaction is to go left and right, the difference is how much you feel the surface beneath you. (Y) feels more subtle, feels like gliding naturally; (P) feels more prominent, and I would know exactly what I’m doing." (P5).

6.2.7 Number of Rails.

All participants were able to distinguish between "multiple rails" (P) and "single rail stimuli" (O) (Table 5 in the Appendix). Several participants were able to tell exactly the number of rails in P. In general, participants interpreted the interaction guidance in both ways — horizontal and vertical. When interacting vertically and swiping across the rail, they tended to perceive the rail as a divider or as a 2-state switch: "when swiping across, it (O) can be power button to switch on and off" (P2); "(O) can be 2 different buttons where one ridge separates them. You press on two divided regions to turn on and off" (P8). When swiping along the rails horizontally, participants tend to associate them with sliders: "they can control something accurately, like slide bars" (P6); "(P) is a slider that controls sound, [...] like a radio channel controller" (P7). Stimuli P and Q did not seem too different functionally, although 12.5% of participants mentioned that P felt more ambiguous, but conveyed more purpose when translated vertically. In terms of Likert scale ratings, P was rated as slightly more intuitive (mean=2.63, sd=1.60), yet also slightly more complex than O (mean=2.38, sd=0.52) (Table 6).

6.2.8 Rail Features and Variations.

75% of the participants were able to accurately tell the differences between all stimuli in Set 11, except for P ("multiple rails") and X ("narrowed rails"), which they either deemed to be the same or felt that X was somewhat rougher than P, without mentioning the geometrical differences (Table 5 in the Appendix). Only 25% of the participants correctly described all the similarities and differences between stimuli, clearly mentioning that X had a converging set of rails. 87.5% of participants explicitly commented that all the stimuli in this set felt like slider-type controls: "I can imagine them as sliders, [...] maybe something like a midi board for things to slide on and off" (P1); "they control things that are continuous and have a sense of gradient" (P3); "like sliding control scrolling from left to right [...] such as slider you control when you are editing your photo in photoshop" (P3); "everything here induced me to drag horizontally" (P6). V ("hilly rails") was often compared with W ("variable width rails") when participants answered the open-ended compare-and-contrast question (Q12), and 75% mentioned that V was more intense than W. Participants’ comments suggested that: 1) P was the plain, default slider ("(P) feels like a normal slider" (P8); "(P) is a normal slide bar" (P6)), 2) U ("rails detents") and V could be slide bars for discrete state changes or continuous slide bars with haptic indicators ("(V) for state slider where you can slide between the states, similar to (U)" (P8); "(V) reminds me of the video progress bar, if I strongly recommend seeing a specific part, the bumps would indicate where I strongly recommend [...] (U) is the discrete tick slide bar for changing date and years" (P6)), and 3) W could be for more subtle interactions ("it is also a video progress bar, signifies places where I slightly recommend" (P6)).
Stimulus X on the other hand, because its tactile form was not that salient to most participants, felt like "2-state changes" (P8) according to only 12.5% of participants.

6.2.9 Linear vs. Rotational Input.

All participants were able to articulate clearly the linear and rotational differences between stimuli in the same set (Table 5 in the Appendix). In general, participants associated linear stimuli (N "valley", O "single rail", P "multiple rails", and Z "rough strip") more with continuous control sliders: "you can use this (N) to control something with a sense of gradient" (P3); "(O) can be a slider for volume or temperature" (P2); "(O) can choose or adjust the channel on a radio" (P7). Linear stimuli were also mentioned to have end stops to interactions: "it (N) can control continuous value-changing things, but this one has a stop" (P7). Rotational stimuli (Ξ "circular rough strip", Θ "circular rails", Δ "circular rail", and Γ "circular valley") were more associated with dials. While participants still associated rotational stimuli with continuous changes, they associated rotational stimuli more with mode/function changes: "(Ξ) really feels like the thing for the iPod nano, like a dial" (P3); "(Δ) really feels like a control for setting" (P2); "(Θ) can be for selection for colors or turning on/off — they feel pretty cool" (P1); "(Θ) feels like controlling something needs fine control and interaction, like audio control, audio mixer [...] they don’t really have to be continuous, but really fine interaction". In contrast to linear stimuli, which were reported to represent interaction with endpoints, rotational dials were also mentioned to match interactions without an endpoint: "feels like you can change some continuous values infinitely" (P7).

6.2.10 Geometry Transition.

All participants described the abruptness of the transition as the main difference between the two stimuli in Sets 16 and 17 (Table 5 in the Appendix). Most participants explicitly stated that they felt two points of interest with a connector in between, and agreed that the two connected points in Sets 16 and 17 were functionally associated: "(Ψ "rail switch") the bar serves as a connector between buttons and indicates the same category of function for 2 buttons" (P1); "(Ψ) the two buttons feel related. They can be buttons for binary choices" (P4). In both sets, smoothly transitioned stimuli were rated higher in terms of intuitiveness (Φ "smooth valley switch": mean=3.38, sd=1.41; Ω "smooth rail switch": mean=3.63, sd=1.41). The smoothly transitioned stimuli were also more associated with connected, two-state functional control systems: "(Ω) feels more like interface, like I’m choosing between two modes" (P2); "(Ω) feels like a slider in the sense that the rail seems to be guiding my finger; however, it feels like less of a functional slider because the important information is the button here" (P5); "(Ψ) feels like just two states, [...] I cannot slide back and forth and the ridge serves no purpose [...] (Ω) feels more like a two-state system, the left button is for one state, like a dimmer switch" (P8).

6.2.11 Spiral vs. Concentric Dial.

Most participants (75%) could distinguish between spiral (Λ "spiral rails") and concentric (Θ "circular rails") patterns (Table 5 in the Appendix), although this required substantial tactile exploration. Both were perceived by most users (75%) as dials, yet the main reported difference between the two stimuli was that Λ had a clear start and end point, whereas Θ did not: "(Λ) has a start point and an endpoint [...] I will move my finger inside out or from outside to inside" (P2); "I’m more likely to use this one (Λ) to set colors values in photoshop because it has an end — like color RGB values that are from 0-255. [...] I’m going to use Θ to switch color modes in Photoshop" (P7).

6.3 Study 3: Input Accuracy

Using a leave-one-participant-out cross-validation procedure, we found a mean texture classification accuracy of 90.1%. The confusion matrix can be found in Figure 6, F. While there is room to improve accuracy, we believe this initial result demonstrates the feasibility of input sensing. Although the micro-scale textures with pitches of 35μm and 175μm performed best, there was no substantial difference in accuracy among the four geometries.
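Leave-one-participant-out cross-validation holds out every trial from one participant per fold, so the classifier is always tested on an unseen user. The evaluation loop can be sketched with scikit-learn's `LeaveOneGroupOut` splitter; the data below are random placeholders matching the study's shape (10 participants × 4 textures × 30 trials), so the printed accuracy is chance-level and meaningless, and the SVM classifier and 64-dimensional features are assumptions, not the paper's implementation (see its Section 4.5).

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

# Placeholder data: 10 participants x 4 textures x 30 trials, 64-dim features.
# (The study's real features come from vibro-acoustic spectra; these random
# stand-ins will yield ~25% chance accuracy.)
rng = np.random.default_rng(0)
X = rng.normal(size=(10 * 4 * 30, 64))
y = np.tile(np.repeat(np.arange(4), 30), 10)      # texture label A-D per trial
groups = np.repeat(np.arange(10), 4 * 30)         # participant id per trial

# One fold per participant: train on 9 users, test on the held-out user.
scores = cross_val_score(SVC(), X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"mean accuracy over {len(scores)} held-out participants: {scores.mean():.3f}")
```

Reporting the mean of these per-participant scores (as the paper does with its 90.1% figure) reflects how the system would perform for a new user who contributed no training data.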
Figure 7: Photographs taken under a microscope, along with spectrograms of the vibroacoustic signal generated upon finger swipes, for each sample material captured every 40,000 cycles. Our test materials were (A) photopolymer resin, (B) polycarbonate (PC), (C) acrylic, (D) polyethylene terephthalate (PET), (E) acetal copolymer (delrin), (F) anodized aluminum, and (G) nylon. Note that (A) and (D) were tinted for better visibility under the microscope.

6.4 Study 4: Durability Test

Based on photos of the surfaces taken under a microscope (Figure 7), we found no visible signs of abrasion on our micro-scale textures across any of our test materials during the 120,000 cycles (240,000 swipes). We also found that after 120,000 cycles, all materials preserved identifiable harmonic vibroacoustic spectra (spectrograms in Figure 7). In fact, we had to replace the paint coating on our finger probes every 40,000 cycles because the coating was wearing out much faster than our surfaces. While still only an initial experiment, it does suggest that these fine-grained surface features can be reasonably durable.

7 Summary of User Study Findings & Design Recommendations

In this section, we highlight key takeaways and design recommendations, drawing on the results of our two user studies that explore the efficacy of Surface I/O primitives (see Section 6.1) and the functional associations of Surface I/O designs (see Section 6.2). Our key findings and design suggestions are as follows:
We recommend using less intense stimuli for more pleasant Surface I/O interactions (see Section 6.1.1).
Textures (meso-scale) can generate rich tactile associations and affordances on how to interact with a widget (see Section 6.1.2). Structured textures can also convey a desired axis of motion (see Section 6.2.6).
Macro-scale geometries that are roughly the size of fingerpads are effective for conveying points of interest and are functionally associated with button-like interactions (see Section 6.1.3). Such features also rate highly on pleasantness (see Section 6.2.2).
Long, narrow macro-scale geometries (valleys and rails) convey a sense of continuity (see Section 6.1.4) and can be used for guidance or division on surfaces depending on interaction direction (see Section 6.2.5).
To create slider widgets, valleys appear to be slightly more intuitive than rails (see Section 6.2.3 and Section 6.2.5).
Both detent and hill features provide useful feedback for changing values. However, detents are perceived as more discrete and modal, while hills are perceived as more continuous (see Section 6.2.3). We recommend using hill features in sliders/dials for continuous value manipulation (e.g., changing brightness or volume), and detents for modal manipulation (e.g., switching between brightness and volume controls).
The more vertically-exaggerated a detent (which increasingly resists and even precludes finger translation), the more modal it feels (see Section 6.2.4).
Simpler designs tend to be more effective. Users can become confused if too many macro-scale features are employed simultaneously. Only include tactile elements if there is a clear interaction to be achieved (see Section 6.2.7 and Section 6.2.8).
Users associate linear macro-textures with sliders, while circles and spirals are associated with dials (see Section 6.2.9). We recommend using linear sliders for interactions with endpoints, and rotational dials for interactions without endpoints. Spiral dials are useful for interactions needing endpoints, but where physical space or reach of the user (e.g., thumbs) is limited.
Tactile features (such as rails or valleys) connecting two (or more) points of interest are generally interpreted as a single complex interface element, such as a two-state toggle switch (see Section 6.2.10).
Spiral and concentric dials are not readily distinguished by users upon tactile exploration (only after interaction). For this reason, we recommend not placing spiral dials close to concentric dials (see Section 6.2.9).

8 Example Uses

Surface I/O can bring particular value to scenarios where users have a visual impairment or a situational disability in vision, such as when users cannot see the interface on a wearable or are performing tasks that demand visual attention elsewhere. In addition, Surface I/O can simplify the design and manufacture of hard goods that rely on mechanical or basic touch controls, while at the same time unlocking new interactivity on product surfaces, both flat and complexly curved, since the technique can be applied to any part of a good using current mass-manufacturing methods. We do not propose our technique as a replacement, but rather as a method that can be combined with traditional interfaces: for instance, overlaying Surface I/O features on mechanical buttons for haptic affordance and input sensing, just as conventional mechanical buttons can be paired with capacitive sensing, or having Surface I/O features guide users' hands toward a mechanical control of interest.
To illustrate how different Surface I/O elements can be brought together into an integrative design, we created three example interfaces. These not only highlight different use domains, but also underscore different strengths of Surface I/O as a design technique.

8.1 Head-Worn Wearables

Touch interfaces on head-worn wearables — such as headphones, AR glasses, and VR headsets — are particularly challenging as the user cannot see the interface. This means that locating features and other tactile elements are critically important for easy and accurate input. As an example interface, we augmented an Oculus Quest 2 with a linear slider on one of the arms of an Elite Head Strap (Figure 1). We note that this is an injection molded component, and in the future, for essentially zero extra cost, it could include our Surface I/O design.
More specifically, along the length of the slider, we included three line detents to help the user perceive the rate of travel. At both ends of the slider, there is a 5mm rough-textured section, indicating that an extreme value has been reached. To track absolute finger position, we vary the ratio of two different micro-scale textures between each pair of detents (Figure 1, D – F). A low-cost piezo sensor is affixed behind the one-piece, 3D-printed widget; in a commercial implementation, the piezo element could reside inside the headset itself, with the vibro-acoustic signal propagating along the rigid strap. It is even conceivable that the headset's existing internal microphone could pick up the vibro-acoustic signal.
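To give a concrete (and much simplified) flavor of how ratio-based absolute tracking could work, the sketch below compares the vibro-acoustic energy a swipe produces at the characteristic frequencies of the two micro-textures and picks the inter-detent segment whose designed ratio is closest. This is not our actual sensing pipeline; the frequencies, sample rate, and `ratio_table` mapping are all hypothetical values chosen for illustration.

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Signal power at one frequency bin, via the Goertzel algorithm
    (cheap enough for a microcontroller sitting next to the piezo)."""
    n = len(samples)
    k = int(0.5 + n * target_freq / sample_rate)  # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s = x + coeff * s1 - s2
        s2, s1 = s1, s
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def segment_from_ratio(samples, sample_rate, f_a, f_b, ratio_table):
    """Estimate which inter-detent segment a swipe crossed by comparing
    the energy of texture A vs. texture B, then choosing the segment
    whose designed texture ratio is closest to the observed ratio."""
    p_a = goertzel_power(samples, sample_rate, f_a)
    p_b = goertzel_power(samples, sample_rate, f_b)
    observed = p_a / (p_a + p_b + 1e-12)
    return min(ratio_table, key=lambda seg: abs(ratio_table[seg] - observed))
```

For example, with a hypothetical `ratio_table = {0: 0.2, 1: 0.5, 2: 0.8}`, a swipe window dominated by texture A's frequency would resolve to segment 2.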
We envision users utilizing this slider as an alternative to in-air, free-hand navigation of VR interfaces. For instance, users can scroll through menus by sliding back and forth, and tapping the slider to select. To go back or escape, users can run their fingers perpendicular to the slider, pulling over the lip, which is vibro-acoustically distinctive.
Figure 8:
Figure 8: Photo of our Surface I/O-augmented steering wheel (B), with closeup views of a four-way directional control made out of four tear-drop finger wells (A) and a small spiral dial (C) where a thumb could be used to manipulate a continuous value, such as volume or temperature.

8.2 Automobile

There is an ongoing trend to replace mechanical buttons in automobiles with more seamlessly designed smart surfaces, yet these surfaces are often not rich in haptic feedback [11]. Driving is a common example of a task that demands sustained visual attention, which makes good eyes-free interface design critical. As an example in this domain, we created a Surface I/O-augmented steering wheel (Figure 8). On the left, we included a four-way directional control made out of four tear-drop finger wells, each with a different micro-scale texture (Figure 8, A). On the right, we included a small spiral dial (Figure 8, C). These convenient thumb controls could be used, for example, to navigate among a series of modes (cabin temperature control, music player, seat adjustments, mirror position), with values continuously manipulated by circling on the spiral dial. Whether a value is increasing or decreasing is inherently conveyed by the increasing or decreasing diameter of the spiral. Such controls could replace the mechanical controls currently found on steering wheels, helping to reduce cost and improve durability.
Figure 9:
Figure 9: The left three images (A – C) show the top layers of an Amazon Echo. The right two images (D – E) show our single-piece, Surface I/O alternative top plate.
Figure 10:
Figure 10: Spectrogram of finger swipe signals captured using a piezo attached to our Surface I/O surface (A). Spectrogram of music playing in the same environment (B; sound source not directly coupled to the Surface I/O surface).

8.3 Hard Goods

Surface I/O could be incorporated into a wide range of hard goods, from kitchen appliances to toys. One category that could especially benefit from Surface I/O is low-cost consumer electronics relying on mechanical or basic touch controls (but no screen). Surface I/O could further reduce cost by reducing manufacturing complexity and/or eliminating components. The Amazon Echo is a great exemplar to consider: its top side features four mechanical buttons, which are individual, spring-loaded components heat-staked to the top plate. These buttons deflect downward upon being pressed, triggering surface mount tactile switches on a PCB (Figure 9, A – C).
We took inspiration from this device and created an alternative top plate for a smart speaker (Figure 9, D – E). Our one-piece design features a large repeated-rail-style dial for continuous manipulation (chiefly volume, but also potentially to scrub through a podcast or audiobook). We used cylindrical indents across all the rails to provide additional haptic feedback while scrolling. In contrast to our VR headset slider example, where we tracked absolute position, we care only about relative movement on this dial. For this, we place a 1cm micro-texture before each cylindrical indent, which itself produces a distinctive signal. We know the finger is moving clockwise if we see our high-frequency micro-texture before the cylindrical indent, and vice versa for counterclockwise. Our design also features a toggle-like widget, also constructed using rails, with stops on the left and right. We use a different micro-texture on the left, such that if we detect its presence immediately followed by the stop, we know the user swiped left. Using a piezo attached to our Surface I/O input surface, we captured the spectra of multiple swipes on the input surface and the spectra of music playing in the environment. Note that the spectra of finger swipes differ from the spectra of music (Figure 10).
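The clockwise/counterclockwise inference above reduces to a check on event ordering. A minimal sketch is below; the `"texture"` and `"indent"` event labels are hypothetical stand-ins for whatever a vibro-acoustic event detector would emit.

```python
def dial_direction(events):
    """Infer rotation direction from a time-ordered stream of detected
    surface events. The high-frequency micro-texture is laid down just
    before each cylindrical indent in the clockwise direction, so
    texture-then-indent means clockwise, and indent-then-texture means
    counterclockwise. Returns None if no ordered pair is observed."""
    for prev, curr in zip(events, events[1:]):
        if prev == "texture" and curr == "indent":
            return "clockwise"
        if prev == "indent" and curr == "texture":
            return "counterclockwise"
    return None
```

The same ordering idea handles the toggle widget: a left-side micro-texture immediately followed by a stop event indicates a leftward swipe.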

9 Limitations

The design landscape of possible object surface geometries is nearly limitless. Thus, our explorations in Surface I/O can only serve as an entry point. We sought, above all else, to provide exemplary stimuli and implementations to illustrate core concepts and the breadth of options. However, much work remains to more fully investigate the potential of this device design and fabrication approach.
Although the sensing mechanisms of our micro-scale features are primarily designed for swipe-based interactions (such as sliders), tap-based interactions could be made compatible with Surface I/O-augmented objects by drawing on existing tap-sensing approaches [28, 45], which offer a potential implementation path for button-like interactions.
That being said, we do not claim that the Surface I/O technique is a replacement for mechanical controls or traditional flat touchscreens, and we did not explicitly run studies comparing Surface I/O interfaces against traditional mechanical and touch interfaces. Rather, we propose Surface I/O as an additional tool for functionalizing the outermost touch surfaces of devices, decoupling the surface from the underlying structure. For example, the top surfaces of a set of mechanical buttons can be layered with distinct Surface I/O features to provide haptic affordances, turning them into buttons whose textures tell the user which one to press. Just as mechanical controls can be coupled with capacitive sensors to reach new levels of interactivity, traditional interfaces can be integrated with Surface I/O designs for more seamless and expressive interaction.
There are also more immediate limitations in our present work. For instance, we prototyped only four micro-scale textures for input. This small set is sufficient for a proof of concept, but insufficient to support full-featured user interfaces. Even with this small set, our recognition accuracy was 90.1%, perhaps good enough for a proof of concept, but short of the 99%+ input accuracy consumers will demand. That said, the pipeline could be improved, and more advanced methods could be brought to bear to achieve greater accuracy and support larger input sets.
We also note that the long-term robustness of our interfaces is unknown. We know from equivalent micro-patterned technologies, such as analog vinyl records, that ultra-fine patterns can be susceptible to scratches and even dust. Although there was no visible change in surface features and vibro-acoustic harmonic signals of our micro-scale features after our abrasion wear test, it remains to be seen whether damage beyond abrasions will render the interfaces unusable or merely less accurate. Our larger macro- and meso-scale features can also suffer wear, but should be considerably more durable. Deformations at the macro-scale would probably indicate object destruction, and devices with mechanical inputs or a touchpad would likely share a similar fate.
Finally, although we expect our methods to transfer to other fabrication processes, including those used at industrial scale, we do not demonstrate this capability ourselves. This was primarily due to cost; commissioning a single injection mold, for example, costs around $5,000 USD. That said, having consulted experts, we are confident it is indeed possible, and many companies produce tools and methods that operate at the scale of our proposed features.

10 Conclusion

We proposed and explored a user input tracking approach that provides rich haptic affordances without dedicated mechanical components, by superimposing surface features at macro, meso, and micro scales, which we collectively call Surface I/O. Our approach entails the mix-and-match of: macro-scale surface geometries between roughly 5cm and 1mm that create discernible fingertip deformations; meso-scale repetitive features between roughly 1mm and 200μm that instigate textural perception when moving a finger across the surface; and micro-scale features below 200μm that cannot be readily felt, but produce acoustic vibrations that can be sensed and used as input. We also presented a design and manufacturing workflow that frees designers from mechanical mechanisms and allows them to rapidly prototype and iterate on geometrical form, haptic feeling, and sensing functionality.
We developed a representative set of stimuli exploring key facets of Surface I/O. Study 1 validated the efficacy of our surface primitives and synthesized the key features of each stimulus in terms of tactile qualities and intrinsic affordances. We then built basic widgets out of these surface primitives and examined in Study 2 how effectively they conveyed their functions. We compared, contrasted, and characterized the implications of different tactile properties and design variations, and distilled design recommendations for Surface I/O-augmented interfaces. As a proof of concept, we also demonstrated the feasibility and durability of using micro-scale surface features for input sensing through Studies 3 and 4. Finally, we illustrated several potential interfaces integrating different Surface I/O elements, underscoring the potential and feasibility of the approach.
Our approach is unique in that it functionalizes surfaces with haptic affordances and input sensing. Surface I/O offers a design technique for rapidly creating rich haptic input objects without the limitations of mechanical mechanisms. Importantly, interfaces built up from Surface I/O elements can be made without discrete components or processes, which means that these designs can be produced with cost-effective "single-shot" fabrication methods such as injection molding and stamping.

Acknowledgments

We would like to express our deepest appreciation to Karan Ahuja for his invaluable help in creating our machine learning pipeline and associated experiments. We also thank Karan Ahuja and Vimal Mollyn for clarifying the signal processing and featurization pipeline, without whom this work would not have been possible. Finally, we thank our anonymous reviewers for their detailed feedback, which greatly improved this paper.

Appendix

Table 3:
Table 3: Descriptions of the 35 representative stimuli that we designed for Surface I/O.
Table 4:
Table 4: We analyzed participants’ quotes and synthesized key features of each stimulus in Study 1. In this table, we present the synthesized key features based on the percentage of participants who mentioned them.
Table 5:
Table 5: We analyzed participants’ quotes and synthesized key features of each thematic set of stimuli, as well as individual stimuli within each set, in Study 2. In this table, we present the synthesized key features based on the percentage of participants who mentioned them.
Table 6:
Table 6: This table documents the average Likert scale ratings and standard deviation across participants on each stimulus presented in Study 2 for intuitiveness (Q14) and complexity (Q15).

Supplementary Material

a423-ding-corrigendum (a423-ding-corrigendum.pdf)
Corrigendum to "Surface I/O: Creating Devices with Functional Surface Geometry for Haptics and User Input" by Ding et al., Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23).
MP4 File (3544548.3581037-video-preview.mp4)
Video Preview
MP4 File (3544548.3581037-talk-video.mp4)
Pre-recorded Video Presentation
MP4 File (3544548.3581037-video-figure.mp4)
Video Figure

References

[1]
Continental AG. 2018. Continental Automotive. http://www.continental-automotive.com/en-gl/Passenger-Cars/User-Experience/Display-Solutions/Flat-New/3D-touch-Surface-Display
[2]
Roland Aigner, Mira Alida Haberfellner, and Michael Haller. 2022. spaceR: Knitting Ready-Made, Tactile, and Highly Responsive Spacer-Fabric Force Sensors for Continuous Input. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology(UIST ’22). Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3526113.3545694
[3]
Jason Alexander, Anne Roudaut, Jürgen Steimle, Kasper Hornbæk, Miguel Bruns Alonso, Sean Follmer, and Timothy Merritt. 2018. Grand Challenges in Shape-Changing Interface Research. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems(CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3173574.3173873
[4]
Cagatay Basdogan, Frederic Giraud, Vincent Levesque, and Seungmoon Choi. 2020. A Review of Surface Haptics: Enabling Tactile Effects on Touch Surfaces. IEEE Transactions on Haptics 13, 3 (July 2020), 450–470. https://doi.org/10.1109/TOH.2020.2990712 Conference Name: IEEE Transactions on Haptics.
[5]
Olivier Bau, Uros Petrevski, and Wendy Mackay. 2009. BubbleWrap: a textile-based electromagnetic haptic display. In CHI ’09 Extended Abstracts on Human Factors in Computing Systems(CHI EA ’09). Association for Computing Machinery, New York, NY, USA, 3607–3612. https://doi.org/10.1145/1520340.1520542
[6]
Olivier Bau, Ivan Poupyrev, Ali Israr, and Chris Harrison. 2010. TeslaTouch: electrovibration for touch surfaces. In Proceedings of the 23nd annual ACM symposium on User interface software and technology(UIST ’10). Association for Computing Machinery, New York, NY, USA, 283–292. https://doi.org/10.1145/1866029.1866074
[7]
Hrvoje Benko, Andrew D. Wilson, and Ravin Balakrishnan. 2008. Sphere: multi-touch interactions on a spherical display. In Proceedings of the 21st annual ACM symposium on User interface software and technology(UIST ’08). Association for Computing Machinery, New York, NY, USA, 77–86. https://doi.org/10.1145/1449715.1449729
[8]
Sliman Bensmaïa, Mark Hollins, and Jeffrey Yau. 2005. Vibrotactile intensity and frequency information in the Pacinian system: A psychophysical model. Perception & Psychophysics 67, 5 (July 2005), 828–841. https://doi.org/10.3758/BF03193536
[9]
David T. Blake, Steven S. Hsiao, and Kenneth O. Johnson. 1997. Neural Coding Mechanisms in Tactile Pattern Recognition: The Relative Contributions of Slowly and Rapidly Adapting Mechanoreceptors to Perceived Roughness. Journal of Neuroscience 17, 19 (Oct. 1997), 7480–7489. https://doi.org/10.1523/JNEUROSCI.17-19-07480.1997 Publisher: Society for Neuroscience Section: Articles.
[10]
S Bolanowski, George Gescheider, R Verrillo, and C Checkosky. 1988. Four channels mediate the mechanical aspects of touch. The Journal of the Acoustical Society of America 84 (Dec. 1988), 1680–94. https://doi.org/10.1121/1.397184
[11]
Stefan Josef Breitschaft and Claus-Christian Carbon. 2021. Function Follows Form: Using the Aesthetic Association Principle to Enhance Haptic Interface Design. Frontiers in Psychology 12 (2021). https://www.frontiersin.org/articles/10.3389/fpsyg.2021.646986
[12]
Stefan Josef Breitschaft, Stella Clarke, and Claus-Christian Carbon. 2019. A Theoretical Framework of Haptic Processing in Automotive User Interfaces and Its Implications on Design and Engineering. Frontiers in Psychology 10 (2019). https://www.frontiersin.org/articles/10.3389/fpsyg.2019.01470
[13]
Anke M. Brock, Philippe Truillet, Bernard Oriola, Delphine Picard, and Christophe Jouffrais. 2015. Interactivity Improves Usability of Geographic Maps for Visually Impaired People. Human–Computer Interaction 30, 2 (March 2015), 156–194. https://doi.org/10.1080/07370024.2014.924412
[14]
Craig Brown and Amy Hurst. 2012. VizTouch: automatically generated tactile visualizations of coordinate spaces. In Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction(TEI ’12). Association for Computing Machinery, New York, NY, USA, 131–138. https://doi.org/10.1145/2148131.2148160
[15]
Patrick Bruns, Carlos J. Camargo, Humberto Campanella, Jaume Esteve, Hubert R. Dinse, and Brigitte Röder. 2014. Tactile Acuity Charts: A Reliable Measure of Spatial Acuity. PLOS ONE 9, 2 (Feb. 2014), e87384. https://doi.org/10.1371/journal.pone.0087384
[16]
David Arthur Burns, Roberta L. Klatzky, Michael A. Peshkin, and J. Edward Colgate. 2021. Spatial perception of textures depends on length-scale. In 2021 IEEE World Haptics Conference (WHC). 415–420. https://doi.org/10.1109/WHC49131.2021.9517265
[17]
Lightmotif B.V. 2022. Lightmotif — Mold texturing. https://www.lightmotif.nl/mold-texturing
[18]
Marcelo Coelho, Hiroshi Ishii, and Pattie Maes. 2008. Surflex: a programmable surface for the design of tangible interfaces. In CHI ’08 Extended Abstracts on Human Factors in Computing Systems(CHI EA ’08). Association for Computing Machinery, New York, NY, USA, 3429–3434. https://doi.org/10.1145/1358628.1358869
[19]
Marcelo Coelho and Pattie Maes. 2008. Sprout I/O: a texturally rich interface. In Proceedings of the 2nd international conference on Tangible and embedded interaction(TEI ’08). Association for Computing Machinery, New York, NY, USA, 221–222. https://doi.org/10.1145/1347390.1347440
[20]
Julie Ducasse, Anke M. Brock, and Christophe Jouffrais. 2018. Accessible Interactive Maps for Visually Impaired Users. In Mobility of Visually Impaired People: Fundamentals and ICT Assistive Technologies, Edwige Pissaloux and Ramiro Velazquez (Eds.). Springer International Publishing, Cham, 537–584. https://doi.org/10.1007/978-3-319-54446-5_17
[21]
Sean Follmer, Daniel Leithinger, Alex Olwal, Akimitsu Hogge, and Hiroshi Ishii. 2013. inFORM: dynamic physical affordances and constraints through shape and object actuation. In Proceedings of the 26th annual ACM symposium on User interface software and technology(UIST ’13). Association for Computing Machinery, New York, NY, USA, 417–426. https://doi.org/10.1145/2501988.2502032
[22]
Giovanni Fusco and Valerie S. Morash. 2015. The Tactile Graphics Helper: Providing Audio Clarification for Tactile Graphics Using Machine Vision. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility(ASSETS ’15). Association for Computing Machinery, New York, NY, USA, 97–106. https://doi.org/10.1145/2700648.2809868
[23]
Cagatay Goncu, Simone Marinai, and Kim Marriott. 2014. Generation of accessible graphics. In 22nd Mediterranean Conference on Control and Automation. 169–174. https://doi.org/10.1109/MED.2014.6961366
[24]
Anhong Guo, Jeeeun Kim, Xiang ’Anthony’ Chen, Tom Yeh, Scott E. Hudson, Jennifer Mankoff, and Jeffrey P. Bigham. 2017. Facade: Auto-generating Tactile Interfaces to Appliances. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems(CHI ’17). Association for Computing Machinery, New York, NY, USA, 5826–5838. https://doi.org/10.1145/3025453.3025845
[25]
Timo Götzelmann. 2016. LucentMaps: 3D Printed Audiovisual Tactile Maps for Blind and Visually Impaired People. In Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility(ASSETS ’16). Association for Computing Machinery, New York, NY, USA, 81–90. https://doi.org/10.1145/2982142.2982163
[26]
Chris Harrison and Scott E. Hudson. 2008. Scratch input: creating large, inexpensive, unpowered and mobile finger input surfaces. In Proceedings of the 21st annual ACM symposium on User interface software and technology(UIST ’08). Association for Computing Machinery, New York, NY, USA, 205–208. https://doi.org/10.1145/1449715.1449747
[27]
Chris Harrison and Scott E. Hudson. 2009. Providing dynamically changeable physical buttons on a visual display. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, Boston MA USA, 299–308. https://doi.org/10.1145/1518701.1518749
[28]
Chris Harrison, Julia Schwarz, and Scott E. Hudson. 2011. TapSense: enhancing finger interaction on touch surfaces. In Proceedings of the 24th annual ACM symposium on User interface software and technology(UIST ’11). Association for Computing Machinery, New York, NY, USA, 627–636. https://doi.org/10.1145/2047196.2047279
[29]
Chris Harrison, Robert Xiao, and Scott Hudson. 2012. Acoustic barcodes: passive, durable and inexpensive notched identification tags. In Proceedings of the 25th annual ACM symposium on User interface software and technology(UIST ’12). Association for Computing Machinery, New York, NY, USA, 563–568. https://doi.org/10.1145/2380116.2380187
[30]
Stefan Heijboer, Josef Schumann, Erik Tempelman, and Pim Groen. 2019. Physical fights back: introducing a model for bridging analog digital interactions. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings(AutomotiveUI ’19). Association for Computing Machinery, New York, NY, USA, 93–98. https://doi.org/10.1145/3349263.3351510
[31]
Megan Hofmann, Kelly Mack, Jessica Birchfield, Jerry Cao, Autumn G Hughes, Shriya Kurpad, Kathryn J Lum, Emily Warnock, Anat Caspi, Scott E Hudson, and Jennifer Mankoff. 2022. Maptimizer: Using Optimization to Tailor Tactile Maps to Users Needs. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems(CHI ’22). Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3491102.3517436
[32]
Leona Holloway, Kim Marriott, and Matthew Butler. 2018. Accessible Maps for the Blind: Comparing 3D Printed Models with Tactile Graphics. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems(CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3173574.3173772
[33]
Charles Hudin and Sabrina Panëels. 2018. Localisation of Vibrotactile Stimuli with Spatio-Temporal Inverse Filtering. In Haptics: Science, Technology, and Applications(Lecture Notes in Computer Science), Domenico Prattichizzo, Hiroyuki Shinoda, Hong Z. Tan, Emanuele Ruffaldi, and Antonio Frisoli (Eds.). Springer International Publishing, Cham, 338–350. https://doi.org/10.1007/978-3-319-93399-3_30
[34]
ASTM International. 2010. Standard Test Method for Determining the Actuation Force and Contact Force of a Membrane Switch (Withdrawn 2008). https://www.astm.org/f1597-02.html
[35]
Hiroo Iwata, Hiroaki Yano, Fumitaka Nakaizumi, and Ryo Kawamura. 2001. Project FEELEX: adding haptic surface to graphics. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques(SIGGRAPH ’01). Association for Computing Machinery, New York, NY, USA, 469–476. https://doi.org/10.1145/383259.383314
[36]
Ryosuke Kawakatsu and Shigeyuki Hirai. 2018. Rubbinput: An Interaction Technique for Wet Environments Utilizing Squeak Sounds Caused by Finger-Rubbing. In 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). 512–517. https://doi.org/10.1109/PERCOMW.2018.8480335
[37]
Dagmar Kern and Bastian Pfleging. 2013. Supporting interaction through haptic feedback in automotive user interfaces. Interactions 20, 2 (March 2013), 16–21. https://doi.org/10.1145/2427076.2427081
[38]
Dagmar Kern and Albrecht Schmidt. 2009. Design space for driver-based automotive user interfaces. In Proceedings of the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications(AutomotiveUI ’09). Association for Computing Machinery, New York, NY, USA, 3–10. https://doi.org/10.1145/1620509.1620511
[39]
Jeeeun Kim, Hyunjoo Oh, and Tom Yeh. 2015. A Study to Empower Children to Design Movable Tactile Pictures for Children with Visual Impairments. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction(TEI ’15). Association for Computing Machinery, New York, NY, USA, 703–708. https://doi.org/10.1145/2677199.2688815
[40]
KraussMaffei Company Ltd. 2019. Generation of sophisticated surfaces via injection molding and dynamic mold heating. https://www.youtube.com/watch?v=pcmyc_NZ7u0
[41]
Susan J Lederman and Roberta L Klatzky. 1987. Hand movements: A window into haptic object recognition. Cognitive Psychology 19, 3 (July 1987), 342–368. https://doi.org/10.1016/0010-0285(87)90008-9
[42]
S. J. Lederman and R. L. Klatzky. 2009. Haptic perception: A tutorial. Attention, Perception, & Psychophysics 71, 7 (Oct. 2009), 1439–1459. https://doi.org/10.3758/APP.71.7.1439
[43]
Daniel Leithinger and Hiroshi Ishii. 2010. Relief: a scalable actuated shape display. In Proceedings of the fourth international conference on Tangible, embedded, and embodied interaction(TEI ’10). Association for Computing Machinery, New York, NY, USA, 221–222. https://doi.org/10.1145/1709886.1709928
[44]
Vincent Levesque, Louise Oram, Karon MacLean, Andy Cockburn, Nicholas D. Marchuk, Dan Johnson, J. Edward Colgate, and Michael A. Peshkin. 2011. Enhancing physicality in touch interaction with programmable friction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems(CHI ’11). Association for Computing Machinery, New York, NY, USA, 2481–2490. https://doi.org/10.1145/1978942.1979306
[45]
Pedro A. Lopes, Alfredo Ferreira, and J. A. Madeiras Pereira. 2010. Multitouch interactive DJing surface. In Proceedings of the 7th International Conference on Advances in Computer Entertainment Technology(ACE ’10). Association for Computing Machinery, New York, NY, USA, 28–31. https://doi.org/10.1145/1971630.1971639
[46]
Louise R. Manfredi, Hannes P. Saal, Kyler J. Brown, Mark C. Zielinski, John F. Dammann, Vicky S. Polashock, and Sliman J. Bensmaia. 2014. Natural scenes in tactile texture. Journal of Neurophysiology 111, 9 (May 2014), 1792–1802. https://doi.org/10.1152/jn.00680.2013 Publisher: American Physiological Society.
[47]
Sara Mlakar, Mira Alida Haberfellner, Hans-Christian Jetter, and Michael Haller. 2021. Exploring Affordances of Surface Gestures on Textile User Interfaces. In Designing Interactive Systems Conference 2021(DIS ’21). Association for Computing Machinery, New York, NY, USA, 1159–1170. https://doi.org/10.1145/3461778.3462139
[48]
Sara Mlakar and Michael Haller. 2020. Design Investigation of Embroidered Interactive Elements on Non-Wearable Textile Interfaces. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems(CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–10. https://doi.org/10.1145/3313831.3376692
[49]
Roderick Murray-Smith, John Williamson, Stephen Hughes, Torben Quaade, and Steven Strachan. 2008. Rub the stane. In CHI ’08 Extended Abstracts on Human Factors in Computing Systems(CHI EA ’08). Association for Computing Machinery, New York, NY, USA, 2355–2360. https://doi.org/10.1145/1358628.1358683
[50]
Mathias Müller, Anja Knöfel, Thomas Gründer, Ingmar Franke, and Rainer Groh. 2014. FlexiWall: Exploring Layered Data with Elastic Displays. In Proceedings of the Ninth ACM International Conference on Interactive Tabletops and Surfaces(ITS ’14). Association for Computing Machinery, New York, NY, USA, 439–442. https://doi.org/10.1145/2669485.2669529
[51]
Don Norman. 2013. The Design of Everyday Things: Revised and Expanded Edition. Basic Books. Google-Books-ID: nVQPAAAAQBAJ.
[52]
Oliver Nowak, René Schäfer, Anke Brocker, Philipp Wacker, and Jan Borchers. 2022. Shaping Textile Sliders: An Evaluation of Form Factors and Tick Marks for Textile Sliders. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems(CHI ’22). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3491102.3517473
[53]
D. T. Pham, Z. Ji, O. Peyroutet, M. Yang, Z. Wang, and M. Al-kutubi. 2006. Localisation of impacts on solid objects using the Wavelet Transform and Maximum Likelihood Estimation. In Intelligent Production Machines and Systems, D. T. Pham, E. E. Eldukhri, and A. J. Soroka (Eds.). Elsevier Science Ltd, Oxford, 541–547. https://doi.org/10.1016/B978-008045157-2/50095-X
[54]
Ivan Poupyrev, Nan-Wei Gong, Shiho Fukuhara, Mustafa Emre Karagozler, Carsten Schwesig, and Karen E. Robinson. 2016. Project Jacquard: Interactive Digital Textiles at Scale. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems(CHI ’16). Association for Computing Machinery, New York, NY, USA, 4216–4227. https://doi.org/10.1145/2858036.2858176
[55]
Majken K. Rasmussen, Esben W. Pedersen, Marianne G. Petersen, and Kasper Hornbæk. 2012. Shape-changing interfaces: a review of the design space and open research questions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems(CHI ’12). Association for Computing Machinery, New York, NY, USA, 735–744. https://doi.org/10.1145/2207676.2207781
[56]
Anne Roudaut, Henning Pohl, and Patrick Baudisch. 2011. Touch input on curved surfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems(CHI ’11). Association for Computing Machinery, New York, NY, USA, 1011–1020. https://doi.org/10.1145/1978942.1979094
[57]
Craig Shultz, Daehwa Kim, Karan Ahuja, and Chris Harrison. 2022. TriboTouch: Micro-Patterned Surfaces for Low Latency Touchscreens. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems(CHI ’22). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3491102.3502069
[58]
Craig Shultz, Michael Peshkin, and J. Edward Colgate. 2018. The Application of Tactile, Audible, and Ultrasonic Forces to Human Fingertips Using Broadband Electroadhesion. IEEE Transactions on Haptics 11, 2 (April 2018), 279–290. https://doi.org/10.1109/TOH.2018.2793867 Conference Name: IEEE Transactions on Haptics.
[59]
St. Paul Engraving, Inc.2016. Laser Texturing. https://www.youtube.com/watch?v=bB47_QzyuxQ
[60]
Abigale Stangl, Chia-Lo Hsu, and Tom Yeh. 2015. Transcribing Across the Senses: Community Efforts to Create 3D Printable Accessible Tactile Pictures for Young Children with Visual Impairments. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility(ASSETS ’15). Association for Computing Machinery, New York, NY, USA, 127–137. https://doi.org/10.1145/2700648.2809854
[61]
Ryo Suzuki, Abigale Stangl, Mark D. Gross, and Tom Yeh. 2017. FluxMarker: Enhancing Tactile Graphics with Dynamic Tactile Markers. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility(ASSETS ’17). Association for Computing Machinery, New York, NY, USA, 190–199. https://doi.org/10.1145/3132525.3132548
[62]
Brandon T. Taylor, Anind K. Dey, Dan P. Siewiorek, and Asim Smailagic. 2015. TactileMaps.net: A Web Interface for Generating Customized 3D-Printable Tactile Maps. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility(ASSETS ’15). Association for Computing Machinery, New York, NY, USA, 427–428. https://doi.org/10.1145/2700648.2811336
[63]
U-NICA Solutions AG. 2023. intraGRAM®. https://www.u-nica.com/solutions/intragram
[64]
Mark Weiser. 1999. The computer for the 21st century. ACM SIGMOBILE Mobile Computing and Communications Review 3, 3 (July 1999), 3–11. https://doi.org/10.1145/329124.329126
[65]
Michael Wiertlewski, José Lozada, and Vincent Hayward. 2011. The Spatial Spectrum of Tangential Skin Displacement Can Encode Tactual Texture. IEEE Transactions on Robotics 27, 3 (June 2011), 461–472. https://doi.org/10.1109/TRO.2011.2132830 Conference Name: IEEE Transactions on Robotics.
[66]
Laura Winfield, John Glassmire, J. Edward Colgate, and Michael Peshkin. 2007. T-PaD: Tactile Pattern Display through Variable Friction Reduction. In Second Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (WHC’07). 421–426. https://doi.org/10.1109/WHC.2007.105
[67]
Jung-Han Woo and Jeong-Guon Ih. 2015. Vibration rendering on a thin plate with actuator array at the periphery. Journal of Sound and Vibration 349 (Aug. 2015), 150–162. https://doi.org/10.1016/j.jsv.2015.03.031

Cited By

  • (2024) "Demo of FlowRing: Seamless Cross-Surface Interaction via Opto-Acoustic Ring." Adjunct Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1–3. https://doi.org/10.1145/3672539.3686744. Online publication date: 13-Oct-2024.
  • (2024) "MagneSwift: Low-Cost, Interactive Shape Display Leveraging Magnetic Materials." Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–11. https://doi.org/10.1145/3613904.3642058. Online publication date: 11-May-2024.
  • (2024) "Point-Wise Vibration Pattern Production via a Sparse Actuator Array for Surface Tactile Feedback." 2024 IEEE International Conference on Robotics and Automation (ICRA), 9659–9665. https://doi.org/10.1109/ICRA57147.2024.10611362. Online publication date: 13-May-2024.
Published In

CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
April 2023, 14911 pages
ISBN: 9781450394215
DOI: 10.1145/3544548
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 19 April 2023

    Author Tags

    1. design
    2. haptics
    3. input devices
    4. tangible interfaces

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    CHI '23

    Acceptance Rates

    Overall Acceptance Rate 6,199 of 26,314 submissions, 24%


