DOI: 10.1145/3544548.3580691 · CHI Conference Proceedings · Research Article · Open Access

Handheld Tools Unleashed: Mixed-Initiative Physical Sketching with a Robotic Printer

Published: 19 April 2023

Abstract

Personal fabrication has mostly focused on handheld tools as embodied extensions of the user, and machines like laser cutters and 3D printers automating parts of the process without intervention. Although interactive digital fabrication has been explored as a middle ground, existing systems have a fixed allocation of user intervention vs. machine autonomy, limiting flexibility, creativity, and improvisation. We explore a new class of devices that combine the desirable properties of a handheld tool and an autonomous fabrication robot, offering a continuum from manual and assisted to autonomous fabrication, with seamless mode transitions. We exemplify the concept of mixed-initiative physical sketching with a working robotic printer that can be handheld for free-hand sketching, can provide interactive assistance during sketching, or move about for computer-generated sketches. We present interaction techniques to seamlessly transition between modes, and sketching techniques benefitting from these transitions to, e.g., extend (upscale, repeat) or revisit (refine, color) sketches. Our evaluation with seven sketchers illustrates that RoboSketch successfully leverages each mode’s strengths, and that mixed-initiative physical sketching makes computer-supported sketching more flexible.
Figure 1:
Figure 1: We created RoboSketch as an example of a new class of devices that combines the desirable properties of a computer-assisted handheld tool and an autonomous fabrication robot. The robotic high-resolution printer on wheels offers fluent transitions between manual, assisted, and autonomous modes. It assists the user in free-hand sketching (a). When triggered to switch to “autonomous mode” (b), it can roam freely to autonomously extend, complete, or refine the sketch (c). Application areas include fabricating electronic circuits, textile accessories, and woodworking (d).

1 Introduction

Handheld tools, ranging from brushes and sculpting tools to cutter blades, offer the creative and practical maker an undisputed level of directness. Yet, purely manual work practices using such tools can be repetitive and cumbersome, and are often constrained by the user’s manual skills, precision, and physical abilities. With the emergence of ever more sophisticated means of digital fabrication, machines are taking over such tasks. While these machines have proven to be extremely useful to process users’ intents without live intervention, delegating fabrication to the device in this way inhibits the inherently iterative nature of making.
Two streams of research have set out to tackle this problem from different directions: the first has proposed to add digital assistance to manual fabrication practices by augmenting handheld fabrication tools. Examples include hybrid carving [80], computer-assisted sketching [26, 46], 3D modeling [42], augmented airbrushing [55], and hybrid fabrication on the human body [16, 45]. While most of these approaches integrate directly with manual fabrication practices, their assistance suffers from a significant restriction: It is limited to the reach of the human arm, which prevents the device from carrying out fabrication tasks autonomously. It is always the user who has to lead the fabrication task.
The second research direction has investigated means to increase the interactivity of standard digital fabrication machines, for instance, by adding options for real-time design interventions to laser cutters [38] or 3D printers [41]. While these augmented machines can work more independently of the user and benefit from the precision and speed of high-end fabrication tools, they lack the ease and directness of in-situ physical practice with handheld tools.
We set out to integrate these worlds and propose a new class of devices that can be all three: a hand-operated manual tool, a computer-assisted handheld tool, and an autonomous fabrication robot. Such devices can assist the user where needed while in their direct proximity, but they can also be unleashed and roam freely, in order to solve some tasks independently. When done or called back, they return to the user and can again be operated in a manual or assisted mode.
To explore the potential of such “handheld tools unleashed” collaboration between humans and machines in the design and fabrication process, we created RoboSketch: a robotic printer on wheels with a joystick controller for manual sketching, capable of creating large-scale, high-resolution prints. It can be operated completely manually, inspired by a handheld brush (manual mode), but it can also provide interactive assistance during sketching (assisted mode). In addition, it can turn into an autonomous robotic device moving about for computer-generated sketches (autonomous mode). It is capable of operating on many surface materials, such as fabrics, paper, and wood, and with various inks including multi-color, UV, and conductive inks.
In the remainder of this paper, we first introduce the approach of “handheld tools unleashed”: mixed-initiative physical sketching in which humans and machines work together proactively and fruitfully, unleashing the creative and unique benefits of handheld tools and robotic autonomy in concert. We discuss the emerging range of fabrication modes, from manual and assisted to autonomous, and highlight why seamless mode transitions are key in this context.
Next, we present interaction techniques to control such seamless mode transitions that are based on simple interactions well-compatible with sketching. These techniques support user-initiated and robot-initiated transitions between all modes, even while sketching a continuous trace. We also introduce a set of sketching techniques that benefit from these transitions to help the designer extend manual sketches, for instance, by repeating elements or upscaling a design, and to help revisit a sketch, for instance, to refine or color it.
We then contribute a proof-of-concept implementation of a functional robotic device, comprising a high-resolution print head that is capable of operating with a variety of surface materials and inks. It is based on a commercial handheld inkjet printer and a robotic platform equipped with various input controllers and sensors to be context-aware.
Finally, to validate that our approach is technically feasible and useful for physical sketching, and to illustrate that it can be applied in a wide variety of fabrication contexts, we present three application examples: (1) creating electronic circuitry, (2) creating sewing patterns on fabric, and (3) woodworking. In addition, we present our findings from a case study with seven sketchers. It uncovers flexible patterns of use, and illustrates that mixed-initiative physical sketching can make computer-supported sketching more powerful and flexible.
In summary, the main contributions of this paper are:
the concept of a mixed handheld and autonomous device for mixed-initiative human–robot collaborative physical sketching that includes manual, assisted, and autonomous modes;
interaction techniques to seamlessly move between modes and make use of the robot’s autonomous capabilities to extend and revisit a sketch in the making;
RoboSketch, a working prototype of the first computer-assisted robotic printer that supports mixed-initiative physical sketching across all three modes, with capabilities to create error-preventing constraints, and validated to enable dynamic, context-aware sketching at high resolution and large scale.

2 Related Work

Our contribution builds on prior work on interactive fabrication, sketching interfaces, and drawing tools for 2D surfaces.

2.1 Interactive and Bidirectional Fabrication

Digital design and fabrication technologies have revolutionized the way we create and interact with objects. With modern technology, the design process can be done entirely digitally using computer-aided design (CAD) software, while the fabrication process is completed using computer-controlled machines (e.g., 3D printers, inkjet printers, laser cutters). This improves speed and accuracy. However, creative activities often require user engagement during the fabrication process [3, 27]. Inspired by traditional crafting tools, interactive fabrication [72] allows humans to participate throughout both the design and the fabrication process, manipulating the fabricated workpiece in real time. As an example, Constructable [38] enables users to manipulate the workpiece directly with a proxy laser, while a cutting laser creates the results instantly. Further research has explored this concept for various fabrication activities such as creating 3D models [41, 42], fabricating e-textiles [29], directly controlling fabrication machines [15, 35, 59], and creating interfaces around the body [16, 45]. To leverage the advantages of both direct manipulation and automated fabrication systems, researchers have explored mixed-initiative systems that allow machines to act as collaborative partners and contribute to problem solving [18]. In this work, we build on this background and introduce mixed-initiative physical sketching as an instance of mixed-initiative fabrication that supports user-initiated and robot-initiated interaction and interweaves direct control and autonomous sketching.
In particular, our approach takes up the concept of concurrent interactive fabrication, where design and fabrication occur simultaneously. Prior work has realized this using computer-assisted handheld tools. For instance, FreeD [80] proposed a handheld milling tool to shape and carve 3D models with computer-assisted guidance. Augmented Airbrush [55] guides the user in spraying a painting using a computer-controlled airbrush system, dePENd [78] offers support for sketching using pen and paper, and Shaper Origin [49, 62] assists precise 2D cutting. More recently, Print-A-Sketch [46] presented an interactive handheld printer for the physical sketching of electronic interfaces. While these devices support manual and assisted modes of interaction, due to their handheld form factors, they cannot roam autonomously.
Bidirectional fabrication is another form of interactive fabrication that enables iterative manipulation of objects through digital and physical inputs [28, 71]. For instance, ReForm [70] presents a system that fabricates 3D objects based on on-the-fly modification of digital models and updates digital models after the physical deformation of objects. With this paper, we contribute to this vision of bidirectional fabrication by introducing a system that records manually sketched traces and prints digitally modified designs in real-time.
RoboSketch combines the idea of real-time interactivity between humans and machines with the ability to print high-resolution marks and presents the first robotic printer that supports manual, assisted, and autonomous sketching.

2.2 Sketching Interfaces

Sketching is a fundamental part of any design process. It is a quick and easy way to communicate ideas and concepts. It can be used to explore new ideas, create prototypes, and convey design concepts in an incremental and iterative way. Since sketching requires a certain level of skill, a variety of sketching interfaces have been developed to improve the accuracy of sketches, making them more accessible to a wider range of people. Physical sketching practices can be augmented with visual guidance, which for instance can take the form of a projected overlay that adds information to the surface [16, 45, 56]. Another approach is to provide haptic support during sketching to help users create better sketches. For example, dePENd [78] actuates a ballpoint pen by using a permanent magnet to provide directional force feedback. Langerak et al. [30] show how a variable force can be generated using an electromagnet and explore algorithms to minimize tracing errors. Phasking on Paper uses friction-based haptic guides to investigate shared control between user and system during sketching [26]. However, these devices cannot print high-resolution marks and do not support autonomous sketching.
While many of these interfaces center around sketching with pen and paper, a significant number of studies have instead concentrated on supporting users and enhancing their skills within digital environments. SketchPad [61], considered one of the pioneers of Computer-Aided Design (CAD) software, transformed traditional drawing using a display and a light pen. Building upon this, sketching tools such as DesignScript [2], DressCode [23], and Dynamic Brushes [22], made drawing easier for users with a more intuitive interface. Other approaches focused on providing guidance [33], tactile feedback [31], beautifying the strokes [20, 77], or enabling dynamic brushes and strokes [34, 68, 76]. With recent advances in artificial intelligence (AI), collaborative design with an AI agent enables iterative ideation [8, 40] and mixed-initiative content creation [10, 14]. We drew inspiration from these works for the implementation of our design tool and sketching techniques.
RoboSketch expands upon these ideas by linking sketches in the physical and virtual worlds and supporting physical sketching of circuits, textile patterns, and markings for woodworking.

2.3 Drawing Tools for 2D surfaces

Commonly used drawing tools include pen and paper. Previous work has explored various ways to make drawings at large scale and on arbitrary surfaces more autonomous and accessible. Common examples are the use of XY pen plotters [5, 54, 64], hanging V-plotters [6, 9, 39], or robotic arms [63, 75] to automate the drawing process on horizontal and vertical surfaces. Their use of pens or markers limits these devices to printing vector graphics at low speed. In contrast, making use of inkjet heads to replace the marker allows printing raster graphics at high resolution and higher speed [66]. However, with all of these devices, the drawing area is limited to the dimensions of the device as they do not move freely.
To solve this issue, researchers have proposed the use of wheeled robots that can move freely and print on surfaces of any size and shape. Lee et al. [32] introduced one of the earliest examples of sketching robots. Cobbie [36] and DIY Omni Wheel Plotter [37] are other examples of mobile plotters. Sustainabot [50] is a small robot printer that uses everyday materials to create shapes. Kino [24] generates temporary patterns by etching fabrics. There also exist several commercial sketching and printing robots for education [21] and construction [51, 52]. However, these machines usually print predefined designs. They are not designed for interactive fabrication and do not support on-the-fly modification of the design.
RoboSketch builds on a robotic printer on wheels and leverages these advantages to enable mixed-initiative physical sketching in manual, assisted, and autonomous modes.

3 Handheld Tools Unleashed

Figure 2:
Figure 2: a) Transition of control sharing when the user initiates the transition. Examples of user-initiated transitions during a single stroke: b) Sketching a circle in assisted mode, overriding the constraints to add details manually, and then switching back to assisted mode to finish the first half of the sketch, before giving over the control to the robot to mirror the sketch across the vertical axis. c) Drawing a straight line in assisted mode, sketching a heart shape in manual mode, and then giving over the control to the robot to scale the design. Finally, taking over the control to finish the sketch with a straight line in assisted mode.
We envision a new class of handheld devices to expand the scope of collaboration between humans and machines in creative design and fabrication processes, by combining the desirable properties of handheld tools with autonomous fabrication. Expanding upon Horvitz’s notion of mixed-initiative interaction [18], we aim for tools that proactively contribute to the manual fabrication process whenever needed, while allowing the user to continue working in a natural manner, but that can also contribute to the fabrication process completely autonomously if desired. With RoboSketch, we contribute a first and fully functional instantiation of this concept, demonstrating how a robot on wheels with a high-resolution color inkjet printhead can be used as a handheld tool for manual sketching, can support assisted sketching, and can be "unleashed" to act as an intelligent robotic partner for autonomous drawing. RoboSketch addresses two key challenges:
Leveraging human and robotic skill sets. Humans and robots partnering in a design and fabrication process would ideally leverage the unique skill set of each partner: While humans excel at generating creative ideas and can more easily adapt to a dynamic context and unforeseen events, robotic tools are capable of creating precise, high-resolution output and exact replicas at high speed. RoboSketch enables a variety of physical sketching techniques that demonstrate how human and robotic skill sets can complement each other. Using RoboSketch as a handheld tool, the user sketches out their creative vision before "unleashing" the device. RoboSketch is then able to autonomously expand upon the user’s drafts by repeating patterns (e.g., leveraging symmetry), refining drafts (e.g., adding details), or by filling sketched-out regions with color. In addition, RoboSketch can offer to auto-complete the user’s sketches (e.g., completing polygons), or offer creative completion by making use of AI to artistically elaborate on the user’s input. In this way, RoboSketch goes beyond the functionality of existing computer-assisted fabrication tools such as FreeD [80] and Phasking [26]. While those tools allow the user to ‘seize control’ by overriding computer assistance, e.g., with a button press or by applying force, they still require the user’s guiding hand to fabricate. They cannot fabricate autonomously beyond the confines of the user’s reach. In contrast, our proposed approach improves the scalability of the resulting designs and considerably extends the degree to which the machine can act as a co-creator.
Flexibly shifting control back and forth. Mixed-initiative physical sketching requires control shifts across the entire range from manual mode, where the user is in full control, through assisted mode, with various levels of shared control, to autonomous mode, where the robotic tool is fully unleashed and sketches independently. To enable natural and efficient co-creation with both human and robotic tools iteratively contributing to the fabrication process, mode transitions need to be seamless. To this end, we developed a series of simple interaction techniques (Fig. 2) that enable fluent, user-initiated mode transitions across all modes at fabrication time: releasing the handle and giving the robot a gentle push signals the robot to continue in autonomous mode (e.g., for elaborating on a user-created draft). In contrast, the user can solidify their grip (‘Hold Firmly’) to remain in control when in manual or assisted mode, or seize control by grabbing the robot’s handle when in autonomous mode. Robot-initiated control shifts are necessary when the robot encounters contextual or environmental ambiguity and requires human assistance in autonomous mode. Here, the robot stops and blinks. Moreover, in manual or assisted mode, the robot proactively offers to take over control by making context-aware suggestions (e.g., auto-completing a shape) or by enforcing constraints (e.g., to prevent short circuits when sketching electronic traces with conductive ink). In summary, these techniques let the user access the full range from handheld sketching tool to autonomous sketching robot with a single device, and even within a single stroke.
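To make the transition logic concrete, the following sketch encodes these modes and triggers as a small state machine. It is an illustrative abstraction in Python, not code from the RoboSketch implementation; the event names, flags, and fallback decisions are our assumptions.

```python
from enum import Enum, auto

class Mode(Enum):
    MANUAL = auto()
    ASSISTED = auto()
    AUTONOMOUS = auto()

class ModeController:
    """Illustrative state machine for the control transitions described above."""

    def __init__(self):
        self.mode = Mode.MANUAL
        self.hold_firmly = False        # user insists on keeping control
        self.waiting_for_user = False   # robot paused due to ambiguity

    # ---- user-initiated transitions ----
    def release_and_push(self):
        """Letting go of the handle and giving the robot a push unleashes it."""
        self.mode = Mode.AUTONOMOUS
        self.waiting_for_user = False

    def grab_handle(self, assistance_enabled: bool):
        """Grabbing the handle seizes control back from the robot."""
        self.mode = Mode.ASSISTED if assistance_enabled else Mode.MANUAL
        self.waiting_for_user = False

    def set_grip(self, firm: bool):
        """A firm grip ('Hold Firmly') rejects robot-initiated takeovers."""
        self.hold_firmly = firm

    # ---- robot-initiated transitions ----
    def offer_takeover(self, user_accepts: bool):
        """Context-aware suggestion (e.g., auto-complete) in manual/assisted mode."""
        if user_accepts and not self.hold_firmly and self.mode is not Mode.AUTONOMOUS:
            self.mode = Mode.AUTONOMOUS

    def report_ambiguity(self):
        """In autonomous mode the robot stops, blinks, and waits for help."""
        if self.mode is Mode.AUTONOMOUS:
            self.waiting_for_user = True
```

In the actual system, such events would be raised by the capacitive handle sensor, the FSRs on the case, and the camera and distance sensor described in Section 5.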

4 Sketching Techniques

RoboSketch offers a variety of sketching techniques and supporting tools to help designers, makers, and artists sketch out their initial idea, iteratively extend their idea, and revisit the composition to complete details. To enable natural and efficient co-creation by the human and the robot, these techniques fluently integrate manual sketching with computer-assisted handheld fabrication and autonomous fabrication.

4.1 Extending a Sketch

A human sketcher may require help when a design needs to be precise or symmetrical, contains repetitive elements, or when the canvas is large. Partnering with RoboSketch can help sketchers extend their creative vision while still maintaining a high level of precision and control.

4.1.1 Repeating pattern.

Many sketches contain repeating patterns, which can be tedious and time-consuming to realize manually. The Repeat technique combines the expressiveness of manual drawing with support for repetitive sketching. Having selected the Repeat technique on the device’s screen, the user starts by sketching the pattern (Figure 3, Ia) and then, in a seamless movement, pushes the robot in the desired direction. This triggers the Autonomous mode; the robot takes over and continues printing the pattern autonomously and repetitively (see Figure 3, Ib), until the user takes back control by grasping the handle to continue sketching manually, or by holding a hand in front of the robot to stop the repetition at the desired position (Figure 3, Ic).
One of the main principles of design is achieving balance. This can be done by using symmetrical patterns. There are different manual techniques for creating symmetrical drawings. For example, an artist may use tracing paper to trace a sketch and then flip it over to create the mirrored part. We provide assistance for creating repeated designs that are symmetrical around a central point or across an axis. As an example, to draw a precise polygon, the user first activates straight-line assistance (Figure 3, IIa); inspired by [46], this makes the robot cancel out lateral hand jitter. After selecting the Polygon function on the device screen, the user draws the first polygon segment, then defines the number of sides by tapping the robot the corresponding number of times (e.g., five taps for a pentagon, see Figure 3, IIb), and then pushes the robot. The robot then sketches the desired shape autonomously (Figure 3, IIc).
For creating multi-axial symmetries, the user sketches the desired design in manual mode (Figure 3, IIIa) and then taps on the robot (or selects from the displayed menu, see Figure 3, IIIb) to set the number of radial axes across which the sketch is repeated. The user pushes the robot, and the robot finishes the sketch (Figure 3, IIIc).
Figure 3:
Figure 3: Repeating patterns: I) along a path, II) around a central point to create polygons, and III) across an axis to create symmetrical patterns.
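As an illustration of the geometry behind the Polygon function, the sketch below derives the remaining corner points of a regular polygon from the user-drawn first segment and the tapped number of sides. It is a hypothetical Python helper, not the RoboSketch source; coordinates are assumed to lie in the canvas plane.

```python
import math

def polygon_waypoints(p0, p1, n_sides):
    """Given the first, user-drawn edge (p0 -> p1) of a regular polygon and the
    number of sides, return the corner points the robot should visit to close
    the shape (illustrative geometry only)."""
    exterior = 2.0 * math.pi / n_sides           # turn angle at each corner
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]        # direction/length of the first edge
    points = [p1]
    x, y = p1
    for _ in range(n_sides - 2):                 # remaining edges except the closing one
        # rotate the previous edge vector by the exterior angle
        dx, dy = (dx * math.cos(exterior) - dy * math.sin(exterior),
                  dx * math.sin(exterior) + dy * math.cos(exterior))
        x, y = x + dx, y + dy
        points.append((x, y))
    points.append(p0)                            # closing edge returns to the start
    return points

# Example: a pentagon whose first edge was sketched from (0, 0) to (5, 0)
print(polygon_waypoints((0.0, 0.0), (5.0, 0.0), 5))
```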

4.1.2 Auto-completing shapes.

To help users complete the current sketch quickly and precisely, RoboSketch provides an Auto-complete feature. When the user is sketching in manual or assisted mode and the system recognizes the shape being drawn, the prediction is shown on the display. If the prediction is correct and the user wishes to hand over control to the robot, the user simply releases the handle. The robot then autonomously completes the user’s current sketch. Otherwise, the user continues sketching, and the predicted shape disappears or is updated with a new prediction. Our current implementation recognizes basic shapes (e.g., line, circle, square, and triangle) by inspecting the robot’s movement trajectory. In the future, we will extend this feature to predict more complex shapes using a neural network [17, 36].
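The following is a minimal sketch of how such trajectory-based shape prediction could work, deciding between a line and a circular arc by comparing fit residuals. It is our own simplified illustration in Python/NumPy; the actual RoboSketch recognizer and its thresholds are not described at this level of detail.

```python
import numpy as np

def predict_shape(traj):
    """Heuristic auto-complete prediction: classify the movement trajectory so
    far as a straight line or a circular arc by comparing fit residuals."""
    pts = np.asarray(traj, dtype=float)

    # Straight-line fit: residual = distance to the total-least-squares line
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    line_residual = np.abs(centered @ vt[1]).mean()

    # Circle fit (Kasa method): solve for center (a, b) and radius r
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    circle_residual = np.abs(np.hypot(x - a, y - b) - r).mean()

    if line_residual <= circle_residual:
        return "line", vt[0]              # direction to extrapolate along
    return "circle", ((a, b), r)          # center and radius to complete

# Example: a quarter circle of radius 10 should be predicted as "circle"
theta = np.linspace(0, np.pi / 2, 50)
print(predict_shape(np.column_stack([10 * np.cos(theta), 10 * np.sin(theta)]))[0])
```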

4.1.3 Creative completion.

Sketching is a medium for humans to visually express their thoughts, ideas, and emotions, often in an artistic way. On the other hand, recent AI algorithms [53] have proven capable of creating original visuals based on initial text and image input. By combining the advantages of both, humans and machines can co-create content and produce unique and personalized results. Pushing toward the machine end of co-creation, RoboSketch can realize new ideas based on the user’s existing sketches (Figure 4 b). We therefore use a recent implementation of the stable-diffusion model, based on [53], for image-to-image synthesis guided by a text prompt. In our current implementation, the user selects the Creative Completion function and starts sketching. We then regularly query the stable-diffusion model with the user’s current sketch as the initial image and with the text prompt “line art miro style” (100 inference steps, prompt strength 85%). We post-process the resulting image with a standard auto-trace algorithm (with the centerline option) [4] to create the paths for RoboSketch to print. Then, we show the result on the screen. When satisfied, the user pushes the robot to trigger autonomous mode, and the printer prints the AI-created image (Figure 4 c).
Figure 4:
Figure 4: Creative completion: a) sketching the initial idea, b) accepting RoboSketch’s AI-generated suggestion for completion, shown on the display, c) robot sketches an artistic overlay over the existing sketch.
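A hedged sketch of this pipeline is shown below, using the Hugging Face diffusers image-to-image pipeline and the AutoTrace command-line tool as stand-ins. The paper does not specify which Stable Diffusion implementation or checkpoint was used, so the model name, file paths, and CLI flags here are assumptions.

```python
import subprocess
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load an off-the-shelf img2img pipeline (model checkpoint is an assumption).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

# Current state of the user's sketch, captured as a raster image.
sketch = Image.open("current_sketch.png").convert("RGB").resize((512, 512))

# Parameters from the paper: prompt "line art miro style",
# 100 inference steps, prompt strength 85%.
result = pipe(prompt="line art miro style", image=sketch,
              strength=0.85, num_inference_steps=100).images[0]
result.save("completion.png")

# Vectorize the raster output with AutoTrace's centerline mode so the robot
# can follow the resulting paths (flags assumed from AutoTrace's documentation).
subprocess.run(["autotrace", "-centerline", "-output-format", "svg",
                "-output-file", "completion.svg", "completion.png"], check=True)
```

The resulting vector paths would still need to be converted into robot trajectories and printhead strips before printing.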

4.1.4 Routing traces.

Sketching is an incremental and iterative practice. It is important to be able to pause and review a sketch, or return and add more detail. This implies that new traces oftentimes need to connect to existing traces and marks, and need to be precisely aligned. Some examples are closing a shape precisely, connecting elements in flowcharts and diagrams, or sketching conductive traces for electronic circuits. The Routing trace function assists the user in this task.
When a user intends to connect the current trace to a previously printed mark, they manually sketch the trace in the direction of the previous mark and then push the robot while letting go of the handle (Figure 5 a). This triggers the autonomous mode. Now the system uses the built-in camera to monitor the surface and detect visual marks using blob detection. After detecting the position of a printed mark, the robot fine-adjusts its direction so that the printer nozzles are aligned with the mark (Figure 5 b) and keeps printing until it reaches the mark, precisely aligning the trace ending with the existing mark. If the user decides to take over control at any point (for example, to connect the current trace to another printed mark), they can grab the handle and continue sketching in manual mode. If the robot detects multiple marks, it connects to the closest mark by default, unless the user selects a different mark on the display. Optionally, when creating electronic circuits, the system alerts the user on the display when it is getting close to a previously printed trace, to prevent undesired connections. By default, the robot stops printing before reaching the trace and continues after crossing it (Figure 5 c). The user can choose on the display between crossing or routing around the detected trace.
Figure 5:
Figure 5: Routing toward existing mark: a) guiding the robot toward the desired direction, b) the robot detects the mark and connects the trace, c) the robot stops printing before reaching a trace and continues afterward, or can autonomously route around small visual elements.
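As an illustration of the camera-based mark detection, the sketch below uses OpenCV's blob detector to locate a printed mark and turn its image position into a steering correction. The detector parameters, camera field of view, and the choice of the lowest blob as the target are our assumptions, not values from the RoboSketch implementation.

```python
import cv2

# Detector tuned for dark printed marks on a light surface (illustrative values).
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 50
params.filterByColor = True
params.blobColor = 0            # look for dark blobs
detector = cv2.SimpleBlobDetector_create(params)

def heading_correction(frame_bgr, fov_deg=120.0):
    """Return a steering angle (degrees, negative = left) toward the nearest
    detected mark, or None if no mark is visible. Assumes the camera looks
    ahead of the robot and image x maps linearly to the horizontal FOV."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    keypoints = detector.detect(gray)
    if not keypoints:
        return None
    h, w = gray.shape
    # Pick the blob closest to the bottom of the frame, i.e., closest to the robot.
    target = max(keypoints, key=lambda k: k.pt[1])
    offset = (target.pt[0] - w / 2) / (w / 2)   # -1 (far left) .. +1 (far right)
    return offset * fov_deg / 2
```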

4.1.5 Scaling.

During sketching, it is common to scale a design up to its intended size, and digital design tools make such scaling even more flexible. However, scaling can be difficult when the final size is unclear or the canvas is large. RoboSketch enables creating sketches at large scale directly in place on the canvas. For example, the user activates the Scale function, draws a small-scale design in manual mode (see Figure 6 a), and then positions the robot at the desired location on the canvas. Then the user moves the robot from the lower left to the lower right of the desired bounding box to define the scale (Figure 6 b). The user now releases the handle and pushes the robot. The robot switches to autonomous mode and draws the design at the specified scale (Figure 6 c). Scaling down works similarly.
Figure 6:
Figure 6: Scaling a design: a) sketching a small scale design in manual mode, b) defining the desired scale in assisted mode, c) robot sketches the scaled-up design in autonomous mode.
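A minimal sketch of the underlying transformation: the recorded stroke is rescaled so that its bounding box matches the width defined by the user's left-to-right gesture. The function name, units, and the uniform-scaling choice are illustrative assumptions, not the RoboSketch code.

```python
def scale_stroke(stroke, target_width_mm):
    """Scale a recorded stroke (list of (x, y) points, in mm) so that its
    bounding box is target_width_mm wide; aspect ratio is preserved."""
    xs = [p[0] for p in stroke]
    ys = [p[1] for p in stroke]
    x0, y0 = min(xs), min(ys)
    factor = target_width_mm / (max(xs) - x0)
    return [((x - x0) * factor, (y - y0) * factor) for (x, y) in stroke]

# Example: a 20 mm wide zigzag scaled up to a 400 mm wide bounding box
small = [(0, 0), (10, 15), (20, 0)]
print(scale_stroke(small, 400))
```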

4.1.6 Stamping.

Similar to prior work [46], the user can upload a vector graphic in the design tool, and then print the graphic by placing the device on the canvas and manually moving it in the desired direction. Extending beyond such manual stamping, we propose autonomous stamping in two variations: Firstly, the device can stamp a graphic along an existing contour, using line detection. Secondly, it can use stamping to extend an already existing marking with a graphic. To do so, the user places the device somewhere near the end of the existing marking. Using blob detection, the device identifies the marking’s end, moves accordingly, and starts printing the graphic such that it connects to the existing marking. In all cases, the scale of the stamped graphic can be adjusted flexibly, provided it does not get wider than the printhead width.

4.2 Revisiting a Sketch

RoboSketch supports not only the creation of the overall structure, but also the refinement and embellishment of a sketch.

4.2.1 Refining.

While it is fast and expressive to draw the overall structure and design of a sketch with a pen or brush, digital tools (notably, high-resolution printers) tend to be better at realizing detailed patterns and fine embellishments. Following this division of labor, RoboSketch allows users to sketch the overall structure before the device autonomously adds details to the design. For example, the user first sketches in manual mode (Figure 7 a), then selects a desired pattern from the list of patterns on the LCD menu, places the robot on the sketch (Figure 7 b), and pushes it to trigger the Autonomous mode. The robot detects the trace using the built-in camera and prints the selected pattern along the trace (Figure 7 c). At any time, the user can simply grab the handle to take over control and continue sketching. In our prototype, the robot follows a single trace when adding details; in future work, we will consider more complex designs.
Figure 7:
Figure 7: Refining: a) sketching a trace in light color, b) robot revisits and refines the trace with the desired pattern, c) close-up view of the pattern.

4.2.2 Beautification.

Sketching is a natural way to create initial designs in the early stages of the design process. However, it is difficult to create precise shapes, such as circles and right angles, when sketching freehand. Beautification is the process of translating a hand-drawn, imprecise sketch into a regular and geometrically accurate design [67]. Inspired by sketch recognition research [60, 74], we used a $1 unistroke recognizer [73] to detect simple hand-drawn shapes and beautify them. The user first selects the Beautification feature from the display menu. They can then use either the inking or the non-inking mode of RoboSketch to manually sketch the design; the system beautifies the design, and the robot prints a geometrically accurate result on top, with a wider trace and darker color.
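To give an impression of the recognition step, the simplified sketch below resamples and normalizes a stroke in the style of the $1 recognizer [73] and matches it against stored templates by average point-to-point distance. It omits the rotation search and golden-section refinement of the full recognizer and is our illustrative reimplementation, not the one used in RoboSketch.

```python
import math

def resample(points, n=64):
    """Resample a stroke to n equally spaced points (as in the $1 recognizer)."""
    total = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    if total == 0:
        return [points[0]] * n
    step, acc, out = total / (n - 1), 0.0, [points[0]]
    pts, i = list(points), 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= step:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # continue measuring from the inserted point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out

def normalize(points):
    """Translate to the centroid and scale the bounding box to a unit square."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    w = (max(p[0] for p in pts) - min(p[0] for p in pts)) or 1.0
    h = (max(p[1] for p in pts) - min(p[1] for p in pts)) or 1.0
    return [(x / w, y / h) for x, y in pts]

def classify(stroke, templates):
    """templates: dict mapping a shape name to a normalized 64-point stroke.
    Returns the template with the smallest average point-to-point distance."""
    candidate = normalize(resample(stroke))
    def score(tpl):
        return sum(math.dist(a, b) for a, b in zip(candidate, tpl)) / len(tpl)
    return min(templates, key=lambda name: score(templates[name]))
```

The recognized shape would then be re-synthesized as an ideal geometric primitive and printed on top of the freehand stroke.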

4.2.3 Coloring shapes.

After having created an initial line sketch, the sketcher may continue with painting to fill some shapes. RoboSketch supports coloring a shape with different tints and patterns. To do so, the user selects the Painting mode from the display menu. Next, the user places the robot on a desired color or visual pattern; the robot records the pattern in its camera view (Figure 8 a). Then, the user places the robot on the contour of a previously sketched shape, releases the handle, and pushes the robot to trigger autonomous mode (Figure 8 b). The robot then scans the shape’s contour with the built-in camera, calculates a closed polygon if required, and paints the inner region by repeatedly printing the scanned pattern (Figure 8 c). Our current implementation simply juxtaposes the scanned pattern; future implementations could use visual computing techniques to create a seamless pattern.
Figure 8:
Figure 8: Coloring a shape: a) scanning the desired color/pattern, b) placing the robot on the shape’s contour, c) robot fills the shape with the color/pattern.
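The sketch below illustrates one way to realize this fill step with OpenCV: find the closed contour, build a mask, and tile the scanned patch inside it. It assumes grayscale images and is an illustrative stand-in for the actual implementation.

```python
import cv2
import numpy as np

def fill_with_pattern(contour_img, pattern_patch):
    """Tile a scanned pattern patch inside the largest closed contour found in
    contour_img (grayscale scan of the sketched shape). Returns a raster the
    printer would reproduce strip by strip (illustrative only)."""
    _, binary = cv2.threshold(contour_img, 128, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    shape = max(contours, key=cv2.contourArea)

    # Build a mask of the shape's inner region
    mask = np.zeros_like(contour_img)
    cv2.drawContours(mask, [shape], -1, 255, thickness=cv2.FILLED)

    # Tile the pattern over the full canvas, then keep only the masked region
    ph, pw = pattern_patch.shape[:2]
    h, w = contour_img.shape[:2]
    tiled = np.tile(pattern_patch, (h // ph + 1, w // pw + 1))[:h, :w]
    return np.where(mask == 255, tiled, 255)   # white outside the shape
```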

4.3 Supporting Tools

In addition to the sketching techniques for extending and revisiting a sketch, RoboSketch offers several supporting tools to enhance creativity, improve precision, and speed up fabrication:

4.3.1 Dynamic Custom Brushes.

Artists use different brush-movement techniques to smoothly create different effects in a painting: they move the brush faster to create faded color, press the brush onto the canvas to create a wider trace, or choose a different color from the palette. Similarly, RoboSketch supports users in integrating these techniques into their sketching. For example, when the robot is in Autonomous mode, the user can take control for a brief moment by grabbing the handle to dynamically change the brush. Pressing the handle harder prints gradually wider marks (Figure 9 a), moving the robot faster, by pushing the handle forward, fades the colors (Figure 9 b), and pointing the handle at the desired color while the color circle is displayed (see Figure 9 c) changes the color. When satisfied, the user lets the robot continue sketching with the newly defined brush. Similar to other digital painting tools, RoboSketch also supports custom brushes (e.g., serpentines and zigzags).
Figure 9:
Figure 9: Custom brushes: a) pressing the handle creates wider traces, b) moving the device faster fades the color, c) and pointing the handle at the desired color changes the color on the fly.
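A minimal sketch of such a mapping is shown below, with illustrative ranges (a 10-bit FSR reading, the 14.5 mm printhead swath, and a 30 cm/s top speed as anchor points); the actual RoboSketch mapping curves are not specified in the paper.

```python
def brush_parameters(fsr_value, speed_cm_s, max_speed=30.0):
    """Map handle pressure and robot speed to brush width and ink opacity.
    Ranges and curves are illustrative, not the RoboSketch implementation."""
    # Pressing harder (higher FSR reading, 0..1023) widens the trace,
    # up to the 14.5 mm printhead swath.
    width_mm = 1.0 + (fsr_value / 1023.0) * 13.5
    # Moving faster fades the color: opacity drops linearly with speed.
    opacity = max(0.1, 1.0 - speed_cm_s / max_speed)
    return width_mm, opacity

print(brush_parameters(fsr_value=512, speed_cm_s=15.0))  # roughly half width, half fade
```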

4.3.2 Measurement Tool.

To control the robot’s motion, we use two encoders and continuously monitor their data. This data can also be used to measure the length of the traveled path (linear measurement) (Figure 10 a) or to print corners with precise angles (angular measurement) (Figure 10 b). For example, to print marks at a certain distance from each other (e.g., placeholders for screw holes), the user activates the linear Measurement tool, prints the first mark in manual mode, and then moves the robot while observing the traveled distance on the display, before printing the second mark at the desired position. To draw a corner with a precise angle, the user can grab the handle at any point, activate the angular Measurement tool, rotate the handle to define the desired angle, and then push the robot to continue drawing.
Figure 10:
Figure 10: RoboSketch as a tool for a) linear measurement, b) angular measurement, c) and drawing guidelines.
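The measurement tools boil down to standard differential-drive odometry over the encoder counts. The sketch below shows the arithmetic; only the 12 CPR encoders and the 250:1 gear ratio are taken from Section 5, while the wheel diameter and wheelbase are placeholder values.

```python
import math

# 12 CPR encoders on the motor shaft with a 250:1 gearbox give 3000 counts per
# wheel revolution; wheel diameter and wheelbase below are placeholders.
COUNTS_PER_REV = 12 * 250
WHEEL_DIAMETER_MM = 32.0
WHEELBASE_MM = 120.0
MM_PER_COUNT = math.pi * WHEEL_DIAMETER_MM / COUNTS_PER_REV

def odometry_step(left_counts, right_counts):
    """Convert encoder count deltas into distance traveled (mm) and change in
    heading (degrees) - the basis of the linear and angular measurement tools."""
    d_left = left_counts * MM_PER_COUNT
    d_right = right_counts * MM_PER_COUNT
    distance = (d_left + d_right) / 2.0
    d_heading = math.degrees((d_right - d_left) / WHEELBASE_MM)
    return distance, d_heading

# Example: both wheels advancing 1500 counts moves the robot straight ahead
print(odometry_step(1500, 1500))
```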

4.3.3 Guidelines.

Drawing guidelines offer valuable assistance for creating accurate and proportional sketches, provide guidance for outlining the design, and ensure that drawings are symmetrical and evenly balanced. RoboSketch supports designers in creating accurate guides by providing basic shapes (e.g., line, circle, polygon) and radial symmetry. As an example, the user can create a circular guide by selecting the circle from the display menu, specifying the center point and radius with a stroke in manual mode, and then releasing the handle. The robot will then complete the task in Autonomous mode (Figure 10 c). By printing guides with UV ink, which becomes visible under UV light while sketching, the guides remain invisible in natural light. Alternatively, guidelines can be printed with a very fine width and light color. In the future, advances in ink technology may make it possible to erase printed traces or print sketches that fade after a while.

5 Implementation

We now present the proof-of-concept implementation of RoboSketch. We first discuss the hardware system, then the user interface for controlling RoboSketch, and finally the implementation of interactions.
Figure 11:
Figure 11: RoboSketch includes a mobile robot, a handheld printer, and sensors for sensing and tracking. A handle on top enables physical sketching with the device.

5.1 Hardware System

Robotic Base. The main components of RoboSketch are shown in Figure 11. Two micro metal gear motors (HP 6 V, 250:1) [43], controlled by two DRV8838 motor drivers, move the robot in differential-drive mode. The motors are equipped with magnetic encoders (12 CPR) [44], used to measure the distance traveled by the robot; drivers and encoders are connected to an ATmega32U4 AVR microcontroller. The device moves at a maximum speed of 31 cm/s. The body consists of a laser-cut MDF case and measures 164 x 191 x 60 mm. Four AAA batteries power the robot and provide about 8 hours of operation without recharging.
Sensors. An ultrasonic distance sensor (HC-SR04), tethered to the microcontroller, is used for detecting obstacles. A wide-angle RGB camera (OV5640) mounted on a stand is connected to a Raspberry Pi 4B. With an embedded Linux operating system and the use of OpenCV’s blob detection feature, it monitors the robot’s surroundings, detects previous marks, and provides a real-time video feed for debugging.
Printer & inks. RoboSketch contains a color handheld printer for high-resolution prints. We used the COLOP e-mark [7], a commercial handheld thermal inkjet printer with a very compact form factor (111 x 76 x 72 mm). It is lightweight (225 g) and able to print on diverse absorbent surfaces (e.g., paper, cardboard, cork, textiles, and wood). With its 14.5 mm wide printhead, it allows for high-resolution prints (600 dpi) at a maximum printing speed of about 30 cm/s. The selected handheld printer allows changing and refilling the printer cartridge with various inks. Commercially available replacement cartridges comprise tricolor, black pigment, and UV ink. In addition, we have successfully printed conductive silver ink, in line with prior work that used inkjet heads for printing conductors [25, 46].

5.2 Software Implementation

To enable a rapid and convenient workflow, users are provided with a two-part user interface. The touch-screen user interface (Figure 12 a), embedded on the robot and implemented in Processing, facilitates direct and immediate interaction with RoboSketch. The user can trigger most functionality directly on the robot (e.g., selecting primitives, changing the brush pattern and color). Moreover, the display provides real-time assistance and shows the position of the robot relative to the traversed path. We used a 3.5-inch Raspberry Pi LCD [69], inserted directly into the Raspberry Pi board (Figure 11).
In addition, we implemented a backend interface in Processing, running on a standard laptop (Intel Core i7-6700HQ CPU, 4 cores at 2.60 GHz) with Windows 10 (Figure 12 b). The backend interface allows debugging of the system and establishes a link between all components: it communicates with the ATmega microcontroller via a Bluetooth connection to receive sensor data and control the motors, and uses Wi-Fi to communicate with the inkjet printer and the Raspberry Pi.
Figure 12:
Figure 12: RoboSketch user interface: a) the user interacts with the device using the embedded LCD, b) the backend interface links all components and provides additional functionality (e.g., uploading a new design).
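For illustration, the snippet below shows how such a backend link could look from the host side using a Python serial connection over the Bluetooth port. The actual backend is written in Processing, and the port name and the line-based message format here are purely hypothetical assumptions.

```python
import serial  # pyserial; the actual backend is implemented in Processing

# The Bluetooth link to the ATmega32U4 appears as a serial port; the port name
# and the message format below are assumptions for illustration only.
link = serial.Serial("/dev/rfcomm0", 115200, timeout=0.05)

def set_wheel_speeds(left, right):
    """Send a (hypothetical) motor command; values in -255..255, sign = direction."""
    link.write(f"M {left} {right}\n".encode())

def read_sensors():
    """Parse one (hypothetical) line of sensor data: encoder counts, joystick axes, FSRs."""
    line = link.readline().decode(errors="ignore").strip()
    if not line.startswith("S"):
        return None
    _, *values = line.split()
    return [int(v) for v in values]
```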

5.3 Implementation of Interactions

RoboSketch enables physical sketching and direct manipulation of the robot using a handle. For this purpose, we used a dual-axis analog joystick module with a push button [1], connected to the base microcontroller (Figure 11). To facilitate interaction, we 3D printed a brush-like handle out of PLA and replaced the original joystick knob with it. We use relative mapping: the farther the handle is pushed, the faster the robot moves. A small force-sensitive resistor (FSR) [57], placed between the tip of the handle and the push button, lets the device sense different levels of pressure on the handle, giving the user more flexibility when interacting with the robot. A capacitive sensor on the tip of the handle, made of copper tape, detects the presence of the hand. For detecting tap and push gestures, two square FSR sensors [58] are placed on the top and back of the robot (Figure 11) and connected to the base microcontroller.
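The relative mapping can be pictured as a standard "arcade drive" mix from joystick deflection to the two wheel speeds; the sketch below is our illustration of that idea, not the exact RoboSketch mapping.

```python
def joystick_to_wheels(x, y, max_pwm=255):
    """Map joystick deflection (x, y in -1..1) to left/right wheel PWM values:
    pushing the handle further makes the robot faster, tilting it sideways
    makes it turn (illustrative arcade-drive mix)."""
    left = (y + x) * max_pwm
    right = (y - x) * max_pwm
    # Keep the ratio between wheels if either value saturates
    m = max(abs(left), abs(right), max_pwm)
    scale = max_pwm / m
    return int(left * scale), int(right * scale)

print(joystick_to_wheels(0.0, 1.0))   # full forward: (255, 255)
print(joystick_to_wheels(0.5, 0.5))   # gentle right turn: (255, 0)
```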
To support controlling the robot from a distance, the user can use a stylus and digitizer tablet [65] or a gaming controller [12] that communicates wirelessly (2.4 GHz) with the backend interface to control the robot remotely.

6 Applications and Case Study

To demonstrate the practical feasibility and versatility of our technique, we present three application examples fabricated with RoboSketch. These show the use of sketching techniques and the transition between different interaction modes, in various domains of fabrication. We also present the results of a hands-on case study with artists and engineers.

6.1 Dandelion Art with Interactive Circuitry

Inspired by Jie Qi’s Dandelion Painting [79], and to demonstrate how RoboSketch supports creative activities and facilitates the fabrication of electronic circuits, we created an interactive wall artwork that glows from behind (Figure 13 g). The painting is made on three A3-size cold-pressed sheets and consists of two layers: the front is an artistic layer showing dandelion flowers, while the back contains the electronic circuit and LEDs [47, 48]. We started by manually sketching two lines to create the stalks of two large flowers. For sketching many small dandelion seeds, we uploaded a graphic of the seed and used the Stamping and Scaling features to freely print it at different sizes and orientations. For sketching the stalks of larger seeds, we uploaded a graphic of just the stalk and stamped it freely at different orientations (Figure 13 a). To complete the seeds, we uploaded a graphic containing the seed’s feathery bristles and switched to autonomous mode (Figure 13 b), letting the robot identify the stalks’ endpoints and autonomously print the bristles in the right places (Figure 13 c). To create the electronic layer, we first used Stamping to print the footprints of the LEDs on photo paper [13] with conductive ink (Figure 13 d). We then switched to autonomous mode (Figure 13 e) and used the Routing Trace feature to let the robot connect the footprints with conductive traces (Figure 13 f). Finally, we placed the LEDs on the footprints and attached both layers to a wooden frame. All traces are connected to LiPo batteries attached to the back of the canvas.
Figure 13:
Figure 13: Interactive dandelion wall art: a) printing the seed’s stalks using Stamping, b) pushing the robot to Autonomous mode, c) the robot autonomously prints the seed’s bristles at the appropriate location, d) printing the footprints of LEDs, e) pushing the robot to Autonomous mode, f) the robots detects and connects the footprints, g) fabricated dandelion artwork glowing from behind.

6.2 Creating Sewing Patterns on Fabric

Transferring a sewing pattern onto fabric can be a tedious task, often done manually with a pen, tracing paper, and previously cut templates, because most textiles do not fit into a commodity printer. RoboSketch assists textile makers in creating customized cutting and sewing patterns directly on the fabric. As an example, we created a clutch bag from a piece of velvet fabric for the outside and linen fabric for the inside (Figure 14 f). We first uploaded a graphic with the cutting and sewing pattern (Figure 14 a). Next, we defined the appropriate scale and position of the pattern directly on the piece of fabric, using the Scale and Measurement features (Figure 14 b). The robot then printed the pattern on the back of the fabric (Figure 14 c). We repeated these steps to create all the pieces. Finally, we cut out the fabric along the traced lines (Figure 14 d) and sewed the pieces together using a sewing machine (Figure 14 e).
Figure 14:
Figure 14: Creating sewing patterns on fabric: a) uploading the design, b) scaling the design on the fabric, c) the robot prints the pattern on the fabric, d) cutouts with sewing marks, e and f) fabricated clutch bag.

6.3 Assistance in Wood Working

Creating a precise and intricate design on a piece of wood is challenging. Craftsmen sketch the design on the wood with a pencil and use various measuring tools (such as a ruler, protractor, and combination square) to create straight lines and precise shapes. RoboSketch facilitates such crafting by assisting in sketching precise shapes and aligning screw holes on a piece of wood. As an example, we realized a wooden hanger for a crib on 3 mm thick plywood and then attached toys with strings (Figure 15 f). To create the hanger, we selected the Repeating Pattern feature (polygon) and sketched the polygon’s first segment on the wooden sheet in Assisted mode (Figure 15 a). After setting the number of sides to six, the robot completed the polygon in Autonomous mode (Figure 15 b). Next, we added marks for drilling holes where toys would be attached. We created the first and second marks at a 5 cm distance using the Stamping and Measurement tools (Figure 15 c) and then switched to the autonomous Stamping mode to let the robot repeat stamping the marks along the polygon (Figure 15 d). Finally, we cut the plywood (Figure 15 e) and attached the toys with strings.
Figure 15:
Figure 15: Assistance in woodworking: a) defining the first segment of the polygon, b) the robot completes the polygon, c) defining the mark for drilling holes, d) the robot completes printing the drilling marks, e) hanger cutout, and f) fabricated hanger for a crib.

6.4 Case Study

To gain a better understanding of RoboSketch in use, we conducted a hands-on exploration session with experienced artists, sketchers, and novices.

6.4.1 Participants.

We recruited 7 participants: 3 artists from the College of Fine Arts, all female, aged 30 (A1 and A2) and 33 (A3), experienced in a wide range of arts including sketching, drawing, and painting with physical tools. The other 4 participants were engineers with backgrounds in embedded systems (P1, female, 22), e-textiles (P2, female, 28), robotics (P3, male, 30), and soft robotics (P4, male, 31). Two participants were left-handed.

6.4.2 Procedure.

We began the study with an introduction to the project, basic functionalities, and interaction with the robot, and gave participants time to practice sketching with our tool. They also tried different supporting tools, such as custom brushes and measurement tools (Figure 16 a). Then, we continued the study by explaining the Manual, Assisted, and Autonomous modes and introducing the gestures for transitioning between these modes. Participants were then asked to perform a series of tasks to familiarize themselves with the transition of shared control: 1) repeating pattern (linear, polygon, and symmetry), 2) scaling, and 3) sketching in Assisted mode (straight line and within boundaries). We also gave them time to explore other features that interested them. We then discussed their experiences and the challenges they faced in a semi-structured interview. We continued the study by asking participants to create a drawing (one result is shown in Figure 16 b) using their preferred sketching techniques (two participants did not finish this task due to lack of time). Finally, all participants were asked to complete a questionnaire about their experience and possible use cases of the tool. The sessions lasted about two hours, were audio-recorded, and photos and videos of key situations were taken.
Figure 16:
Figure 16: a) Participant during hands-on exploration session, b and c) drawing created by two of the participants.

6.4.3 Results & Discussion.

All participants were able to interact with the device and provided valuable feedback. Most importantly, after less than one hour of exploration, they were able to sketch with the device without our intervention. In the following, we summarize the central findings.
Likert scale. As part of the questionnaire, we asked participants to rate, on a five-point Likert scale, how easy the device was to use and how likely they would be to use the manual, assisted, and autonomous modes and the sketching techniques. Overall, responses were positive to very positive (30 out of 35 responses were "likely" or "very likely"). Participants valued the sketching techniques and the various modes of interaction with the device, with the autonomous mode (5 out of 7 very likely, 1 likely, and 1 neutral response) and the manual mode (3 out of 7 very likely, 3 likely, and 1 neutral response) being favored most.
Manual mode. Participants liked the ability to manually move the device, draw very consistent lines, and change the color, width, and patterns of traces quickly and on a large scale. P2 liked the idea of controlling a handle like a brush; P3 mentioned that “the joystick design is comfortable to interact with”, and A2 enthusiastically said, “you only need one tool instead of many pencils”. Interestingly, A1 wished for a longer handle to control the robot on the floor, to print sketches during an on-stage performance, and then requested to control the robot remotely with the remote control joystick. Similarly, A3 expressed her interest in sketching street art from far away with a remote controller. While all participants liked the concept of Manual mode, they also pointed out that the current size of the device is rather large for a handheld device. From our observation, after a few minutes of practice, the artists were able to move the device confidently and make freehand sketches; in contrast, the engineers were careful about parts of sketches that were hidden underneath the device and indicated that they needed more time to practice.
Assisted mode. All participants found Assisted mode very helpful, especially for drawing straight lines and geometric shapes, and for keeping within boundaries: “Seemed like magic, merging of real-world and virtual borders” [P4]. P2 stated that the assistance in keeping boundaries allowed her to focus on sketching without worrying about crossing them. P3 decided to sketch a car when we asked him to create a drawing with the device. He mentioned that he is very untalented and uneasy about drawing by hand; however, using Assisted mode for drawing straight lines and basic shapes, he managed to draw a large-scale car on a piece of paper (150 x 110 cm). At the end of the session, he was satisfied that he could draw by hand for the first time (Figure 16 c). P4 found this mode useful for drawing graphics and 2D CAD drawings that are difficult to sketch by hand. All participants except one (A1) preferred to be notified before receiving assistance from the robot.
Autonomous mode. All participants were enthusiastic about the robot moving autonomously and expanding their hand-drawn sketch: “I liked Autonomous mode (...) I can just observe my drawing expand” [A1]. They also mentioned that the Autonomous mode would allow them to repeat shapes and patterns that are difficult or tedious to do by hand. For instance, based on her experience in drawing comics, A1 found this mode very helpful for scaling and repeating visual elements in comics faster and more accurately. P3 mentioned that for his project on metamaterials, he has to replicate similar patterns (e.g., cells) at different angles and scales to be able to analyze them. This device allows him to make faster and more accurate sketches for ideation and further discussion. He then continued sketching one of the cells and repeated it in a different direction. P1, who has been drawing mandalas for several years, expressed that autonomous mode could help her create more customized and precise designs. She then sketched half of a butterfly and used the Repeat function to mirror it.
Transition of control sharing. Participants valued the tangible interaction with the device and preferred to touch the robot to initiate a task rather than pressing a button on the UI. A1 said, “I like the tangible interaction with the robot, it was a fluid movement between me and the robot” and continued, “I feel connected to the robot when I touch it”. P3 indicated that the gesture metaphors are memorable. Participants learned the gestures quickly. We frequently observed that they began sketching in manual or assisted mode, pushed the robot to extend their sketch, then grabbed the handle to change the color, width, and pattern of the trace, and then continued sketching (Figure 16 b). At the end of the session, A2 and P3 suggested using another type of interaction (e.g., voice commands) to stop the robot and take over control in urgent situations. We will consider this for future iterations of our prototype.
Application and use cases. Overall, the artists expected our device to improve creativity, and the engineers expected it to improve productivity. Participants also suggested various use cases for the device, such as education, architecture (e.g., drawing floor plans), textile design, rapid prototyping (e.g., website wireframes), creating floor signs for temporary events, and generating navigation patterns for other robots.

7 Limitations and Future Work

Below, we summarize the limitations of our current implementation and identify opportunities for future work.
Position tracking and precision. In the current setup, we use magnetic encoders to measure the distance traveled by the robot, which provides relative positioning information and is not reliable on uneven surfaces. In the future, we plan to investigate alternative techniques for position tracking (e.g., using a camera system such as the OptiTrack) that allow for absolute positioning on a wider range of surface geometries. Improving the position tracking would also allow us to sketch more complex shapes, for instance, a large raster graphic that is printed in adjacent strips. In addition, improving position tracking helps increase sketching precision, which is a major issue with plotter robots.
Form factor. The size and form factor of our robot are constrained by the size of the handheld printer. Therefore, our robot occludes part of the design during sketching. Advances in printer technology would allow us to reduce the size of the device so that it is closer to the size of physical brushes. A simple alternative would be to place a second camera underneath the case and visualize the live camera view on the display.
Manual sketching. Currently, we use a commercial dual-axis analog joystick and relative mapping to control the robot in manual mode. In future work, we plan to investigate alternative input techniques that use absolute position mapping, which would more closely resemble painting with a brush. We plan to include an omnidirectional platform with Mecanum wheels [11, 19] and a backdrivable mechanism, so the joystick can be replaced by a fixed brush handle. While controlling the joystick limits manual sketching to wrist movement, a fixed brush handle would also allow movement of the entire arm. Future work should also consider integrating haptic feedback directly into the handle.
Interaction. In our current implementation, our robot immediately transitions from autonomous to manual mode when the user grabs the handle. While this approach provides convenience when the robot is in close proximity, alternative methods of interaction, such as voice commands and mid-air gestures, are being considered to address scenarios when the robot is not easily accessible. To help predict the transition time from autonomous to manual mode, in future iterations we will visualize the robot’s position relative to printed marks on the device screen and backend interface.
Currently, the speed of the robot in manual mode is adjusted using the joystick. We are considering other types of interaction such as voice commands and mid-air gestures to adjust the speed in autonomous mode.
While we did not observe a split of attention between sketching and viewing the device screen during the user study, we are considering in-situ projection on the canvas to further improve the interaction with the device.
Collaborative control of the robot. Multiple users can also collaborate to control the robot. Examples include crowd participation in the creation of artwork or remote control of the robot by multiple users. This is an interesting aspect we are considering for follow-up work that opens up exciting research questions, e.g., defining the type of interaction and modality, ownership, the priority of received input, and resolving input conflicts.
Different fabrication tools. Our robot is equipped with a printer for sketching; however, the design could be adapted into a modular fabrication tool. For example, the printer could be replaced with a marker, a cutter, a miniature laser engraver, or a miniature iron for sintering conductive traces. This would not only enlarge the set of fabrication tasks that can be accomplished with “handheld tools unleashed”, but also open up possibilities for autonomous fabrication devices that collaborate with each other to accomplish a task (e.g., one robot draws a design on a fabric and a second follows the traces and cuts out the fabric).

8 Conclusion

So far, personal fabrication has mostly centered around handheld tools as an embodied extension of the user, or digital fabrication machines automating parts of the fabrication process without much direct user intervention. In this paper, we explored Mixed-Initiative Fabrication for sketching as a continuum ranging from manual via assisted to autonomous fabrication that enables seamless transitions between modes during fabrication. As a first example of this vision, we presented RoboSketch, a robotic printer on wheels capable of creating large-scale, high-resolution prints. With a joystick controller, RoboSketch can be used for manual sketching. It also provides interactive assistance during sketching, and it can turn into an autonomous robotic device moving about for computer-generated sketches. We introduced a set of easy-to-learn interaction techniques to seamlessly transition between all three modes, along with sketching techniques that benefit from flexible transitions, e.g., to extend or revisit a sketch. Our results show that RoboSketch’s concept was positively received by artists and engineers, and that mixed-initiative physical sketching succeeds in making computer-supported sketching more versatile and flexible.

Acknowledgments

This project received funding from the German Research Foundation (DFG project 425869111 within the Priority Program SPP2199 Scalable Interaction Paradigms for Pervasive Computing Environments). We thank COLOP e-mark for supplying us with a handheld printer and Alice Haynes for her assistance in proofreading the text. We also thank the reviewers for their insightful comments.

Supplementary Material

MP4 File (3544548.3580691-video-preview.mp4)
Video Preview
MP4 File (3544548.3580691-video-figure.mp4)
Video Figure

References

[1]
Adafruit. 2022. 2-Axis Joystick. https://www.adafruit.com/product/245 Accessed: 2022-06-02.
[2]
Robert Aish. 2012. DesignScript: Origins, Explanation, Illustration. In Computational Design Modelling, Christoph Gengnagel, Axel Kilian, Norbert Palz, and Fabian Scheurer (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 1–8.
[3]
Kristina Andersen, Ron Wakkary, Laura Devendorf, and Alex McLean. 2019. Digital Crafts-Machine-Ship: Creative Collaborations with Machines. Interactions 27, 1 (dec 2019), 30–35. https://doi.org/10.1145/3373644
[4]
AutoTrace. 2022. Converts bitmap to vector graphics. http://autotrace.sourceforge.net/ Accessed: 2022-06-02.
[5]
BOTSY. 2007. BOTSY- THE WALL DRAWING ROBOT. https://www.botsy.com/ Accessed: 2022-06-02.
[6]
Tingyu Cheng, Bu Li, Yang Zhang, Yunzhi Li, Charles Ramey, Eui Min Jung, Yepu Cui, Sai Ganesh Swaminathan, Youngwook Do, Manos Tentzeris, Gregory D. Abowd, and HyunJoo Oh. 2021. Duco: Autonomous Large-Scale Direct-Circuit-Writing (DCW) on Vertical Everyday Surfaces Using A Scalable Hanging Plotter. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 5, 3, Article 92 (sep 2021), 25 pages. https://doi.org/10.1145/3478118
[7]
Colop. 2021. e-mark. https://emark.colop.com/ Accessed: 2021-06-15.
[8]
Nicholas Davis, Chih-Pin Hsiao, Kunwar Yashraj Singh, Lisa Li, and Brian Magerko. 2016. Empirically Studying Participatory Sense-Making in Abstract Drawing with a Co-Creative Cognitive Agent. In Proceedings of the 21st International Conference on Intelligent User Interfaces (Sonoma, California, USA) (IUI ’16). Association for Computing Machinery, New York, NY, USA, 196–207. https://doi.org/10.1145/2856767.2856795
[9]
Scribit Design. 2019. Scribit. https://scribit.design/ Accessed: 2022-06-02.
[10]
Sebastian Deterding, Jonathan Hook, Rebecca Fiebrink, Marco Gillies, Jeremy Gow, Memo Akten, Gillian Smith, Antonios Liapis, and Kate Compton. 2017. Mixed-Initiative Creative Interfaces. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI EA ’17). Association for Computing Machinery, New York, NY, USA, 628–635. https://doi.org/10.1145/3027063.3027072
[11]
Olaf Diegel, Aparna Badve, Glen Bright, Johan Potgieter, and Sylvester Tlale. 2002. Improved mecanum wheel design for omni-directional robots. In Proceedings of the 2002 Australasian Conference on Robotics and Automation. Auckland, 117–121.
[12]
EASYSMX. 2022. ESM-9013 Wireless Controller. https://www.easysmx.com/en-de/collections/game-controllers/products/easysmx-esm-9013-wireless-gaming-controller Accessed: 2022-06-02.
[13]
Epson. 2022. Photo Paper Glossy. https://www.epson.eu/en_EU/products/ink-and-paper/paper-and-media/photo-paper-glossy—a3—20-sheets/p/12480 Accessed: 2022-06-02.
[14]
Kenneth D. Forbus, Ronald W. Ferguson, and Jeffery M. Usher. 2001. Towards a Computational Model of Sketching. In Proceedings of the 6th International Conference on Intelligent User Interfaces (Santa Fe, New Mexico, USA) (IUI ’01). Association for Computing Machinery, New York, NY, USA, 77–83. https://doi.org/10.1145/359784.360278
[15]
Frikk Fossdal, Rogardt Heldal, and Nadya Peek. 2021. Interactive Digital Fabrication Machine Control Directly Within a CAD Environment. In Proceedings of the 6th Annual ACM Symposium on Computational Fabrication (Virtual Event, USA) (SCF ’21). Association for Computing Machinery, New York, NY, USA, Article 8, 15 pages. https://doi.org/10.1145/3485114.3485120
[16]
Madeline Gannon, Tovi Grossman, and George Fitzmaurice. 2016. ExoSkin: On-Body Fabrication. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (San Jose, California, USA) (CHI ’16). ACM, New York, NY, USA, 5996–6007. https://doi.org/10.1145/2858036.2858576
[17]
David Ha and Douglas Eck. 2017. A Neural Representation of Sketch Drawings. https://doi.org/10.48550/ARXIV.1704.03477
[18]
Eric Horvitz. 1999. Uncertainty, action, and interaction: In pursuit of mixed-initiative computing. IEEE Intelligent Systems 14, 5 (1999), 17–20.
[19]
HowToMechatronics.com. 2022. Arduino Mecanum Wheels Robot. https://howtomechatronics.com/projects/arduino-mecanum-wheels-robot/ Accessed: 2022-06-02.
[20]
Takeo Igarashi, Satoshi Matsuoka, Sachiko Kawachiya, and Hidehiko Tanaka. 1997. Interactive Beautification: A Technique for Rapid Geometric Design. In Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology (Banff, Alberta, Canada) (UIST ’97). Association for Computing Machinery, New York, NY, USA, 105–114. https://doi.org/10.1145/263407.263525
[21]
iRobot. 2022. Root. https://robots.ieee.org/robots/root/ Accessed: 2022-06-02.
[22]
Jennifer Jacobs, Joel Brandt, Radomír Mech, and Mitchel Resnick. 2018. Extending Manual Drawing Practices with Artist-Centric Programming Tools. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada) (CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3173574.3174164
[23]
Jennifer Jacobs, Mitchel Resnick, and Leah Buechley. 2014. Dresscode: supporting youth in computational design and making. In Proceedings of Constructionism 2014 Conference, Vienna, Austria, Vol. 10. 10 pages.
[24]
Hsin-Liu (Cindy) Kao, Deborah Ajilo, Oksana Anilionyte, Artem Dementyev, Inrak Choi, Sean Follmer, and Chris Schmandt. 2017. Exploring Interactions and Perceptions of Kinetic Wearables. In Proceedings of the 2017 Conference on Designing Interactive Systems (Edinburgh, United Kingdom) (DIS ’17). Association for Computing Machinery, New York, NY, USA, 391–396. https://doi.org/10.1145/3064663.3064686
[25]
Yoshihiro Kawahara, Steve Hodges, Benjamin S. Cook, Cheng Zhang, and Gregory D. Abowd. 2013. Instant Inkjet Circuits: Lab-Based Inkjet Printing to Support Rapid Prototyping of UbiComp Devices. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing (Zurich, Switzerland) (UbiComp ’13). Association for Computing Machinery, New York, NY, USA, 363–372. https://doi.org/10.1145/2493432.2493486
[26]
Soheil Kianzad, Yuxiang Huang, Robert Xiao, and Karon E. MacLean. 2020. Phasking on Paper: Accessing a Continuum of PHysically Assisted SKetchING. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376134
[27]
Jeeeun Kim, Haruki Takahashi, Homei Miyashita, Michelle Annett, and Tom Yeh. 2017. Machines as Co-Designers: A Fiction on the Future of Human-Fabrication Machine Interaction. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI EA ’17). Association for Computing Machinery, New York, NY, USA, 790–805. https://doi.org/10.1145/3027063.3052763
[28]
Jeeeun Kim, Clement Zheng, Haruki Takahashi, Mark D Gross, Daniel Ashbrook, and Tom Yeh. 2018. Compositional 3D Printing: Expanding & Supporting Workflows towards Continuous Fabrication. In Proceedings of the 2nd Annual ACM Symposium on Computational Fabrication (Cambridge, Massachusetts) (SCF ’18). Association for Computing Machinery, New York, NY, USA, Article 5, 10 pages. https://doi.org/10.1145/3213512.3213518
[29]
Konstantin Klamka, Raimund Dachselt, and Jürgen Steimle. 2020. Rapid Iron-On User Interfaces: Hands-on Fabrication of Interactive Textile Prototypes. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3313831.3376220
[30]
Thomas Langerak, Juan José Zárate, Velko Vechev, David Lindlbauer, Daniele Panozzo, and Otmar Hilliges. 2020. Optimal Control for Electromagnetic Haptic Guidance Systems. Association for Computing Machinery, New York, NY, USA, 951–965. https://doi.org/10.1145/3379337.3415593
[31]
Johnny C. Lee, Paul H. Dietz, Darren Leigh, William S. Yerazunis, and Scott E. Hudson. 2004. Haptic Pen: A Tactile Feedback Stylus for Touch Screens. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology (Santa Fe, NM, USA) (UIST ’04). Association for Computing Machinery, New York, NY, USA, 291–294. https://doi.org/10.1145/1029632.1029682
[32]
Kang-Hee Lee and Jong-Hwan Kim. 2006. Multi-robot cooperation-based mobile printer system. Robotics and Autonomous Systems 54, 3 (2006), 193–204. https://doi.org/10.1016/j.robot.2005.11.005
[33]
Yong Jae Lee, C. Lawrence Zitnick, and Michael F. Cohen. 2011. ShadowDraw: Real-Time User Guidance for Freehand Drawing. ACM Trans. Graph. 30, 4, Article 27 (jul 2011), 10 pages. https://doi.org/10.1145/2010324.1964922
[34]
Jingyi Li, Joel Brandt, Radomír Mech, Maneesh Agrawala, and Jennifer Jacobs. 2020. Supporting Visual Artists in Programming through Direct Inspection and Control of Program Execution. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376765
[35]
Jingyi Li, Jennifer Jacobs, Michelle Chang, and Björn Hartmann. 2017. Direct and Immediate Drawing with CNC Machines. In Proceedings of the 1st Annual ACM Symposium on Computational Fabrication (Cambridge, Massachusetts) (SCF ’17). Association for Computing Machinery, New York, NY, USA, Article 11, 2 pages. https://doi.org/10.1145/3083157.3096344
[36]
Yuyu Lin, Jiahao Guo, Yang Chen, Cheng Yao, and Fangtian Ying. 2020. It Is Your Turn: Collaborative Ideation With a Co-Creative Robot through Sketch. Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3313831.3376258
[37]
lingib. 2019. Omni Wheel CNC Plotter V2. https://www.instructables.com/Omni-Wheel-CNC-Plotter-V2/ Accessed: 2022-11-22.
[38]
Stefanie Mueller, Pedro Lopes, and Patrick Baudisch. 2012. Interactive Construction: Interactive Fabrication of Functional Mechanical Devices. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology (Cambridge, Massachusetts, USA) (UIST ’12). ACM, New York, NY, USA, 599–606. https://doi.org/10.1145/2380116.2380191
[39]
Sandy Noble. 2019. Polargraph. http://www.polargraph.co.uk/ Accessed: 2022-11-22.
[40]
Changhoon Oh, Jungwoo Song, Jinhan Choi, Seonghyeon Kim, Sungwoo Lee, and Bongwon Suh. 2018. I Lead, You Help but Only with Enough Details: Understanding User Experience of Co-Creation with Artificial Intelligence. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada) (CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3173574.3174223
[41]
Huaishu Peng, Jimmy Briggs, Cheng-Yao Wang, Kevin Guo, Joseph Kider, Stefanie Mueller, Patrick Baudisch, and François Guimbretière. 2018. RoMA: Interactive Fabrication with Augmented Reality and a Robotic 3D Printer. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada) (CHI ’18). Association for Computing Machinery, New York, NY, USA, Article 579, 12 pages. https://doi.org/10.1145/3173574.3174153
[42]
Huaishu Peng, Amit Zoran, and François V. Guimbretière. 2015. D-Coil: A Hands-on Approach to Digital 3D Models Design. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (Seoul, Republic of Korea) (CHI ’15). Association for Computing Machinery, New York, NY, USA, 1807–1815. https://doi.org/10.1145/2702123.2702381
[43]
Pololu. 2022. 250:1 Micro Metal Gearmotor HP 6V. https://www.pololu.com/product/995 Accessed: 2022-06-02.
[44]
Pololu. 2022. Magnetic Encoder Pair Kit for Micro Metal Gearmotors. https://www.pololu.com/product/3081 Accessed: 2022-06-02.
[45]
Narjes Pourjafarian, Marion Koelle, Bruno Fruchard, Sahar Mavali, Konstantin Klamka, Daniel Groeger, Paul Strohmeier, and Jürgen Steimle. 2021. BodyStylus: Freehand On-Body Design and Fabrication of Epidermal Interfaces. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 504, 15 pages. https://doi.org/10.1145/3411764.3445475
[46]
Narjes Pourjafarian, Marion Koelle, Fjolla Mjaku, Paul Strohmeier, and Jürgen Steimle. 2022. Print-A-Sketch: A Handheld Printer for Physical Sketching of Circuits and Sensors on Everyday Surfaces. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 270, 17 pages. https://doi.org/10.1145/3491102.3502074
[47]
Jie Qi, Leah Buechley, Andrew "bunnie" Huang, Patricia Ng, Sean Cross, and Joseph A. Paradiso. 2018. Chibitronics in the Wild: Engaging New Communities in Creating Technology with Paper Electronics. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada) (CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–11. https://doi.org/10.1145/3173574.3173826
[48]
Jie Qi, Andrew "bunnie" Huang, and Joseph Paradiso. 2015. Crafting Technology with Circuit Stickers. In Proceedings of the 14th International Conference on Interaction Design and Children (Boston, Massachusetts) (IDC ’15). Association for Computing Machinery, New York, NY, USA, 438–441. https://doi.org/10.1145/2771839.2771873
[49]
Alec Rivers, Ilan E. Moyer, and Frédo Durand. 2012. Position-Correcting Tools for 2D Digital Fabrication. ACM Trans. Graph. 31, 4, Article 88 (July 2012), 7 pages. https://doi.org/10.1145/2185520.2185584
[50]
Simon Robinson, Jennifer Pearson, Mark D. Holton, Shashank Ahire, and Matt Jones. 2019. Sustainabot - Exploring the Use of Everyday Foodstuffs as Output and Input for and with Emergent Users. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland Uk) (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3290605.3300456
[51]
Dusty Robotics. 2022. The World’s Leading Robotic Layout System. https://www.dustyrobotics.com/ Accessed: 2022-11-22.
[52]
Rugged Robotics. 2022. Building Better. https://www.rugged-robotics.com/ Accessed: 2022-11-22.
[53]
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-Resolution Image Synthesis With Latent Diffusion Models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, New Orleans, LA, USA, 10684–10695. https://doi.org/10.48550/arXiv.2112.10752
[54]
Evil Mad Scientist. 2022. AxiDraw: Writing and Drawing Machines. https://www.axidraw.com/ Accessed: 2022-11-22.
[55]
Roy Shilkrot, Pattie Maes, Joseph A. Paradiso, and Amit Zoran. 2015. Augmented Airbrush for Computer Aided Painting (CAP). ACM Trans. Graph. 34, 2, Article 19 (March 2015), 11 pages. https://doi.org/10.1145/2699649
[56]
Hyunyoung Song, Tovi Grossman, George Fitzmaurice, François Guimbretiere, Azam Khan, Ramtin Attar, and Gordon Kurtenbach. 2009. PenLight: Combining a Mobile Projector and a Digital Pen for Dynamic Visual Overlay. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Boston, MA, USA) (CHI ’09). Association for Computing Machinery, New York, NY, USA, 143–152. https://doi.org/10.1145/1518701.1518726
[57]
Sparkfun. 2022. Force Sensitive Resistor - Small. https://www.sparkfun.com/products/9673 Accessed: 2022-06-02.
[58]
Sparkfun. 2022. Force Sensitive Resistor - Square. https://www.sparkfun.com/products/9376 Accessed: 2022-06-02.
[59]
Blair Subbaraman and Nadya Peek. 2022. P5.Fab: Direct Control of Digital Fabrication Machines from a Creative Coding Environment. In Designing Interactive Systems Conference (Virtual Event, Australia) (DIS ’22). Association for Computing Machinery, New York, NY, USA, 1148–1161. https://doi.org/10.1145/3532106.3533496
[60]
Zhenbang Sun, Changhu Wang, Liqing Zhang, and Lei Zhang. 2012. Query-Adaptive Shape Topic Mining for Hand-Drawn Sketch Recognition. In Proceedings of the 20th ACM International Conference on Multimedia (Nara, Japan) (MM ’12). Association for Computing Machinery, New York, NY, USA, 519–528. https://doi.org/10.1145/2393347.2393421
[61]
Ivan E. Sutherland. 1964. Sketchpad: A man-machine graphical communication system. Simulation 2, 5 (1964), R–3.
[62]
Shaper Tools. 2022. Shaper Origin. https://www.shapertools.com/ Accessed: 2022-11-22.
[63]
Patrick Tresset and Frederic Fol Leymarie. 2013. Portrait drawing by Paul the robot. Computers & Graphics 37, 5 (2013), 348–363. https://doi.org/10.1016/j.cag.2013.01.012
[64]
Hannah Twigg-Smith, Jasper Tran O’Leary, and Nadya Peek. 2021. Tools, Tricks, and Hacks: Exploring Novel Digital Fabrication Workflows on #PlotterTwitter. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 594, 15 pages. https://doi.org/10.1145/3411764.3445653
[65]
Wacom. 2022. Bamboo. https://cdn.wacom.com/f/manuals/en/bamboo-users-manual.pdf Accessed: 2022-06-02.
[66]
WallPen. 2017. WallPen. https://www.wallpen.com/ Accessed: 2022-06-02.
[67]
Beirong Wang, Jian Sun, and Beryl Plimmer. 2005. Exploring Sketch Beautification Techniques. In Proceedings of the 6th ACM SIGCHI New Zealand Chapter’s International Conference on Computer-Human Interaction: Making CHI Natural (Auckland, New Zealand) (CHINZ ’05). Association for Computing Machinery, New York, NY, USA, 15–16. https://doi.org/10.1145/1073943.1073946
[68]
Keita Watanabe and Michiaki Yasumura. 2007. FlexibleBrush: A realistic brush stroke experience with a virtual nib. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology (UIST ’07). Association for Computing Machinery, New York, NY, USA, 47–48.
[69]
Waveshare. 2022. 3.5inch RPi LCD. https://www.waveshare.com/wiki/3.5inch_RPi_LCD_(A) Accessed: 2022-06-02.
[70]
Christian Weichel, John Hardy, Jason Alexander, and Hans Gellersen. 2015. ReForm: Integrating Physical and Digital Design through Bidirectional Fabrication. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (Charlotte, NC, USA) (UIST ’15). Association for Computing Machinery, New York, NY, USA, 93–102. https://doi.org/10.1145/2807442.2807451
[71]
Christian Weichel, Manfred Lau, David Kim, Nicolas Villar, and Hans W. Gellersen. 2014. MixFab: A Mixed-Reality Environment for Personal Fabrication. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Toronto, Ontario, Canada) (CHI ’14). Association for Computing Machinery, New York, NY, USA, 3855–3864. https://doi.org/10.1145/2556288.2557090
[72]
Karl D.D. Willis, Cheng Xu, Kuan-Ju Wu, Golan Levin, and Mark D. Gross. 2011. Interactive Fabrication: New Interfaces for Digital Fabrication. In Proceedings of the Fifth International Conference on Tangible, Embedded, and Embodied Interaction (Funchal, Portugal) (TEI ’11). ACM, New York, NY, USA, 69–72. https://doi.org/10.1145/1935701.1935716
[73]
Jacob O. Wobbrock, Andrew D. Wilson, and Yang Li. 2007. Gestures without Libraries, Toolkits or Training: A $1 Recognizer for User Interface Prototypes. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology (Newport, Rhode Island, USA) (UIST ’07). Association for Computing Machinery, New York, NY, USA, 159–168. https://doi.org/10.1145/1294211.1294238
[74]
Jie Wu, Changhu Wang, Liqing Zhang, and Yong Rui. 2014. Sketch Recognition with Natural Correction and Editing. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (Québec City, Québec, Canada) (AAAI’14). AAAI Press, New York, NY, USA, 951–957.
[75]
Pan-Long Wu, Yi-Chin Hung, and Jin-Siang Shaw. 2022. Artistic robotic pencil sketching using closed-loop force control. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 236, 17 (2022), 9753–9762. https://doi.org/10.1177/09544062221096946
[76]
Jun Xie, Aaron Hertzmann, Wilmot Li, and Holger Winnemöller. 2014. PortraitSketch: Face Sketching Assistance for Novices. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (Honolulu, Hawaii, USA) (UIST ’14). Association for Computing Machinery, New York, NY, USA, 407–417. https://doi.org/10.1145/2642918.2647399
[77]
Jun Xing, Li-Yi Wei, Takaaki Shiratori, and Koji Yatani. 2015. Autocomplete Hand-Drawn Animations. ACM Trans. Graph. 34, 6, Article 169 (oct 2015), 11 pages. https://doi.org/10.1145/2816795.2818079
[78]
Junichi Yamaoka and Yasuaki Kakehi. 2013. DePENd: Augmented Handwriting System Using Ferromagnetism of a Ballpoint Pen. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology (St. Andrews, Scotland, United Kingdom) (UIST ’13). Association for Computing Machinery, New York, NY, USA, 203–210. https://doi.org/10.1145/2501988.2502017
[79]
Amit Zoran. 2015. Hybrid Craft: Showcase of Physical and Digital Integration of Design and Craft Skills. In ACM SIGGRAPH Art Gallery (Los Angeles, California) (SIGGRAPH ’15). Association for Computing Machinery, New York, NY, USA, 384–398. https://doi.org/10.1145/2810185.2810187
[80]
Amit Zoran and Joseph A. Paradiso. 2013. FreeD: A Freehand Digital Sculpting Tool. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Paris, France) (CHI ’13). ACM, New York, NY, USA, 2613–2616. https://doi.org/10.1145/2470654.2481361
