1 Introduction
Handheld tools, ranging from brushes and sculpting tools to cutter blades, offer the creative and practical maker an undisputed level of directness. Yet, purely manual work practices using such tools can be repetitive and cumbersome, and are often constrained by the user’s manual skills, precision, and physical abilities. With the emergence of ever more sophisticated means of digital fabrication, machines are taking over such tasks. While these machines have proven to be extremely useful to process users’ intents without live intervention, delegating fabrication to the device in this way inhibits the inherently iterative nature of making.
Two streams of research have set out to tackle this problem from different directions: the first has proposed to add digital assistance to manual fabrication practices by augmenting handheld fabrication tools. Examples include hybrid carving [80], computer-assisted sketching [26, 46], 3D modeling [42], augmented airbrushing [55], and hybrid fabrication on the human body [16, 45]. While most of these approaches integrate directly with manual fabrication practices, their assistance suffers from a significant restriction: it is limited to the reach of the human arm, which prevents the device from carrying out fabrication tasks autonomously. It is always the user who has to lead the fabrication task.
The second research direction has investigated means to increase the interactivity of standard digital fabrication machines, for instance, by adding options for real-time design interventions to laser cutters [38] or 3D printers [41]. While these augmented machines can work more independently of the user and benefit from the precision and speed of high-end fabrication tools, they lack the ease and directness of in-situ physical practice with handheld tools.
We set out to integrate these worlds and propose a new class of devices that can be all three: a hand-operated manual tool, a computer-assisted handheld tool, and an autonomous fabrication robot. Such devices can assist the user where needed while in their direct proximity, but they can also be unleashed and roam freely, in order to solve some tasks independently. When done or called back, they return to the user and can again be operated in a manual or assisted mode.
To explore the potential of such “handheld tools unleashed” collaboration between humans and machines in the design and fabrication process, we created RoboSketch: a robotic printer on wheels with a joystick controller for manual sketching, capable of creating large-scale, high-resolution prints. It can be operated completely manually, inspired by a handheld brush (manual mode), but it can also provide interactive assistance during sketching (assisted mode). In addition, it can turn into an autonomous robotic device moving about for computer-generated sketches (autonomous mode). It is capable of operating on many surface materials, such as fabrics, paper, and wood, and with various inks including multi-color, UV, and conductive inks.
In the remainder of this paper, we first introduce the approach of “handheld tools unleashed”: mixed-initiative physical sketching in which humans and machines work together proactively and fruitfully, unleashing the creative and unique benefits of handheld tools and robotic autonomy in concert. We discuss the emerging range of fabrication modes, from manual and assisted to autonomous, and highlight why seamless mode transitions are key in this context.
Next, we present interaction techniques to control such seamless mode transitions that are based on simple interactions well-compatible with sketching. These techniques support user-initiated and robot-initiated transitions between all modes, even while sketching a continuous trace. We also introduce a set of sketching techniques that benefit from these transitions to help the designer extend manual sketches, for instance, by repeating elements or upscaling a design, and to help revisit a sketch, for instance, to refine or color it.
We then contribute a proof-of-concept implementation of a functional robotic device, comprising a high-resolution print head that is capable of operating with a variety of surface materials and inks. It is based on a commercial handheld inkjet printer and a robotic platform equipped with various input controllers and sensors to be context-aware.
Finally, to validate that our approach is technically feasible and useful for physical sketching, and to illustrate that it can be applied in a wide variety of fabrication contexts, we present three application examples: (1) creating electronic circuitry, (2) creating sewing patterns on fabric, and (3) woodworking. In addition, we present our findings from a case study with seven sketchers. It uncovers flexible patterns of use, and illustrates that mixed-initiative physical sketching can make computer-supported sketching more powerful and flexible.
In summary, the main contributions of this paper are:
• the concept of a mixed handheld and autonomous device for mixed-initiative human–robot collaborative physical sketching that includes manual, assisted, and autonomous modes;
• interaction techniques to seamlessly move between modes and make use of the robot’s autonomous capabilities to extend and revisit a sketch in the making;
• RoboSketch, a working prototype of the first computer-assisted robotic printer that supports mixed-initiative physical sketching across all three of its modes, with capabilities to create error-preventing constraints, and validated to enable dynamic, context-aware sketching at high resolution and large scale.
2 Related Work
Our contribution builds on prior work on interactive fabrication, sketching interfaces, and drawing tools for 2D surfaces.
2.1 Interactive and Bidirectional Fabrication
Digital design and fabrication technology have revolutionized the way we create and interact with objects. With modern technology, the design process can be done entirely digitally using computer-aided design (CAD) software, while the fabrication process is completed using computer-controlled machines (e.g., 3D printers, inkjet printers, laser cutters). This improves speed and accuracy. However, creative activities often require user engagement during the fabrication process [3, 27]. Inspired by traditional crafting tools, interactive fabrication [72] allows humans to participate throughout both the design and the fabrication process. This allows manipulating the fabricated workpiece in real time. As an example, Constructables [38] enables users to manipulate the workpiece directly with a proxy laser, while a cutting laser creates the results instantly. Further research has explored this concept for various fabrication activities such as creating 3D models [41, 42], fabricating e-textiles [29], directly controlling fabrication machines [15, 35, 59], and creating interfaces around the body [16, 45]. To leverage the advantages of direct manipulation and automated fabrication systems, researchers have explored mixed-initiative systems that allow machines to act like collaborative partners and contribute to problem solving [18]. In this work, we build on this background and introduce mixed-initiative physical sketching as an instance of mixed-initiative fabrication that supports user-initiated and robot-initiated interaction and interweaves direct control and autonomous sketching.
In particular, our approach takes up the concept of concurrent interactive fabrication, where design and fabrication occur simultaneously. Prior work has realized this using computer-assisted handheld tools. For instance, FreeD [80] proposed a handheld milling tool to shape and carve 3D models with computer-assisted guidance. Augmented Airbrush [55] guides the user in spraying a painting using a computer-controlled airbrush system, dePENd [78] offers support for sketching using pen and paper, and Shaper Origin [49, 62] assists precise 2D cutting. More recently, Print-A-Sketch [46] presented an interactive handheld printer for the physical sketching of electronic interfaces. While these devices support manual and assisted modes of interaction, due to their handheld form factors, they cannot roam autonomously.
Bidirectional fabrication is another form of interactive fabrication that enables iterative manipulation of objects through digital and physical inputs [28, 71]. For instance, ReForm [70] presents a system that fabricates 3D objects based on on-the-fly modification of digital models and updates digital models after the physical deformation of objects. With this paper, we contribute to this vision of bidirectional fabrication by introducing a system that records manually sketched traces and prints digitally modified designs in real time.
RoboSketch combines the idea of real-time interactivity between humans and machines with the ability to print high-resolution marks and presents the first robotic printer that supports manual, assisted, and autonomous sketching.
2.2 Sketching Interfaces
Sketching is a fundamental part of any design process. It is a quick and easy way to communicate ideas and concepts. It can be used to explore new ideas, create prototypes, and convey design concepts in an incremental and iterative way. Since sketching requires a certain level of skill, a variety of sketching interfaces have been developed to improve the accuracy of sketches, making them more accessible to a wider range of people. Physical sketching practices can be augmented with visual guidance, which, for instance, can take the form of a projected overlay that adds information to the surface [16, 45, 56]. Another approach is to provide haptic support during sketching to help users create better sketches. For example, dePENd [78] actuates a ballpoint pen by using a permanent magnet to provide directional force feedback. Langerak et al. [30] show how a variable force can be generated using an electromagnet and explore algorithms to minimize tracing errors. Phasking on Paper uses friction-based haptic guides to investigate shared control between user and system during sketching [26]. However, these devices cannot print high-resolution marks and do not support autonomous sketching.
While many of these interfaces center around sketching with pen and paper, a significant number of studies have instead concentrated on supporting users and enhancing their skills within digital environments. SketchPad [61], considered one of the pioneers of computer-aided design (CAD) software, transformed traditional drawing using a display and a light pen. Building upon this, sketching tools such as DesignScript [2], DressCode [23], and Dynamic Brushes [22] made drawing easier for users with a more intuitive interface. Other approaches focused on providing guidance [33], tactile feedback [31], beautifying the strokes [20, 77], or enabling dynamic brushes and strokes [34, 68, 76]. With recent advances in artificial intelligence (AI), collaborative design with an AI agent enables iterative ideation [8, 40] and mixed-initiative content creation [10, 14]. We drew inspiration from these works for the implementation of our design tool and sketching techniques.
RoboSketch expands upon these ideas by linking sketches in the physical and virtual worlds and supporting physical sketching of circuits, textile patterns, and markings for woodworking.
2.3 Drawing Tools for 2D surfaces
Commonly used drawing tools include pen and paper. Previous work has explored various ways to make drawings at large scale and on arbitrary surfaces more autonomous and accessible. Common examples are the use of XY pen plotters [5, 54, 64], hanging V-plotters [6, 9, 39], or robotic arms [63, 75] to automate the drawing process on horizontal and vertical surfaces. Their use of pens or markers limits these devices to printing vector graphics at low speed. In contrast, making use of inkjet heads to replace the marker allows printing raster graphics at high resolution and higher speed [66]. However, with all of these devices, the drawing area is limited to the dimensions of the device, as they do not move freely.
To solve this issue, researchers have proposed the use of wheeled robots that can move freely and print on surfaces of any size and shape. Lee et al. [32] introduced one of the earliest examples of sketching robots. Cobbie [36] and the DIY Omni Wheel Plotter [37] are other examples of mobile plotters. Sustainabot [50] is a small robot printer that uses everyday materials to create shapes. Kino [24] generates temporary patterns by etching fabrics. There also exist several commercial sketching and printing robots for education [21] and construction [51, 52]. However, these machines usually print predefined designs. They are not designed for interactive fabrication and do not support on-the-fly modification of the design.
RoboSketch makes use of a robotic printer on wheels and leverages these advantages to enable mixed-initiative physical sketching in manual, assisted, and autonomous modes.
3 Handheld Tools Unleashed
We envision a new class of handheld devices to expand the scope of collaboration between humans and machines in creative design and fabrication processes, by combining the desirable properties of handheld tools with autonomous fabrication. Expanding upon Horvitz’s notion of mixed-initiative interaction [18], we aim for tools that proactively contribute to the manual fabrication process whenever needed, while allowing the user to continue working in a natural manner, but that can also contribute to the fabrication process completely autonomously if desired. With RoboSketch, we contribute a first and fully functional instantiation of this concept, demonstrating how a robot on wheels with a high-resolution color inkjet printhead can be used as a handheld tool for manual sketching, support assisted sketching, and can be "unleashed" to act as an intelligent robotic partner for autonomous drawing.
RoboSketch addresses two key challenges:
Leveraging human and robotic skill sets. Humans and robots partnering in a design and fabrication process would ideally leverage the unique skill set of each partner: while humans excel at generating creative ideas and can more easily adapt to a dynamic context and unforeseen events, robotic tools are capable of creating precise, high-resolution output and exact replicas at high speed. RoboSketch enables a variety of physical sketching techniques that demonstrate how human and robotic skill sets can complement each other. Using RoboSketch as a handheld tool, the user sketches out their creative vision before "unleashing" the device. RoboSketch is then able to autonomously expand upon the user’s drafts by repeating patterns (e.g., leveraging symmetry), refining drafts (e.g., adding details), or by filling sketched-out regions with color. In addition, RoboSketch can offer to auto-complete the user’s sketches (e.g., completing polygons), or offer creative completion by making use of AI to artistically elaborate on the user’s input. Hereby, RoboSketch transcends the functional range of existing computer-assisted fabrication tools such as FreeD [80] and Phasking [26]. While those tools allow the user to ‘seize control’ by overriding computer assistance, e.g., with a button press or by applying force, they still require the user’s guiding hand to fabricate. They cannot fabricate autonomously beyond the confines of the user’s reach. In contrast, our proposed approach enhances the scalability of the resulting designs and considerably extends the degree to which the machine can act as a co-creator.
Flexibly shifting control back and forth. Mixed-initiative physical sketching requires control shifts across the entire range from manual, where the user is in full control, over assisted, with various levels of shared control, to autonomous mode, where the robotic tool is fully unleashed and sketches independently. To enable natural and efficient co-creation with both human and robotic tools iteratively contributing to the fabrication process, mode transitions need to be seamless. To this end, we developed a series of simple interaction techniques (Fig. 2) that enable fluent, user-initiated mode transitions throughout all modes at fabrication time: releasing the handle and giving the robot a gentle push signals the robot to continue in autonomous mode (e.g., for elaborating on a user-created draft). In contrast, the user can solidify their grip (‘Hold Firmly’) to remain in control when in manual or assisted mode, or seize control by grabbing the robot’s handle when in autonomous mode.
Robot-initiated control shifts are necessary when the robot encounters contextual or environmental ambiguity and requires human assistance in autonomous mode. Here, the robot stops and blinks. Moreover, in manual or assisted mode, the robot proactively offers to take over control by making context-aware suggestions (e.g., auto-completing a shape) or by enforcing constraints (e.g., to prevent short circuits when sketching electronic traces with conductive ink). In summary, these techniques let the user access the full range from handheld sketching tool to autonomous sketching robot with a single device, and even within a single stroke.
4 Sketching Techniques
RoboSketch offers a variety of sketching techniques and supporting tools to help designers, makers, and artists sketch out their initial idea, iteratively extend their idea, and revisit the composition to complete details. To enable natural and efficient co-creation by the human and the robot, these techniques fluently integrate manual sketching with computer-assisted handheld fabrication and autonomous fabrication.
4.1 Extending a Sketch
A human sketcher may require help when a design needs to be precise or symmetrical, contains repetitive elements, or when the canvas is large. Partnering with RoboSketch can help sketchers extend their creative vision while still maintaining a high level of precision and control.
4.1.1 Repeating pattern.
Many sketches contain repeating patterns, which can be tedious and time-consuming to realize manually. The Repeat technique combines the expressiveness of manual drawing with support for repetitive sketching. Having selected the Repeat technique on the device’s screen, the user starts by sketching the pattern (Figure 3, Ia) and then, in a seamless movement, pushes the robot in the desired direction. This triggers the autonomous mode; the robot takes over and continues printing the pattern autonomously and repetitively (see Figure 3, Ib), until the user takes back control by grasping the handle to continue sketching manually, or by holding their hand in front of the robot to stop the repetition at the desired position (Figure 3, Ic).
One of the main principles of design is achieving balance. This can be done by using symmetrical patterns. There are different manual techniques that can be used to create symmetrical drawings. For example, an artist may use tracing paper to trace a sketch and then flip it over to create the mirrored part. We provide assistance for creating repeated designs that are symmetrical around a central point or across an axis. As an example, to draw a precise polygon, the user first activates assistance to draw a straight line in manual mode (Figure 3, IIa). Inspired by [46], this makes the robot cancel out lateral hand jitter. After selecting the Polygon function on the device screen, the user draws the first polygon segment, then defines the number of sides of the polygon by tapping the robot the corresponding number of times (e.g., five taps to make a pentagon, see Figure 3, IIb), and then pushes the robot. The robot then sketches the desired shape autonomously (Figure 3, IIc).
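As an illustration of the polygon technique, the sketch below derives the remaining vertices from the user-drawn first segment and the tapped number of sides by repeatedly rotating the edge vector through the exterior angle. The function name and coordinate convention are ours, not the prototype’s.

```python
import math

def polygon_vertices(p0, p1, n_sides):
    """Given the user-drawn first segment p0 -> p1, return the vertices of a
    regular polygon with n_sides sides (illustrative sketch, not the firmware)."""
    vertices = [p0, p1]
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    exterior = 2 * math.pi / n_sides          # turn angle at each corner
    for _ in range(n_sides - 2):
        # rotate the previous edge vector by the exterior angle
        dx, dy = (dx * math.cos(exterior) - dy * math.sin(exterior),
                  dx * math.sin(exterior) + dy * math.cos(exterior))
        last = vertices[-1]
        vertices.append((last[0] + dx, last[1] + dy))
    return vertices  # the robot drives vertex to vertex; the last edge closes the shape

# e.g., five taps -> pentagon built on a 10 cm user-drawn segment
print(polygon_vertices((0.0, 0.0), (10.0, 0.0), 5))
```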
For creating multi-axial symmetries, the user sketches the desired design in manual mode (Figure 3, IIIa) and then taps on the robot (or selects from the displayed menu, see Figure 3, IIIb) to set the number of radial axes across which the sketch is repeated. The user pushes the robot, and the robot finishes the sketch (Figure 3, IIIc).
4.1.2 Auto-completing shapes.
To assist users in completing the current sketch quickly and precisely, RoboSketch provides an Auto-complete feature. When the user is sketching in manual or assisted mode and the system detects the current shape, the prediction is shown on the display. If the prediction is correct and the user wishes to hand over control to the robot, the user simply releases the handle. The robot then autonomously completes the user’s current sketch. Otherwise, the user continues sketching, and the predicted shape disappears or is updated with a new prediction. Our current implementation can recognize basic shapes (e.g., line, circle, square, and triangle) by inspecting the robot’s movement trajectory. In the future, we will extend this feature to predict more complex shapes using a neural network [17, 36].
4.1.3 Creative completion.
Sketching is a medium for humans to visually express their thoughts, ideas, and emotions, often in an artistic way. Recent advances in AI algorithms [53], on the other hand, have proven capable of creating original visuals based on initial text and image input. By combining the advantages of both methods, humans and machines can co-create content and produce unique and personalized results. Pushing toward the machine end of co-creation, RoboSketch can realize new ideas based on the user’s existing sketches (Figure 4b). We therefore use a recent implementation of the stable-diffusion model¹, based on [53], for image-to-image synthesis guided by a text prompt. In our current implementation, the user selects the Creative Completion function and starts sketching. We then regularly query the stable-diffusion model with the user’s current sketch as the initial image and the text prompt “line art miro style” (100 inference steps, prompt strength 85%). We post-process the resulting image with a standard auto-trace algorithm (with the centerline option) [4] to create the paths for RoboSketch to print. Then, we show the result on the screen. When satisfied, the user pushes the robot to trigger autonomous mode, and the printer prints the AI-created image (Figure 4c).
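A minimal sketch of such a query using the Hugging Face diffusers library is shown below. The paper only specifies the model family, prompt, step count, and prompt strength; the particular pipeline class, model checkpoint, and image size used here are assumptions for illustration.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# one possible wrapper around Stable Diffusion; not necessarily the one used in the paper
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

def creative_completion(sketch_path: str) -> Image.Image:
    """Elaborate on the user's current sketch with an image-to-image query."""
    init = Image.open(sketch_path).convert("RGB").resize((512, 512))
    result = pipe(prompt="line art miro style",   # prompt from the paper
                  image=init,
                  strength=0.85,                  # prompt strength 85%
                  num_inference_steps=100).images[0]
    return result  # subsequently auto-traced to obtain printable paths
```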
4.1.4 Routing traces.
Sketching is an incremental and iterative practice. It is important to be able to pause and review a sketch, or return and add more detail. This implies that new traces oftentimes need to connect to existing traces and marks, and need to be precisely aligned. Some examples are closing a shape precisely, connecting elements in flowcharts and diagrams, or sketching conductive traces for electronic circuits. The Routing trace function assists the user in this task.
When a user intends to connect a current trace to a previously printed mark, they manually sketch the trace toward the previous mark and then push the robot while letting go of the handle (Figure 5a). This triggers the autonomous mode. The system now uses the built-in camera to monitor the surface and detect visual marks using blob detection. After detecting the position of a printed mark, the robot fine-adjusts its direction so that the printer nozzles are aligned with the mark (Figure 5b) and keeps printing until it reaches the mark, precisely aligning the trace ending with the existing mark. If the user decides to take over control at any point (for example, to connect the current trace to another printed mark), they can grab the handle and continue sketching in manual mode. If the robot detects multiple marks, it connects to the closest mark by default, unless the user selects a different mark on the display. Optionally, to prevent undesired connections to previously printed traces when creating electronic circuits, the system alerts the user on the display when it is getting close to a printed trace. By default, the robot stops printing before reaching the trace and continues after crossing it (Figure 5c). The user can select crossing or routing around the detected trace on the display.
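One way to implement the mark detection and alignment step is with OpenCV’s blob detector, as sketched below. The camera geometry, steering gain, and sign convention are illustrative assumptions, and the real system additionally lets the user pick among multiple detected marks on the display.

```python
import cv2
import numpy as np

detector = cv2.SimpleBlobDetector_create()  # default parameters; tune for ink marks

def steering_toward_mark(frame_bgr, gain=0.005):
    """Return (found, steering) where steering is a differential-drive correction
    that turns the printhead toward the nearest detected mark (illustrative sketch)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    keypoints = detector.detect(gray)
    if not keypoints:
        return False, 0.0
    h, w = gray.shape
    # pick the mark closest to the robot, assumed to be lowest in the camera image
    target = max(keypoints, key=lambda k: k.pt[1])
    lateral_error = target.pt[0] - w / 2.0      # pixels left/right of the nozzles
    return True, -gain * lateral_error          # positive value steers left (convention)
```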
4.1.5 Scaling.
It is common to scale a design to its intended size during sketching, or to do so even more flexibly using digital design tools. However, it can be difficult to scale a design when the final size is unclear or the canvas is large. RoboSketch enables creating sketches at a large scale, yet in place. For example, the user activates the Scale function, draws a small-scale design in manual mode (see Figure 6a), and then positions the robot at the desired location on the canvas. Then the user moves the robot from the lower left to the lower right of the desired bounding box to define the scale (Figure 6b). The user now releases the handle and pushes the robot. The robot switches to autonomous mode and draws the design at the specified scale (Figure 6c). Scaling down works similarly.
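The scaling step itself amounts to mapping the recorded small-scale path into the bounding box the user swept out on the canvas, for example as in the following sketch (names and anchoring convention are ours):

```python
def scale_path(points, target_width):
    """Scale a recorded design so that its bounding-box width matches the width the
    user swept out on the canvas (illustrative sketch; uniform scaling)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    src_width = max(xs) - min(xs)
    if src_width == 0:
        return list(points)
    s = target_width / src_width
    x0, y0 = min(xs), min(ys)
    # anchor the scaled design at the lower-left corner of the new bounding box
    return [((x - x0) * s, (y - y0) * s) for x, y in points]
```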
4.1.6 Stamping.
Similar to prior work [46], the user can upload a vector graphic in the design tool and then print the graphic by placing the device on the canvas and manually moving it in the desired direction. Extending beyond such manual stamping, we propose autonomous stamping in two variations: firstly, the device can stamp a graphic along an existing contour, using line detection; secondly, it can use stamping to extend an already existing marking with a graphic. To do so, the user places the device somewhere near the end of the existing marking. Using blob detection, the device identifies the marking’s end, moves accordingly, and starts printing the graphic such that it connects to the existing marking. In all cases, the scale of the stamped graphic can be adjusted flexibly, provided it does not exceed the printhead width.
4.2 Revisiting a Sketch
RoboSketch supports not only the creation of the overall structure, but also the refinement and embellishment of a sketch.
4.2.1 Refining.
While it is fast and expressive to draw the overall structure and design of a sketch with a pen or brush, digital tools (notably, high-resolution printers) tend to be better at realizing detailed patterns and fine embellishments. Following this analogy, RoboSketch allows users to sketch the overall structure before the device autonomously adds details to the design. For example, the user first sketches in manual mode (Figure 7a), then selects a desired pattern from the list of patterns on the LCD menu, places the robot on the sketch (Figure 7b), and pushes it to trigger the autonomous mode. The robot detects the trace using the built-in camera and prints the selected pattern along the trace (Figure 7c). At any time, the user can simply grab the handle to take over control and continue sketching. In our prototype, the robot follows a single trace to add details. In future work, we will consider more complex designs.
4.2.2 Beautification.
Sketching is a natural way to create initial designs in the early stages of the design process. However, it is difficult to create precise shapes such as circles and right angles when sketching freehand. Beautification is the process of translating a hand-drawn, imprecise sketch into a regular and geometrically accurate design [67]. Inspired by sketch recognition research [60, 74], we used the $1 unistroke recognizer [73] to detect simple hand-drawn shapes and beautify them. The user first selects the Beautification feature from the display menu. They can then use either the inking or the non-inking mode of RoboSketch to manually sketch the design; the system will beautify the design, and the robot will print a geometrically accurate result with a wider trace and darker color on top.
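As an example of the beautification step, once the recognizer labels a stroke as a circle, one plausible way to produce the geometrically accurate replacement is an algebraic least-squares circle fit, followed by generating waypoints for the robot to re-print. This is a sketch of that idea under stated assumptions, not necessarily the paper’s exact procedure.

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit: returns (cx, cy, r) for a stroke
    that the $1 recognizer labeled as a circle (illustrative sketch)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

def circle_waypoints(cx, cy, r, n=90):
    """Waypoints for the robot to retrace the beautified circle on top of the draft."""
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=True)
    return np.column_stack([cx + r * np.cos(t), cy + r * np.sin(t)])
```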
4.2.3 Coloring shapes.
After having created an initial line sketch, the sketcher may continue with painting to fill some shapes. RoboSketch supports coloring a shape with different tints and patterns. To do so, the user selects the Painting mode from the display menu. Next, the user places the robot on a desired color or visual pattern; the robot records the pattern that is in its camera view (Figure 8a). Then, the user places the robot on the contour of a previously sketched shape, releases the handle, and pushes the robot to trigger autonomous mode (Figure 8b). The robot then scans the shape’s contour with the built-in camera, calculates a closed polygon if required, and paints the inner region by repeatedly printing the scanned pattern (Figure 8c). Our current implementation simply juxtaposes the scanned pattern; future implementations could use visual computing techniques to create a seamless pattern.
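A minimal sketch of this juxtaposition approach with OpenCV and NumPy is shown below: the scanned swatch is tiled over the canvas and masked by the closed contour. Function names and the masking strategy are illustrative assumptions.

```python
import cv2
import numpy as np

def fill_with_pattern(canvas_bgr, contour, pattern_bgr):
    """Tile the scanned pattern swatch inside a closed contour (illustrative sketch;
    the seams between repetitions are not blended)."""
    h, w = canvas_bgr.shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillPoly(mask, [contour.astype(np.int32)], 255)    # region to be painted
    # repeat the swatch until it covers the whole canvas, then crop to canvas size
    ph, pw = pattern_bgr.shape[:2]
    tiled = np.tile(pattern_bgr, (h // ph + 1, w // pw + 1, 1))[:h, :w]
    out = canvas_bgr.copy()
    out[mask == 255] = tiled[mask == 255]
    return out
```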
4.3 Supporting Tools
In addition to the sketching techniques for extending and revisiting a sketch, RoboSketch offers several supporting tools to enhance creativity, improve precision, and speed up fabrication:
4.3.1 Dynamic Custom Brushes.
Artists use different techniques of brush movement to smoothly create different effects in a painting. They move the brush faster to create faded color, press the brush onto the canvas to create a wider trace, or choose a different color from the palette. Similarly, RoboSketch supports users in integrating these techniques into their sketching. For example, when the robot is in autonomous mode, the user can take control for a brief moment by grabbing the handle to dynamically change the brush. Pressing the handle gradually prints wider marks (Figure 9a), moving the robot faster by pushing the handle forward fades the colors (Figure 9b), and pointing the handle at the desired color while the color circle is displayed (see Figure 9c) changes the color. When satisfied, the user lets the robot continue sketching with the newly defined brush. Similar to other digital painting tools, RoboSketch also supports custom brushes (e.g., serpentines and zigzags).
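The brush mapping can be expressed as a simple function from handle readings to print parameters. The constants below are made-up placeholders for illustration, not calibrated values from the prototype.

```python
def brush_from_handle(pressure, speed, angle_deg=None, palette=None):
    """Map handle input to brush parameters (illustrative sketch).

    pressure -- handle FSR reading normalized to [0, 1]
    speed    -- robot speed in cm/s (the device's maximum is about 30)
    angle_deg, palette -- optional handle direction and displayed color circle
    """
    width_mm = 1.0 + pressure * 13.5           # up to the 14.5 mm printhead width
    opacity = max(0.2, 1.0 - speed / 30.0)     # moving faster fades the color
    color = None
    if angle_deg is not None and palette:
        # pointing the handle selects a sector of the displayed color circle
        sector = int((angle_deg % 360) / (360 / len(palette)))
        color = palette[sector]
    return {"width_mm": width_mm, "opacity": opacity, "color": color}
```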
4.3.2 Measurement Tool.
To control the robot’s motion, we use two encoders and continuously monitor their data. This data can also be used to measure the length of the traveled path (linear measurement) (Figure 10a) or to print corners with precise angles (angular measurement) (Figure 10b). For example, to print marks at a certain distance (e.g., placeholders for screw holes), the user activates the linear Measurement tool, prints the first mark in manual mode, and then moves the robot while observing the traveled distance on the display, before printing the second mark at the desired position. To draw a corner with a precise angle, the user can grab the handle at any point, activate the angular Measurement tool, rotate the handle to define the desired angle, and then push the robot to continue drawing.
4.3.3 Guidelines.
Drawing guidelines offer valuable assistance for creating accurate and proportional sketches, provide guidance for outlining the design, and ensure that drawings are symmetrical and evenly balanced. RoboSketch supports designers in creating accurate guides by providing basic shapes (e.g., line, circle, polygon) and radial symmetry. As an example, the user can create a circular guide by selecting the circle from the display menu, specifying the center point and radius with a stroke in manual mode, and then releasing the handle. The robot will then complete the task in autonomous mode (Figure 10a). By printing the guides with UV ink and then continuing to sketch under UV light, we can make the guides invisible in natural light. Alternatively, guidelines can be printed with a very fine width and light color. In the future, advances in ink technology may make it possible to erase printed traces or print sketches that fade after a while.
5 Implementation
We now present the proof-of-concept implementation of RoboSketch. We first discuss the hardware system, then the user interface for controlling RoboSketch, and finally the implementation of interactions.
5.1 Hardware System
Robotic Base. The main components of RoboSketch are shown in Figure 11. Two micro metal gear motors (HP 6 V, 250:1) [43], controlled by two DRV8838 motor drivers, move the robot in differential drive mode. The motors are equipped with magnetic encoders (12 CPR) [44], used to measure the distance traveled by the robot. They are tethered to an ATmega32U4 AVR microcontroller. The device moves at a maximum speed of 31 cm/s. The body consists of a laser-cut MDF case and measures 164 x 191 x 60 mm. Four AAA batteries power the robot and provide about 8 hours of operation without recharging.
Sensors. An ultrasonic distance sensor (HC-SR04), tethered to the microcontroller, is used for detecting obstacles. A wide-angle RGB camera (OV5640) mounted on a stand is connected to a Raspberry Pi 4B. With an embedded Linux operating system and the use of OpenCV’s blob detection feature, it monitors the robot’s surroundings, detects previous marks, and provides a real-time video feed for debugging.
Printer & inks. RoboSketch contains a color handheld printer for high-resolution prints. We used the COLOP e-mark [7], a commercial handheld thermal inkjet printer with a very compact form factor (111 x 76 x 72 mm). It is lightweight (225 g) and able to print on diverse absorbent surfaces (e.g., paper, cardboard, cork, textiles, and wood). With its 14.5 mm wide printhead, it allows for high-resolution prints (600 dpi) at a maximum printing speed of about 30 cm/s. The selected handheld printer allows changing and refilling the printer cartridge with various inks. Commercially available replacement cartridges comprise tricolor, black pigment, and UV ink. In addition, we have successfully printed conductive silver ink, in line with prior work that used inkjet heads for printing conductors [25, 46].
5.2 Software Implementation
To enable a rapid and convenient workflow, users are provided with a two-part user interface. The touch-screen user interface (Figure 12a), embedded on the robot and implemented in Processing, facilitates direct and immediate interaction with RoboSketch. The user can trigger most functionality directly on the robot (e.g., selecting primitives, changing the brush’s pattern and color). Moreover, the display provides real-time assistance and shows the position of the robot relative to the traversed path. We used a 3.5-inch Raspberry Pi LCD [69], inserted directly into the Raspberry Pi board (Figure 11).
In addition, we implemented a backend interface in Processing that runs on a standard laptop (Intel Core i7-6700HQ, 4 cores at 2.60 GHz) with Windows 10 (Figure 12b). The backend interface allows debugging of the system and establishes a link between all components: it communicates with the ATmega microcontroller via a Bluetooth connection to receive sensor data and control the motors, and uses WiFi to communicate with the inkjet printer and the Raspberry Pi.
5.3 Implementation of Interactions
RoboSketch enables physical sketching and direct manipulation of the robot using a handle. For this purpose, a dual-axis analog joystick module including a push button [1] was connected to the base microcontroller (Figure 11). To facilitate interaction, we 3D printed a brush-like handle out of PLA and replaced the original joystick knob with it. We use relative mapping: pushing the handle further makes the robot move faster. A small force-sensitive resistor (FSR) [57], placed between the tip of the handle and the push button, lets the device sense different levels of pressure on the handle, giving the user more flexibility when interacting with the robot. A capacitive sensor on the tip of the handle, made of copper tape, detects the presence of the hand. For detecting tap and push gestures, two square FSR sensors [58] were placed on the top and back of the robot (Figure 11) and connected to the base microcontroller.
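For illustration, the relative joystick mapping and the tap-versus-push distinction on the rear FSR could look as follows. Axis conventions, gains, and thresholds are assumptions rather than the prototype’s actual firmware values.

```python
def joystick_to_wheel_speeds(jx, jy, max_speed_cm_s=31.0):
    """Relative mapping from handle deflection to wheel speeds: pushing the handle
    further makes the robot move faster, tilting it sideways steers.
    jx, jy are the joystick axes normalized to [-1, 1] (illustrative sketch)."""
    forward = jy * max_speed_cm_s          # deflection magnitude sets the speed
    turn = jx * max_speed_cm_s * 0.5       # lateral deflection adds a turn rate
    left = max(-max_speed_cm_s, min(max_speed_cm_s, forward - turn))
    right = max(-max_speed_cm_s, min(max_speed_cm_s, forward + turn))
    return left, right

def classify_fsr_gesture(samples, tap_ms=250, push_thresh=0.6):
    """Distinguish a short tap from a sustained push on a body-mounted FSR, given
    (timestamp_ms, normalized_force) samples above the noise floor (sketch)."""
    if not samples:
        return None
    duration = samples[-1][0] - samples[0][0]
    peak = max(force for _, force in samples)
    if duration <= tap_ms:
        return "tap"
    return "push" if peak >= push_thresh else None
```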
To control the robot from a distance, the user can use a stylus and digitizer tablet [65] or a gaming controller [12] that communicates wirelessly (2.4 GHz) with the backend interface.
6 Applications and Case Study
To demonstrate the practical feasibility and versatility of our technique, we present three application examples fabricated with RoboSketch. These show the use of sketching techniques and the transition between different interaction modes, in various domains of fabrication. We also present the results of a hands-on case study with artists and engineers.
6.1 Dandelion Art with Interactive Circuitry
Inspired by Jie Qi’s Dandelion Painting [79], and to demonstrate how RoboSketch can support creative activities and facilitate the fabrication of electronic circuits, we created an interactive wall art that glows from behind (Figure 13g). The painting is made on three A3-size cold-pressed sheets and consists of two layers: the front is an artistic layer showing dandelion flowers, while the back contains the electronic circuit and LEDs [47, 48]. We started by manually sketching two lines to create the stalks of two large flowers. For sketching the many small dandelion seeds, we uploaded a graphic of the seed and used the Stamping and Scaling features to freely print it at different sizes and orientations. For sketching the stalks of larger seeds, we uploaded a graphic of just the stalk and stamped it freely at different orientations (Figure 13a). To complete the seeds, we uploaded a graphic containing the seed’s feathery bristles and switched to autonomous mode (Figure 13b) to let the robot identify the stalks’ endpoints and autonomously print the bristles in the right places (Figure 13c). To create the electronic layer, we first used Stamping to print the footprints of the LEDs on photo paper [13] with conductive ink (Figure 13d). We then moved to autonomous mode (Figure 13e), using the Routing Trace feature, to let the robot connect the footprints with conductive traces (Figure 13f). Finally, the LEDs were placed on the footprints and both layers were attached to a wooden frame. All traces are connected to LiPo batteries attached to the back of the canvas.
6.2 Creating Sewing Patterns on Fabric
Transferring a sewing pattern onto fabric can be a tedious task, often done manually with a pen, tracing paper, and previously cut templates, because most textiles do not fit into a commodity printer. RoboSketch assists textile makers in creating customized cutting and sewing patterns on the fabric. As an example, we created a clutch bag from a piece of velvet fabric for the outside and linen fabric for the inside (Figure 14f). We first uploaded a graphic with the cutting and sewing pattern (Figure 14a). Next, we defined the appropriate scale and position of the pattern directly on the piece of fabric, using the Scale and Measurement features (Figure 14b). The robot then printed the pattern on the back of the fabric (Figure 14c). We repeated these steps to create all the pieces. Finally, we cut out the fabric along the traced line (Figure 14d) and sewed the pieces together using a sewing machine (Figure 14e).
6.3 Assistance in Wood Working
Creating a precise and intricate design on a piece of wood is challenging. Craftsmen sketch the design on the wood with a pencil and use various measuring tools (such as a ruler, protractor, and combination square) to create straight lines and precise shapes. RoboSketch facilitates crafting by assisting in sketching precise shapes and aligning screw holes on a piece of wood. As an example, we realized a wooden hanger for a crib on 3 mm thick plywood and then attached toys with strings (Figure 15f). To create the hanger, we selected the Repeating Pattern feature (polygon) and sketched the polygon’s first segment on the wooden sheet in Assisted mode (Figure 15a). After setting the number of sides to six, the robot completed the polygon in Autonomous mode (Figure 15b). Next, we added marks for drilling holes where the toys would be attached. We created the first and second marks at a 5 cm distance using the Stamping and Measurement tools (Figure 15c) and then switched to the autonomous Stamping mode to let the robot repeat stamping the marks along the polygon (Figure 15d). Finally, we cut the plywood (Figure 15e) and attached the toys with strings.
6.4 Case Study
To gain a better understanding of RoboSketch in use, we conducted a hands-on exploration session with experienced artists, sketchers, and novices.
6.4.1 Participants.
We recruited 7 participants: 3 artists from the College of Fine Arts, all female, aged 30 (A1 and A2) and 33 (A3), and experienced in a wide range of arts, including sketching, drawing, and painting with physical tools. The other 4 participants were engineers with backgrounds in embedded systems (P1, female, 22), e-textile (P2, female, 28), robotics (P3, male, 30), and soft robotics (P4, male, 31). Two participants were left-handed.
6.4.2 Procedure.
We began the study with an introduction to the project, its basic functionalities, and interaction with the robot, and gave participants time to practice sketching with our tool. They also tried different supporting tools, such as custom brushes and measurement tools (Figure 16a). Then, we continued the study by explaining the Manual, Assisted, and Autonomous modes and introducing the gestures for transitioning between these modes. Participants were then asked to perform a series of tasks to familiarize themselves with the transitions of shared control: 1) repeating a pattern (linear, polygon, and symmetry), 2) scaling, and 3) sketching in Assisted mode (straight line and within boundaries). We also gave them time to explore other features that interested them. We then discussed their experiences and the challenges they faced in a semi-structured interview. We continued the study by asking participants to create a drawing (one result is shown in Figure 16b) using their preferred sketching techniques (two participants did not finish this task due to lack of time). Finally, all participants were asked to complete a questionnaire about their experience and possible use cases of the tool. The sessions lasted about two hours and were audio-recorded, and photos and videos of key situations were taken.
6.4.3 Results & Discussion.
All participants were able to interact with the device and provided valuable feedback. Most importantly, after less than one hour of exploration, they were able to sketch with the device without our intervention. In the following, we summarize the central findings.
Likert scale. As part of the questionnaire, we asked participants to rate the following on a five-point Likert scale: how easy it was to use the device, and how likely they would be to use the manual, assisted, and autonomous modes and the sketching techniques. Overall, responses were positive to very positive (30 out of 35 responses were "likely" or "very likely"). Participants valued the sketching techniques and the various modes of interaction with the device, with the autonomous mode (5 out of 7 very likely, 1 likely, and 1 neutral response) and the manual mode (3 out of 7 very likely, 3 likely, and 1 neutral response) being favored most.
Manual mode. Participants liked the ability to manually move the device, draw very consistent lines, and change the color, width, and patterns of traces quickly and on a large scale. P2 liked the idea of controlling a handle like a brush; P3 mentioned that “the joystick design is comfortable to interact with”, and A2 enthusiastically said, “you only need one tool instead of many pencils”. Interestingly, A1 wished for a longer handle to control the robot on the floor, to print sketches during an on-stage performance, and then requested to control the robot remotely with the remote control joystick. Similarly, A3 expressed her interest in sketching street art from far away with a remote controller. While all participants liked the concept of Manual mode, they also pointed out that the current size of the device is rather large for a handheld device. From our observation, after a few minutes of practice, the artists were able to move the device confidently and make freehand sketches; in contrast, the engineers were careful about parts of sketches that were hidden underneath the device and indicated that they needed more time to practice.
Assisted mode. All participants found Assisted mode very helpful, especially for drawing straight lines and geometric shapes and for keeping within boundaries: “Seemed like magic, merging of real-world and virtual borders” [P4]. P2 stated that the assistance in keeping boundaries allowed her to focus on sketching without worrying about crossing boundaries. P3 decided to sketch a car when we asked him to create a drawing with the device. He mentioned that he is very untalented and uneasy in drawing by hand; however, using Assisted mode for drawing straight lines and basic shapes, he managed to draw a large-scale car on a piece of paper (150 x 110 cm). At the end of the session, he was satisfied that he could draw by hand for the first time (Figure 16c). P4 found this mode useful for drawing graphics and 2D CAD drawings that are difficult to sketch by hand. All participants except one (A1) preferred to be notified before receiving assistance from the robot.
Autonomous mode. All participants were enthusiastic about the robot moving autonomously and expanding their hand-drawn sketch: “I liked Autonomous mode (...) I can just observe my drawing expand” [A1]. They also mentioned that Autonomous mode allows them to repeat shapes and patterns that would be difficult or tedious to create by hand. For instance, based on her experience in drawing comics, A1 found this mode very helpful for scaling and repeating visual elements in comics faster and more accurately. P3 mentioned that for his project on metamaterials, he has to replicate similar patterns (e.g., cells) at different angles and scales to be able to analyze them. This device allows him to make faster and more accurate sketches for ideation and further discussion. He then continued sketching one of the cells and repeated it in a different direction. P1, who has been drawing mandalas for several years, expressed that Autonomous mode can help her create more customized and precise designs. She then sketched half of a butterfly and used the Repeat function to mirror it.
Transition of control sharing. Participants valued the tangible interaction with the device and preferred to touch the robot to initiate a task rather than pressing a button on the UI. A1 said, “I like the tangible interaction with the robot, it was a fluid movement between me and the robot”, and continued, “I feel connected to the robot when I touch it”. P3 indicated that the gesture metaphors are memorable. Participants learned the gestures quickly. We frequently observed that they began sketching in manual or assisted mode, then pushed the robot to extend their sketch, then grabbed the handle to change the color, width, and pattern of the trace, and then continued sketching (Figure 16b). At the end of the session, A2 and P3 suggested using another type of interaction (e.g., voice commands) to stop the robot and take over control in urgent situations. We will consider this for future iterations of our prototype.
Application and use cases. Overall, our device will improve creativity, according to the artists, and productivity, according to the engineers. Participants also suggested various use cases for the device, such as education, architecture (e.g., drawing floor plans), textile design, rapid prototyping (e.g., website wireframe), creating floor signs for temporary events, and generating navigation patterns for other robots.
7 Limitations and Future Work
Below, we summarize the limitations of our current implementation and identify opportunities for future work.
Position tracking and precision. In the current setup, we use magnetic encoders to measure the distance traveled by the robot, which provides relative positioning information and is not reliable on uneven surfaces. In the future, we plan to investigate alternative techniques for position tracking (e.g., using a camera system such as the OptiTrack) that allow for absolute positioning on a wider range of surface geometries. Improving the position tracking would also allow us to sketch more complex shapes, for instance, a large raster graphic that is printed in adjacent strips. In addition, improving position tracking helps increase sketching precision, which is a major issue with plotter robots.
Form factor. The size and form factor of our robot are constrained by the size of the handheld printer. Therefore, our robot occludes part of the design during sketching. Advances in printer technology would allow us to reduce the size of the device so that it is closer to the size of physical brushes. A simple alternative would be to place a second camera underneath the case and visualize the live camera view on the display.
Manual sketching. Currently, we use a commercial dual-axis analog joystick and relative mapping to control the robot in manual mode. In future work, we plan to investigate alternative input techniques that use absolute position mapping, which would more closely resemble painting with a brush. We plan to include an omnidirectional platform with Mecanum wheels [11, 19] and a backdrivable mechanism, so that the joystick can be replaced by a fixed brush handle. While the joystick limits manual sketching to wrist movement, a fixed brush handle would also allow movement of the entire arm. Future work should also consider integrating haptic feedback directly on the handle.
Interaction. In our current implementation, our robot immediately transitions from autonomous to manual mode when the user grabs the handle. While this approach provides convenience when the robot is in close proximity, alternative methods of interaction, such as voice commands and mid-air gestures, are being considered to address scenarios when the robot is not easily accessible. To help predict the transition time from autonomous to manual mode, in future iterations we will visualize the robot’s position relative to printed marks on the device screen and backend interface.
Currently, the speed of the robot in manual mode is adjusted using the joystick. We are considering other types of interaction such as voice commands and mid-air gestures to adjust the speed in autonomous mode.
While we did not observe a split of attention between sketching and viewing the device screen during the user study, we are considering in-situ projection on the canvas to further improve the interaction with the device.
Collaborative control of the robot. Multiple users can also collaborate to control the robot. Examples include crowd participation in the creation of artwork or remote control of the robot by multiple users. This is an interesting aspect we are considering for follow-up work that opens up exciting research questions, e.g., defining the type of interaction and modality, ownership, the priority of received input, and resolving input conflicts.
Different fabrication tools. Our robot is equipped with a printer for sketching, however, it is possible to change the design of the robot and develop a modular fabrication tool. For example, the printer can be replaced with a marker, a cutter, a miniature laser engraver, or a miniature iron for sintering conductive traces. This will not only enlarge the set of fabrication tasks that can be accomplished using “handheld tools unleashed”. It will also open up possibilities for new autonomous fabrication devices that collaborate with each other to accomplish a task (e.g., one robot draws a design on a fabric and the second follows the traces and cuts out the fabric).
8 Conclusion
So far, personal fabrication has mostly centered around handheld tools as an embodied extension of the user, or digital fabrication machines automating parts of the fabrication process without much direct user intervention. In this paper, we explored mixed-initiative fabrication for sketching as a continuum ranging from manual via assisted to autonomous fabrication, with seamless transitions between modes during fabrication. As a first example of this vision, we presented RoboSketch, a robotic printer on wheels capable of creating large-scale, high-resolution prints. With a joystick controller, RoboSketch can be used for manual sketching. It also provides interactive assistance during sketching, and it can turn into an autonomous robotic device moving about for computer-generated sketches. We introduced a set of easy-to-learn interaction techniques to seamlessly transition between all three modes, along with sketching techniques that benefit from flexible transitions, e.g., to extend or revisit a sketch. Our results show that RoboSketch’s concept was positively received by artists and engineers, and that mixed-initiative physical sketching succeeds in making computer-supported sketching more versatile and flexible.
Acknowledgments
This project received funding from the German Research Foundation (DFG project 425869111 within the Priority Program SPP2199 Scalable Interaction Paradigms for Pervasive Computing Environments). We thank COLOP e-mark for supplying us with a handheld printer and Alice Haynes for her assistance in proofreading the text. We also thank the reviewers for their insightful comments.