

Design Beyond Devices: Creating Multimodal, Cross-Device Experiences
Ebook · 642 pages · 5 hours


About this ebook

Your customer has five senses and a small universe of devices. Why aren't you designing for all of them? Go beyond screens, keyboards, and touchscreens by letting your customer's humanity drive the experience—not a specific device or input type. Learn the techniques you'll need to build fluid, adaptive experiences for multiple inputs, multiple outputs, and multiple devices.

Language: English
Release date: Dec 1, 2020
ISBN: 9781933820491



    Book preview

    Design Beyond Devices - Cheryl Platz

    CHAPTER 1

    Creating the World We Want to Live In

    The Criticality of Context

    Defining Multimodality

    Fluidity over Rigidity

    Fix the System, Not the Person

    Leading with Anti-neutrality and Anti-racism

    A Gender Equity Lens

    Where No Design Has Gone Before

    Apply It Now

    The world as you know it is experiencing (yet another) seismic shift on all levels of society, and the field of human-computer interaction is no exception. You must broaden your understanding of what interaction truly means in order to adjust to this brave new world. When you do, you’ll find that definition is now inextricably linked with some of the forces that drive that transformed society. With great power comes great responsibility. (Thanks, Uncle Ben.)

    Until the advent of the smartphone, the shape of a human-computer interaction was fairly well defined. Your computer whirred to life and began the conversation with pictures and visual representations of words on a screen—and occasionally, even sound effects. The human expressed their intent, or response, with the help of a keyboard and a mouse.

    Thankfully, the available palette of interaction technologies has become far more sophisticated. The devices of today and tomorrow are capable of supporting far more than a single input and output. And yet so many of our devices continue to party as if it’s 1999!

    These new technologies allow today’s designers to broaden their toolkits and to choose the inputs and outputs best suited to their customers’ needs. Some systems already allow customers to choose between parallel input and output modalities on demand—a far more inclusive approach often referred to as multimodality.

    But as interactive technology became a darling of the consumer market, it also became indispensable to the systems that power human society. Your understanding of your customer’s lived experience is more critical than ever, as your experiences affect a majority of their waking lives. Your design decisions aren’t just about inputs and outputs anymore. Your decisions are about impacts and outcomes. Technologists have far more power than anyone anticipated years ago. How can we use our collective skills to create the world we owe to each other?

    The Criticality of Context

    I was fortunate enough to grow up in a tech enthusiast household (see Figure 1.1). From an early age, I developed an affection for the blinking black boxes scattered around the house. I even wrote my college essay about the topic. But I’m clearly no Nostradamus, as I didn’t see the blinking black cylinders of my Alexa-powered future looming in the mists of foreshadowing.

    Mind you, the devices in my home weren’t always blinking, particularly after I started reading the manuals. My parents had to contend with a precocious pre-kindergartner who set the VCR’s clock and corrected them when recordings weren’t set correctly. (As a service for most people reading this book, VCR stands for video cassette recorder. You didn’t miss much.)

    FIGURE 1.1

    The author, already drawn to the device her father used for computer programming at Amtrak.

    For decades, the experience I had as a child was the accepted model for interacting with technology:

    1. Purchase expensive hardware (or software).

    2. Read the manual.

    3. Sit within a few feet of the hardware’s input channels (buttons, keyboard, mouse, etc.).

    4. Perform a complex ritual of steps you memorized from the manual.

    Humans were—and in many ways, still are—expected to adapt their world and their life to their technology. (This is particularly true for those living with disabilities—we’ll talk more about this later in the chapter.)

    At the risk of rash generalization, surprisingly little consideration was given to contextual design questions like, Where is the device located? or even Can we assume our customers have 20/20 eyesight? We needed physical peripherals to interact with those devices, so it was safe to assume our customers were within arm’s reach of the device. Beyond that, most devices were heavy and needed power, so it was also safe to assume that a device was always in the vicinity of the same power outlet.

    Nowadays, everyone theoretically knows that things have changed. But has that knowledge changed the way that products are conceived, pitched, and designed? Not as much as you might think. Product teams are still reflexively applying all sorts of shortcuts and assumptions that aged out once laptops, batteries, touchscreens, cell signals, and hands-free devices entered the market.

    Context has become a critical component in any designer’s work. You can’t even assume that customers can see the blinking black box (or cylinder) anymore. Whether you’re designing for devices that travel (laptops, smartphones, fitness trackers, smartwatches, etc.) or designing for dynamic environments (open offices, family kitchens, and public spaces), the context in which interactions occur shapes the perceived utility and success of that experience.

    Part of my mission in this book is to encourage you to treat the customer and their context as stakeholders in your work. As interactions become more sophisticated, it’s risky and in some cases downright dangerous to disregard the tremendous impact a customer’s surroundings, relationships, and circumstances will have on the success of your experience.

    LEARN MORE

    Chapters 2, Capturing Customer Context, and 3, Understanding Busy Humans, provide you with tools for exploring and defining your customer’s context so that you can keep the human at the center of your human-computer interactions.

    Defining Multimodality

    What do the terms modality and multimodality even mean? Those words might be part of your motivation for reading this book. But have designers and technologists ever stopped to consider whether they’re all talking about the same thing?

    It turns out that this is a tricky question to answer. Merriam-Webster defines multimodal as "having or involving several modes, modalities, or maxima." That’s fairly general, to say the least. In order for that term to mean anything, a useful definition of the concept of a mode will be required. As it turns out, the terms mode and multimodality have been interpreted in many ways, even when limited to digital experiences:

    One input, many interpretations: The Nielsen Norman Group defines modes as "different interpretations of the user input by the system, depending on the state which is active. Same input, different results."¹ The example they provide is the Caps Lock key on a keyboard: when Caps Lock mode is engaged, the way a device interprets an alphabetic keystroke changes. As originally defined, multimodal didn’t apply to system output.

    Capability or functionality: The term mode is also more widely used to describe different capabilities or states, like airplane mode on a smartphone. Psychologists have defined different sensory modalities for humans: visual, auditory (sound), haptic (touch, movement), and proprioceptive (movement and orientation in space).

    Type of communication: Outside of the technology industry, the concept of a "mode of communication" is rather widely accepted. Modes of communication can include the written word, pictures, speech, movement, touch, and more.
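    The first interpretation above — one input, different results depending on the active state — can be sketched as a tiny state machine. This is a hypothetical illustration of the Caps Lock example, not code from the book:

```python
# Minimal sketch of a "mode": the same keystroke is interpreted
# differently depending on which state is currently active.
class Keyboard:
    def __init__(self):
        self.caps_lock = False  # the active "mode"

    def press(self, key: str) -> str:
        """Interpret a keystroke according to the current mode."""
        if key == "CapsLock":
            self.caps_lock = not self.caps_lock  # toggle the mode
            return ""
        return key.upper() if self.caps_lock else key.lower()

kb = Keyboard()
assert kb.press("a") == "a"  # Caps Lock off: lowercase
kb.press("CapsLock")         # engage the mode
assert kb.press("a") == "A"  # same input, different result
```

    Note that nothing about this definition involves system output — the "mode" lives entirely in how input is interpreted.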

    So where does this leave the industry? In the end, none of these definitions are inherently wrong. The word mode has many modes. In his 2019 keynote Moment Prisons at Interaction Latin America, Lou Rosenfeld challenged the industry to avoid becoming overattached to specific definitions, saying that it’s the context that provides meaning. The relationship between humans and their devices has evolved, and so has the context in which multimodal will be interpreted.

    In the context of this book, my definition of a multimodal interaction is an exchange between a device and a human being where multiple input or output modalities may be used simultaneously or sequentially depending upon context and preference.

    To ease future discussions, it makes sense to establish a common set of modalities that can be applied to both human-initiated input and device-initiated output (see Table 1.1).

    TABLE 1.1 COMMUNICATION MODALITIES

    Take note that the input and output modalities at any given moment in an interaction do not have to match. In fact, in some cases a mismatch might be desirable. For decades, our digital interactions have technically been out of sync: humans communicating via haptic input (mouse and keyboard), but receiving system output via visual and auditory modalities. We tapped and typed while computers glowed and beeped. And that’s not necessarily bad! It’s just not always aligned with the natural way that humans have evolved to interact.

    LEARN MORE

    You’ll learn more about the specific input and output manifestations for each of these interaction modalities in Chapter 5, The Language of Devices and Chapter 6, Expressing Intent.

    Fluidity over Rigidity

    When I first began working with smart speakers, I discovered a peculiar tension. On one hand, smart speakers were receiving rave reviews from customers and media alike as a new, inclusive step forward.

    And yet, in the first few weeks of my time as a voice designer on the Alexa voice user interface team at Amazon, I was confronted with evidence that contradicted that perception. In fact, early Echo devices weren’t fully accessible to all customers with disabilities. Certain key features (like onboarding) were only controllable via the app. These visual-only settings excluded customers with extremely limited vision and members of the Blind community.

    That led me to ponder a broader question: If even a smart speaker struggles with accessibility, how on earth are multimodal interfaces desirable? Ethical? Wouldn’t we be excluding more customers as we required the use of multiple input technologies, increasing the pool of folks who might encounter an obstacle?

    And there’s the trick. Requiring multiple inputs does exclude more potential customers. Those early Echo devices were implicitly requiring both auditory and haptic interaction to complete some core scenarios.

    But multimodality in its strongest form shouldn’t be about requiring multiple modes of input at all. The most powerful multimodal systems are those that support multiple inputs and outputs and allow fluid transitions between those modes of interaction as a customer’s context and needs change.
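    One way to picture such a system is a dispatcher in which every intent is reachable through any supported input modality, so no single modality is ever required. This is a hypothetical sketch under that assumption, not an API from the book:

```python
# Hypothetical sketch: a single user intent ("set a timer") reachable
# through any supported input modality, so a customer can transition
# fluidly between voice, touch, and keyboard as context changes.
from enum import Enum, auto

class Modality(Enum):
    VOICE = auto()
    TOUCH = auto()
    KEYBOARD = auto()

class MultimodalApp:
    def __init__(self):
        self.handlers = {}  # (intent, modality) -> handler

    def register(self, intent: str, modality: Modality, handler):
        self.handlers[(intent, modality)] = handler

    def handle(self, intent: str, modality: Modality, *args):
        handler = self.handlers.get((intent, modality))
        if handler is None:
            raise ValueError(f"{intent} not reachable via {modality.name}")
        return handler(*args)

app = MultimodalApp()
for m in Modality:  # the same intent, registered for every modality
    app.register("set_timer", m, lambda minutes: f"Timer set: {minutes} min")

assert app.handle("set_timer", Modality.VOICE, 5) == "Timer set: 5 min"
assert app.handle("set_timer", Modality.TOUCH, 5) == "Timer set: 5 min"
```

    The design choice worth noticing is that no modality is privileged: excluding a customer requires an intent to be *missing* a registration, which makes gaps auditable rather than implicit.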

    LEARN MORE

    Transitions between modalities are so important that there’s a whole chapter devoted to the concept of transitions—see Chapter 9, Lost in Transition.

    Fix the System, Not the Person

    Disabled customers have lived in a paradoxical state: they’ve long been pioneers of multimodality, using voice and haptic controls in unique ways when the default way of working wasn’t sufficient. But these multimodal experiences were almost always an afterthought; a pervasive check-the-box mentality led to subpar and disjointed multimodal interactions.

    We can’t afford to make that mistake again. We now have a chance to build new systems with disabled customers in mind from the ground up! What an exciting opportunity for a discipline of practitioners who love solving problems and a market powered by discovering new customers. However, this requires a desire to deeply understand the nature of disabilities and the disability culture.

    (Re)Defining Disability

    The definition of disability has evolved considerably in the past few decades. In the 1980s, a disability was largely defined in terms of impairment and missing capabilities. But the definition now acknowledges that a disability is the result of both a person’s physical capabilities and the systems that person must interact with on a regular basis. From the World Health Organization: "Today, disability is understood to arise from the interaction between a person’s health condition or impairment and the multitude of influencing factors in their environment."²

    This shift restores dignity to those who were made to feel like less of a person. It shifts the focus to fixing systemic exclusion rather than putting the burden on already burdened individuals to overcome the system.

    NOTE DISABILITY AND DIGNITY WITH LABELS

    Different individuals prefer different terminology when discussing a personal disability. Some prefer disabled person, while others prefer person with disabilities or even person living with disabilities. Some of my sources intentionally vary their language, but when dealing with a specific population, follow their lead. Avoid the use of the word handicapped in reference to a person, because it is outdated and not endorsed by the disability community.

    Accepting Constraint as a Gift

    As you explore input and output modalities in the next few chapters, you’ll see a number of notes about the potential customers who would be excluded from that mode of interaction due to situational, temporary, or permanent disabilities.

    Rather than accept these potential exclusions as limitations, look at them as invitations: invitations to lead with inclusivity. Where you find an exclusion, ask yourself this question: How might a robust multimodal experience use a different interaction to allow those excluded customers to complete a task?

    Your designs will be stronger if you begin these considerations at the start of your project when you’re prioritizing which modalities and support systems will be supported.

    Instead of viewing design for inclusion as a task, reframe the constraints presented by inclusive design as a gift. Don’t designers thrive on constraint? Aren’t most products considered to be solutions intended to solve a problem? There are many documented instances where inclusive design bore unexpected dividends well past those with disabilities. Here are a few examples:

    Curb cuts: The ramps cut into concrete curbs at intersections were designed to allow wheelchairs to access the crosswalk, but were also transformative for strollers, delivery carts, and bicycles.³

    Captioning on streaming media: Captions created for hearing impaired individuals led to the ability to watch video without headphones on a crowded bus and the ability to watch streams in noisy environments.

    Text messaging: Originally designed as a tool for hearing impaired individuals, text messaging fundamentally changed the way that humans interact around the globe.

    But don’t lose sight of the reason for the accommodations. The curb cut effect may benefit all of society, but it does bring with it the risk of re-marginalizing the people for whom the original accommodations were made. In a blog post, wheelchair user Scott Crawford wrote: I’ve photographed curb ramps that are little more than ‘decorative accents’ in an otherwise impassable sidewalk that isn’t safe for a mountain goat, much less a disabled person. Wheelchair users use such ramps only to find an obstacle down the sidewalk. We have to backtrack, feeling like a ‘rat in a maze.’

    Design Questioning for Systems Design

    Along these lines, members of the disability community rightly raise concerns about the way that design for disability has been framed in the past. In her 2019 talk at Interaction Week in Seattle, Liz Jackson shone a light on the hollow motivation behind some of today’s design work for people with disabilities.

    Liz called out the pathological altruism embodied by some of today’s design for disabilities and pointed out that the narrative often told is of a savior (the company or designer) and the beneficiary (the disabled person). That narrative diminishes the disabled person and removes their agency, sending a message that they were not competent enough to help themselves. In her words, We are nothing more than a prop in your stories.

    As an alternative, Liz encouraged design questioning. Rather than focusing on fixing the person and restoring a missing capability, you should question the systems that require the use of that capability in the first place.

    That’s tough medicine to swallow, I know. But this is one of the many reasons multimodal design is so exciting. Multimodal systems allow you to stop treating disabled folks as a prop or a checkbox. By focusing on a flexible multimodal approach that allows people to use different human capabilities to solve a single problem, you’re taking important steps toward shifting the narrative from accessibility for people who lack a capability to amplifying everyone’s inherent capabilities.

    Nothing About Us Without Us

    In her eye-opening book Giving Voice: Mobile Communication, Disability, and Inequality, Meryl Alper points out that the many projects that lead with technological adaptations for disabled people are missing broader systemic issues.

    My Disability Journey

    I now self-identify as disabled. After a work-related injury in 2018 (office ergonomics matter—even in temporary workspaces), I was referred to a physiatrist who diagnosed me with a genetic connective tissue disorder in addition to my injury.

    Suddenly, so much of my life snapped into place. My history of surprisingly traumatic joint injuries. Migraines, tumors, and other strange symptoms. And most importantly, it provided an explanation for my chronic pain and its resulting impact on my life, interfering with and sometimes preventing my participation in activities like driving and writing.

    The process of fully exploring this diagnosis will take a long time. Ironically, I’ve been living with that disability my whole life and just didn’t know that it had a name. There’s such a tendency—and even I have been guilty of this—of othering folks with disabilities.

    But we are all humans with varying spectrums of ability dealing with environmental conditions, known medical conditions, and as-yet unknown constraints. I encourage you to think about how you empower people of any capabilities, rather than trying to compensate for the loss of a standard capability.

    One popular attempt to honor the disability community’s rallying cry Nothing about us without us is to arrange accessibility design sprints, where customers who live with a relevant disability are invited for a whirlwind week of participatory design. But to Liz’s point earlier, those design sprints generally still frame disability as ancillary to the process, rather than integrating disability as a core consideration.

    As an individual who is either actively developing technology or curious about the process, your viewpoint is but one valuable perspective among many at the design table. To better equip your team with the creative perspectives needed to challenge existing systems, consider pursuing more durable representation from disabled voices as consultants, vendors, or full-time team members on your projects.

    LEARN MORE

    For examples of potential exclusion risks in multimodality, see Chapters 5 and 6. For more about inclusion in the design process, see Chapter 15, Should You Build It?

    Leading with Anti-neutrality and Anti-racism

    We live in a time where brave, radical inclusion is required of all of us in the technology sector. In this liminal state between the injustices of the past and the changes we yearn for—changes which will take years, decades—technology remains disproportionately influential.

    Darnella Frazier, the young Black woman who had the presence of mind to capture what became the tragic murder of George Floyd with her smartphone—changed the world. Technology both enabled that brave act and enabled the subsequent harassment she endured at the hands of racists and those numb to the intersection of police brutality and racism. It took weeks for Facebook employees to speak out when the President encouraged violence on their platform without objection from Facebook’s CEO. That same indifference led to the environment that empowered Darnella’s tormentors.

    It’s Time for Designers to #causeascene

    For too long, those with the luxury of power and platform (myself included) have failed to acknowledge that it’s not enough to assume neutrality fixes forms of discrimination and oppression woven into the fabric of our culture.

    Anti-racist economist and tech ethics advocate Dr. Kim Crayton created the #causeascene (Cause a Scene) hashtag on Twitter and the Cause a Scene podcast to deconstruct the harm that tech neutrality causes. The #causeascene framework provides four clear but critical guiding principles to help you minimize the harm your own work causes moving forward in these difficult times.

    The #CauseAScene Guiding Principles:

    Technology is not neutral. Choosing not to act is still a choice, and it reinforces existing systemic bias. On Twitter, heads of state and other people in power are allowed to abuse the platform in direct violation of terms of service that are applied to the rest of the platform’s audience. Every product decision you make can help or harm—but true neutrality is an illusion.

    Intention without strategy is chaos. How often have you heard I meant well after something goes wrong? Positive intent doesn’t neutralize actual harm. Strategy can take the form of steps to proactively mitigate potential harm, as well as a plan to listen to your customers, monitor impact, and respond immediately if unexpected harm is identified.

    Lack of inclusion is a risk management issue. This point is best illustrated by the 2020 Oscars, where despite earlier campaigns for greater racial diversity in nominees, both women and minorities were shut out of key categories—leading to the lowest ratings in Oscars’ history. Notably, awareness of the racial disparity in Oscar nominations was driven by the #OscarsSoWhite hashtag on Twitter as created by Black media activist and former lawyer April Reign.

    Prioritize the most vulnerable. It’s not enough to assume good intentions, and it’s not enough to stop with equal treatment. Today’s technologies and societal norms are built on a complex history of colonialism, discrimination, and oppression. Active effort must be taken in order to break these cycles—and it is what we owe each other.

    The principles outlined in Dr. Crayton’s #causeascene framework provide a useful framing mechanism for those who intend to move the needle on the trickiest, largest-scale problems facing society. Could the lives of trans men and women be saved if social media platforms prioritized their safety as some of the most vulnerable members of society? How would the 2016 U.S. elections have been different if Facebook had acknowledged that technology is not, in fact, neutral? Rather than wait until your idea is fully formed, work these principles into your daily practice.

    In Chapter 15, you’ll explore a lightweight framework to help you probe your own work for ethical red flags. But this point is important enough to make before you even begin your journey into multimodality: It is your responsibility as a technologist not just to ensure the systems you envision cause no harm, but that they actively reinforce the society you wish to see. From not-racist to anti-racist; from accessibility to radical inclusion.

    • How might your technology reinforce negative stereotypes and empower abusers?

    • How might your technology be used to silence the less powerful?

    • Do Black, Indigenous, and other people of color, as well as disabled people, have equal access to your technology? What happens if they don’t? How are they harmed by exclusion?

    • Would the introduction of this technology harm or disrupt support systems already in place?

    These are hard questions, and I’m no expert. In general, you should be talking to the members of these communities to ascertain the answers to these questions, rather than trying to guess as an outsider. Our record in the first few decades of the millennium proves that we, as a collective, are not great at acting in the best interests of those unlike ourselves. Inclusion means working together, not simply opening the door.

    LEARN MORE

    For more about potential sources of bias in artificial intelligence—some of the biggest risk areas in technology today—see Chapter 13, Beyond Devices: Human + AI Collaboration. For a lightweight framework with which to explore the feasibility, desirability, and potential impact of your project, see Chapter 15.

    Reconnecting with the Past

    Another important part of cultivating an anti-racist mindset is recognizing the relationship between colonialist practices of the past that led many of us to be where we are today—quite literally. Just as with the concept of reparations, the growing desire to confront our past is not an attempt to demonize today’s descendants many generations removed, but rather one tiny first step toward making those whose families were harmed by these practices whole again. It’s about learning from the past, not reliving it.

    Much of this book will talk about centering human context, systems design, and ethical design—practices well-aligned with the oft-challenging reflections about how to reorient our thinking away from outdated colonialist models. There are few corners of the world that are not affected by these models in some way. Designers looking to affect large-scale change or improve equity should include these reflections in their work, and can start by investigating the history of their own hometowns.

    This book was written within the traditional territory of Coast Salish peoples, specifically the Duwamish Tribe. Despite hundreds of years of broken treaties and setbacks, the Duwamish people continue to shine a light here in their ancestral lands as the host tribe of Seattle, all while continuing to petition for formal government recognition. I am grateful to the Duwamish tribe, past and present, and honor their land itself.

    A Gender Equity Lens

    Gender is a difficult topic. Some designers find themselves trapped: you want to do the right thing, but doesn’t gender equity mean making gender invisible? Not really—at least, not yet. It’s impossible to deny that the gender gap still exists in systems that surround us, from income to legislation. Ignoring gender during the design process risks perpetuating harms represented within those systems.

    NOTE WHAT IS GENDER?

    The Gates Gender Equality Toolbox defines gender as "the socially and culturally constructed ideas of what it is to be male or female in a specific context."

    But it is important to note that gender and gender identity, which includes all forms of nonbinary identities, are very personal topics. For many of you, the products you’re designing don’t necessarily need to request that information. That doesn’t mean you don’t need to understand the impact these factors may have on your customer’s experience. Remember, humans don’t put their identities on pause when they interact with digital experiences.

    At my current employer—the Bill & Melinda Gates Foundation—we are redoubling our efforts to look at all investments through a gender equity lens. I encourage you to do the same on your products.

    • How might your experience be perceived differently based on gender?

    • In what ways might your work unintentionally increase or perpetuate existing gender gaps?

    • Is there anything your work could do to mitigate or reduce existing gender gaps?

    Constraint is a gift—this mindset is not only the right thing to do from an inclusive design perspective, but it also increases the market viability of your work. A rising tide lifts all boats.

    Where No Design Has Gone Before

    With the advent of smart speakers, some in the industry have rallied behind the cries of Voice First or Voice Only—implying that all interactions should inherently shift to become voice-dominant experiences. I love the passion represented by those hashtags—and certainly, voice interfaces represent a significant portion of the untapped opportunity in the industry. But in my perspective, focusing on one interaction modality at the exclusion of others is just changing the problem your customers face, not solving that problem.

    While voice user interfaces do extend a newly welcoming hand to customers with mobility and vision impairments, they risk leaving others behind. This is why I don’t believe the future is truly voice-only or even voice-first. There’s no one interaction model that includes everyone equally. When you overemphasize one type of interaction, designs become siloed, and other interactions and customers suffer.

    The future is multimodal—support for fluid transitions between various input and output modalities will ensure that everyone is an equal participant in our technological future.

    But that shouldn’t be disappointing. Actually, this is fantastic news for those energized by interesting, complex design problems. The future is as unwritten as it’s ever been. There are so many possibilities, and you’re at the helm of a ship pointed toward that seemingly limitless horizon. Which way will you sail? Consider this book as your navigational guide for that great blue horizon.

    This book can’t serve as a one-stop manual for all of your product design needs. This book exists because the increased complexity of experiences now demands new, bigger-picture, holistic frameworks and systems upon which you can layer the types of design work you’ve done in the past. You’ll still need voice designs. You’ll still need graphical user interfaces. But they will co-exist with other modalities, and you’ll need systems for connecting those experiences in a coherent, consistent way. Design Beyond Devices will shine light onto new techniques you can adopt as your product or service moves boldly into the final frontier of experience design.

    Apply It Now

    You may say that you’re not quite ready for multimodality in your product or experience. But here’s the truth: for some of your customers, your experience is already multimodal. No one input modality satisfies all customer needs. But if you’re not intentionally supporting these customers, their experience is likely to be fraught with friction as they’re driven to third-party tools and workarounds.

    Before exploring the rest of this book, take a moment to reflect on your current or most recent product:

    • Do you understand the extra tools that customers must use to address modalities not natively supported in your experience?

    • What happens as your current customers age and their needs change? Will your experience, as designed, still support them?

    • Does your roadmap include more direct support for these excluded customers?

    • How does your experience interact with the other people, devices, and systems in your customer’s life? Is this interaction positive or harmful?

    Odds are that you’re already working on a multimodal product, but it’s much harder to say you’re consciously guiding those divergent customer experiences within the full context of your customer’s world. In this book, you’ll learn the tools and techniques you need to take ownership of that experience, long shrouded in shadow.

    CHAPTER 2

    Capturing Customer Context

    Blending Theatricality with Design

    Storytelling for Design

    The Building Blocks of Storytelling

    Story as Shared Understanding

    Apply It Now

    To design for the real world, you must first understand it. Now that you can no longer make the assumption that your customer is sitting quietly at a desktop PC, you need to understand the broader context in which your customers live their lives and use your products. You must tell stories. You must tell human stories. You must bring the context of use to life—not just for yourself, but for all of your peers and decision-makers.

    As you move toward designing experiences that break out beyond the boundaries of any single device, there’s no switch you can flip in your brain to reset your assumptions. You’ll need to develop a new muscle: contextual curiosity. To begin working this muscle, you can
