MIT Media Lab
Projects | Spring 2016
Many of the MIT Media Lab research projects described in the following pages are conducted under the auspices of sponsor-supported,
interdisciplinary Media Lab centers, joint research programs, and initiatives. They are:
Autism & Communication Technology Initiative
The Autism & Communication Technology Initiative utilizes the unique features of the Media Lab to foster the development of innovative
technologies that can enhance and accelerate the pace of autism research and therapy. Researchers are especially invested in creating
technologies that promote communication and independent living by enabling non-autistic people to understand the ways autistic people are
trying to communicate; improving autistic people's ability to use receptive and expressive language along with other means of functional,
non-verbal expression; and providing telemetric support that reduces reliance on caregivers' physical proximity, yet still enables enriching and
natural connectivity as wanted and needed.
Center for Civic Media
Communities need information to make decisions and take action: to provide aid to neighbors in need, to purchase an environmentally
sustainable product and shun a wasteful one, to choose leaders on local and global scales. Communities are also rich repositories of information
and knowledge, and often develop their own innovative tools and practices for information sharing. Existing systems to inform communities are
changing rapidly, and new ecosystems are emerging where old distinctions like writer/audience and journalist/amateur have collapsed. The Civic
Media group is a partnership between the MIT Media Lab and Comparative Media Studies at MIT. Together, we work to understand these new
ecosystems and to build tools and systems that help communities collect and share information and connect that information to action. We work
closely with communities to understand their needs and strengths, and to develop useful tools together using collaborative design principles. We
particularly focus on tools that can help amplify the voices of communities often excluded from the digital public sphere and connect them with
new audiences, as well as on systems that help us understand media ecologies, augment civic participation, and foster digital inclusion.
Center for Extreme Bionics
Half of the world's population currently suffers from some form of physical or neurological disability. At some point in our lives, it is all too likely
that a family member or friend will be struck by a limiting or incapacitating condition, from dementia to the loss of a limb to a debilitating disease
such as Parkinson's. Today we acknowledge and even "accept" serious physical and mental impairments as inherent to the human condition.
But must these conditions be accepted as "normal"? What if, instead, through the invention and deployment of novel technologies, we could
control biological processes within the body in order to repair such conditions or even eradicate them? What if there were no such thing as human disability?
These questions drive the work of Media Lab faculty members Hugh Herr and Ed Boyden, and MIT Institute Professor Robert Langer, and have
led them and the MIT Media Lab to propose the establishment of a new Center for Extreme Bionics. This dynamic new interdisciplinary
organization will draw on the existing strengths of research in synthetic neurobiology, biomechatronics, and biomaterials, combined with
enhanced capabilities for design development and prototyping.
Center for Mobile Learning
The Center for Mobile Learning invents and studies new mobile technologies to promote learning anywhere, anytime, for anyone. The Center
focuses on mobile tools that empower learners to think creatively, collaborate broadly, and develop applications that are useful to themselves
and others around them. The Center's work covers location-aware learning applications, mobile sensing and data collection, augmented reality
gaming, and other educational uses of mobile technologies. The Center's first major activity will focus on App Inventor, a programming system
that makes it easy for learners to create mobile apps by fitting together puzzle piece-shaped blocks in a web browser.
Communications Futures Program
The Communications Futures Program conducts research on industry dynamics, technology opportunities, and regulatory issues that form the
basis for communications endeavors of all kinds, from telephony to RFID tags. The program operates through a series of working groups led
jointly by MIT researchers and industry collaborators. It is highly participatory, and its agenda reflects the interests of member companies that
include both traditional stakeholders and innovators. It is jointly directed by Dave Clark (CSAIL), Charles Fine (Sloan School of Management),
and Andrew Lippman (Media Lab).
The most current information about our research is available on the MIT Media Lab Web site, at http://www.media.mit.edu/research/.
April 2016
Page i
The Lab has also organized the following special interest groups (SIGs), which deal with particular subject areas.
Advancing Wellbeing
In contributing to the digital revolution, the Media Lab helped fuel a society where increasing numbers of people are obese, sedentary, and glued
to screens. Our online culture has promoted meaningfulness in terms of online fame and numbers of viewers, and converted time previously
spent building face-to-face relationships into interactions online with people who may not be who they say they are. What we have helped to
create, willingly or not, often diminishes the social-emotional relationships and activities that promote physical, mental, and social health.
Moreover, our workplace culture escalates stress, provides unlimited caffeine, distributes nutrition-free food, holds back-to-back sedentary
meetings, and encourages overnight hackathons and unhealthy sleep behavior. Without being dystopian about technology, this effort aims to
spawn a series of projects that leverage the many talents and strengths in the Media Lab in order to reshape technology and our workplace to
enhance health and wellbeing.
CE 2.0
Most of us are awash in consumer electronics (CE) devices: from cellphones, to TVs, to dishwashers. They provide us with information,
entertainment, and communications, and assist us in accomplishing our daily tasks. Unfortunately, most are not as helpful as they could and
should be; for the most part, they are dumb, unaware of us or our situations, and often difficult to use. In addition, most CE devices cannot
communicate with our other devices, even when such communication and collaboration would be of great help. The Consumer Electronics 2.0
initiative (CE 2.0) is a collaboration between the Media Lab and its sponsor companies to formulate the principles for a new generation of
consumer electronics that are highly connected, seamlessly interoperable, situation-aware, and radically simpler to use. Our goal is to show that
as computing and communication capability seep into more of our everyday devices, these devices do not have to become more confusing and
complex, but rather can become more intelligent in a cooperative and user-friendly way.
City Science
The world is experiencing a period of extreme urbanization. In China alone, 300 million rural inhabitants will move to urban areas over the next
15 years. This will require building an infrastructure equivalent to the one housing the entire population of the United States in a matter of a few
decades. In the future, cities will account for nearly 90 percent of global population growth, 80 percent of wealth creation, and 60 percent of total
energy consumption. Developing better strategies for the creation of new cities is, therefore, a global imperative. Our need to improve our
understanding of cities, however, is pressed not only by the social relevance of urban environments, but also by the availability of new strategies
for city-scale interventions that are enabled by emerging technologies. Leveraging advances in data analysis, sensor technologies, and urban
experiments, City Science will provide new insights into creating a data-driven approach to urban design and planning. To build the cities
the world needs, we need a scientific understanding of cities that considers both our built environments and the people who inhabit them.
Code2b
Code2b, a Media Lab collaboration with Google, aims to create a new generation of computer scientists, innovators, and inventors and have
them emerge from the underserved 8-12th grade Black and Latino populations. The pilot launched in January 2016 with two laboratories, one in
NYC and one in Oakland. Curricula are being developed by the Media Lab. Code2b's first year of tutorials and maker activities focuses on
several domains: fabrication and design, digital music and interactive media, and game design. Our toolbox includes laser cutters, 3D printers,
Scratch, Makey Makey, and Arduino. In the second year, we will introduce Python, Raspberry Pi, BeagleBone, and emphasize making code to
make things that make things. Learning domains will emphasize computational design, mechatronics, robotics, web design, web technology,
and 2D and 3D design. In addition, we teach parents technology and provide academic enrichment to our students. We will have four
successive cohorts of freshmen (2016, 2017, 2018, 2019).
Connection Science
As more of our personal and public lives become infused and shaped by data from sensors and computing devices, the lines between the digital
and the physical have become increasingly blurred. New possibilities arise, some promising, others alarming, but both with an inexorable
momentum that is supplanting time-honored practices and institutions. MIT Connection Science is a cross-disciplinary effort drawing on the
strengths of faculty, departments and researchers across the Institute, to decode the meaning of this dynamic, at times chaotic, new
environment. The initiative will help business executives, investors, entrepreneurs and policymakers capitalize on the multitude of opportunities
unlocked by the new hyperconnected world we live in.
Digital Currency Initiative (DCI)
The Internet enabled people to easily call each other without a phone company, send a document without a mail carrier, or publish an article
without a newspaper. As a result, more than 2.9 billion people depend on a decentralized communications protocol, the Internet, to more
efficiently communicate with one another. Similarly, cryptocurrencies like bitcoin enable permission-less innovation for entrepreneurs and
technologists to build world-changing applications that answer the demand for global transactions that has been created by global
communication. The Digital Currency Initiative strives to be a neutral leader of world-class research to push the boundaries of knowledge around
cryptocurrency and its underlying distributed ledger technology. We seek to clarify the real-world impact of these technologies, inspired by their
potential for public good and mindful of the risks and ethical questions attached to them. We act in support of the MIT and open-source
cryptocurrency communities and yet are open to collaborating with all sectors of society.
Emerging Worlds
The Emerging Worlds SIG is focused on emerging opportunities to address pressing challenges, and leapfrog existing solutions. Emerging
Worlds are vibrant ecosystems where we are rolling out new and innovative citizen-based technologies using a framework that supports the
wide-ranging needs of urban populations. It is a co-innovation initiative to solve problems in areas such as health, education, financial inclusion,
food and agriculture, housing, transportation, and local business.
Ethics
The Ethics Initiative works to foster multidisciplinary program designs and critical conversations around ethics, wellbeing, and human flourishing.
The initiative seeks to create collaborative platforms for scientists, engineers, artists, and policy makers to optimize designing for humanity.
Future of News
The Future of News is designing, testing, and making creative tools that help newsrooms adapt in a time of rapid change. As traditional news
models erode, we need new models and techniques to reach a world hungry for news, but whose reading and viewing habits are increasingly
splintered. Newsrooms need to create new storytelling techniques, recognizing that the way users consume news continues to change. Readers
and viewers expect personalized content, deeper context, and information that enables them to influence and change their world. At the same
time, newsrooms are seeking new ways to extend their influence, to amplify their message by navigating new paths for readers and viewers,
and to find new methods of delivery. To tackle these problems, we will work with Media Lab students and the broader MIT community to identify
promising projects and find newsrooms across the country interested in beta-testing those projects.
Future Storytelling
The Future Storytelling working group at the Media Lab is rethinking storytelling for the 21st century. The group takes a new and dynamic
approach to how we tell our stories, creating new methods, technologies, and learning programs that recognize and respond to the changing
communications landscape. The group builds on the Media Lab's more than 25 years of experience in developing society-changing technologies
for human expression and interactivity. By applying leading-edge technologies to make stories more interactive, improvisational, and social,
researchers are working to transform audiences into active participants in the storytelling process, bridging the real and virtual worlds, and
allowing everyone to make and share their own unique stories. Research also explores ways to revolutionize imaging and display technologies,
including developing next-generation cameras and programmable studios, making movie production more versatile and economical.
Media Lab Learning
The Media Lab Learning initiative explores new approaches to learning. We study learning across many dimensions, ranging from neurons to
nations, from early childhood to lifelong scholarship, and from human creativity to machine intelligence. The program is built around a cohort of
learning innovators from across the diverse Media Lab groups. We are designing tools and technologies that change how, when, where, and
what we learn; and developing new solutions to enable and enhance learning everywhere, including at the Media Lab itself. In addition to
creating tools and models, the initiative provides non-profit and for-profit mechanisms to help promising innovations to scale.
Open Agriculture (OpenAG)
The MIT Media Lab Open Agriculture (OpenAG) initiative is on a mission to create healthier, more engaging, and more inventive future food
systems. We believe the precursor to a healthier and more sustainable food system will be the creation of an open-source ecosystem of food
technologies that enable and promote transparency, networked experimentation, education, and hyper-local production. The OpenAG Initiative
brings together partners from industry, government, and academia to develop an open-source "food tech" research collective for the creation of
the global agricultural hardware, software, and data commons. Together we will build collaborative tools and open technology platforms for the
exploration of future food systems.
Pixel Factory
Data is ubiquitous in a world where our understanding of it is not. The Pixel Factory is a special interest group working to help people
understand their data by making tools to transform data into stories. The Pixel Factory is led by the Macro Connections group, a group
experienced in the creation of data visualization engines, including The Observatory of Economic Complexity (atlas.media.mit.edu), Immersion
(immersion.media.mit.edu), and Pantheon (pantheon.media.mit.edu).
Terrestrial Sensing
The deeply symbiotic relationship between our planet and ourselves is increasingly mediated by technology. Ubiquitous, networked sensing has
provided the earth with an increasingly sophisticated electronic nervous system. How we connect with, interpret, visualize, and use the
geoscience information shared and gathered is a deep challenge, with transformational potential. The Center for Terrestrial Sensing aims to
address this challenge.
Ultimate Media
Visual media has irretrievably lost its lock on the audience but has gained unprecedented opportunity to evolve the platform by which it is
communicated and to become integrated with the social and data worlds in which we live. Ultimate Media is creating a platform for the invention,
creation, and realization of new ways to explore and participate in the media universe. We apply extremes of access, processing, and interaction
to build new media experiences and explorations that permit instant video blogging, exploration of the universe of news and narrative
entertainment, and physical interfaces that allow people to collaborate around media.
bioLogic
HydroMorph
Inflated Appetite
inFORM
jamSheets: Interacting with Thin Stiffness-Changing Material
LineFORM
MirrorFugue
Pneuduino
Pneumatic Shape-Changing Interfaces
Radical Atoms
TRANSFORM
TRANSFORM: Adaptive and Dynamic Furniture
A Multi-Sensor Wearable Device for Analyzing Stress Response in Preschool Classrooms
Bicycle Study
Big Data for Small Places
Computational Scope and Sequence for a Montessori Learning Environment
Microculture
Proximity Networks
Storyboards
The Dog Programming Language
"Kind and Grateful": Promoting Kindness and Gratitude with Pervasive Technology
Affective Response to Haptic Signals
An EEG and Motion-Capture Based Expressive Music Interface for Affective Neurofeedback
Automated Tongue Analysis
Automatic Stress Recognition in Real-Life Settings
Autonomic Nervous System Activity in Epilepsy
BrightBeat: An On-Screen Intervention for Regulating Breathing
Building the Just-Right-Challenge in Games and Toys
EDA Explorer
Fathom: Probabilistic Graphical Models to Help Mental Health Counselors
FEEL: A Cloud System for Frequent Event and Biophysiological Signal Labeling
IDA: Inexpensive Networked Digital Stethoscope
Large-Scale Pulse Analysis
Lensing: Cardiolinguistics for Atypical Angina
6D Display
A Switchable Light-Field Camera
AnEye: Extending the Reach of Anterior Segment Ophthalmic Imaging
Beyond the Self-Driving Car
Blind and Reference-Free Fluorescence Lifetime Estimation via Consumer Time-of-Flight Sensors
Bokode: Imperceptible Visual Tags for Camera-Based Interaction from a Distance
CATRA: Mapping of Cataract Opacities Through an Interactive Approach
Coded Computational Photography
Coded Focal Stack Photography
Compressive Light-Field Camera: Next Generation in 3D Photography
Eyeglasses-Free Displays
Health-Tech Innovations with Tata Trusts, Mumbai
Hyderabad Eye Health Collaboration with LVP
Imaging Behind Diffusive Layers
Imaging through Scattering Media Using Femtophotography
Inverse Problems in Time-of-Flight Imaging
Layered 3D: Glasses-Free 3D Printing
LensChat: Sharing Photos with Strangers
Looking Around Corners
Nashik Smart Citizen Collaboration with TCS
NETRA: Smartphone Add-On for Eye Tests
New Methods in Time-of-Flight Imaging
Optical Brush: Enabling Deformable Imaging Interfaces
PhotoCloud: Personal to Shared Moments with Angled Graphs of Pictures
Polarization Fields: Glasses-Free 3DTV
Portable Retinal Imaging
Reflectance Acquisition Using Ultrafast Imaging
Second Skin: Motion Capture with Actuated Feedback for Motor Learning
Shield Field Imaging
Single Lens Off-Chip Cellphone Microscopy
Single-Photon Sensitive Ultrafast Imaging
Skin Perfusion Photography
Slow Display
SpeckleSense
SpecTrans: Classification of Transparent Materials and Interactions
StreetScore
Tensor Displays: High-Quality Glasses-Free 3D TV
The Next 30 Years of VR
Theory Unifying Ray and Wavefront Lightfield Propagation
Time-of-Flight Microwave Camera
Towards In-Vivo Biopsy
Trillion Frames Per Second Camera
April 2016
Page ix
A-pops
App Inventor
Build in Progress
Clubhouse Village
Computer Clubhouse
Duct Tape Network
Family Creative Learning
Learning Creative Learning
Learning with Data
Lemann Creative Learning Program
Making Learning Work
Media Lab Digital Certificates
Media Lab Virtual Visit
ML Open
Para
Peer 2 Peer University
Read Out Loud
Scratch
Scratch Data Blocks
Scratch Day
Scratch Extensions
ScratchJr
Spin
Start Making!
Unhangout
Activ8
Amphibian: Terrestrial SCUBA Diving Simulator Using Virtual Reality
Meta-Physical-Space VR
NailO
OnTheGo
SensorTape: Modular and Programmable 3D-Aware Dense Sensor Network on a Tape
Spotz
Tattio
Variable Reality: Interaction with the Virtual Book
421. GAMR
422. Hello, Operator!
423. Homeostasis
424. MicroPsi: An Architecture for Motivated Cognition
425. radiO_o
426. Sneak: A Hybrid Digital-Physical Tabletop Game
427. Soft Exchange: Interaction Design with Biological Interfaces
428. Storyboards
429. Troxes
1.
3D Telepresence Chair
Daniel Novy
An autostereoscopic (no glasses) 3D display engine is combined with a "Pepper's Ghost" setup to
create an office chair that appears to contain a remote meeting participant. The system geometry
is also suitable for other applications, such as tabletop or automotive heads-up displays.
2.
4K/8K Comics

3.
8K Time Machine
NEW LISTING

4.
Aerial Light-Field Display
V. Michael Bove, Daniel Novy and Henry Holtzman (Samsung NExD Lab)
Suitable for anywhere a "Pepper's Ghost" display could be deployed, this display adds 3D with motion parallax, as well as optically relaying the image into free space such that gestural and haptic interfaces can be used to interact with it. The current version is able to display a person at approximately full size.

5.
BigBarChart
BigBarChart is an immersive, 3D bar chart that provides a new physical way for people to interact with data. It takes data beyond visualizations to map out a new area--data experiences--that are multisensory, embodied, and aesthetic interactions. BigBarChart is made up of a number of bars that extend up to 10 feet to create an immersive experience. Bars change height and color in response to interactions that are direct (a person entering the room), tangible (pushing down on a bar to get meta-information), or digital (controlling bars and performing statistical analyses through a tablet). BigBarChart helps both scientists and the general public understand information from a new perspective. Early prototypes are available.
6.
Bottles&Boxes: Packaging with Sensors
We have added inexpensive, low-power, wireless sensors to product packages to detect user interactions with products. Thus, a bottle can register when and how often its contents are dispensed (and generate side effects, like causing a music player to play music when the bottle is picked up, or generating an automatic refill order when near-emptiness is detected). A box can understand usage patterns of its contents. Consumers can vote for their favorites among several alternatives simply by handling them more often.

7.
Calliope
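The interaction model above is essentially event-driven: a sensed package emits events (picked up, dispensed), and side effects are attached as handlers. A minimal sketch of that pattern, with every class name, threshold, and handler invented purely for illustration:

```python
# Hypothetical sketch of a sensed bottle: handlers attach side effects such
# as playing music on pickup or triggering a refill order near emptiness.

class SensedBottle:
    def __init__(self, capacity_ml, reorder_threshold_ml=50):
        self.capacity_ml = capacity_ml
        self.remaining_ml = capacity_ml
        self.reorder_threshold_ml = reorder_threshold_ml
        self.handlers = {"picked_up": [], "dispensed": [], "refill_needed": []}
        self.dispense_log = []  # timestamps of dispense events

    def on(self, event, handler):
        self.handlers[event].append(handler)

    def _emit(self, event, **data):
        for handler in self.handlers[event]:
            handler(**data)

    def picked_up(self, timestamp):
        self._emit("picked_up", timestamp=timestamp)

    def dispensed(self, timestamp, amount_ml):
        self.remaining_ml = max(0, self.remaining_ml - amount_ml)
        self.dispense_log.append(timestamp)
        self._emit("dispensed", timestamp=timestamp, amount_ml=amount_ml)
        if self.remaining_ml <= self.reorder_threshold_ml:
            self._emit("refill_needed", remaining_ml=self.remaining_ml)

bottle = SensedBottle(capacity_ml=500)
bottle.on("picked_up", lambda timestamp: print("play music"))
bottle.on("refill_needed",
          lambda remaining_ml: print(f"reorder ({remaining_ml} ml left)"))
bottle.picked_up(timestamp=0.0)
bottle.dispensed(timestamp=1.0, amount_ml=470)
```

The dispense log is what would let consumers "vote" by handling: products accumulate interaction counts that can be compared across alternatives.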
Calliope is the follow-up to the NeverEnding Drawing Machine. A portable, paper-based platform
for interactive story making, it allows physical editing of shared digital media at a distance. The
system is composed of a network of creation stations that seamlessly blend analog and digital
media. Calliope documents and displays the creative process with no need to interact directly with
a computer. By using human-readable tags and allowing any object to be used as material for
creation, it offers opportunities for cross-cultural and cross-generational collaboration among peers
with expertise in different media.
8.
Consumer Holo-Video
V. Michael Bove Jr., Bianca Datta, Sundeep Jolly, Nickolaos Savidis and Daniel Smalley (BYU)
The goal of this project, building upon work begun by Stephen Benton and the Spatial Imaging
group, is to enable consumer devices such as tablets, phones, or glasses to display holographic
video images in real time, suitable for entertainment, engineering, telepresence, or medical
imaging. Our research addresses real-time scene capture and transmission, computational
strategies, display technologies, interaction models, and applications.
Alumni Contributors: James D. Barabas, Ermal Dreshaj, Daniel Smalley and Quinn Y J Smithwick
9.
Dressed in Data

10.
DUSK

11.
Emotive Materials
NEW LISTING

12.
EmotiveModeler: An Emotive Form Design CAD Tool
Whether or not we're experts in the design language of objects, we have an unconscious understanding of the emotional character of their forms. EmotiveModeler integrates knowledge about our emotive perception of shapes into a CAD tool that uses descriptive adjectives as an input to aid both expert and novice designers in creating objects that can communicate emotive character.

13.
Following upon work begun in the Graspables project, we are exploring what happens when a wide range of everyday consumer products can sense, interpret into human terms (using pattern recognition methods), and retain memories, such that users can construct a narrative with the aid of the recollections of the "diaries" of their sporting equipment, luggage, furniture, toys, and other items with which they interact.
14.
Free-Space Haptic Feedback for 3D Displays
NEW LISTING
15.
Guided-Wave Light Modulator for Holographic Video
V. Michael Bove Jr., Bianca Datta, Sunny Jolly, Nickolaos Savidis and Daniel Smalley (BYU)
We are developing inexpensive, efficient, high-bandwidth light modulators based on lithium
niobate guided-wave technology. These full-color modulators support hundreds of thousands of
pixels per scan line, making them suitable for fixed or wearable holographic displays.
Alumni Contributors: Daniel Smalley and Quinn Smithwick
16.
Infinity-by-Nine

17.
ListenTree: Audio-Haptic Display in the Natural Environment

18.
Live Objects
A Live Object is a small device that can stream media content wirelessly to nearby mobile devices without an Internet connection. Live Objects are associated with real objects in the environment, such as an art piece in a museum, a statue in a public space, or a product in a store. Users exploring a space can discover nearby Live Objects and view content associated with them, as well as leave comments for future visitors. The mobile device retains a record of the media viewed (and links to additional content), while the objects can retain a record of who viewed them. Future extensions will look into making the system more social, exploring game applications such as media scavenger hunts built on top of the platform, and incorporating other types of media such as live and historical data from sensors associated with the objects.
19.
Narratarium
V. Michael Bove Jr., Fransheska Colon, Catherine Havasi, Katherine (Kasia) Hayden, Daniel
Novy, Jie Qi and Robert H. Speer
Narratarium augments printed and oral stories and creative play by projecting immersive images
and sounds. We are using natural language processing to listen to and understand stories being
told, and analysis tools to recognize activity among sensor-equipped objects such as toys, then
thematically augmenting the environment using video and sound. New work addresses the
creation and representation of audiovisual content for immersive story experiences and the
association of such content with viewer context.
20.
Networked Playscapes: Dig Deep
Networked Playscapes re-imagine outdoor play by merging the flexibility and fantasy of the digital world with the tangible, sensorial properties of physical play to create hybrid interactions for the urban environment. Dig Deep takes the classic sandbox found in children's playgrounds and merges it with the common fantasy of "digging your way to the other side of the world" to create a networked interaction in tune with child cosmogony.

21.
Pillow-Talk
Pillow-Talk is the first of a series of objects designed to aid creative endeavors through the unobtrusive acquisition of unconscious, self-generated content to permit reflexive self-knowledge. Composed of a seamless recording device embedded in a pillow and a playback and visualization system in a jar, Pillow-Talk crystallizes that which we normally forget. This allows users to capture their dreams in a less mediated way, aiding recollection by priming the experience and minimizing distraction during recall and capture through embodied interaction.
22.
Printed Wearable Holographic Display
NEW LISTING
V. Michael Bove, Bianca Datta, Sunny Jolly, Nickolaos Savidis and Daniel Smalley (BYU)
Holographic displays offer many advantages, including comfort and maximum realism. In this project we adapt our guided-wave light-modulator technology to see-through lenses to create a wearable 3D display suitable for augmented or virtual reality applications. As part of this work we also are developing a femtosecond-laser-based process that can fabricate the entire device by "printing."

23.
Programmable Synthetic Hallucinations
Yosuke Bando, Daniel Dubois, Konosuke Watanabe, Arata Miyamoto, Henry Holtzman, and V. Michael Bove
We are creating consumer-grade appliances and authoring methodologies that will allow hallucinatory phenomena to be programmed and utilized for information display and narrative storytelling.

24.
ShAir
ShAir is a platform for instantly and easily creating local content-shareable spaces without requiring an Internet connection or location information. ShAir-enabled devices can opportunistically communicate with other mobile devices and optional pervasive storage devices such as WiFi SD cards whenever they enter radio range of one another. Digital content can hop through devices in the background without user intervention. Applications that can be built on top of the platform include ad-hoc photo/video/music sharing and distribution, opportunistic social networking and games, digital business card exchange during meetings and conferences, and local news article-sharing on trains and buses.
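The content-hopping behavior described for ShAir can be pictured as a set merge whenever two devices come within radio range. A toy sketch (device names and data are invented; the real platform handles radio discovery, storage, and background transfer):

```python
# Toy simulation of opportunistic content hopping: whenever two devices
# meet, each copies the items it lacks, so content spreads with no server.

def exchange(device_a, device_b):
    """Symmetric sync when two devices enter radio range of each other."""
    merged = device_a["items"] | device_b["items"]
    device_a["items"] = set(merged)
    device_b["items"] = set(merged)

alice = {"name": "alice", "items": {"photo1"}}
bob   = {"name": "bob",   "items": {"song1"}}
carol = {"name": "carol", "items": set()}

# alice meets bob; later bob meets carol. carol ends up with alice's photo
# without ever meeting alice -- the content hopped through bob.
exchange(alice, bob)
exchange(bob, carol)
print(sorted(carol["items"]))  # -> ['photo1', 'song1']
```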
25.

26.
Smell Narratives

27.
SurroundVision

28.
V. Michael Bove, Laura Perovich, Don Blair and Sara Wiley (Northeastern University)
Two of the most important traits of environmental hazards today are their invisibility and the fact that they are experienced by communities, not just individuals. Yet we don't have a good way to make hazards like chemical pollution visible and intuitive. The thermal fishing bob seeks to visceralize rather than simply visualize data by creating a data experience that makes water pollution data present. The bob measures water temperature and displays that data by changing color in real time. Data is also logged to be physically displayed elsewhere and can be further recorded using long-exposure photos. Making environmental data experiential and interactive will help both communities and researchers better understand pollution and its implications.
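The bob's real-time temperature-to-color behavior amounts to mapping a sensor reading onto a blue-to-red gradient. A minimal sketch, assuming a linear mapping over an invented 5-30 degrees Celsius range (the actual device's range and mapping are not specified here):

```python
# Hedged sketch of a thermal-fishing-bob color mapping: cold readings skew
# blue, warm readings skew red. Range and linearity are assumptions.

def temperature_to_rgb(temp_c, cold_c=5.0, hot_c=30.0):
    """Linearly interpolate blue -> red across [cold_c, hot_c] Celsius."""
    t = (temp_c - cold_c) / (hot_c - cold_c)
    t = min(1.0, max(0.0, t))          # clamp readings outside the range
    return (int(255 * t), 0, int(255 * (1 - t)))  # (red, green, blue)

print(temperature_to_rgb(27.5))  # warm reading: mostly red
print(temperature_to_rgb(7.5))   # cold reading: mostly blue
```

Logging each (timestamp, temperature, color) tuple would support the long-exposure and offline displays the description mentions.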
29.
Cognitive Integration: The Nature of the Mind
NEW LISTING

30.
31.
32.
Optogenetics and Synthetic Biology Tools
Nikita Pak, Christian Wentz, Yongxin Zhao, Joel Dapello, Nir Grossman
We have pioneered the development of fully genetically encoded reagents that, when targeted to specific cells, enable their physiology to be controlled via light, as well as other specific manipulations of cellular biological processes. Optogenetic tools enable temporally precise control of neural electrical activity, cellular signaling, and other high-speed physiological processes using light. Other tools we are developing enable the control and monitoring of protein translation and other key cell biological processes. Such tools are being explored throughout neuroscience and bioengineering, for the study and repair of brain circuits. Derived from the natural world, these tools highlight the power of ecological diversity in yielding technologies for analyzing biological complexity and addressing human health. We distribute these tools as freely as possible.

New technologies for recording neural activity, controlling neural activity, or building brain circuits may someday serve in therapeutic roles for improving the health of human patients: enabling the restoration of lost senses, the control of aberrant or pathological neural dynamics, and the augmentation of neural circuit computation through prosthetic means. High-throughput molecular and physiological analysis methods may also open up new diagnostic possibilities. We are assessing, often in collaboration with other groups, the translational possibilities opened up by our technologies, exploring their safety and efficacy in multiple animal models in order to discover potential applications of our tools to various clinically relevant scenarios. New kinds of "brain co-processor" may be possible that can work efficaciously with the brain to augment its computational abilities, e.g., in the context of cognitive, emotional, sensory, or motor disability.

Shahar Alon, Shoh Asano, Jae-Byum Chang, Fei Chen, Amauche Emenari, Linyi Gao, Rui Gao, Dan Goodwin, Grace Huynh, Louis Kang, Manos Karagiannis, Adam Marblestone, Andrew Payne, Paul Reginato, Sam Rodriques, Deblina Sarkar, Paul Tillberg, Ru Wang, Oz Wassi
Brain circuits are large, 3D structures. However, their building blocks -- proteins, signaling complexes, synapses -- are organized with nanoscale precision. This presents a fundamental tension in neuroscience: to understand a neural circuit, you might need to map a large diversity of nanoscale building blocks across an extended spatial expanse. We are developing a new suite of tools that enable mapping of the location and identity of the molecular building blocks of the brain, so that comprehensive taxonomies of cells, circuits, and computations might someday become possible, even in entire brains. One of the technologies we are developing enables large, 3D objects to be imaged with nanoscale precision by physically expanding the sample -- a tool we call expansion microscopy (ExM). We are working to improve expansion microscopy further, and are developing, often in interdisciplinary collaborations, a suite of new labeling and analysis techniques to enable multiplexed readout.
33.
Jake Bernstein, Limor Freifeld, Ishan Gupta, Mike Henninger, Erica Jung, Changyang
Linghu, Caroline Moore-Kochlacs, Kiryl Piatkevich, Nick Savidis, Jorg Scholvin, Guangyu
Xu, Young Gyu Yoon, Kettner Griswold, Justin Kinney
The brain is a three-dimensional, densely wired circuit that computes via large sets of widely
distributed neurons interacting at fast timescales. Ideally it would be possible to observe the
activity of many neurons with as great a degree of precision as possible, so as to understand the
neural codes and dynamics that are produced by the circuits of the brain. Our lab and our
collaborators are developing a number of innovations to enable such analyses. These tools will
hopefully enable pictures of how neurons work together to implement brain computations, and how
these computations go awry in brain disorders. Such neural observation strategies may also serve
as detailed biomarkers of brain disorders or indicators of potential drug side effects. These
technologies may, in conjunction with optogenetics, enable closed-loop neural control
technologies, which can introduce information into the brain as a function of brain state ("brain
co-processors").
34.
Understanding Normal and Pathological Brain Computations
Brian Allen, David Rolnick, Annabelle Singer, Harbi Sohal, Ho-Jun Suk, Giovanni Talei Franzesi, Yosuke (Bandy) Bando, Nick Barry
We are providing our tools to the community, and also using them within our lab, to analyze how
specific brain mechanisms (molecular, cellular, circuit-level) give rise to behaviors and pathological
states. These studies may yield fundamental insights into how best to go about treating brain
disorders.
35.
AIDA: Affective Intelligent Driving Agent
Drivers spend a significant amount of time multi-tasking while they are behind the wheel. These dangerous behaviors, particularly texting while driving, can lead to distractions and ultimately to accidents. Many in-car interfaces designed to address this issue still neither take a proactive role to assist the driver nor leverage aspects of the driver's daily life to make the driving experience more seamless. In collaboration with Volkswagen/Audi and the SENSEable City Lab, we are developing AIDA (Affective Intelligent Driving Agent), a robotic driver-vehicle interface that acts as a sociable partner. AIDA exhibits facial expressions and strong non-verbal cues for engaging social interaction with the driver. AIDA also leverages the driver's mobile device as its face, which promotes safety, offers proactive driver support, and fosters deeper personalization to the driver.

36.
Animal-Robot Interaction
Like people, dogs and cats live among technologies that affect their lives. Yet little of this technology has been designed with pets in mind. We are developing systems that interact intelligently with animals to entertain, exercise, and empower them. Currently, we are developing a laser-chasing game, in which dogs or cats are tracked by a ceiling-mounted webcam, and a computer-controlled laser moves with knowledge of the pet's position and movement. Machine learning will be applied to optimize the specific laser strategy. We envision enabling owners to initiate and view the interaction remotely through a web interface, providing stimulation and exercise to pets when the owners are at work or otherwise cannot be present.

37.
Cloud-HRI
Imagine opening your eyes and being awake for only half an hour at a time. This is the life that robots traditionally live, due to factors such as battery life and wear on prototype joints. Roboticists have typically muddled through this challenge by crafting handmade perception and planning models of the world, or by using machine learning with synthetic and real-world data, but cloud-based robotics aims to marry large distributed systems with machine learning techniques to understand how to build robots that interpret the world in a richer way. This movement aims to build large-scale machine learning algorithms that use experiences from large groups of people, whether sourced from a large number of tabletop robots or a large number of experiences with virtual agents. Large-scale robotics aims to change embodied AI as it changed non-embodied AI.
38.
Collaborative Robot
Storyteller
NEW LISTING
Cynthia Breazeal, Hae Won Park, Jacqueline M Kory, Mirko Gelsomini, Goren Gordon (Tel Aviv), Stephanie Gottwald (Tufts), and Susan Engel (Williams College)
Can robots collaboratively exchange stories with children and improve their language and storytelling skills? With our latest Tega robot platform, we aim to develop a deep personalization algorithm based on long-term interaction with an individual user. Through robot interaction, we collect a corpus of each child's linguistic, narrative, and concept-skill information, and develop the robot's AI to generate stories and behaviors personalized to each child's growth level and engagement factors, including affective states.
39.
DragonBot: Android Phone Robots for Long-Term HRI
Cynthia Breazeal, David Nunez, Tinsley Galyean, Maryanne Wolf (Tufts), and Robin Morris (GSU)
DragonBot is a new platform built to support long-term interactions between children and robots. The robot runs entirely on an Android cell phone, which displays an animated virtual face. Additionally, the phone provides sensory input (camera and microphone) and fully controls the actuation of the robot (motors and speakers). Most importantly, the phone always has an Internet connection, so a robot can harness cloud-computing paradigms to learn from the collective interactions of multiple robots. To support long-term interactions, DragonBot is a "blended-reality" character: if you remove the phone from the robot, a virtual avatar appears on the screen and the user can still interact with the virtual character on the go. Costing less than $1,000, DragonBot was specifically designed to be a low-cost platform that can support longitudinal human-robot interactions "in the wild."

40.
We are developing a system of early literacy apps, games, toys, and robots that will triage how children are learning, diagnose literacy deficits, and deploy dosages of content to encourage app play using a mentoring algorithm that recommends an appropriate activity given a child's progress. Currently, over 200 Android-based tablets have been sent to children around the world; these devices are instrumented to provide a very detailed picture of how kids are using these technologies. We are using this big data to discover usage and learning models that will inform future educational development.
41.
Huggable: A Social Robot for Pediatric Care

42.
Interactive Journaling

43.
Mind-Theoretic Planning for Robots
Mind-Theoretic Planning (MTP) is a technique for robots to plan in social domains. This system takes into account probability distributions over the initial beliefs and goals of people in the environment that are relevant to the task, and creates a prediction of how they will rationally act on their beliefs to achieve their goals. The MTP system then proceeds to create an action plan for the robot that simultaneously takes advantage of the effects of anticipated actions of others and also avoids interfering with them.

44.
To serve us well, robots and other agents must understand our needs and how to fulfill them. To that end, our research develops robots that empower humans by interactively learning from them. Interactive learning methods enable technically unskilled end-users to designate correct behavior and communicate their task knowledge to improve a robot's task performance. This research on interactive learning focuses on algorithms that facilitate teaching by signals of approval and disapproval from a live human trainer. We operationalize these feedback signals as numeric rewards within the machine-learning framework of reinforcement learning. In comparison to the complementary form of teaching by demonstration, this feedback-based teaching may require less task expertise and place less cognitive load on the trainer. Envisioned applications include human-robot collaboration and assistive robotic devices for handicapped users, such as myoelectrically controlled prosthetics.
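The feedback-as-reward idea above can be sketched as a learner that keeps a running estimate of the trainer's approval for each state-action pair and greedily picks the action with the highest estimate. This is a simplified, hypothetical illustration of the general approach, not the group's actual algorithm; all names and numbers are invented:

```python
# Minimal sketch of feedback-based teaching: trainer approval (+1) and
# disapproval (-1) are treated as numeric rewards, and the agent tracks a
# running estimate of the human reward for each state-action pair.

import random

class FeedbackLearner:
    def __init__(self, actions, learning_rate=0.3):
        self.actions = actions
        self.lr = learning_rate
        self.h = {}  # (state, action) -> estimated human reward

    def act(self, state):
        # Greedy over estimated human reward, random tie-breaking.
        best = max(self.h.get((state, a), 0.0) for a in self.actions)
        return random.choice(
            [a for a in self.actions if self.h.get((state, a), 0.0) == best])

    def give_feedback(self, state, action, reward):
        # Move the estimate toward the trainer's +1/-1 signal.
        old = self.h.get((state, action), 0.0)
        self.h[(state, action)] = old + self.lr * (reward - old)

learner = FeedbackLearner(actions=["wave", "point"])
# The trainer disapproves of waving and approves of pointing when greeting.
for _ in range(20):
    a = learner.act("greet")
    learner.give_feedback("greet", a, +1.0 if a == "point" else -1.0)
print(learner.act("greet"))  # after training: "point"
```

Unlike ordinary reinforcement learning, the reward here comes live from a person rather than from the environment, which is what lets an unskilled end-user shape behavior directly.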
45.
46.
Robotic Language
Learning Companions
Cynthia Breazeal, Hae Won Park and Goren Gordon (Tel Aviv)
A growth mindset and curiosity have significant impact on children's academic and social
achievements. We are developing and evaluating a novel expressive cognitive-affective
architecture that synergistically integrates models of curiosity, understanding of mindsets, and
expressive social behaviors to advance the state-of the-art of robot companions. In doing so, we
aim to contribute major advancements in the design of AI algorithms for artificial curiosity, artificial
mindset, and their verbal and non-verbal expressiveness in a social robot companion for children.
In our longitudinal study, we aim to evaluate the robot companion's ability to sustain engagement
and promote children's curiosity and growth mindset for improved learning outcomes in an
educational play context.
Cynthia Breazeal, Jacqueline Kory Westlund, Sooyeon Jeong, Paul Harris, Dave DeSteno,
and Leah Dickens
Young children learn language not through listening alone, but through active communication with
a social actor. Cultural immersion and context are also key in long-term language development.
We are developing robotic conversational partners and hybrid physical/digital environments for
language learning. For example, the robot Sophie helped young children learn French through a
food-sharing game. The game was situated on a digital tablet embedded in a café table. Sophie
modeled how to order food, and as the child practiced the new vocabulary, the food was delivered
via digital assets onto the table's surface. A teacher or parent can observe and shape the
interaction remotely via a digital tablet interface to adjust the robot's conversation and behavior to
support the learner. More recently, we have been examining how social nonverbal behaviors
impact children's perceptions of the robot as an informant and social companion.
Alumni Contributors: Natalie Anne Freed and Adam Michael Setapen
47.
Robotic Learning
Companions
48.
49.
SHARE: Understanding
and Manipulating
Attention Using Social
Robots
Socially Assistive
Robotics: An NSF
Expedition in
Computing
Our mission is to develop the computational techniques that will enable the design,
implementation, and evaluation of "relational" robots, in order to encourage social, emotional, and
cognitive growth in children, including those with social or cognitive deficits. Funding for the project
comes from the NSF Expeditions in Computing program. This expedition has the potential to
substantially impact the effectiveness of education and healthcare, and to enhance the lives of
children and other groups that require specialized support and intervention. In particular, the MIT
effort is focusing on developing second-language learning companions for pre-school aged
children, ultimately for ESL (English as a Second Language).
Alumni Contributors: Catherine Havasi and Brad Knox
50.
Cooper Perkins Inc., Fardad Faridi, Cynthia Breazeal, Jin Joo Lee, Luke Plummer, IFRobots
and Stacey Dyer
Tega is a new robot platform for long-term interactions with children. The robot leverages
smartphones to graphically display facial expressions. Smartphones are also used for computational
needs, including behavioral control, sensor processing, and motor control to drive its five degrees
of freedom. To withstand long-term continual use, we have designed an efficient battery-powered
system that can potentially run for up to six hours before needing to be charged. We also designed
for more robust and reliable actuator movements so that the robot can express consistent and
expressive behaviors over long periods of time. Through its small size and furry exterior, the robot
is aesthetically designed for children. We aim to field test the robot's ability to work reliably in
out-of-lab environments and engage young children in educational activities.
Alumni Contributor: Kris Dos Santos
51.
TinkRBook:
Reinventing the
Reading Primer
52.
Computer-Assisted
Transgenesis
NEW LISTING
53.
Engineering Microbial
Ecosystems
NEW LISTING
Kevin Esvelt, Erika Alden DeBenedictis, Cody Gilleland and Jianghong Min
This is a new platform to automate experiments in genetic engineering and bring large-scale
moonshot projects within reach. Too often, lab experiments are limited in scale by human fatigue
and the costs associated with manual labor. In particular, the process of delivering genetic
materials via manual microinjection remains a long-standing bottleneck. We are developing a
computer-assisted microinjection platform to streamline the production of transgenic organisms.
Briefly, organisms are immobilized in a gel and microinjections are performed by precision
robotics guided by computer vision algorithms. This platform demonstrated high-throughput gene
editing in an animal model (C. elegans) for the first time. We will be using this technology to refine
and create safeguards for our gene drive technology.
Kevin Esvelt, Erika Alden DeBenedictis, Jianghong Min and Devora Najjar
We are developing methods of controlling the genetic and cellular composition of microbial
communities in the gut. Stably colonized microbes could be engineered to sense disease, resist
pathogen invasion, and release appropriate therapeutics in situ.
54.
Preventing Lyme
Disease by Permanently
Immunizing Mice
NEW LISTING
55.
Reducing Suffering in
Laboratory Animals
NEW LISTING
56.
57.
Understanding
Molecular Evolution
NEW LISTING
58.
59.
Artificial
Gastrocnemius
Biomimetic Active
Prosthesis for
Above-Knee Amputees
Human walking neuromechanical models show how each muscle works during normal,
level-ground walking. They are mainly modeled with clutches and linear springs, and are able to
capture dominant normal walking behavior. This suggests using a series-elastic clutch at the
knee joint for below-knee amputees. We have developed the powered ankle prosthesis, which
generates enough force to enable a user to walk "normally." However, amputees still have
problems at the knee joint due to the lack of gastrocnemius, which works as an ankle-knee flexor
and a plantar flexor. We hypothesize that metabolic cost and EMG patterns of an amputee with
our powered ankle and virtual gastrocnemius will dramatically improve.
Using biologically inspired design principles, a biomimetic robotic knee prosthesis is proposed that
uses a clutchable series-elastic actuator. In this design, a clutch is placed in parallel to a combined
motor and spring. This architecture permits the mechanism to provide biomimetic walking
dynamics while requiring minimal electromechanical energy from the prosthesis. The overarching
goal for this project is to design a new generation of robotic knee prostheses capable of generating
significant energy during level-ground walking that can be stored in a battery and used to power a
robotic ankle prosthesis and other net-positive locomotion modes (e.g., stair ascent).
Alumni Contributor: Ernesto C. Martinez-Villalpando
60.
Control of
Muscle-Actuated
Systems via Electrical
Stimulation
Hugh Herr
Motivated by applications in rehabilitation and robotics, we are developing methodologies to
control muscle-actuated systems via electrical stimulation. As a demonstration of such potential,
we are developing centimeter-scale robotic systems that utilize muscle for actuation and glucose
as a primary source of fuel. This is an interesting control problem because muscles: a) are
mechanical state-dependent actuators; b) exhibit strong nonlinearities; and c) have slow
time-varying properties due to fatigue-recuperation, growth-atrophy, and damage-healing cycles.
We are investigating a variety of adaptive and robust control techniques to enable us to achieve
trajectory tracking, as well as mechanical power-output control under sustained oscillatory
conditions. To implement and test our algorithms, we developed an experimental capability that
allows us to characterize and control muscle in real time, while imposing a wide variety of
dynamical boundary conditions.
Alumni Contributor: Waleed A. Farahat
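As an illustration of the control problem described above (not the group's actual controller), consider a minimal adaptive-inverse sketch: the controller estimates the muscle's slowly fatiguing stimulation-to-force gain online and commands stimulation to keep the output force on a reference. The scalar-gain muscle model and all names here are assumptions.

```python
def track_force(desired, true_gain, dt=0.01, adapt_rate=5.0):
    g_hat = 1.0   # online estimate of the muscle's stimulation-to-force gain
    forces = []
    for k, f_ref in enumerate(desired):
        u = f_ref / max(g_hat, 1e-6)                  # stimulation command (adaptive inverse)
        f = true_gain(k) * u                          # muscle output; gain drifts with fatigue
        g_hat += adapt_rate * (f - f_ref) * u * dt    # gradient step on the gain estimate
        forces.append(f)
    return forces
```

Even as the simulated gain decays by 30 percent over the trial (a stand-in for fatigue), the adaptive estimate tracks the drift and the output force stays near the reference.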
61.
62.
Dancing Control
System for Bionic Ankle
Prosthesis
Hugh Herr, Bevin Lin, Elliott Rouse, Nathan Villagaray-Carski and Robert Emerson
Effect of a Powered
Ankle on Shock
Absorption and
Interfacial Pressure
Professional ballroom dancer Adrianne Haslet-Davis lost her natural ability to dance when her left
leg was amputated below the knee following the Boston Marathon bombings in April 2013. Hugh
Herr was introduced to Adrianne while meeting with bombing survivors at Boston's Spaulding
Rehabilitation Hospital. For Professor Herr, this meeting generated a research challenge: build
Adrianne a bionic ankle prosthesis, and restore her ability to dance. The research team for this
project spent some 200 days studying the biomechanics of dancing and designing the bionic
technology based on their investigations. The control system for Adrianne was implemented on a
customized BiOM bionic ankle prosthesis.
63.
FitSocket:
Measurement for
Attaching Objects to
People
64.
65.
FlexSEA: Flexible,
Scalable Electronics
Architecture for
Wearable Robotics
Applications
This project aims to enable fast prototyping of a multi-axis and multi-joint active prosthesis by
developing a new modular electronics system. This system provides the required hardware and
software to do precise motion control, data acquisition, and networking. Scalability is achieved
through the use of a fast industrial communication protocol between the modules, and by a
standardization of the peripherals' interfaces: it is possible to add functionalities to the system
simply by plugging in additional cards. Hardware and software encapsulation are used to provide
high-performance, real-time control of the actuators, while keeping the high-level algorithmic
development and prototyping simple, fast, and easy.
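The plug-in scalability described above can be pictured with a toy registry: cards conform to a standard interface, and adding functionality is just plugging another card into a slot. This is a conceptual Python sketch, not the FlexSEA firmware or protocol; all class and method names are hypothetical.

```python
class Bus:
    """Toy model of a modular electronics bus: cards expose a standardized
    read() interface and are discovered by slot id."""

    def __init__(self):
        self.cards = {}

    def plug(self, slot, card):
        # adding a functionality is just plugging a card into a free slot
        self.cards[slot] = card

    def read_all(self):
        # the controller polls every card over the shared protocol
        return {slot: card.read() for slot, card in self.cards.items()}


class AngleSensorCard:
    """Hypothetical peripheral conforming to the standardized interface."""

    def __init__(self, angle_deg):
        self.angle_deg = angle_deg

    def read(self):
        return self.angle_deg
```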
We are studying the mechanical behavior of leg muscles and tendons during human walking in
order to motivate the design of power-efficient robotic legs. The Endo-Herr walking model uses
only three actuators (leg muscles) to power locomotion. It uses springs and clutches in place of
other essential tendons and muscles to store energy and transfer energy from one joint to another
during walking. Since mechanical clutches require much less energy than electric motors, this
model can be used to design highly efficient robotic legs and exoskeletons. Current work includes
analysis of the model at variable walking speeds and informing design specifications for a
collaborative "SuperFlex" exosuit project.
Alumni Contributor: Ken Endo
66.
67.
68.
Load-Bearing
Exoskeleton for
Augmentation of
Human Running
Neural Interface
Technology for
Advanced Prosthetic
Limbs
Powered Ankle-Foot
Prosthesis
Hugh Herr
Augmentation of human locomotion has proved an elusive goal. Natural human walking is
extremely efficient, and the complex articulation of the human leg poses significant engineering
difficulties. We present a wearable exoskeleton designed to reduce the metabolic cost of jogging.
The exoskeleton places a stiff fiberglass spring in parallel with the complete leg during stance
phase, then removes it so that the knee may bend during leg swing. The result is a bouncing gait
with reduced reliance on the musculature of the knee and ankle.
Recent advances in artificial limbs have resulted in the provision of powered ankle and knee
function for lower extremity amputees and powered elbow, wrist, and finger joints for upper
extremity prostheses. Researchers still struggle, however, with how to provide prosthesis users
with full volitional and simultaneous control of the powered joints. This project seeks to develop
means to allow amputees to control their powered prostheses by activating the peripheral nerves
present in their residual limb. Such neural control can be more natural than currently used
myoelectric control, since the same functions previously served by particular motor fascicles can
be directed to the corresponding prosthesis actuators for simultaneous joint control, as in normal
limbs. Future plans include the capability to electrically activate the sensory components of
residual limb nerves to provide amputees with tactile feedback and an awareness of joint position
from their prostheses.
The human ankle provides a significant amount of net positive work during the stance period of
walking, especially at moderate to fast walking speeds. Conversely, conventional ankle-foot
prostheses are completely passive during stance, and consequently, cannot provide net positive
work. Clinical studies indicate that transtibial amputees using conventional prostheses experience
many problems during locomotion, including a high gait metabolism, a low gait speed, and gait
asymmetry. Researchers believe the main cause of these locomotion problems is the inability
of conventional prostheses to provide net positive work during stance. The objective of this project
is to develop a powered ankle-foot prosthesis that is capable of providing net positive work during
the stance period of walking. To this end, we are investigating the mechanical design and control
system architectures for the prosthesis. We are also conducting a clinical evaluation of the
proposed prosthesis on different amputee participants.
Alumni Contributor: Samuel Au
69.
70.
Sensor-Fusions for an
EMG Controlled
Robotic Prosthesis
Terrain-Adaptive Lower
Limb Prosthesis
NEW LISTING
71.
72.
73.
Current unmotorized prostheses do not provide adequate energy return during late stance to
improve level-ground locomotion. Robotic prostheses can provide power during late-stance to
improve metabolic economy in an amputee during level-ground walking. This project seeks to
improve the types of terrain a robotic ankle can successfully navigate by using command signals
taken from the intact and residual limbs of an amputee. By combining these command signals with
sensors attached to the robotic ankle, it might be possible to further understand the role of
physiological signals in the terrain adaptation of robotic ankles.
Although there have been great advances in the control of lower extremity prostheses,
transitioning between terrains such as ramps or stairs remains a major challenge for the field. The
mobility of leg amputees is thus limited, impacting their quality of life and independence. This
project aims to solve this problem by designing, implementing, and integrating a combined
terrain-adaptive and volitional controller for powered lower limb prostheses. The controller will be
able to predict terrain changes using data from both intrinsic sensors and electromyography
(EMG) signals from the user; adapt the ankle position before footfall in a biologically accurate
manner; and provide a torque profile consistent with biological ankle kinetics during stance. The
result will allow amputees to traverse and transition among flat ground, stairs, and slopes of
varying grade with less energy expenditure and pain, greater balance, and without manually
changing the walking mode of their prosthesis.
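A terrain predictor of the kind described could, in its simplest form, be a nearest-centroid classifier over combined EMG and intrinsic-sensor features. The sketch below is illustrative only; the two-dimensional feature set, labels, and classifier choice are assumptions, not the project's controller.

```python
def train_centroids(samples):
    """samples: (feature_vector, terrain_label) pairs; features might mix
    EMG envelopes with intrinsic signals such as ankle angle and shank pitch."""
    sums, counts = {}, {}
    for x, label in samples:
        acc = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict_terrain(centroids, x):
    # assign the terrain whose mean feature vector is closest (squared Euclidean)
    return min(centroids,
               key=lambda label: sum((a - b) ** 2 for a, b in zip(x, centroids[label])))
```

In practice the prediction would run before footfall, so the ankle position can be adapted in time for the next step.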
Tethered Robotic
System for
Understanding Human
Movements
Variable-Impedance
Prosthetic (VIPr) Socket
Design
Volitional Control of a
Powered Ankle-Foot
Prosthesis
The goal of this project is to build a powerful system as a scientific tool for bridging the gap in the
literature by determining the dynamic biomechanics of the lower-limb joints and metabolic effects
of physical interventions during natural locomotion. This system is meant for use in applying forces
to the human body and measuring the force, displacement, and other physiological properties
simultaneously, helping investigate controllability and efficacy of mechanical devices physically
interacting with a human subject.
Today, 100 percent of amputees experience some form of prosthetic socket discomfort. This
project involves the design and production of a comfortable, variable-impedance prosthetic (VIPr)
socket for a transtibial amputee, using digital anatomical data together with computer-aided design
and manufacturing (CAD/CAM). The VIPr socket uses multiple materials to achieve compliance,
thereby increasing socket comfort for amputees, while maintaining structural integrity. The
compliant features are seamlessly integrated into the 3D-printed socket to achieve lower interface
peak pressures over bony protuberances and other anatomical points in comparison to a
conventional socket. This lower peak pressure is achieved through a design that uses
anthropomorphic data acquired through surface scan and Magnetic Resonance Imaging
techniques. A mathematical transformation maps the quantitative measurements of the human
residual limb to the corresponding socket shape and impedance characteristics, spatially.
This project focuses on giving transtibial amputees volitional control over their prostheses by
combining electromyographic (EMG) activity from the amputees' residual limb muscles with
intrinsic controllers on the prosthesis. The aim is to generalize biomimetic behavior of the
prosthesis, making it independent of walking terrains and transitions.
74.
Collective Memory
75.
76.
77.
Data USA
NEW LISTING
Data USA makes available public data for the entire United States through millions of interactive
visualizations.
DIVE
The rise of computational methods has generated a new natural resource: data. While it's unclear
if big data will open up trillion-dollar markets, it is clear that making sense of data isn't easy, and
that data visualizations are essential to squeeze meaning out of data. But the capacity to create
data visualizations is not widespread; to help develop it, we introduce the Pixel Factory, a new
initiative focusing on the creation of data-visualization resources and tools in collaboration with
corporate members. Our goals are to create software resources for development of online
data-visualization platforms that work with any type of data; and to create these resources as a
means to learn. The most valuable outcome of this work will not be the software resources
produced, incredible as these could be, but the generation of people with the capacity to make
these resources.
78.
FOLD
Alexis Hope, Kevin Hu, Joe Goldbeck, Nathalie Huynh, Matthew Carroll, Cesar A. Hidalgo,
Ethan Zuckerman
FOLD is an authoring and publishing platform for creating modular, multimedia stories. Some
readers require greater context to understand complex stories. Using FOLD, authors can search
for and add "context cards" to their stories. Context cards can contain videos, maps, tweets,
music, interactive visualizations, and more. FOLD also allows authors to link stories together by
remixing context cards created by other writers.
79.
GIFGIF
80.
Immersion
81.
Opus
82.
Pantheon
Ali Almossawi, Andrew Mao, Defne Gurel, Cesar A. Hidalgo, Kevin Zeng Hu, Deepak
Jagdish, Amy Yu, Shahar Ronen and Tiffany Lu
We were not born with the ability to fly, cure disease, or communicate at long distances, but we
were born in a society that endows us with these capacities. These capacities are the result of
information that has been generated by humans and that humans have been able to embed in
tangible and digital objects. This information is all around us: it's the way in which the atoms in an
airplane are arranged or the way in which our cellphones whisper dance instructions to
electromagnetic waves. Pantheon is a project celebrating the cultural information that endows our
species with these fantastic capacities. To celebrate our global cultural heritage, we are compiling,
analyzing, and visualizing datasets that can help us understand the process of global cultural
development.
83.
Place Pulse
84.
PubPub
Cesar A. Hidalgo, Andrew Lippman, Kevin Zeng Hu, Travis Rich and Thariq Shihipar
PubPub reinvents publication the way the web was designed: collaborative, evolving, and open.
We do so in a graphical format that is deliberately simple and allows illustrations and text that are
programs as well as static PDFs. The intention is to create an author-driven, distributed alternative
to academic journals that is tuned to the dynamic nature of many of our modern experiments and
discoveries: replicated, evaluated on the fly, and extended as they are used. It is optimized for
public discussion and academic journals and is being used for both. It is equally useful for a
newsroom to develop a story that is intended for both print and online distribution.
85.
StreetScore
86.
The Economic
Complexity
Observatory
87.
88.
89.
Most interactions between cultures require overcoming a language barrier, which is why
multilingual speakers play an important role in facilitating such interactions. In addition, certain
languages (not necessarily the most spoken ones) are more likely than others to serve as
intermediary languages. We present the Language Group Network, a new approach for studying
global networks using data generated by tens of millions of speakers from all over the world: a
billion tweets, Wikipedia edits in all languages, and translations of two million printed books. Our
network spans over eighty languages, and can be used to identify the most connected languages
and the potential paths through which information diffuses from one culture to another.
Applications include promotion of cultural interactions, prediction of trends, and marketing.
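The network construction described above can be pictured as linking every pair of languages co-expressed by the same user, with a language's connectivity measured as the total weight of its incident edges. This is a toy sketch under that assumption; the names, weighting, and data model are not the project's actual pipeline.

```python
from collections import Counter
from itertools import combinations

def language_network(user_languages):
    """Each multilingual user contributes an edge between every pair of
    languages they express (e.g., tweet or edit Wikipedia in)."""
    edges = Counter()
    for langs in user_languages:
        for a, b in combinations(sorted(set(langs)), 2):
            edges[(a, b)] += 1
    return edges

def connectivity(edges):
    # a language's connectivity is the total weight of its incident edges
    degree = Counter()
    for (a, b), w in edges.items():
        degree[a] += w
        degree[b] += w
    return degree
```

The most connected languages in such a graph are candidates for the intermediary role the text describes, independent of how many native speakers they have.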
Diverse teams of authors are known to generate higher-impact research papers, as measured by
their number of citations. But is this because cognitively diverse teams produce higher quality
work, which is more likely to get cited and adopted? Or is it because they possess a larger number
of social connections through which to distribute their findings? In this project we are mapping the
co-authorship networks and the academic diversity of the authors in a large volume of scientific
publications to test whether the adoption of papers is explained by cognitive diversity or the size of
the network associated with each of these authors. This project will help us understand whether
the larger levels of adoption of work generated by diverse groups is the result of higher quality, or
better connections.
We used 15 months of data from 1.5 million people to show that four points--approximate places
and times--are enough to identify 95 percent of individuals in a mobility database. Our work shows
that human behavior puts fundamental natural constraints on the privacy of individuals, and these
constraints hold even when the resolution of the dataset is low. These results demonstrate that
even coarse datasets provide little anonymity. We further developed a formula to estimate the
uniqueness of human mobility traces. These findings have important implications for the design of
frameworks and institutions dedicated to protecting the privacy of individuals.
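The uniqueness measurement can be sketched as follows: draw k spatiotemporal points from a person's trace and count how many traces in the database contain all of them; the person is unique if only their own trace matches. This toy version uses tiny hand-made traces; the paper's estimation formula and dataset are not reproduced here, and all names are assumptions.

```python
import random

def fraction_unique(traces, k, seed=0):
    """traces: one set of (place, hour) points per person. For each person,
    draw k of their points and check whether they match only that person."""
    rng = random.Random(seed)
    unique = 0
    for trace in traces:
        pts = rng.sample(sorted(trace), min(k, len(trace)))
        matches = sum(1 for other in traces if all(p in other for p in pts))
        if matches == 1:          # the k points pin down a single individual
            unique += 1
    return unique / len(traces)
```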
90.
bioLogic
Lining Yao, Wen Wang, Guanyun Wang, Helene Steiner, Chin-Yi Cheng, Jifei Ou, Oksana
Anilionyte, Hiroshi Ishii
BioLogic is our attempt to program living organisms and invent responsive and transformational
interfaces of the future. Nature has engineered its own actuators, as well as the efficient material
composition, geometry, and structure to utilize its actuators and achieve functional transformation.
Based on the natural phenomenon of hygromorphic transformation, we introduce a specific type of
living cells as nanoactuators that react to body temperature and humidity changes. These living
nanoactuators can also be controlled by electrical signals and can communicate with the virtual
world. A digital printing system and design simulation software assist in designing the
transformation structures.
91.
HydroMorph
NEW LISTING
92.
Inflated Appetite
NEW LISTING
Ken Nakagaki, Pasquale Totaro, Jim Peraino, Thariq Shihipar, Chantine Akiyama, Yin
Shuang, Anthony Stuart, Hiroshi Ishii
HydroMorph is an interactive display based on shapes formed by a stream of water. Inspired by
the membrane formed when a water stream hits a smooth surface (e.g., a spoon), we developed a
system that dynamically controls the shape of a water membrane. This project explores a design
space of interactions around water shapes, and proposes a set of user scenarios in applications
across scales, from the faucet to the fountain. Through this work, we look to enrich our interaction
with water, an everyday material, with the added dimension of transformation.
Chin-Yi Cheng, Hiroshi Ishii, Jifei Ou, Wen Wang and Lining Yao
As part of human evolution and revolution, food is among the earliest forms of human interaction,
but it has remained essentially unchanged from ancient to modern times. What if we introduced
engineered and programmable food materials? With that change, food can change its role from
passive to active. Food can "communicate" using its inherent behaviors combined with
engineering accuracy. Food becomes media and interface. During an MIT winter course we
initiated and taught, we encouraged students to design pneumatic food. Students successfully
implemented inflatable sugar and cheese products. To inflate food, we use both an engineering
approach and a biological approach; to solidify the inflated food, we introduce both heat, via the
oven, and cold, via liquid nitrogen.
93.
inFORM
94.
95.
jamSheets: Interacting
with Thin
Stiffness-Changing
Material
Jifei Ou, Lining Yao, Daniel Tauber, Juergen Steimle, Ryuma Niiyama, Hiroshi Ishii
LineFORM
This project introduces layer jamming as an enabling technology for designing deformable,
stiffness-tunable, thin sheet interfaces. Interfaces that exhibit tunable stiffness properties can yield
dynamic haptic feedback and shape deformation capabilities. In contrast to particle jamming, layer
jamming allows for constructing thin and lightweight form factors of an interface. We propose
five-layer structure designs and an approach that composites multiple materials to control the
deformability of the interfaces. We also present methods to embed different types of sensing and
pneumatic actuation layers on the layer-jamming unit. Through three application prototypes we
demonstrate the benefits of using layer jamming in interface design. Finally, we provide a survey
of materials that have proven successful for layer jamming.
We propose a novel shape-changing interface that consists of a single line. Lines have several
interesting characteristics from the perspective of interaction design: abstractness of data
representation; a variety of inherent interactions/affordances; and constraints such as boundaries
or borderlines. By utilizing such aspects of lines together with the added transformation capability,
we present various applications in different scenarios such as shape-changing cords, mobiles,
body constraints, and data manipulation to investigate the design space of line-based
shape-changing interfaces.
96.
MirrorFugue
97.
Pneuduino
98.
Pneumatic
Shape-Changing
Interfaces
Jifei Ou, Lining Yao, Ryuma Niiyama, Sean Follmer and Hiroshi Ishii
This is an enabling technology for building shape-changing interfaces through pneumatically driven,
soft-composite materials. The composite materials integrate the capabilities of both input sensing
and active shape output. We explore four applications: a multi-shape mobile device, table-top
shape-changing tangibles, dynamically programmable texture for gaming, and a shape-shifting
lighting apparatus.
99.
Radical Atoms
Hiroshi Ishii
Radical Atoms is our vision of interactions with future materials. Radical Atoms takes a leap
beyond Tangible Bits by assuming a hypothetical generation of materials that can change form
and appearance dynamically, becoming as reconfigurable as pixels on a screen. Radical Atoms is
a computationally transformable and reconfigurable material that is bidirectionally coupled with an
underlying digital model (bits) so that dynamic changes of physical form can be reflected in digital
states in real time, and vice versa.
Alumni Contributors: Keywon Chung, Adam Kumpf, Amanda Parkes, Hayes Raffle and Jamie B
Zigelbaum
100. TRANSFORM
Hiroshi Ishii, Sean Follmer, Daniel Leithinger, Philipp Schoessler, Amit Zoran and LEXUS
International
TRANSFORM fuses technology and design to celebrate its transformation from still furniture to a
dynamic machine driven by a stream of data and energy. TRANSFORM aims to inspire viewers
with unexpected transformations and the aesthetics of the complex machine in motion. First
exhibited at LEXUS DESIGN AMAZING MILAN (April 2014), the work comprises three dynamic
shape displays that move over one thousand pins up and down in real time to transform the
tabletop into a dynamic tangible display. The kinetic energy of the viewers, captured by a sensor,
drives the wave motion represented by the dynamic pins. The motion design is inspired by
dynamic interactions among wind, water, and sand in nature, Escher's representations of
perpetual motion, and the attributes of sand castles built at the seashore. TRANSFORM tells of
the conflict between nature and machine, and its reconciliation, through the ever-changing tabletop
landscape.
Luke Vink, Viirj Kan, Ken Nakagaki, Daniel Leithinger, Sean Follmer, Philipp Schoessler,
Amit Zoran, Hiroshi Ishii
Introducing TRANSFORM, a shape-changing desk. TRANSFORM is an exploration of how shape
display technology can be integrated into our everyday lives as interactive, transforming furniture.
These interfaces not only serve as traditional computing devices, but also support a variety of
physical activities. By creating shapes on demand or by moving objects around, TRANSFORM
changes the ergonomics and aesthetic dimensions of furniture, supporting a variety of use cases
at home and work: it holds and moves objects like fruit, game tokens, office supplies, and tablets,
creates dividers on demand, and generates interactive sculptures to convey messages and audio.
104. GeneFab
Bram Sterling, Kelly Chang, Joseph M. Jacobson, Peter Carr, Brian Chow, David Sun Kong,
Michael Oh and Sam Hwang
What would you like to "build with biology"? The goal of the GeneFab project is to develop
technology for the rapid fabrication of large DNA molecules, with composition specified directly by
the user. Our intent is to facilitate the field of synthetic biology as it moves from a focus on single
genes to designing complete biochemical pathways, genetic networks, and more complex
systems. Sub-projects include: DNA error correction, microfluidics for high throughput gene
synthesis, and genome-scale engineering (rE. coli).
Alumni Contributor: Chris Emig
105. NanoFab
107. Synthetic
Photosynthesis
Our goals include novel gene logic and data logging systems, as well as DNA scaffolds that can
be produced on commercial scales. The state of the art in the former is limited by the difficulty of
finding analogous and orthogonal proteins to those used in current single-layer gates and
two-layered circuits. The state of the art in the latter is constrained in size and efficiency by kinetic
limits on self-assembly. We
have designed and plan to demonstrate cascaded logic on chromosomes and DNA scaffolds that
exhibit exponential growth.
We are using nanowires to build structures for synthetic photosynthesis for the solar generation of
liquid fuels.
108. A Multi-Sensor
Wearable Device for
Analyzing Stress
Response in Preschool
Classrooms
NEW LISTING
that aggregate to form metropolitan identity. We hope that this study will improve our collective
understanding of the urban environments we shape and the stories they generate, that it will allow
us to more sensitively test and implement real change in our shared public realm and support the
invisible narratives it generates.
112. Microculture
114. Storyboards
Sepandar Kamvar, Ayesha Bose, Connie Liu, Nazmus Saquib and Dwayne George
A crucial part of Montessori education is observation of the students, so teachers can assist
individuals and structure the environment as needed. Our work aims to assist this observation by
measuring proximity of students through Simblee COM sensors. We provide detailed
visualizations in a dashboard-style interface to both teachers and parents. This dashboard helps
teachers individualize their own methods to facilitate a child's growth in the classroom.
Sepandar Kamvar, Kevin Slavin, Jonathan Bobrow and Shantell Martin
Giving opaque technology a glass house, Storyboards present the tinkerers or owners of
electronic devices with stories of how their devices work. Just as the circuit board is a story of
star-crossed lovers--Anode and Cathode--with its cast of characters (resistor, capacitor,
transistor), Storyboards have their own characters driving a parallel visual narrative.
Sep Kamvar, Kim Smith, Yonatan Cohen, Kim Holleman, Nazmus Saquib, Caroline Jaffe
Dog is a new programming language that makes it easy and intuitive to create social applications.
A key feature of Dog is built-in support for interacting with people. Dog provides a natural
framework in which both people and computers can be sent requests and return results. It can
perform a long-running computation while also displaying messages, requesting information, or
sending operations to particular individuals or groups. By switching between machine and human
computation, developers can create powerful workflows and model complex social processes
without worrying about low-level technical details.
Sep Kamvar, Yonatan Cohen, Wesam Manassra, Pranav Ramkrishnan, Stephen Rife, Jia
Zhang, Edward Faulkner, Kim Smith, Asa Oines, Jake Sanchez, and Jennifer Jang
You Are Here is an experiment in microurbanism. In this project, we are creating 100 maps each
of 100 different cities. Each map gives a collective portrait of one aspect of life in the city, and is
designed to give communities meaningful micro-suggestions of what they might do to improve
their city. The interplay between the visualizations and the community work they induce creates a
collective, dynamic, urban-scale project.
Kent Larson, Luis A. Alonso Pastor, Alex (Sandy) Pentland, Ira Winder, Nai Chun Chen, Yan
Leng, Alejandro Noriega Campero, Agnis Stibe, Michael Chia-liang Lin, Carson Smuts, Ariel
Noyman, Yan Zhang (Ryan), Jason Nawyn, Juanita Devis, Nuria Macas
This is a unique collaborative project between the Media Lab and Andorra's government, largest
public and private companies (e.g., energy and telecom), and academic institutions. The
overarching paradigm of our work is the application of data science methodologies and spatial
analysis on Andorra's big data, with the goal of enabling an understanding of the country's
dynamics on tourism and commerce, human mobility and transportation systems, and energy and
environmental impact; as well as to shed light on technological and systems innovation toward
radical improvements in these domains. Goals include helping to develop big data platforms for
understanding, utilizing, and leveraging big data; developing concepts that have the potential to
establish Andorra as an international center for innovation; and designing interventions that can
improve the experience of tourists, encouraging them to visit more often, stay longer, and increase
spending.
Alumni Contributors: Luc Rocher and Arkadiusz Stopczynski
Kent Larson, Luis Alberto Alonso Pastor, Ivan Fernandez, Hasier Larrea and Carlos Rubio
In an urbanized world, where space is too valuable to be static and unresponsive, ARkits provide a
robotic kit of parts to empower real estate developers, furniture manufacturers, architects, and
"space makers" in general, to create a new generation of transformable and intelligent spaces.
120. BoxLab
Designed as a platform to enable rich contextual data collection in real homes, BoxLab uses a
broad array of wireless sensing devices to study responsive applications situated in natural home
settings. BoxLab has been deployed in homes around the Boston area, and has generated a
dataset containing over 10,000 hours of sensor data to be used as training libraries for
computational activity recognition and other applications of artificial intelligence. BoxLab also
enables rapid deployment of context-triggered applications that allow systems to respond to
occupant activities in real time.
Alumni Contributors: Jennifer Suzanne Beaudin, Edward Burns, Manu Gupta, Pallavi Kaushik,
Aydin Oztoprak, Randy Rockinson and Emmanuel Munguia Tapia
Kent Larson, Hasier Larrea, Daniel Goodman, Oier Ariño, Phillip Ewing
Live large in 200 square feet! An all-in-one disentangled robotic furniture piece makes it possible
to live comfortably in a tiny footprint not only by magically reconfiguring the space, but also by
serving as a platform for technology integration and experience augmentation. Two hundred
square feet has never seemed so large.
122. CityOffice
Kent Larson, Waleed Gowharji, Carson Smuts, J. Ira Winder and Yan Zhang
The "Barcelona" demo is an independent prototype designed to model and simulate human
interactions within a Barcelona-like urban environment. Different types of land use (residential,
office, and amenities) are configured into urban blocks and analyzed with agent-based techniques.
Ryan Chin, Allenza Michel, Ariel Noyman, Jeffrey Rosenblum, Anson Stewart, Phil Tinn, Ira
Winder, Chris Zegras
CityScope is working with the Barr Foundation of Boston to develop a tangible-interactive
participatory environment for planning bus rapid transit (BRT).
Ira Winder
Ira Winder
Real-time geospatial data is visualized on an exhibition-scale 3D city model. The model is built of
LEGO bricks, and visualization is performed by an array of calibrated projectors. Through
computation, GIS data is "LEGO-tized" to create a LEGO abstraction of existing urban areas. Data
layers include mobility systems, land use, social media, business activity, windflow simulations,
and more.
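The "LEGO-tizing" step amounts to snapping continuous map coordinates onto a coarse brick grid. A minimal sketch, with a hypothetical grid size and made-up data rather than the group's actual GIS pipeline:

```python
from collections import Counter

def legotize(points, origin, cell_size):
    """Snap (x, y) map coordinates onto a coarse brick grid.

    Returns a Counter mapping each (col, row) grid cell to the number
    of data points that fall inside it, e.g. business locations per brick.
    """
    ox, oy = origin
    counts = Counter()
    for x, y in points:
        col = int((x - ox) // cell_size)
        row = int((y - oy) // cell_size)
        counts[(col, row)] += 1
    return counts

# Four points on 10 m bricks: three land in one cell, one in the next.
points = [(3.0, 4.0), (7.5, 2.0), (9.9, 9.9), (12.0, 4.0)]
print(legotize(points, origin=(0.0, 0.0), cell_size=10.0))
# Counter({(0, 0): 3, (1, 0): 1})
```

Each aggregated cell value can then drive the color projected onto the corresponding LEGO brick in the physical model.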
The CityScope "Scout" prototype integrates augmented reality with real-time mathematical
modeling of geospatial systems. In practice, the technology transforms any tabletop into a canvas
for land-use planning and walkability optimization. Users perform rapid prototyping with LEGO
bricks and receive real-time simulation and evaluation feedback.
The Dynamic 3D prototype allows users to edit a digital model by moving physical 3D abstractions
of building typologies. Movements are automatically detected, scanned, and digitized so as to
generate inputs for computational analysis. 3D information is also projected back onto the model
to give the user feedback while edits are made.
We recently led a workshop in Saudi Arabia, with staff from the Riyadh Development Authority, to
test a new version of our CityScope platform. With only an hour to work, four teams of five
professionals competed to develop a redevelopment proposal for a neighborhood near the city
center. The platform evaluated their designs according to energy, daylighting, and walkability.
CityScope MarkIVb is programmed to demonstrate and model the relationship between land use
(live and work), population density, parking supply and demand, and traffic congestion.
The robotic façade is conceived as a mass-customizable module that combines solar control,
heating, cooling, ventilation, and other functions to serve an urban apartment. It attaches to the
building "chassis" with standardized power, data, and mechanical attachments to simplify field
installation and dramatically increase energy performance. The design makes use of an
articulating mirror to direct shafts of sunlight to precise points in the apartment interior. Tiny,
low-cost, easily installed wireless sensors and activity recognition algorithms allow occupants to
use a mobile phone interface to map activities of daily living to personalized sunlight positions. We
are also developing strategies to control LED luminaires to turn off, dim, or tune the lighting to
more energy-efficient spectra in response to the location, activities, and paths of the occupants.
Kent Larson, Ryan C.C. Chin, Chih-Chao Chuang, William Lark, Jr., Brandon Phillip
Martin-Anderson and SiZhi Zhou
Cities are hubs for innovation, characterized by densely populated areas where people and firms
cluster together, share resources, and collaborate. In turn, dense cities show higher rates of
economic growth and viability. Yet the specific places where innovation occurs in urban areas, and
the socioeconomic conditions that encourage it, remain elusive for both researchers and
policymakers. Understanding the social and spatial settings that enable innovation will
equip policymakers and developers with the metrics to promote and sustain innovation in cities.
This research will measure the attributes of innovation districts across the US in terms of their
land-use configurations and population characteristics and behaviors. These measurements will
be used to identify the factors that enable innovation, with the goal of developing a methodological
approach for producing quantitative planning guidelines to support decision-making processes.
Mobility on Demand (MoD) systems are fleets of lightweight electric vehicles at strategically
distributed electrical charging stations throughout a city. MoD systems solve the "first and last
mile" problem of public transit, providing mobility between transit station and home/workplace.
Users swipe a membership card at the MoD station and drive a vehicle to any other station
(one-way rental). The Vélib' system of 20,000+ shared bicycles in Paris is the largest and most
popular one-way rental system in the world. MoD systems incorporate intelligent fleet
management through sensor networks, pattern recognition, and dynamic pricing, and the benefits
of Smart Grid technologies include intelligent electrical charging (including rapid charging),
vehicle-to-grid (V2G), and surplus energy storage for renewable power generation and peak
shaving for the local utility. We have designed three MoD vehicles: CityCar, RoboScooter, and
GreenWheel bicycle. (Continuing the vision of William J. Mitchell.)
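The dynamic-pricing idea can be sketched as a simple rebalancing incentive: discount trips that move a vehicle from a full station toward an empty one, and surcharge trips in the opposite direction. The formula and numbers below are hypothetical, not the actual MoD fleet controller:

```python
def trip_price(base, occ_origin, occ_dest):
    """Price a one-way rental so that users help rebalance the fleet.

    occ_* are station occupancies in [0, 1]. Trips from a full station
    to an empty one are discounted; the reverse direction carries a
    surcharge. The adjustment is capped at +/-50% of the base fare.
    """
    imbalance = occ_origin - occ_dest  # > 0 means the trip helps rebalance
    factor = max(0.5, min(1.5, 1.0 - 0.5 * imbalance))
    return round(base * factor, 2)

print(trip_price(4.00, occ_origin=0.9, occ_dest=0.1))  # prints 2.4 (discounted)
print(trip_price(4.00, occ_origin=0.1, occ_dest=0.9))  # prints 5.6 (surcharged)
```

In a deployed system the occupancy inputs would come from the station sensor network, and the cap would keep prices predictable for members.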
Agnis Stibe, Matthias Wunsch, Alexandra Millonig, Chengzhen Dai, Stefan Seer, Katja
Schechtner, Ryan C. C. Chin and Kent Larson
The effects of global climate change, in combination with rapid urbanization, have forced cities to
seek low-energy and less carbon-intensive modes of transport. Cities have adopted policies like
congestion pricing to encourage their citizens to give up private automobiles in favor of mass
transit, bicycling, and walking. In this research study, we examine how persuasion technologies can be
utilized to encourage positive modal shifts in mobility behavior in cities. We are particularly
interested in studying the key persuasive strategies that enable, motivate, and trigger users to shift
from high-energy to low-energy modes. This project is a collaboration between the MIT Media Lab
and the Austrian Institute of Technology (AIT).
Alumni Contributors: Sandra Richter and Katja Schechtner
140. ViewCube
143. Captions++
NEW LISTING
144. DbDb
own techniques. Our intention is for the research community as a whole to benefit from a growing
body of open, analytical techniques. DbDb provides an interface for archiving data, executing
code, and visualizing a tree of forked analyses. It is part of the Viral initiative on open,
author-driven publishing, collaboration, and analysis. It is intended to be linked to PubPub, the
main project.
145. GIFGIF
147. MedRec
NEW LISTING
148. NewsClouds
Ariel Ekblaw, Asaf Azaria, Thiago Vieira, Joe Paradiso, Andrew Lippman
We face a well-known need for innovation in electronic medical records (EMRs), but the
technology is cumbersome and innovation is impeded by inefficiency and regulation. We
demonstrate MedRec as a solution tuned to the needs of patients, the treatment community, and
medical researchers. It is a novel, decentralized record management system for EMRs that uses
blockchain technology to manage authentication, confidentiality, accountability, and data sharing.
A modular design integrates with providers' existing, local data-storage solutions, enabling
interoperability and making our system convenient and adaptable. As a key feature of our work,
we engage the medical research community with an integral role in the protocol. Medical
researchers provide the "mining" necessary to secure and sustain the blockchain authentication
log, in return for access to anonymized, medical metadata in the form of "transaction fees."
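The tamper-evidence that blockchain authentication provides can be illustrated with a toy hash chain. This is a sketch only: the real MedRec runs on Ethereum smart contracts and stores pointers to provider databases, not the records themselves.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 over the block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_record(chain, record):
    """Append a record block linked to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "record": record})
    return chain

def verify(chain):
    """A chain is valid iff every block's 'prev' matches its predecessor."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_record(chain, {"patient": "p1", "pointer": "provider-A/rec-17"})
append_record(chain, {"patient": "p1", "pointer": "provider-B/rec-3"})
print(verify(chain))                      # True
chain[0]["record"]["pointer"] = "forged"  # tampering breaks the link
print(verify(chain))                      # False
```

Changing any historical entry invalidates every later link, which is what lets the distributed "mining" community audit the authentication log.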
Andrew Lippman and Thariq Shihipar
NewsClouds presents a visual exploration of how the news reporting of an event evolves over
time. Each cloud represents one publication, and competing news organizations usually
emphasize different aspects of the same story. The time sliders make that evolution
evident. In addition, each word or phrase can be expanded to show its links and context. We are
building an archive of events associated with ongoing US election developments.
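The underlying representation, one word-frequency cloud per publication per time slice, can be sketched as follows. The bucketing scheme and data are hypothetical, not NewsClouds' actual pipeline:

```python
from collections import Counter
import re

def clouds_over_time(articles, bucket_hours=24):
    """Group articles into time buckets and count words per bucket.

    articles: list of (hours_since_event, text) pairs. Returns a dict
    mapping bucket index -> Counter of word frequencies, the raw
    material for one word cloud per time-slider position.
    """
    buckets = {}
    for hours, text in articles:
        idx = int(hours // bucket_hours)
        words = re.findall(r"[a-z']+", text.lower())
        buckets.setdefault(idx, Counter()).update(words)
    return buckets

articles = [
    (2, "storm hits coast"),
    (5, "storm damage reported"),
    (30, "recovery effort begins"),
]
clouds = clouds_over_time(articles)
print(clouds[0].most_common(1))  # [('storm', 2)]
```

A full system would additionally keep one such structure per publication and filter out stopwords before rendering the clouds.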
149. PubPub
Cesar A. Hidalgo, Andrew Lippman, Kevin Zeng Hu, Travis Rich and Thariq Shihipar
PubPub reinvents publication the way the web was designed: collaborative, evolving, and open.
We do so in a graphical format that is deliberately simple and allows illustrations and text that are
programs as well as static PDFs. The intention is to create an author-driven, distributed alternative
to academic journals, tuned to the dynamic nature of many modern experiments and
discoveries: replicated, evaluated on the fly, and extended as they are used. It is optimized for
both public discussion and academic publishing, and is being used for both. It is equally useful for
a newsroom developing a story intended for both print and online distribution.
152. SuperGlue
154. VR Codes
157. Ambisonic
Surround-Sound Audio
Compression
Breathing Window is a tool for non-verbal dialogue that reflects on your own breathing while also
offering a window on another person's respiration. This prototype is an example of shared human
experiences (SHEs) crafted to improve the quality of human understanding and interactions. Our
work on SHEs focuses on first encounters with strangers. We meet strangers every day, and
without prior background knowledge of the individual we often form opinions based on prejudices
and differences. In this work, we bring respiration to the foreground as one common experience of
all living creatures.
Tod Machover, Akito Van Troyer, Benjamin Bloomberg, Charles Holbrow, David Nunez,
Simone Ovsey, Sarah Platte, Bryn Bliska, Rebecca Kleinberger, Peter Alexander Torpey and
Garrett Parrish
Until now, the impact of crowdsourced and interactive music projects has been limited: the public
contributes a small part of the final result, and is often disconnected from the artist leading the
project. We believe that a new musical ecology is needed for true creative collaboration between
experts and amateurs. Toward this goal, we are creating "city symphonies," each collaboratively
composed with the population of an entire city. We designed the infrastructure needed to bring
together an unprecedented number of people, including a variety of web-based music composition
applications, a social media framework, and real-world community-building activities. We have
premiered city symphonies in Toronto, Edinburgh, Perth, and Lucerne. With support from the
Knight Foundation, our first city symphony for the US, A Symphony for Detroit, premiered in fall
2015. We are also working on scaling this process by mentoring independent groups, beginning
with Akron, Ohio.
Tod Machover, Peter Torpey, Ben Bloomberg, Elena Jessop, Charles Holbrow, Simone
Ovsey, Garrett Parrish, Justin Martinez, and Kevin Nattinger
Tod Machover, Ben Bloomberg, Peter Torpey, Elena Jessop, Bob Hsiung, Akito van Troyer
The live global interactive simulcast of the final February 2014 performance of "Death and the
Powers" in Dallas made innovative use of satellite broadcast and Internet technologies to expand
the boundaries of second-screen experience and interactivity during a live remote performance. In
the opera, Simon Powers uploads his mind, memories, and emotions into The System,
represented onstage through reactive robotic, visual, and sonic elements. Remote audiences, via
simulcast, were treated as part of The System alongside Powers and the operabots. Audiences
had an omniscient view of the action of the opera, as presented through the augmented,
multi-camera video and surround sound. Multimedia content delivered to mobile devices, through
the Powers Live app, privileged remote audiences with perspectives from within The System.
Mobile devices also allowed audiences to influence The System by affecting the illumination of the
Winspear Opera House's Moody Foundation Chandelier.
"Death and the Powers" is a groundbreaking opera that brings a variety of technological,
conceptual, and aesthetic innovations to the theatrical world. Created by Tod Machover
(composer), Diane Paulus (director), and Alex McDowell (production designer), the opera uses the
techniques of tomorrow to address age-old human concerns of life and legacy. The unique
performance environment, including autonomous robots, expressive scenery, new
Hyperinstruments, and human actors, blurs the line between animate and inanimate. The opera
premiered in Monte Carlo in fall 2010, with additional performances in Boston and Chicago in 2011
and a new production with a global, interactive simulcast in Dallas in February 2014. The DVD of
the Dallas performance of Powers was released in April 2015.
Tod Machover, Akito Van Troyer, Benjamin Bloomberg, Bryn Bliska, Charles Holbrow,
David Nunez, Rebecca Kleinberger, Simone Ovsey, Sarah Platte, Peter Torpey, Kelly
Donovan, Meejin Yoon and the Empathy and Experience class
Nothing is more important in today's troubled world than the process of eliminating prejudice and
misunderstanding, and replacing them with communication and empathy. We explore the
possibility of creating public experiences to dramatically increase individual and community
awareness of the power of empathy on an unprecedented scale. We draw on numerous
precedents from the Opera of the Future group that have proposed concepts and technologies to
inspire and intensify human connectedness (such as Sleep No More, Death and the Powers,
Vocal Vibrations, City Symphonies, and Hyperinstruments) and from worldwide instances of
transformative shared human experience (such as the Overview Effect, Human Libraries,
Immersive Theatre, and non-sectarian spiritual traditions). The objective is to create a model of a
multisensory, participatory, spatially radical installation that will break down barriers between
people of immensely different backgrounds, providing instantaneous understanding of as well as
long-term commitment to empathic communication.
163. Fablur
NEW LISTING
164. Fensadense
165. Hyperinstruments
Tod Machover
The Hyperinstruments project creates expanded musical instruments and uses technology to give
extra power and finesse to virtuosic performers. They were designed to augment a wide range of
traditional musical instruments and have been used by some of the world's foremost performers
(Yo-Yo Ma, the Los Angeles Philharmonic, Peter Gabriel, and Penn & Teller). Research focuses
on designing computer systems that measure and interpret human expression and feeling,
exploring appropriate modalities and content of interactive art and entertainment environments,
and building sophisticated interactive musical instruments for non-professional musicians,
students, music lovers, and the general public. Recent projects include the production of a new
version of the "classic" Hyperstring Trilogy for the Lucerne Festival, and the design of a new
generation of Hyperinstruments, for Fensadense and other projects, that emphasizes
measurement and interpretation of inter-player expression and communication, rather than simply
the enhancement of solo performance.
Alumni Contributors: Roberto M. Aimi, Mary Farbood, Ed Hammond, Tristan Jehan, Margaret Orth,
Dan Overholt, Egon Pasztor, Joshua Strickon, Gili Weinberg and Diana Young
166. Hyperproduction:
Advanced Production
Systems
167. Hyperscore
Tod Machover
Hyperscore is an application to introduce children and non-musicians to musical composition and
creativity in an intuitive and dynamic way. The "narrative" of a composition is expressed as a
line-gesture, and the texture and shape of this line are analyzed to derive a pattern of
tension-release, simplicity-complexity, and variable harmonization. The child creates or selects
individual musical fragments in the form of chords or melodic motives, and layers them onto the
narrative-line with expressive brushstrokes. The Hyperscore system automatically realizes a full
composition from a graphical representation. Currently, Hyperscore uses a mouse-based
interface; the final version will support freehand drawing, and integration with the Music Shapers
and Beatbugs to provide a rich array of tactile tools for manipulation of the graphical score.
Alumni Contributors: Mary Farbood, Ed Hammond, Tristan Jehan, Margaret Orth, Dan Overholt,
Egon Pasztor, Joshua Strickon, Gili Weinberg and Diana Young
Expert or fraud, the powerful person in front of an orchestra or choir attracts both hate and
admiration. But what is the actual influence a conductor has on the musician and the sounding
result? To throw light on the fundamental principles of this special gestural language, we try to
prove a direct correlation between the conductor's gestures, muscle tension, and the physically
measurable reactions of musicians in onset-precision, muscle tension, and sound quality. We also
measure whether the mere form of these gestures causes different levels of stress or arousal.
With this research we aim not only to contribute to the development of a theoretical framework on
conducting, but also to enable a precise mapping of gestural parameters in order to develop and
demonstrate a new system for the enhancement of musical learning, performance, and
expression.
Media Scores extends the concept of a musical score to other modalities, facilitating the process
of authoring and performing multimedia compositions and providing a medium through which to
realize a modern-day Gesamtkunstwerk. The web-based Media Scores environment and related
show control systems leverage research into multimodal representation and encoding of
expressive intent. Using such a tool, the composer will be able to shape an artistic work that may
be performed through a variety of media and modalities. Media Scores offer the potential for
authoring content using live performance data as well as audience participation and interaction.
This paradigm bridges the extremes of the continuum from composition to performance, allowing
for improvisation. The Media Score also provides a common point of reference in collaborative
productions, as well as the infrastructure for real-time control of technologies used during live
performance.
Tod Machover, Punchdrunk, Akito Van Troyer, Ben Bloomberg, Gershon Dublon, Jason
Haas, Elena Jessop, Brian Mayton, Eyal Shahar, Jie Qi, Nicholas Joliat, and Peter Torpey
We have collaborated with London-based theater group Punchdrunk to create an online platform
connected to their NYC show, Sleep No More. In the live show, masked audience members
explore and interact with a rich environment, discovering their own narrative pathways. We have
developed an online companion world to this real-life experience, through which online participants
partner with live audience members to explore the interactive, immersive show together. Pushing
the current capabilities of web standards and wireless communications technologies, the system
delivers personalized multimedia content, allowing each online participant to have a unique
experience co-created in real time by his own actions and those of his onsite partner. This project
explores original ways of fostering meaningful relationships between online and onsite audience
members, enhancing the experiences of both through the affordances that exist only at the
intersection of the real and the virtual worlds.
Tod Machover, Charles Holbrow, Elena Jessop, Rebecca Kleinberger, Le Laboratoire, and
the Dalai Lama Center at MIT
Our voice is an important part of our individuality. From the voices of others, we understand a
wealth of non-linguistic information, such as identity, social-cultural clues, and emotional state. But
the relationship we have with our own voice is less obvious. We don't hear it the way others do,
and our brain treats it differently from any other sound. Yet its sonority is deeply connected with
how we are perceived by society and how we see ourselves, body and mind. This project is
composed of software, devices, installations, and thoughts used to challenge us to gain new
insights on our voices. To increase self-awareness, we propose different ways to extend, project,
and visualize the voice. We show how our voices sometimes escape our control, and we explore
the consequences in terms of self-reflection, cognitive processes, therapy, affective features
visualization, and communication improvement.
Vocal Vibrations explores relationships between human physiology and the vibrations of the voice.
The voice is an expressive instrument that nearly everyone possesses and that is intimately linked
to the physical form. In collaboration with Le Laboratoire and the MIT Dalai Lama Center, we
examine the hypothesis that voices can influence mental and physical health through
physico-physiological phenomena. The first Vocal Vibrations installation premiered in Paris,
France, in March 2014. The public "Chapel" space of the installation encouraged careful
meditative listening. A private "Cocoon" environment guided an individual to explore his/her voice,
augmented by tactile and acoustic stimuli. Vocal Vibrations then had a successful showing as the
inaugural installation at the new Le Laboratoire Cambridge from November 2014 through March
2015. The installation was incorporated into Le Laboratoire's Memory/Witness of the Unimaginable
exhibit, April 17-August 16, 2015.
Alumni Contributor: Eyal Shahar
177. BrainVR: A
Neuroscience Learning
Experience in Virtual
Reality
NEW LISTING
Pattie Maes, Scott Greenwald, Alex Norton, Amy Robinson (EyeWire), and Daniel Citron
(Harvard University)
BrainVR is a learning experience that leverages motion-tracked virtual reality to
convey cutting-edge knowledge in neuroscience. In particular, an interactive 3D model of the
retina illustrates how the eye detects moving objects. The goal of the project is to explore the
potential of motion-tracked virtual reality for learning complex concepts, and to build reusable tools
that maximize this potential across knowledge domains.
178. Enlight
Tal Achituv, Natan Linder, Rony Kubat, Pattie Maes and Yihui Saw
In physics education, virtual simulations have given us the ability to show and explain phenomena
that are otherwise invisible to the naked eye. However, experiments with analog devices still play
an important role. They allow us to verify theories and discover ideas through experiments that are
not constrained by software. What if we could combine the best of both worlds? We achieve that
by building our applications on a projected augmented reality system. By projecting onto physical
objects, we can paint the phenomena that are invisible. With our system, we have built "physical
playgrounds": simulations that are projected onto the physical world and that respond to detected
objects in the space. Thus, we can draw virtual field lines on real magnets, track and provide
history on the location of a pendulum, or even build circuits with both physical and virtual
components.
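Tracking and projecting a pendulum's location history reduces to keeping a short, fading trail of recent positions. A minimal sketch (hypothetical class and trail length, not the actual Enlight renderer):

```python
from collections import deque

class PendulumTrail:
    """Keep a fading trail of recent pendulum positions for projection."""

    def __init__(self, length=5):
        self.points = deque(maxlen=length)  # oldest positions drop off

    def update(self, x, y):
        """Record a new position; return (x, y, opacity) triples to draw."""
        self.points.append((x, y))
        n = len(self.points)
        # pair each point with an opacity that fades toward the tail
        return [(px, py, (i + 1) / n) for i, (px, py) in enumerate(self.points)]

trail = PendulumTrail(length=3)
trail.update(0, 0)
trail.update(1, 0)
print(trail.update(2, 0))  # newest point is drawn fully opaque
```

On each camera frame, the detected bob position would be fed to `update`, and the returned triples projected back onto the table as the pendulum's visible history.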
179. Express
NEW LISTING
Cristina Powell (artist, who has cerebral palsy), Pattie Maes, Tal Achituv, and Daniel Kish (expert
in perception and accessibility for the blind)
We are developing a new and exciting tool for expression in paint, combining technology and art to
bring together the physical and the virtual through the use of robotics, artificial intelligence, signal
processing, and wearable technology. Our technology promotes expression in paint not only by
making it a lot more accessible, but also by making it flexible, adaptive, and fun, for everyone
across the entire spectrum of abilities. With the development of the technology, new forms of art
also emerge, such as hyper, hybrid, and collaborative painting. All of these can be extended to
remote operation (or co-operation) thanks to the modular system design. For example, a parent
and a child can be painting together even when far apart; a disabled person can experience an
embodied painting experience; and medical professionals can reach larger populations with
physical therapy, occupational therapy, and art therapy, including motor/neuromuscular impaired
persons.
EyeRing is a wearable, intuitive interface that allows a person to point at an object to see or hear
more information about it. We came up with the idea of a micro-camera worn as a ring on the
index finger with a button on the side, which can be pushed with the thumb to take a picture or a
video that is then sent wirelessly to a mobile phone to be analyzed. The user tells the system what
information they are interested in and receives the answer in either auditory or visual form. The
device also provides some simple haptic feedback. This finger-worn configuration of sensors and
actuators opens up a myriad of possible applications for the visually impaired as well as for sighted
people.
181. FingerReader
FingerReader is a finger-worn device that helps the visually impaired to effectively and efficiently
read paper-printed text. It works in a local-sequential manner for scanning text that enables
reading of single lines or blocks of text, or skimming the text for important sections while providing
auditory and haptic feedback.
2D screens, even stereoscopic ones, limit our ability to interact with and collaborate on 3D data.
We believe that an augmented reality solution, where 3D data is seamlessly integrated in the real
world, is promising. We are exploring a collaborative augmented reality system for visualizing and
manipulating 3D data using a head-mounted, see-through display that allows for communication
and data manipulation using simple hand gestures.
185. HRQR
HRQR is a visual Human and Machine Readable Quick Response Code that can replace usual 2D
barcode and QR Code applications. The code can be read by humans in the same way it can be
read by machines. Instead of relying on computational error correction, the system allows a
human to read the message and therefore reinterpret errors in the visual image. The
design is inspired by Kufic, an early style of Arabic calligraphy.
Pattie Maes, Joseph A. Paradiso, Xavier Benavides Palos and Chang Long Zhu Jin
Invisibilia seeks to explore the use of Augmented Reality (AR), head-mounted displays (HMD),
and depth cameras to create a system that makes invisible data from our environment visible,
combining widely accessible hardware to visualize layers of information on top of the physical
world. Using our implemented prototype, the user can visualize, interact with, and modify
properties of sound waves in real time by using intuitive hand gestures. Thus, the system supports
experiential learning about certain physics phenomena through observation and hands-on
experimentation.
JaJan! is a telepresence system wherein remote users can learn a second language together
while sharing the same virtual environment. JaJan! can support five aspects of language learning:
learning in context; personalization of learning materials; learning with cultural information;
enacting language-learning scenarios; and supporting creativity and collaboration. Although JaJan!
is still in an early stage, we are confident that it will bring profound changes to the ways in which
we experience language learning and can make a great contribution to the field of second
language education.
KickSoul is a wearable device that maps natural foot movements into inputs for digital devices. It
consists of an insole with embedded sensors that track movements and trigger actions in devices
that surround us. We present a novel approach to use our feet as input devices in mobile
situations when our hands are busy. We analyze the foot's natural movements and their meaning
before activating an action.
189. LuminAR
LuminAR reinvents the traditional incandescent bulb and desk lamp, evolving them into a new
category of robotic, digital information devices. The LuminAR Bulb combines a Pico-projector,
camera, and wireless computer in a compact form factor. This self-contained system provides users with just-in-time projected information and a gestural user interface, and it can be screwed
into standard light fixtures everywhere. The LuminAR Lamp is an articulated robotic arm, designed
to interface with the LuminAR Bulb. Both LuminAR form factors dynamically augment their
environments with media and information, while seamlessly connecting with laptops, mobile
phones, and other electronic devices. LuminAR transforms surfaces and objects into interactive
spaces that blend digital media and information with the physical space. The project radically
rethinks the design of traditional lighting objects, and explores how we can endow them with novel
augmented-reality interfaces.
Page 32
April 2016
Rony Daniel Kubat, Natan Linder, Ben Weissmann, Niaja Farve, Yihui Saw and Pattie Maes
Projected augmented reality in the manufacturing plant can increase worker productivity, reduce
errors, gamify the workspace to increase worker satisfaction, and collect detailed metrics. We
have built new LuminAR hardware customized for the needs of the manufacturing plant and
software for a specific manufacturing use case.
Move Your Glass is an activity and behavior tracker that also tries to increase wellness by nudging
the wearer to engage in positive behaviors.
Valentin Heun, Shunichi Kasahara, James Hobin, Kevin Wong, Michelle Suh, Benjamin F
Reynolds, Marc Teyssier, Eva Stern-Rodriguez, Afika A Nyati, Kenny Friedman, Anissa
Talantikite, Andrew Mendez, Jessica Laughlin, Pattie Maes
Open Hybrid is an open source augmented reality platform for physical computing and the Internet of Things. It is built on web technologies and Arduino.
193. PsychicVR
Pattie Maes, Judith Amores Fernandez, Xavier Benavides Palos and Daniel Novy
PsychicVR integrates a brain-computer interface device and virtual reality headset to improve
mindfulness while enjoying a playful immersive experience. The fantasy that any of us could have
superhero powers has always inspired us, and by using Virtual Reality and real-time brain activity
sensing we are moving one step closer to making this dream real. We non-invasively monitor and
record electrical activity of the brain and incorporate this data in the VR experience using an
Oculus Rift and the MUSE headband. Brain activity, sensed by a series of EEG sensors, is fed back to the user via 3D content in the virtual environment. When users are
focused, they are able to make changes in the 3D environment and control their powers. Our
system increases mindfulness and helps achieve higher levels of concentration while entertaining
the user.
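The closed loop described above can be sketched in a few lines of Python; the band-power ratio, the full-focus threshold, and the smoothing constant below are illustrative assumptions, not the project's actual parameters:

```python
# Illustrative sketch (not PsychicVR's actual code): mapping a smoothed EEG
# "focus" estimate to the intensity of a VR effect. The beta/alpha ratio,
# threshold, and smoothing constant are all assumptions.

def focus_score(beta_power, alpha_power):
    """Estimate focus as the beta/alpha band-power ratio, clamped to [0, 1]."""
    ratio = beta_power / (alpha_power + 1e-9)
    return max(0.0, min(1.0, ratio / 2.0))  # assume ratio ~2.0 means full focus

class EffectController:
    """Low-pass filter the focus score so VR effects change smoothly."""
    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing
        self.level = 0.0

    def update(self, beta_power, alpha_power):
        target = focus_score(beta_power, alpha_power)
        # Exponential moving average avoids flickering effects frame to frame.
        self.level = self.smoothing * self.level + (1 - self.smoothing) * target
        return self.level  # drive particle size, glow, etc. with this value
```

The smoothed level, rather than the raw score, would drive the 3D content, so a momentary spike in sensor noise does not make the user's "powers" flicker.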
Valentin Heun, Eva Stern-Rodriguez, Marc Teyssier, Shuo Yan, Kevin Wong, Michelle Suh
The Reality Editor is a new kind of tool for empowering you to connect and manipulate the
functionality of physical objects. Just point the camera of your smartphone at an object and its
invisible capabilities will become visible for you to edit. Drag a virtual line from one object to
another and create a new relationship between these objects. With this simplicity, you are able to
master the entire scope of connected objects.
Tal Achituv
Remot-IO is a system for mobile collaboration and remote assistance around Internet-connected
devices. It uses two head-mounted displays, cameras, and depth sensors to enable a remote
expert to be immersed in a local user's point of view, and to control devices in that user's
environment. The remote expert can provide guidance through hand gestures that appear in real
time in the local user's field of view as superimposed 3D hands. In addition, the remote expert can
operate devices in the novice's environment and bring about physical changes by using the same
hand gestures the novice would use. We describe a smart radio where the knobs of the radio can
be controlled by local and remote users. Moreover, the user can visualize, interact with, and modify properties of sound waves in real time by using intuitive hand gestures.
Scanner Grabber is a digital police scanner that enables reporters to record, play back, and export audio, as well as archive public safety radio (scanner) conversations. Like a TiVo for scanners, it's
an update on technology that has been stuck in the last century. It's a great tool for newsrooms.
For instance, a problem for reporters is missing the beginning of an important police incident
because they have stepped away from their desk at the wrong time. Scanner Grabber solves this
because conversations can be played back. Also, snippets of exciting audio, for instance a police
chase, can be exported and embedded online. Reporters can listen to files while writing stories, or
listen to older conversations to get a more nuanced grasp of police practices or long-term trouble
spots. Editors and reporters can use the tool for collaborating, or crowdsourcing/public
collaboration.
197. ScreenSpire
Pattie Maes, Tal Achituv, Chang Long Zhu Jin and Isa Sobrinho
Screen interactions have been shown to contribute to increases in stress, anxiety, and deficiencies
in breathing patterns. Since better respiration patterns can have a positive impact on wellbeing,
ScreenSpire improves respiration patterns during information work using subliminal biofeedback.
By using subtle graphical variations that are tuned to influence the user subconsciously, user distraction and cognitive load are minimized. To enable truly seamless interaction, we have adapted an RF-based sensor (ResMed S+ sleep sensor) to serve as a screen-mounted, contact-free respiration sensor. Traditionally, respiration sensing is achieved with either
invasive or on-skin sensors (such as a chest belt); having a contact-free sensor contributes to
increased ease, comfort, and user compliance, since no special actions are required from the
user.
199. Skrin
ShowMe is an immersive mobile collaboration system that allows remote users to communicate
with peers using video, audio, and gestures. With this research, we explore the use of
head-mounted displays and depth sensor cameras to create a system that (1) enables remote
users to be immersed in another person's view, and (2) offers a new way of sending and receiving
the guidance of an expert through 3D hand gestures. With our system, both users share the same physical environment and can perceive real-time inputs from each other.
Skrin is a skin interface that helps users express themselves using different hand gestures. To
achieve that, we use lighting (on hands) as our language. This project is part of the course Sci
Fab: Science Fiction-Inspired Prototyping.
200. SmileCatcher
202. TagMe
TagMe is an end-user toolkit for easy creation of responsive objects and environments. It consists
of a wearable device that recognizes the object or surface the user is touching. The user can make
everyday objects come to life through the use of RFID tag stickers, which are read by an RFID
bracelet whenever the user touches the object. We present a novel approach to creating simple and
customizable rules based on emotional attachment to objects and social interactions of people.
Using this simple technology, the user can extend their application interfaces to include physical
objects and surfaces into their personal environment, allowing people to communicate through
everyday objects in very low-effort ways.
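At its core, a TagMe-style rule table maps tag IDs read by the bracelet to user-defined actions. The sketch below is hypothetical; the tag names, actions, and function names are invented for illustration and are not the toolkit's API:

```python
# Hypothetical sketch of TagMe-style rules: map RFID tag IDs read by a
# wearable bracelet to user-defined actions. Tag names and actions are
# invented for illustration.

log = []  # records triggered actions, standing in for real device calls

RULES = {
    "tag:plant": lambda: log.append("watering reminder sent"),
    "tag:mug": lambda: log.append("coffee break logged"),
}

def on_tag_read(tag_id, rules=RULES):
    """Fire the rule registered for the touched object, if any."""
    action = rules.get(tag_id)
    if action is None:
        return False  # unknown tag: ignore the touch event
    action()
    return True
```

Because each rule is just a tag-to-action pairing, end users can extend the table without programming knowledge, which is the low-effort customization the toolkit aims for.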
204. WATCH
NEW LISTING
205. 3D Printing of
Functionally Graded
Materials
207. Anthozoa
Neri Oxman
A 3D-printed dress debuted during Paris Fashion Week Spring 2013 as part of a collaboration with fashion designer Iris Van Herpen for her show "Voltage." The 3D-printed skirt and cape were
produced using Stratasys' unique Objet Connex multi-material 3D printing technology, which
allows a variety of material properties to be printed in a single build. This allowed both hard and
soft materials to be incorporated within the design, crucial to the movement and texture of the
piece. Core contributors include: Iris Van Herpen, fashion designer (Amsterdam); Keren Oxman,
artist and designer (NY); and W. Craig Carter (Department of Materials Science and Engineering,
MIT). Fabricated by Stratasys.
208. Beast
Neri Oxman
Beast is an organic-like entity created synthetically by the incorporation of physical parameters into
digital form-generation protocols. A single continuous surface, acting both as structure and as skin,
is locally modulated for both structural support and corporeal aid. Beast combines structural,
environmental, and corporeal performance by adapting its thickness, pattern density, stiffness,
flexibility, and translucency to load, curvature, and skin-pressured areas respectively.
Neri Oxman, Jorge Duro-Royo, Markus Kayser, Jared Laucks and Laia Mogas-Soldevila
The Biblical story of the Tower of Babel involved a deliberate plan hatched by mankind to
construct a platform from which man could fight God. The tower represented the first documented
attempt at constructing a vertical city. The divine response to the master plan was to sever
communication by instilling a different language in each builder. Tragically, the building's ultimate
destruction came about through the breakdown of communications between its fabricators. In this
installation we redeem the Tower of Babel by creating its antithesis. We will construct a virtuous,
decentralized, yet highly communicative building environment of cable-suspended fabrication bots
that together build structures bigger than themselves. We explore themes of asynchronous motion,
multi-nodal fabrication, lightweight additive manufacturing, and the emergence of form through
fabrication. (With contributions from Carlos Gonzalez Uribe and Dr. James Weaver (WYSS
Institute and Harvard University))
210. Building-Scale 3D
Printing
Neri Oxman
Carpal Skin is a prototype glove designed to protect against Carpal Tunnel Syndrome, a
medical condition in which the median nerve is compressed at the wrist, leading to numbness,
muscle atrophy, and weakness in the hand. Night-time wrist splinting is the recommended
treatment for most patients before going into carpal tunnel release surgery. Carpal Skin is a
process by which to map the pain-profile of a particular patient--its intensity and duration--and to
distribute hard and soft materials to fit the patient's anatomical and physiological requirements,
limiting movement in a customized fashion. The form-generation process is inspired by animal
coating patterns in the control of stiffness variation.
Neri Oxman
214. FABRICOLOGY:
Variable-Property 3D
Printing as a Case for
Sustainable Fabrication
Neri Oxman
CNSILK explores the design and fabrication potential of silk fibers inspired by silkworm
cocoons for the construction of woven habitats. It explores a novel approach to the design and
fabrication of silk-based building skins by controlling the mechanical and physical properties of
spatial structures inherent in their microstructures using multi-axis fabrication. The method offers
construction without assembly, such that material properties vary locally to accommodate
structural and environmental requirements. This approach stands in contrast to functional
assemblies and kinetically actuated facades which require a great deal of energy to operate, and
are typically maintained by global control. Such material architectures could simultaneously bear
structural load, change their transparency so as to control light levels within a spatial compartment
(building or vehicle), and open and close embedded pores so as to ventilate a space.
The digitally reconfigurable surface is a pin matrix apparatus for directly creating rigid 3D surfaces
from a computer-aided design (CAD) input. A digital design is uploaded into the device, and a grid
of thousands of tiny pins, much like the popular pin-art toy, are actuated to form the desired
surface. A rubber sheet is held by vacuum pressure onto the tops of the pins to smooth out the
surface they form; this strong surface can then be used for industrial forming operations, simple
resin casting, and many other applications. The novel phase-changing electronic clutch array
allows the device to have independent position control over thousands of discrete pins with only a
single motorized "push plate," lowering the complexity and manufacturing cost of this type of
device. Research is ongoing into new actuation techniques to further lower the cost and increase
the surface resolution of this technology.
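Conceptually, driving the pin matrix from a CAD input reduces to sampling a height field on the pin grid and clamping each sample to the pins' travel range. The sketch below is a simplification; the normalized coordinates and maximum travel value are assumptions, not the device's actual control code:

```python
# Simplified sketch (assumed, not the actual controller) of deriving pin
# heights for a reconfigurable pin-matrix surface from a CAD height field.

def pin_heights(height_fn, rows, cols, max_travel=50.0):
    """Sample height_fn(u, v) on a rows x cols grid; u, v normalized to [0, 1]."""
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            u = r / max(rows - 1, 1)
            v = c / max(cols - 1, 1)
            h = height_fn(u, v)
            row.append(max(0.0, min(max_travel, h)))  # clamp to pin travel
        grid.append(row)
    return grid
```

With a single push plate and the clutch array, the controller would only need this target grid: each pin's clutch releases when the plate reaches that pin's height.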
Rapid prototyping technologies speed product design by facilitating visualization and testing of
prototypes. However, such machines are limited to using one material at a time; even high-end 3D
printers, which accommodate the deposition of multiple materials, must do so discretely and not in
mixtures. This project aims to build a proof-of-concept of a 3D printer able to dynamically mix and
vary the ratios of different materials in order to produce a continuous gradient of material
properties with real-time correspondence to structural and environmental constraints.
Alumni Contributors: Mindy Eng, William J. Mitchell and Rachel Fong
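The dynamic-mixing idea can be reduced to interpolating feed rates between two materials along the print path. The linear ramp and single total-flow model below are simplifying assumptions for illustration, not the printer's implementation:

```python
# Minimal sketch (an assumption, not the project's implementation) of how a
# two-material printer could vary mix ratios to produce a property gradient:
# interpolate the fraction of material B along the normalized print position.

def mix_ratio(position, start_ratio=0.0, end_ratio=1.0):
    """Fraction of material B at a normalized path position in [0, 1]."""
    t = max(0.0, min(1.0, position))
    return start_ratio + t * (end_ratio - start_ratio)

def feed_rates(position, total_flow=1.0):
    """Split a total volumetric flow between materials A and B."""
    b = mix_ratio(position) * total_flow
    return total_flow - b, b  # (flow_A, flow_B)
```

Replacing the linear ramp with a function of local structural or environmental constraints would give the real-time correspondence the project describes.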
215. FitSocket:
Measurement for
Attaching Objects to
People
Neri Oxman, Carlos Gonzalez Uribe and Hugh Herr and the Biomechatronics group
217. G3P
Neri Oxman, Markus Kayser, John Klein, Chikara Inamura, Daniel Lizardo, Giorgia Franchin,
Michael Stern, Shreya Dave, Peter Houk, MIT Glass Lab
Prosthetic sockets belong to a family of orthotic devices designed for amputee rehabilitation and
performance augmentation. Although such products are fabricated out of lightweight composite
materials and designed for optimal shape and size, they are limited in their capacity to offer local
control of material properties for optimizing load distribution and ergonomic fit over surface and
volume areas. Our research offers a novel workflow to enable the digital design and fabrication of
customized prosthetic sockets with variable impedance informed by MRI data. We implement
parametric environments to enable the controlled distribution of functional gradients of a
filament-wound carbon fiber socket.
Digital design and construction technologies for product and building scale are generally limited in
their capacity to deliver multi-functional building skins. Recent advancements in additive
manufacturing and digital fabrication at large are today enabling the fabrication of multiple
materials with combinations of mechanical, electrical, and optical properties; however, most of
these materials are non-structural and cannot scale to architectural applications. Operating at the
intersection of additive manufacturing, biology, and architectural design, the Glass Printing project
is an enabling technology for optical glass 3D printing at architectural scale designed to
manufacture multi-functional glass structures and facade elements. The platform deposits molten
glass in a layer-by-layer (FDM) fashion, implementing numerical control of tool paths, and it allows
for controlled optical variation across surface and volume areas.
Chikara Inamura, Daniel Lizardo, Michael Stern, Pierre-Thomas Brun, Peter Houk, Neri
Oxman
The G3P 2.0 platform sets itself apart from traditional 3D printers, industrial glass forming devices,
and its own predecessor G3P 1.0 by a fundamental restructuring of architecture and
computer-aided machining (CAM) process based on the material properties of silicate glass. Glass
is an extreme material. Working temperature range, viscosity, and thermal stress are sufficiently
high that we must rethink the glass deposition forming process using new expertise on material
behavior. The aim is to produce a platform that is less a printer in the conventional sense than a freeform fabrication tool for glass. What results is a novel fabrication platform making
use of a dynamic thermal and flow control system, digitally integrated with a four-axis motion
system. The material fundamentally drives how the machine is used, and in return, the machine
can change how glass is formed and used.
219. Gemini
Neri Oxman with Le Laboratoire (David Edwards, Founder), Stratasys, and SITU Fabrication
Gemini is an acoustical "twin chaise" spanning multiple scales of human existence, from the womb
to the stretches of the Gemini zodiac. We are exploring interactions between pairs: sonic and solar
environments, natural and synthetic materials, hard and soft sensations, and subtractive and
additive fabrication. Made of two material elements--a solid wood milled shell housing and an
intricate cellular skin made of sound-absorbing material--the chaise forms a semi-enclosed space
surrounding the human with a stimulation-free environment, recapitulating the ultimate quiet of the
womb. It is the first design to implement Stratasys' Connex3 technology using 44 materials with
different pre-set mechanical combinations varying in rigidity, opacity, and color as a function of
geometrical, structural, and acoustical constraints. This calming and still experience of being inside
the chaise is an antidote to the stimuli-rich world in which we live.
Neri Oxman, Will Patrick, Steven Keating, Sunanda Sharma; Stratasys; Christoph Bader
and Dominik Kolb; Pamela Silver and Stephanie Hays (Harvard Medical School); and Dr.
James Weaver
How can we design relationships between the most primitive and sophisticated life forms? Can we
design wearables embedded with synthetic microorganisms that can enhance and augment
biological functionality, and generate consumable energy when exposed to the sun? We explored
these questions through the creation of Mushtari, a 3D-printed wearable with 58 meters of internal
fluid channels. Designed to function as a microbial factory, Mushtari uses synthetic
microorganisms to convert sunlight into useful products for the wearer, engineering a symbiotic
relationship between two bacteria: photosynthetic cyanobacteria and E. coli. The cyanobacteria
convert sunlight to sucrose, and E. coli convert sucrose to useful products such as pigments,
drugs, food, fuel, and scents. This form of symbiosis, known as co-culture, is a phenomenon
commonly found in nature. Mushtari is part of the Wanderers collection, an astrobiological
exploration dedicated to medieval astronomers who explored worlds beyond by visiting worlds
within.
222. Meta-Mesh:
Computational Model
for Design and
Fabrication of
Biomimetic Scaled
Body Armors
Altec, BASF, Neri Oxman, Steven Keating, John Klein, Julian Leland and Nathan Spielberg
A collaboration between Professor Christine Ortiz (project lead), Professor Mary C. Boyce, Katia
Zolotovsky, and Swati Varshaney (MIT). Operating at the intersection of biomimetic design and
additive manufacturing, this research proposes a computational approach for designing
multifunctional scaled-armors that offer structural protection and flexibility in movement. Inspired
by the segmented exoskeleton of Polypterus senegalus, an ancient fish, we have developed a
hierarchical computational model that emulates structure-function relationships found in the
biological exoskeleton. Our research provides a methodology for the generation of biomimetic
protective surfaces using segmented, articulated components that maintain user mobility alongside
full-body coverage of doubly curved surfaces typical of the human body. The research is
supported by the MIT Institute for Soldier Nanotechnologies, the Institute for Collaborative
Biotechnologies, and the National Security Science and Engineering Faculty Fellowship Program.
Micro-Macro Fluidic Fabrication (MMFF) enables the control of mechanical properties through the
design of non-linear lattices embedded within multi-material matrices. At its core it is a hybrid
technique that integrates molding, casting, and macro-fluidics. Its workflow allows for the
fabrication of complex matrices with geometrical channels injected with polymers of different
pre-set mechanical combinations. This novel fabrication technique is implemented in the design
and fabrication of a midsole running shoe. The goal is to passively tune material stiffness across
surface area in order to absorb the impact force of the user's body weight relative to the ground,
and enhance the direction of the foot-strike impulse force relative to the center of body mass.
The MDCP is an in-progress research project consisting of a compound robotic arm system. The
system comprises a 6-axis KUKA robotic arm attached to the endpoint of a 3-axis Altec hydraulic
boom arm, which is mounted on a mobile platform. Akin to the biological model of the human
shoulder and hand, this compound system utilizes the large boom arm for gross positioning and
the small robotic arm for fine positioning and oscillation correction, respectively. Potential
applications include fabrication of non-standard architectural forms, integration of real-time on-site
sensing data, improvements in construction efficiency, enhanced resolution, lower error rates, and
increased safety.
225. Monocoque
Neri Oxman
Monocoque, French for "single shell," is a construction technique that supports structural
load using an object's external skin. Contrary to the traditional design of building skins that
distinguish between internal structural frameworks and non-bearing skin elements, this approach
promotes heterogeneity and differentiation of material properties. The project demonstrates the
notion of a structural skin using a Voronoi pattern, the density of which corresponds to multi-scalar
loading conditions. The distribution of shear-stress lines and surface pressure is embodied in the
allocation and relative thickness of the vein-like elements built into the skin. Its innovative 3D printing technology provides the ability to print parts and assemblies made of multiple materials
within a single build, as well as to create composite materials that present preset combinations of
mechanical properties.
Neri Oxman, Will Patrick, Sunanda Sharma, Steven Keating, Steph Hays, Elonore Tham,
Professor Pam Silver, and Professor Tim Lu
How can biological organisms be incorporated into product, fashion, and architectural design to
enable the generation of multi-functional, responsive, and highly adaptable objects? This research
pursues the intersection of synthetic biology, digital fabrication, and design. Our goal is to
incorporate engineered biological organisms into inorganic and organic materials to vary material
properties in space and time. We aim to use synthetic biology to engineer organisms with varied
output functionalities and digital fabrication tools to pattern these organisms and induce their
specific capabilities with spatiotemporal precision.
Neri Oxman, Steven Keating, Will Patrick and David Sun Kong (MIT Lincoln Laboratory)
Neri Oxman
Computation and fabrication in biology occur in aqueous environments. Through on-chip mixing,
analysis, and fabrication, microfluidic chips have introduced new possibilities in biology for over
two decades. Existing construction processes for microfluidics use complex, cumbersome, and
expensive lithography methods that produce single-material, multi-layered 2D chips. Multi-material
3D printing presents a promising alternative to existing methods that would allow microfluidics to
be fabricated in a single step with functionally graded material properties. We aim to create
multi-material microfluidic devices using additive manufacturing to replicate current devices, such
as valves and ring mixers, and to explore new possibilities enabled by 3D geometries and
functionally graded materials. Applications range from medicine to genetic engineering to product
design.
The values endorsed by vernacular architecture have traditionally promoted designs constructed
and informed by and for the environment, while using local knowledge and indigenous materials.
Under the imperatives and growing recognition of sustainable design, Rapid Craft seeks
integration between local construction techniques and globally available digital design
technologies to preserve, revive, and reshape these cultural traditions.
230. Raycounting
Neri Oxman
Raycounting is a method for generating customized light-shading constructions by registering the
intensity and orientation of light rays within a given environment. 3D surfaces of double curvature
are the result of assigning light parameters to flat planes. The algorithm calculates the intensity,
position, and direction of one or multiple light sources placed in a given environment, and assigns
local curvature values to each point in space corresponding to the reference plane and the light
dimension. Light performance analysis tools are reconstructed programmatically to allow for
morphological synthesis based on intensity, frequency, and polarization of light parameters as
defined by the user.
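The description above suggests a per-point computation along the following lines; the inverse-square falloff and the exponential intensity-to-curvature mapping are assumptions made for illustration, not the published algorithm:

```python
# Hedged sketch of the idea behind Raycounting (not the actual algorithm):
# accumulate intensity from point light sources at each sample point, then
# map intensity to a local curvature value for the shading surface.

import math

def intensity_at(point, lights):
    """Sum inverse-square contributions from (position, power) light sources."""
    total = 0.0
    for (lx, ly, lz), power in lights:
        px, py, pz = point
        d2 = (px - lx) ** 2 + (py - ly) ** 2 + (pz - lz) ** 2
        total += power / max(d2, 1e-9)  # guard against zero distance
    return total

def curvature_from_intensity(intensity, k_max=1.0):
    """Map intensity to curvature in [0, k_max); brighter means more curved."""
    return k_max * (1.0 - math.exp(-intensity))
```

Evaluating this over every point of a flat reference plane yields the field of local curvature values from which the doubly curved shading surface is generated.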
Neri Oxman, Jorge Duro-Royo, Carlos Gonzalez, Markus Kayser, and Jared Laucks, with
James Weaver (Wyss Institute, Harvard University) and Fiorenzo Omenetto (Tufts
University)
The Silk Pavilion explores the relationship between digital and biological fabrication. The primary
structure was created from 26 polygonal panels made of silk threads laid down by a CNC
(Computer-Numerically Controlled) machine. Inspired by the silkworm's ability to generate a 3D
cocoon out of a single multi-property silk thread, the pavilion's overall geometry was created using
an algorithm that assigns a single continuous thread across patches, providing various degrees of
density. Overall density variation was informed by deploying the silkworm as a biological "printer"
in the creation of a secondary structure. Positioned at the bottom rim of the scaffold, 6,500
silkworms spun flat, non-woven silk patches as they locally reinforced the gaps across
CNC-deposited silk fibers. Affected by spatial and environmental conditions (geometrical density,
variation in natural light and heat), the silkworms were found to migrate to darker and denser
areas.
232. SpiderBot
Neri Oxman, Jorge Duro-Royo, Markus Kayser, Sunanda Sharma, Dr. James Weaver (Wyss
Institute); Dr. Anne Madden; Dr. Noah Wilson-Rich (Best Bees Company)
The Synthetic Apiary proposes a new kind of environment, bridging urban and organismic scales
by exploring one of the most important organisms for both the human species and our planet:
bees. We explore the cohabitation of humans and other species through the creation of a
controlled atmosphere and associated behavioral paradigms. The project facilitates Mediated Matter's ongoing research into biologically augmented digital fabrication with eusocial insect communities at architectural, and possibly urban, scales. Many animal communities in nature present collective behaviors known as "swarming," prioritizing group survival over individuals, and
constantly working to achieve a common goal. Often, swarms of organisms are skilled builders; for
example, ants can create extremely complex networks by tunneling, and wasps can generate
intricate paper nests with materials sourced from local areas.
Neri Oxman, Jorge Duro-Royo, and Laia Mogas-Soldevila, in collaboration with Dr. Javier G.
Fernandez (Wyss Institute, Harvard University)
This research presents water-based robotic fabrication as a design approach and enabling
technology for additive manufacturing (AM) of biodegradable hydrogel composites. We focus on
expanding the dimensions of the fabrication envelope, developing structural materials for additive
deposition, incorporating material-property gradients, and manufacturing architectural-scale
biodegradable systems. The technology includes a robotically controlled AM system to produce
biodegradable composite objects, combining natural hydrogels with other organic aggregates. It
demonstrates the approach by designing, building, and evaluating the mechanics and controls of a
multi-chamber extrusion system. Finally, it provides evidence of large-scale composite objects
fabricated by our technology that display graded properties and feature sizes ranging from micro- to macro-scale. Fabricated objects may be chemically stabilized or dissolved in water and recycled
within minutes. Applications include the fabrication of fully recyclable products or temporary
architectural components, such as tent structures with graded mechanical and optical properties.
Dan Chen
Dan Chen
NEW LISTING
CremateBot is an apparatus that takes in human-body samples such as fingernails, hair, or dead
skin and turns them into ashes through cremation. Converting human remains to ashes becomes a critical experience, causing witnesses to question their sense of existence and physical self. CremateBot transforms our
physical self and celebrates our rebirth through self-regeneration. The transformation and rebirth
open our imagination to go beyond our physical self and cross the span of time. Similar to
Theseus' paradox, the dead human cells, which at one point were considered part of our physical selves and helped to define our sense of existence, are continually replaced with newly generated
cells. With recent advancements in implants, biomechatronics, and bioengineered organs, how we
define ourselves is increasingly blurred.
Driven female professionals often choose to pursue their careers in lieu of having children. For
many of them, strategies of surrogacy or freezing eggs are popular options not only because of
available technological advancements, but also because of shifts in cultural perspective enabled
by a new biotechnical regime. The dichotomy that forces an either/or divide between motherhood
and career can be seen as a modern form of regulatory control on women. The question of
reproduction becomes a matter of our bio-techno-capitalist society as a constraint on women's voices
and freedom. Companies such as Facebook and Apple have recently offered to pay female
employees to freeze their eggs so they can continue with their careers, without interrupting their
dreams of having children. However, many ethical, social, and political dilemmas still surround surrogacy, and these questions must be posed to the public.
Dan Chen
Nostalgic Touch proposes a new ritual for remembering the deceased in the digital and
multicultural age. It is an apparatus that captures hand motions and attempts to replicate the
sensation of intimacy or affection by playing back the comforting gestures. It stores gesture data of
the people you cared about, then plays them back after they are gone. Like rituals in many religions, it gives us a sense of comfort in coping with death. People in Japan, Singapore, and
China live with high standards of technology, but many embrace religious rituals and superstitions
as an important part of their wellbeing and decision-making. Nostalgic Touch explores how
emerging technologies could be used to enrich the experience of these rituals. How could we
augment these rituals to give an even better sense of comfort and intimacy?
Mary Tsang
Open Source Estrogen combines do-it-yourself science, body and gender politics, and ethics of
hormonal manipulation. The goal of the project is to create an open source protocol for estrogen
biosynthesis. The kitchen is a politically charged space, prescribed to women as their proper
dwelling, making it the appropriate place to prepare an estrogen synthesis recipe. With recent
developments in the field of synthetic biology, the customized kitchen laboratory may be a
ubiquitous possibility in the near future. Open-access estrogen would allow women and
transgender females to exercise greater control over their bodies by circumventing governments
and institutions. We want to ask: What are the biopolitics governing our bodies? More importantly,
is it ethical to self-administer self-synthesized hormones?
April 2016
Page 41
Ai Hasegawa
Facing a potential food crisis driven by overpopulation, this project explores a possible future
where a small community of activists arises to design an edible cockroach that can survive in
harsh environments. These genetically modified roaches are designed to pass their genes to
subsequent generations; the awful black and brown roaches will thus be pushed to extinction by the
newly designed, cute, colorful, tasty, and highly nutritious "pop roach." Each color of "pop
roach" corresponds to a different flavor, nutritional profile, and function, while the original
roaches remain black or brown and are not recommended for eating. How will genetic engineering shift our perception of
food and eating habits? Pop Roach explores how we can expand our perception of cuisine to
solve some of the world's most pressing problems.
242. Tranceflora: Amy's Glowing Silk
Sputniko!
The Red String of Fate is an East Asian myth in which gods tie an invisible red string between
those who are destined to be together. Sputniko! has collaborated with scientists from NIAS to
genetically engineer silkworms to spin this mythical 'Red String of Fate' by inserting genes that
produce oxytocin, a social-bonding 'love' hormone, and the genes of a red-glowing coral into
silkworm eggs. Science has long challenged and demystified the world of mythologies: from
Galileo's belief that the earth revolved around the sun, to Darwin's theory of evolution and
beyond--but in the near future, could science be recreating our mythologies? The film unravels a
story around the protagonist Tamaki, an aspiring genetic engineer, who engineers her own "Red
Silk of Fate" in the hopes of winning the heart of her crush, Sachihiko. However, strange, mythical
powers start to inhabit her creation....
Sputniko!
We collaborated with NIAS (National Institute of Agricultural Science) to genetically engineer
silkworms to develop novel kinds of silk which can be used for future fashion. For the exhibition,
we designed a Nishijin-Kimono dress, working with NIAS's glowing silk (created by injecting the
genes of a glowing coral and jellyfish into silkworm eggs) and exhibited the piece in Tokyo's Gucci
Gallery. More detailed information about this project can be found here:
http://sputniko.com/2015/04/amyglowingsilk/.
247. DoppelLab: Experiencing Multimodal Sensor Data
Homes and offices are being filled with sensor networks to answer specific queries and solve
pre-determined problems, but no comprehensive visualization tools exist for fusing these disparate
data to examine relationships across spaces and sensing modalities. DoppelLab is a cross-reality
virtual environment that represents the multimodal sensor data produced by a building and its
inhabitants. Our system encompasses a set of tools for parsing, databasing, visualizing, and
sonifying these data; by organizing data by the space from which they originate, DoppelLab
provides a platform to make both broad and specific queries about the activities, systems, and
relationships in a complex, sensor-rich environment.
We are evaluating new methods of interacting and controlling solid-state lighting based on our
findings of how participants experience and perceive architectural lighting in our new lighting
laboratory (E14-548S). This work, aptly named "Experiential Lighting," reduces the complexity of
modern lighting controls (intensity/color/space) into a simple mapping, aided by both human input
and sensor measurement. We believe our approach extends beyond general lighting control and is
applicable in situations where human-based rankings and preference are critical requirements for
control and actuation. We expect our foundational studies to guide future camera-based systems
that will inevitably incorporate context in their operation (e.g., Google Glass).
The FingerSynth is a wearable musical instrument made up of a bracelet and set of rings that
enables its players to produce sound by touching nearly any surface in their environments. Each
ring contains a small, independently controlled audio exciter transducer. The rings sound loudly
when they touch a hard object, and are silent otherwise. When a wearer touches their own (or
someone else's) head, the contacted person hears sound through bone conduction, inaudible to
others. A microcontroller generates a separate audio signal for each ring, and can take user input
through an accelerometer in the form of taps, flicks, and other gestures. The player controls the
envelope and timbre of the sound by varying the physical pressure and the angle of their finger on
the surface, or by touching differently resonant surfaces. The FingerSynth encourages players to
experiment with the materials around them and with one another.
In this project we investigate how the process of building a circuit can be made more organic, like
sketching in a sketchbook. We integrate a rechargeable power supply into the spine of a traditional
sketchbook, so that each page of the sketchbook has power connections. This enables users to
begin creating functioning circuits directly onto the pages of the book and to annotate as they
would in a regular notebook. The sequential nature of the sketchbook allows creators to document
their process for circuit design. The book also serves as a single physical archive of various
hardware designs. Finally, the portable and rechargeable nature of the book allows users to take
their electronic prototypes off of the lab bench and share their creations with people outside of the
lab environment.
Imagine a future where lights are not fixed to the ceiling, but follow us wherever we are. In this
colorful world we enjoy lighting that is designed to go along with the moment, the activity, our
feelings, and our outfits. Halo is a wearable lighting device created to explore this scenario.
Different from architectural lighting, this personal lighting device aims to illuminate and present its
user. Halo changes the wearer's appearance with the ease of a button click, similar to adding a
filter to a photograph. It can also change the user's view of the world, brightening up a rainy day or
coloring a gray landscape. Halo can react to activities and adapt based on context. It is a
responsive window between the wearer and his or her surroundings.
Pattie Maes, Joseph A. Paradiso, Xavier Benavides Palos and Chang Long Zhu Jin
254. ListenTree: Audio-Haptic Display in the Natural Environment
Glorianna Davenport, Joe Paradiso, Gershon Dublon, Pragun Goyal and Brian Dean Mayton
257. MedRec
Ariel Ekblaw, Asaf Azaria, Thiago Vieira, Joe Paradiso, Andrew Lippman
NEW LISTING
With our Ubiquitous Sonic Overlay, we are working to place virtual sounds in the user's
environment, fixing them in space even as the user moves. We are working toward creating a
seamless auditory display, indistinguishable from the user's actual surroundings. Between
bone-conduction headphones, small and cheap orientation sensors, and ubiquitous GPS, a
confluence of fundamental technologies is in place. However, existing head-tracking systems
either limit the motion space to a small area (e.g., Oculus Rift), or sacrifice precision for scale
using technologies like GPS. We are seeking to bridge the gap to create large outdoor spaces of
sonic objects.
KickSoul is a wearable device that maps natural foot movements into inputs for digital devices. It
consists of an insole with embedded sensors that track movements and trigger actions in devices
that surround us. We present a novel approach to use our feet as input devices in mobile
situations when our hands are busy. We analyze the foot's natural movements and their meaning
before activating an action.
Living Observatory is an initiative for documenting and interpreting ecological change that will
allow people, individually and collectively, to better understand relationships between ecological
processes, human lifestyle choices, and climate change adaptation. As part of this initiative, we
are developing sensor networks that document ecological processes and allow people to
experience the data at different spatial and temporal scales. Low-power sensor nodes capture
climate and other data at a high spatiotemporal resolution, while others stream audio. Sensors on
trees measure transpiration and other cycles, while fiber-optic cables in streams capture
high-resolution temperature data. At the same time, we are developing tools that allow people to
explore this data, both remotely and onsite. The remote interface allows for immersive 3D
exploration of the terrain, while visitors to the site will be able to access data from the network
around them directly from wearable devices.
We face a well-known need for innovation in electronic medical records (EMRs), but the
technology is cumbersome and innovation is impeded by inefficiency and regulation. We
demonstrate MedRec as a solution tuned to the needs of patients, the treatment community, and
medical researchers. It is a novel, decentralized record management system for EMRs that uses
blockchain technology to manage authentication, confidentiality, accountability, and data sharing.
A modular design integrates with providers' existing, local data-storage solutions, enabling
interoperability and making our system convenient and adaptable. As a key feature of our work,
we engage the medical research community with an integral role in the protocol. Medical
researchers provide the "mining" necessary to secure and sustain the blockchain authentication
log, in return for access to anonymized, medical metadata in the form of "transaction fees."
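The chain-of-custody idea behind MedRec's blockchain authentication log can be illustrated with a minimal hash-chained, append-only log, where each entry points to a record in a provider's existing storage. This is an illustrative sketch only, not MedRec's actual protocol or smart-contract design; the record fields and pointer format are hypothetical.

```python
import hashlib
import json

def append_block(chain, record):
    """Append a record whose hash commits to the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(block)
    return block

def verify(chain):
    """Recompute every hash; any tampered record breaks the chain."""
    prev = "0" * 64
    for block in chain:
        payload = json.dumps({"record": block["record"], "prev": block["prev"]},
                             sort_keys=True).encode()
        if block["prev"] != prev or block["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev = block["hash"]
    return True

# Blocks hold pointers into providers' local storage, not the records themselves.
chain = []
append_block(chain, {"patient": "p1", "pointer": "provider-db://rec/42"})
append_block(chain, {"patient": "p1", "pointer": "provider-db://rec/43"})
print(verify(chain))  # True
chain[0]["record"]["pointer"] = "tampered"
print(verify(chain))  # False
```

Storing only pointers in the chain mirrors the modular design described above: the log secures authentication and accountability while the data itself stays in providers' existing systems.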
259. NailO
Light enables our visual perception. It is the most common medium for displaying digital
information. Light regulates our circadian rhythms, affects productivity and social interaction, and
makes people feel safe. Yet despite the significance of light in structuring human relationships with
their environments on all these levels, we communicate very little with our artificial lighting
systems. Occupancy, ambient illuminance, intensity, and color preferences are the only input
signals currently provided to these systems. With advanced sensing technology, we can establish
better communication with our devices. This effort is often described as context-awareness.
Context has typically been divided into properties such as location, identity, affective state, and
activity. Using wearable and infrastructure sensors, we are interested in detecting these properties
and using them to control lighting. The Mindful Photons Project aims to close the loop and allow
our light sources to "see" us.
NailO is a nail-mounted gestural input surface inspired by commercial nail stickers. Using
capacitive sensing on printed electrodes, the interface can distinguish on-nail finger swipe
gestures with high accuracy (>92 percent). NailO works in real time: the system is miniaturized to
fit on the fingernail, while wirelessly transmitting the sensor data to a mobile phone or PC. NailO
allows for one-handed and always-available input, while being unobtrusive and discreet. The
device blends into the user's body, is customizable, fashionable, and even removable.
Sensor networks permeate our built and natural environments, but our means for interfacing to the
resultant data streams have not evolved much beyond HCI and information visualization.
Researchers have long experimented with wearable sensors and actuators on the body as
assistive devices. A user's neuroplasticity can, under certain conditions, transcend sensory
substitution to enable perceptual-level cognition of "extrasensory" stimuli delivered through
existing sensory channels. But there remains a huge gap between data and human sensory
experience. We are exploring the space between sensor networks and human augmentation, in
which distributed sensors become sensory prostheses. By contrast, conventional user interfaces
remain largely unincorporated by the body; our relationship to them is never fully pre-attentive.
Attention and proprioception are key, not only to moderate and direct stimuli, but also to enable
users to move through the world naturally, attending to the sensory modalities relevant to their
specific contexts.
Inspired by previous work in the field of sonification, we are building a data-driven composition
platform that will enable users to map collision event information from experiments in high-energy
physics to audio properties. In its initial stages, the tool will be used for outreach purposes,
allowing physicists and composers to interact with collision data through novel interfaces. Our
longer-term goal is to develop strategic mappings that facilitate the auditory perception of hidden
regularities in high dimensional datasets and thus evolve into a useful analysis tool for physicists
as well, possibly for the purpose of monitoring slow control data in experiment control rooms. The
project includes a website with real-time audio streams and basic event data, which is not yet
public.
SensorChimes aims to create a new canvas for artists leveraging ubiquitous sensing and data
collection. Real-time data from environmental sensor networks are realized as musical
composition. Physical processes are manifested as musical ideas, with the dual goal of making
meaningful music and rendering an ambient display. The Tidmarsh Living Observatory initiative,
which aims to document the transformation of a reclaimed cranberry bog, provides an opportunity
to explore data-driven musical composition based on a large-scale environmental sensor network.
The data collected from Tidmarsh are piped into a mapping framework, which a composer
configures to produce music driven by the data.
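A data-to-music mapping of the kind described above can be as simple as a linear map from a sensor reading onto a pitch range. The sketch below is a hypothetical illustration, not the actual SensorChimes mapping framework; the temperature range and the three-octave pitch span are assumptions for the example.

```python
def to_midi_pitch(value, lo, hi, pitch_lo=48, pitch_hi=84):
    """Linearly map a sensor reading in [lo, hi] to a MIDI note number."""
    value = max(lo, min(hi, value))          # clamp out-of-range readings
    frac = (value - lo) / (hi - lo)
    return round(pitch_lo + frac * (pitch_hi - pitch_lo))

# Map bog temperatures from -5..35 C onto three octaves (C3..C6).
print(to_midi_pitch(-5, -5, 35))   # 48 (C3)
print(to_midi_pitch(35, -5, 35))   # 84 (C6)
print(to_midi_pitch(15, -5, 35))   # 66 (midpoint)
```

A composer-configured framework would layer many such mappings, driving rhythm, timbre, and dynamics from different sensor streams.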
264. Skrin
265. Tid'Zam
NEW LISTING
Alex 'Sandy' Pentland, Harvard Humanitarian Initiative and Overseas Development Institute
bandicoot provides a complete, easy-to-use environment for researchers using mobile phone
metadata. It allows them to easily load their data, perform analysis, and export their results with a
few lines of code. It computes 100+ standardized metrics in three categories: individual (number of
calls, text response rate), spatial (radius of gyration, entropy of places), and social network
(clustering coefficient, assortativity). The toolbox is easy to extend and contains extensive
documentation with guides and examples.
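One of the spatial indicators named above, radius of gyration, measures how widely a person's visited locations spread around their center of mass. The standalone sketch below illustrates the metric itself; it is not bandicoot's implementation, and real usage would go through the toolbox's own loading and indicator functions.

```python
import math

def radius_of_gyration(positions):
    """Root-mean-square distance of visited positions from their centroid.

    positions: list of (x, y) coordinates, e.g. projected antenna locations.
    """
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    msd = sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in positions) / n
    return math.sqrt(msd)

# A user who never moves has radius 0; spreading out increases it.
print(radius_of_gyration([(0, 0), (0, 0)]))  # 0.0
print(radius_of_gyration([(0, 0), (2, 0)]))  # 1.0
```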
Data-Pop Alliance is a joint initiative on big data and development with a goal of helping to craft
and leverage the new ecosystem of big data--new personal data, new tools, new actors--to
improve decisions and empower people in a way that avoids the pitfalls of a new digital divide,
de-humanization, and de-democratization. Data-Pop Alliance aims to serve as a designer, broker,
and implementer of ideas and activities, bringing together institutions and individuals around
common principles and objectives through collaborative research, training and capacity building,
technical assistance, convening, knowledge curation, and advocacy. Our thematic areas of focus
include official statistics, socio-economic and demographic methods, conflict and crime, climate
change and environment, literacy, and ethics.
268. DeepShop: Understanding Purchase Patterns via Deep Learning
NEW LISTING
269. Enigma
270. Incentivizing
Cooperation Using
Social Pressure
Dhaval Adjodah, Erez Shmueli, David Shrier and Alex 'Sandy' Pentland
272. Location Recommendations Based on Large-Scale Call Detail Records
Alex 'Sandy' Pentland, Yan Leng, Jinhua Zhao and Larry Rudolph
NEW LISTING
We believe the narrative of listening only to experts, or of blindly trusting the wisdom of the
crowd, is flawed. Instead, we have developed a system that weights experts and lay people
differently and dynamically, and we show that a good balance is required. Our methodology leads to a
15 percent improvement in mean performance, 15 percent decrease in variance, and almost 30
percent increase in Sharpe-type ratio in a real online market.
The availability of large-scale longitudinal geolocation records offers planners and service providers
an unprecedented opportunity to understand human behavior. Location recommendations based
on these data sources can not only reduce information loads for travelers, but also increase
revenues for service providers. Large-scale behavioral datasets also transform the way planners and
authorities design systematic, efficient interventions and provide customized information, by
making a comprehensive picture of travel behavior available. In this research, we aim to make recommendations by
exploiting travelers' choice flexibilities. We infer implicit location preferences based on sparse and
passively collected CDR. We then formulate an optimization model with the objective of
maximizing overall satisfaction toward the recommendations with road capacity constraints. We
are implementing the method in Andorra, a small European country that relies heavily on tourism. We
demonstrate that the method can reduce the travel time caused by congestion while making
satisfactory location recommendations.
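One simple way to trade individual satisfaction against road capacity, in the spirit of the optimization described above, is a greedy assignment that gives each traveler the best-scoring location that still has room. This is a simplified heuristic sketch, not the project's optimization model; the travelers, locations, and satisfaction scores are hypothetical.

```python
def recommend(preferences, capacity):
    """Greedily assign each traveler their best remaining location.

    preferences: {traveler: [(location, satisfaction), ...]} sorted high-to-low.
    capacity:    {location: max travelers it can absorb}
    """
    load = {loc: 0 for loc in capacity}
    assignment = {}
    for traveler, prefs in preferences.items():
        for loc, score in prefs:
            if load[loc] < capacity[loc]:
                assignment[traveler] = loc
                load[loc] += 1
                break
    return assignment

prefs = {
    "t1": [("old_town", 0.9), ("ski_resort", 0.6)],
    "t2": [("old_town", 0.8), ("ski_resort", 0.7)],
    "t3": [("old_town", 0.7), ("ski_resort", 0.5)],
}
# The old town can only absorb two visitors; the third is redirected.
print(recommend(prefs, {"old_town": 2, "ski_resort": 5}))
# {'t1': 'old_town', 't2': 'old_town', 't3': 'ski_resort'}
```

A full formulation would maximize total satisfaction jointly rather than greedily, but the capacity constraint plays the same congestion-reducing role.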
Alex 'Sandy' Pentland, Bruno Lepri and David Shrier
The Mobile Territorial Lab (MTL) aims at creating a living laboratory integrated in the real life of
the Trento territory in Italy, open to manifold kinds of experimentations. In particular, the MTL is
focused on exploiting the sensing capabilities of mobile phones to track and understand human
behaviors (e.g., families' spending behaviors, lifestyles, mood, and stress patterns); on designing
and testing social strategies aimed at empowering individual and collective lifestyles through
attitude and behavior change; and on investigating new paradigms in personal data management
and sharing. This project is a collaboration with Telecom Italia SKIL Lab, Foundation Bruno
Kessler, and Telefonica I+D.
Yves-Alexandre de Montjoye, Laura Radaelli, Vivek Kumar Singh, Alex 'Sandy' Pentland
Even when real names and other personal information are stripped from metadata datasets, it is
often possible to use just a few pieces of information to identify a specific person. Here, we study
three months of credit card records for 1.1 million people and show that four spatiotemporal points
are enough to uniquely reidentify 90 percent of individuals. We show that knowing the price of a
transaction increases the risk of reidentification by 22 percent, on average. Finally, we show that
even data sets that provide coarse information at any or all of the dimensions provide little
anonymity, and that women are more reidentifiable than men in credit card metadata.
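The measure underlying this kind of result is unicity: whether a handful of known spatiotemporal points singles out exactly one trace in the dataset. A minimal sketch (with toy traces, not the credit card data):

```python
def is_unique(known_points, traces):
    """True if exactly one trace contains all of the known points."""
    matches = [t for t in traces if known_points <= t]
    return len(matches) == 1

# Each trace is a set of (place, time) points for one person.
traces = [
    {("shopA", "mon"), ("cafeB", "tue"), ("gymC", "wed")},
    {("shopA", "mon"), ("cafeB", "tue"), ("parkD", "wed")},
]
# Two points shared by both people identify no one...
print(is_unique({("shopA", "mon"), ("cafeB", "tue")}, traces))  # False
# ...but one more distinctive point singles out person 0.
print(is_unique({("shopA", "mon"), ("gymC", "wed")}, traces))   # True
```

Adding side information such as price, as the study shows, effectively refines each point and makes traces easier to single out.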
275. openPDS/SafeAnswers: Protecting the Privacy of Metadata
Alex 'Sandy' Pentland, Brian Sweatt, Erez Shmueli, and Yves-Alexandre de Montjoye
Alex 'Sandy' Pentland, Yan Leng, Jinhua Zhao and Larry Rudolph
NEW LISTING
In a world where sensors, data storage, and processing power are too cheap to meter, how do you
ensure that users can realize the full value of their data while protecting their privacy? openPDS is
a field-tested, personal metadata management framework that allows individuals to collect, store,
and give fine-grained access to their metadata to third parties. SafeAnswers is a new and practical
way of protecting the privacy of metadata at an individual level. SafeAnswers turns a hard
anonymization problem into a more tractable security one. It allows services to ask questions
whose answers are calculated against the metadata, instead of trying to anonymize individuals'
metadata. Together, openPDS and SafeAnswers provide a new way of dynamically protecting
personal metadata.
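The SafeAnswers idea of returning computed answers rather than raw metadata can be sketched as a store that only evaluates vetted questions. This is a hypothetical illustration, not the openPDS API; the question names and record format are assumptions.

```python
class PersonalDataStore:
    """Holds raw metadata; only vetted, aggregate answers ever leave."""

    def __init__(self, records):
        self._records = records  # raw metadata stays private

    def safe_answer(self, question):
        # Whitelist of questions whose answers are coarse aggregates.
        allowed = {
            "call_count": lambda r: sum(1 for x in r if x["type"] == "call"),
            "active_at_night": lambda r: any(x["hour"] >= 23 or x["hour"] < 5
                                             for x in r),
        }
        if question not in allowed:
            raise PermissionError("question not permitted")
        return allowed[question](self._records)

pds = PersonalDataStore([
    {"type": "call", "hour": 14},
    {"type": "text", "hour": 23},
    {"type": "call", "hour": 9},
])
print(pds.safe_answer("call_count"))       # 2
print(pds.safe_answer("active_at_night"))  # True
```

A service asking "is this user active at night?" learns one bit, while the anonymization-resistant raw metadata never leaves the individual's store.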
Markets are notorious for bubbles and bursts. Other research has found that crowds of lay-people
can replace even leading experts to predict everything from product sales to the next big
diplomatic event. In this project, we leverage both threads of research to see how prediction
markets can be used to predict business and technological innovations, and use them as a model
to fix financial bubbles. For example, a prediction market rolled out inside Intel led to better
predictions than the official Intel forecast 75
percent of the time. Prediction markets also led to as much as a 25 percent reduction in mean
squared error over the predictions of official experts at Google, Ford, and Koch Industries.
Location prediction is a critical building block in many location-based services and transportation
management. This project explores next-location prediction based on the longitudinal
movements among the locations individuals have visited, as observed in call detail records (CDR).
In a nutshell, we apply a recurrent neural network (RNN) to next-location prediction on CDR. An RNN
can take sequential input with no restriction on the dimensions of the input, and the method can
infer hidden similarities among locations and interpret their semantic meanings.
We compare the proposed method with a Markov model and a naive baseline, showing that the RNN
achieves better accuracy in location prediction.
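The Markov baseline that such an RNN is compared against can be sketched as a first-order transition model: predict the most frequent successor of the current location. The location names below are hypothetical stand-ins for the antenna IDs in real CDR.

```python
from collections import Counter, defaultdict

def train_markov(sequences):
    """Count transitions between consecutive locations."""
    transitions = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            transitions[a][b] += 1
    return transitions

def predict_next(transitions, current):
    """Most frequent successor of the current location, if any seen."""
    followers = transitions.get(current)
    return followers.most_common(1)[0][0] if followers else None

seqs = [["home", "work", "home"],
        ["home", "work", "home", "gym"]]
model = train_markov(seqs)
print(predict_next(model, "home"))  # 'work'
print(predict_next(model, "work"))  # 'home'
```

Unlike this baseline, an RNN conditions on the whole history rather than only the current location, which is where its accuracy gains come from.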
Alex 'Sandy' Pentland, Benjamin Waber and Daniel Olguin Olguin
Data mining of email has provided important insights into how organizations function and what
management practices lead to greater productivity. But important communications are almost
always face-to-face, so we are missing the greater part of the picture. Today, however, people
carry cell phones and wear RFID badges. These body-worn sensor networks mean that we can
potentially know who talks to whom, and even how they talk to each other. Sensible Organizations
investigates how these new technologies for sensing human interaction can be used to reinvent
organizations and management.
Javier Hernandez Rivera, Weixuan 'Vincent' Chen, Akane Sano, and Rosalind W. Picard
This study attempts to examine humans' affective responses to superimposed sinusoidal signals.
These signals can be perceived either through sound, in the case of electronically synthesized
musical notes, or through vibro-tactile stimulation, in the case of vibrations produced by vibrotactile
actuators. This study is concerned with the perception of superimposed vibrations, whereby two or
more sinusoidal signals are perceived simultaneously, producing a perceptual impression
substantially different from that of each signal alone, owing to interactions between the perceived
sinusoidal vibrations that give rise to a unified percept of a sinusoidal chord. The theory of interval
affect was derived from systematic analyses of Indian, Chinese, Greek, and Arabic music theory
and tradition, and proposes a universal organization of affective response to intervals organized
using a multidimensional system. We hypothesize that this interval affect system is multi-modal
and will transfer to the vibrotactile domain.
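A "sinusoidal chord" of the kind described is simply a sample-wise sum of sinusoids, whether rendered as audio or driven through a vibrotactile actuator. The frequencies below are assumptions chosen for the tactile range, not the study's actual stimuli.

```python
import math

def chord(freqs, duration=1.0, fs=8000):
    """Sum equal-amplitude sinusoids at the given frequencies,
    normalized so the result stays within [-1, 1]."""
    n = int(duration * fs)
    return [sum(math.sin(2 * math.pi * f * t / fs) for f in freqs) / len(freqs)
            for t in range(n)]

# A tactile "perfect fifth": 150 Hz and 225 Hz (3:2 frequency ratio),
# within the range vibrotactile actuators typically reproduce well.
samples = chord([150.0, 225.0], duration=0.01)
print(len(samples))  # 80 samples at 8 kHz
```

Varying the frequency ratio changes the interval of the chord, which is the independent variable a study of interval affect would manipulate.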
This project examines how the expression granted by new musical interfaces can be harnessed to
create positive changes in health and wellbeing. We are conducting experiments to measure EEG
dynamics and physical movements performed by participants who are using software designed to
invite physical and musical expression of the basic emotions. The present demonstration of this
system incorporates an expressive gesture sonification system using a Leap Motion device, paired
with an ambient music engine controlled by EEG-based affective indices. Our intention is to better
understand affective engagement, by creating both a new musical interface to invite it, and a
method to measure and monitor it. We are exploring the use of this device and protocol in
therapeutic settings in which mood recognition and regulation are a primary goal.
A common practice in Traditional Chinese Medicine (TCM) is visual examination of the patient's
tongue. This study will examine ways to make this process more objective and to test its efficacy
for understanding stress- and health-related changes in people over time. We start by developing
an app that makes it comfortable and easy for people to collect tongue data in daily life together
with other stress- and health-related information. We will obtain assessment from expert
practitioners of TCM, and also use pattern analysis and machine learning to attempt to create
state-of-the-art algorithms able to help provide better insights for health and prevention of
sickness.
287. BrightBeat: An On-Screen Intervention for Regulating Breathing
Deep and slow breathing techniques are components of various relaxation methods and have
been used in treatment of many psychiatric and somatic disorders, as well as to improve mental
function and attentiveness in healthy individuals. Many adults spend more than eight hours a day
in front of screens, suggesting a need for an on-screen system to guide the user toward healthier
breathing habits without requiring interruption. In this project, we explore the design and
implementation of unobtrusive systems to promote healthier breathing habits.
With the LEGO Group and Hasbro, we looked at the emotional experience of playing with games
and LEGO bricks. We measured participants' skin conductance as they learned to play with these
new toys. By marking the stressful moments, we were able to see what moments in learning
should be redesigned. Our findings suggest that framing is key: how can we help children
recognize their achievements? We also saw how children are excited to take on new
responsibilities but are then quickly discouraged when they aren't given the resources to succeed.
Our hope for this work is that by using skin conductance sensors, we can help companies better
understand the unique perspective of children and build experiences fit for them.
Electrodermal Activity (EDA) is a physiological indicator of stress and strong emotion. While an
increasing number of wearable devices can collect EDA, analyzing the data to obtain reliable
estimates of stress and emotion remains a difficult problem. We have built a graphical tool that
allows anyone to upload their EDA data and analyze it. Using a highly accurate machine learning
algorithm, we can automatically detect noise within the data. We can also detect skin conductance
responses, which are spikes in the signal indicating a "fight or flight" response. Users can visualize
these results and download files containing features calculated on the data to be used in their own
analysis. Those interested in machine learning can also view and label their data to train a
machine learning classifier. We are currently adding active learning, so the site can intelligently
select the fewest possible samples for the user to label.
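Skin conductance responses appear as abrupt rises in the EDA signal. The detector below is a crude amplitude-threshold sketch for illustration, not the site's machine learning algorithm; the sampling rate and rise threshold are assumed values.

```python
def detect_scrs(signal, fs=4.0, min_rise=0.05):
    """Flag samples where EDA rises by at least `min_rise` microsiemens
    within one second, a crude proxy for a skin conductance response."""
    window = max(1, int(fs))  # number of samples in one second
    onsets = []
    for i in range(len(signal) - window):
        if signal[i + window] - signal[i] >= min_rise:
            onsets.append(i)
    return onsets

# Flat baseline with one abrupt rise starting around sample 5.
eda = [0.30, 0.30, 0.31, 0.30, 0.30, 0.38, 0.45, 0.44, 0.43, 0.42]
print(detect_scrs(eda))  # [1, 2, 3, 4]
```

Real SCR analysis additionally checks rise time, amplitude, and recovery shape, and must first reject motion noise, which is where a trained classifier earns its keep.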
Alumni Contributors: Weixuan 'Vincent' Chen, Szymon Fedor and Akane Sano
We explore advanced machine learning and reflective user interfaces to scale the national Crisis
Text Line. We are using state-of-the-art probabilistic graphical topic models and visualizations to
help a mental health counselor extract patterns of mental health issues experienced by
participants, and bring large-scale data science to understanding the distribution of mental health
issues in the United States.
The wide availability of low-cost, wearable, biophysiological sensors enables us to measure how
the environment and our experiences impact our physiology. This creates a new challenge: in
order to interpret the collected longitudinal data, we require the matching contextual information as
well. Collecting weeks, months, and years of continuous biophysiological data makes it unfeasible
to rely solely on our memory for providing the contextual information. Many view maintaining
journals as burdensome, which may result in low compliance levels and unusable data. We
present an architecture and implementation of a system for the acquisition, processing, and
visualization of biophysiological signals and contextual information.
Yadid Ayzenberg
Weixuan 'Vincent' Chen, Javier Hernandez Rivera, Akane Sano and Rosalind W. Picard
294. Lensing: Cardiolinguistics for Atypical Angina
Complex and expensive medical devices are mainly used in medical facilities by health
professionals. IDA is an attempt to disrupt this paradigm and introduce a new type of device: easy
to use, low cost, and open source. It is a digital stethoscope that can be connected to the Internet
for streaming physiological data to remote clinicians. Designed to be fabricated anywhere in the
world with minimal equipment, it can be operated by individuals without medical training.
This study aims to bring objective measurement to the multiple "pulse" and "pulse-like" measures
made by practitioners of Traditional Chinese Medicine (TCM). The measures are traditionally
made by manually palpating the patient's inner wrist in multiple places, and relating the sensed
responses to various medical conditions. Our project brings several new kinds of objective
measurement to this practice, compares their efficacy, and examines the connection of the
measured data to various other measures of health and stress. Our approach includes the
possibility of building a smartwatch application that can analyze stress and health information from
the point of view of TCM.
Conversations between two individuals--whether between doctor and patient, mental health
therapist and client, or between two people romantically involved with each other--are complex.
Each participant contributes to the conversation using her or his own "lens." This project involves
advanced probabilistic graphical models to statistically extract and model these dual lenses across
large datasets of real-world conversations, with applications that can improve crisis and
psychotherapy counseling and patient-cardiologist consultations. We're working with top
psychologists, cardiologists, and crisis counseling centers in the United States.
Receiving a shot or discussing health problems can be stressful, but does not always have to be.
We measure participants' skin conductance as they use medical devices or visit hospitals and note
times when stress occurs. We then prototype possible solutions and record how the emotional
experience changes. We hope work like this will help bring the medical community closer to their
customers.
Physiological arousal is an important part of occupational therapy for children with autism and
ADHD, but therapists do not have a way to objectively measure how therapy affects arousal. We
hypothesize that when children participate in guided activities within an occupational therapy
setting, informative changes in electrodermal activity (EDA) can be detected using iCalm. iCalm is
a small, wireless sensor that measures EDA and motion, worn on the wrist or above the ankle.
Statistical analysis describing how equipment affects EDA was inconclusive, suggesting that many
factors play a role in how a child's EDA changes. Case studies provided examples of how
occupational therapy affected children's EDA. This is the first study of the effects of occupational
therapy's in situ activities using continuous physiologic measures. The results suggest that careful
case study analyses of the relation between therapeutic activities and physiological arousal may
inform clinical practice.
We are developing a mobile phone-based platform to assist people with chronic diseases,
panic-anxiety disorders, or addictions. Making use of wearable, wireless biosensors, the mobile
phone uses pattern analysis and machine learning algorithms to detect specific physiological
states and perform automatic interventions in the form of text/images plus sound files and social
networking elements. We are currently working with the Veterans Administration drug rehabilitation
program involving veterans with PTSD.
We are conducting EEG studies to identify the musical features and musical interaction patterns
that universally impact measures of arousal. We hypothesize that we can induce states of high
and low arousal using electrodermal activity (EDA) biofeedback, and that these states will produce
correlated differences in concurrently recorded skin conductance and EEG data, establishing a
connection between peripherally recorded physiological arousal and cortical arousal as revealed in
EEG. We also hypothesize that manipulation of musical features of a computer-generated musical
stimulus track will produce changes in peripheral and cortical arousal. These musical stimuli and
programmed interactions may be incorporated into music technology therapy, designed to reduce
April 2016
Page 51
arousal or increase learning capability by increasing attention. We aim to provide a framework for
the neural basis of emotion-cognition integration of learning that may shed light on education and
possible applications to improve learning by emotion regulation.
Rosalind W. Picard, Szymon Fedor, Brigham and Women's Hospital and Massachusetts
General Hospital
300. Panoply
Current methods to assess depression and ultimately select appropriate treatment have many
limitations. They are usually based on clinician-rated scales developed in the 1960s, whose main
drawbacks are a lack of objectivity, a symptom-based rather than preventative focus, and a
reliance on accurate communication. This work explores new technology to assess
depression, including its increase or decrease, in an automatic, more objective, pre-symptomatic,
and cost-effective way using wearable sensors and smart phones for 24/7 monitoring of different
personal parameters such as physiological data, voice characteristics, sleep, and social
interaction. We aim to enable early diagnosis of depression, prevention of depression, assessment
of depression for people who cannot communicate, better assignment of a treatment, early
detection of treatment remission and response, and anticipation of post-treatment relapse or
recovery.
Panoply is a crowdsourcing application for mental health and emotional wellbeing. The platform
offers a novel approach to computer-based psychotherapy, targeting accessibility without stigma,
engagement, and therapeutic efficacy. A three-week randomized controlled trial with 166
participants showed that Panoply conferred benefits greater than or equal to those of an active
control task (online expressive writing) on nearly every therapeutic outcome measure. Panoply
also significantly outperformed the control task on all measures of engagement, and is now being
commercialized at itskoko.com.
Natasha Jaques, Sara Taylor, Akane Sano, Ehi Nosakhare and Rosalind Picard
The goal of this project is to apply machine learning methods to model the wellbeing of MIT
undergraduate students. Extensive data is obtained from the SNAPSHOT study, which monitors
participating students on a 24/7 basis, collecting data on their location, sleep schedule, phone and
SMS communications, academics, social networks, and even physiological markers like skin
conductance, skin temperature, and acceleration. We extract features from this data and apply a
variety of machine learning algorithms including Gaussian Mixture Models and Multi-task
Multi-Kernel Learning; we are currently working to apply Bayesian Hierarchical Multi-task Learning
and Deep Learning as well. Interesting findings include: when participants visit novel locations they
tend to be happier; when they use their phones or stay indoors for long periods they tend to be
unhappy; and when several dimensions of wellbeing (including stress, happiness, health, and
energy) are learned together, classification accuracy improves.
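The modeling pipeline above can be illustrated with a minimal sketch of expectation-maximization for a two-component, one-dimensional Gaussian mixture, one of the algorithm families mentioned. The synthetic data and variable names below are illustrative assumptions, not drawn from the SNAPSHOT codebase:

```python
import math
import random

def em_gmm_1d(xs, iters=60):
    """Fit a two-component 1-D Gaussian mixture by expectation-maximization."""
    mu = [min(xs), max(xs)]            # crude initialization at the extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each data point
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate mixture weights, means, and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return mu, var, pi

# synthetic "low wellbeing" vs. "high wellbeing" feature values (hypothetical)
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(5.0, 1.0) for _ in range(200)]
means, _, _ = em_gmm_1d(data)
print(sorted(means))    # the two recovered cluster centers
```

In practice the study's features are multi-dimensional and the models far richer (multi-task and hierarchical), but the same fit-then-interpret loop applies.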
Alumni Contributors: Asaph Azaria and Asma Ghandeharioun
Depression co-occurring with anxiety is one of the key factors leading to suicidal behavior, which is
among the leading causes of death worldwide. Despite the scope and seriousness of suicidal
thoughts and behaviors, we know surprisingly little about what suicidal thoughts look like in nature
(e.g., How frequent, intense, and persistent are they among those who have them? What
cognitive, affective/physiological, behavioral, and social factors trigger their occurrence?). The
reason for this lack of information is that historically researchers have used retrospective
self-report to measure suicidal thoughts, and have lacked the tools to measure them as they
naturally occur. In this work we explore use of wearable devices and smartphones to identify
behavioral, affective, and physiological predictors of suicidal thoughts and behaviors.
We are applying learnings from the SNAPSHOT study to the problem of changing behavior,
exploring the design of user-centered tools which can harness the experience of collecting and
reflecting on personal data to promote healthy behaviors--including stress management and sleep
regularity. We draw on commonly used theories of behavior change as the inspiration for distinct
conceptual designs for a behavior-change application based on the SNAPSHOT study. This
approach will enable us to compare the types of visualization strategies that are most meaningful
and useful for acting on each theory.
Akane Sano, Amy Yu, Sara Taylor, Cesar Hidalgo and Rosalind Picard
The SNAPSHOT study seeks to measure Sleep, Networks, Affect, Performance, Stress, and
Health using Objective Techniques. It is an NIH-funded collaborative research project between the
Affective Computing and Macro Connections groups, and Harvard Medical School's Brigham &
Women's hospital. Since fall 2013, we've run this study to collect one month of data every
semester from 50 MIT undergraduate students who are socially connected. We have collected
data from about 170 participants, totaling over 5,000 days of data. We measure physiological,
behavioral, environmental, and social data using mobile phones, wearable sensors, surveys, and
lab studies. We investigate how daily behaviors and social connectivity influence sleep behaviors
and health, and outcomes such as mood, stress, and academic performance. Using this
multimodal data, we are developing models to predict onsets of sadness and stress. This study will
provide insights into behavioral choices for wellbeing and performance.
305. StoryScape
308. Tributary
Rosalind W. Picard, Karthik Dinakar, Eric Horvitz (Microsoft Research) and Matthew Nock
(Harvard)
Weixuan 'Vincent' Chen, Natasha Jaques, Sara Taylor, Akane Sano, Szymon Fedor and
Rosalind W. Picard
We are developing statistical tools for understanding, modeling, and predicting self-harm by using
advanced probabilistic graphical models and fail-soft machine learning in collaboration with
Harvard University and Microsoft Research.
Electrodermal activity (EDA) recording is a powerful, widely used tool for monitoring psychological
or physiological arousal. However, analysis of EDA is hampered by its sensitivity to motion
artifacts. We propose a method for removing motion artifacts from EDA, measured as skin
conductance (SC), using a stationary wavelet transform (SWT). We modeled the wavelet
coefficients as a Gaussian mixture distribution corresponding to the underlying skin conductance
level (SCL) and skin conductance responses (SCRs). The goodness-of-fit of the model was
validated on ambulatory SC data. We evaluated the proposed method in comparison with three
previous approaches. Our method achieved a greater reduction of artifacts while retaining
motion-artifact-free data.
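The general idea of wavelet-domain artifact suppression can be sketched as follows. This is a simplified single-level Haar stationary wavelet transform with a hard threshold on the detail coefficients, not the paper's full Gaussian-mixture model; the signal and the threshold factor are assumptions for illustration:

```python
import random

def haar_swt_denoise(x, k=8.0):
    """Single-level stationary (undecimated) Haar transform: zero out
    unusually large detail coefficients, which tend to carry abrupt
    motion artifacts, then invert the transform."""
    n = len(x)
    approx = [(x[i] + x[i - 1]) / 2.0 for i in range(n)]   # circular at i=0
    detail = [(x[i] - x[i - 1]) / 2.0 for i in range(n)]
    # robust threshold from the median absolute detail magnitude
    med = sorted(abs(d) for d in detail)[n // 2]
    thresh = k * max(med, 1e-9)
    detail = [0.0 if abs(d) > thresh else d for d in detail]
    # exact inverse of this transform: x[i] = approx[i] + detail[i]
    return [a + d for a, d in zip(approx, detail)]

random.seed(1)
# slowly drifting skin-conductance level plus small sensor noise ...
sc = [2.0 + 0.001 * i + random.gauss(0, 0.02) for i in range(200)]
sc[100] += 5.0                       # ... plus one abrupt motion artifact
clean = haar_swt_denoise(sc)
print(max(sc), max(clean))           # the artifact peak is reduced
```

A single decomposition level only attenuates the spike; the published method models multiple SWT levels jointly to separate SCL, SCRs, and artifacts more completely.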
313. Crowdsourcing a
Manhunt
People often say that we live in a small world. In a famous experiment, social psychologist
Stanley Milgram provided evidence for the six degrees of separation hypothesis: that everyone is six
or fewer steps away, by way of introduction, from any other person in the world. But how far are
we, in terms of time, from anyone on Earth? Our team won the Tag Challenge, a social gaming
competition, showing it is possible to find a person, using only his or her mug shot, within 12
hours.
The Internet has unleashed the capacity for planetary-scale collective problem solving (also known
as crowdsourcing). However, the very openness of crowdsourcing makes it vulnerable to sabotage
by rogue or competitive actors. To explore the effect of errors and sabotage on the performance of
crowdsourcing, we analyze data from the DARPA Shredder Challenge, a prize competition for
exploring methods to reconstruct documents shredded by a variety of paper shredding techniques.
Iyad Rahwan, Edmond Awad, Sohan Dsouza, Azim Shariff and Jean-François Bonnefon
Iyad Rahwan, Lorenzo Coviello, Morgan Frank, Lijun Sun, Manuel Cebrian and NICTA
Adoption of self-driving, Autonomous Vehicles (AVs) promises to dramatically reduce the number
of traffic accidents, but some inevitable accidents will require AVs to choose the lesser of two evils,
such as running over a pedestrian on the road or the sidewalk. Defining the algorithms to guide
AVs confronted with such moral dilemmas is a challenge, and manufacturers and regulators will
need psychologists to apply methods of experimental ethics to these situations.
The Honest Crowds project addresses shortcomings of traditional survey techniques in the
modern information and big data age. Web survey platforms, such as Amazon's Mechanical Turk
and CrowdFlower, bring together millions of surveys and millions of survey participants, which
means paying a flat rate for each completed survey may lead to survey responses that lack
desirable care and forethought. Rather than allowing survey takers to maximize their reward by
completing as many surveys as possible, we demonstrate how strategic incentives can be used to
actually reward information and honesty rather than just participation. The incentive structures that
we propose provide scalable solutions for the new paradigm of survey and active data collection.
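One standard mechanism for rewarding information and honesty rather than mere participation is a proper scoring rule. The sketch below uses the quadratic (Brier) rule as an illustrative assumption, not necessarily the incentive structure proposed in this project:

```python
def brier_reward(report_p, outcome):
    """Quadratic (Brier) scoring rule for a binary event: reward is
    highest when the reported probability matches the outcome."""
    return 1.0 - (outcome - report_p) ** 2

def expected_reward(report_p, true_p):
    # expectation over the binary outcome under the respondent's true belief
    return true_p * brier_reward(report_p, 1) + \
           (1 - true_p) * brier_reward(report_p, 0)

true_belief = 0.7
# scan all reports on a 1% grid and find the one maximizing expected reward
rewards = {r / 100: expected_reward(r / 100, true_belief) for r in range(101)}
best = max(rewards, key=rewards.get)
print(best)   # the truthful report maximizes expected reward
```

Because the rule is strictly proper, a survey taker maximizes expected payment only by reporting their true belief, which decouples reward from raw completion volume.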
317. Human-Machine
Cooperation
NEW LISTING
Iyad Rahwan
Since Alan Turing envisioned Artificial Intelligence (AI), a major driving force behind technical
progress has been competition with human cognition (e.g. beating humans in Chess or Jeopardy!).
Less attention has been given to developing autonomous machines that learn to cooperate with
humans. Cooperation does not require sheer computational power, but relies on intuition, and
pre-evolved dispositions toward cooperation, common-sense mechanisms that are difficult to
encode in machines. We develop state-of-the-art machine-learning algorithms that cooperate with
people and other machines at levels that rival human cooperation in two-player repeated games.
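The setting can be made concrete with the classic repeated prisoner's dilemma. The tit-for-tat baseline below is a textbook illustration of cooperation in two-player repeated games, not the group's actual learning algorithm:

```python
# Prisoner's dilemma payoffs, keyed by (my_move, their_move)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's previous move."""
    return 'C' if not history else history[-1][1]

def always_defect(history):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    """Run a repeated game; each history entry is (own_move, their_move)."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append((a, b))
        hist_b.append((b, a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))       # sustained mutual cooperation
print(play(always_defect, always_defect))   # mutual defection scores worse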
Iyad Rahwan
Cooperation in a large society of self-interested individuals is notoriously difficult to achieve when
the externality of one individual's action is spread thin and wide on the whole society (e.g., in the
case of pollution). We introduce a new approach to achieving global cooperation by localizing
externalities to one's peers in a social network, thus leveraging the power of peer-pressure to
regulate behavior. Global cooperation becomes more like local cooperation.
319. 6D Display
320. A Switchable
Light-Field Camera
Matthew Hirsch, Sriram Sivaramakrishnan, Suren Jayasuriya, Albert Wang, Aloysha Molnar,
Ramesh Raskar, and Gordon Wetzstein
We propose a flexible light-field camera architecture that represents a convergence of optics,
sensor electronics, and applied mathematics. Through the co-design of a sensor that comprises
tailored, angle-sensitive pixels and advanced reconstruction algorithms, we show that, contrary to
light-field cameras today, our system can use the same measurements captured in a single sensor
image to recover either a high-resolution 2D image, a low-resolution 4D light field using fast, linear
processing, or a high-resolution light field using sparsity-constrained optimization.
NEW LISTING
Ramesh Raskar, Ankit Mohan, Grace Woo, Shinsaku Hiura and Quinn Smithwick
Ramesh Raskar, Vitor Pamplona, Erick Passos, Jan Zizka, Jason Boggess, David Schafran,
Manuel M. Oliveira, Everett Lawson, and Estebam Clua
Ramesh Raskar, Gordon Wetzstein, Xing Lin, Nikhil Naik and Tsinghua University
Fluorescence lifetime imaging is a significant bio-imaging tool with important applications in the
life sciences, including cancer detection and DNA sequencing. The fluorescence microscope at the
heart of bio-imaging, however, is an electronically and optically sophisticated device that is
prohibitively expensive. Our work demonstrates that fluorescence-microscopy-like functionality can
be achieved with a simple consumer sensor, such as the Microsoft Kinect, which costs about $100.
This is done by trading off precision in optics and electronics for sophistication in computational
methods. Not only does this allow a massive cost reduction, it also leads to several advances in
the area. For example, our method is calibration-free, in that we do not assume the sample's
placement relative to the sensor. Furthermore, our work opens new pathways of interaction
between the bio-imaging, optics, and computer vision communities.
With over a billion people carrying camera-phones worldwide, we have a new opportunity to
upgrade the classic bar code to encourage a flexible interface between the machine world and the
human world. Current bar codes must be read within a short range, and the codes occupy
valuable space on products. We present a new, low-cost, passive optical design so that bar codes
can be shrunk to fewer than 3mm and can be read by unmodified ordinary cameras several
meters away.
We introduce a novel interactive method to assess cataracts in the human eye by crafting an
optical solution that measures the perceptual impact of forward scattering on the foveal region.
Current solutions rely on highly trained clinicians to check the back scattering in the crystalline
lens and test their predictions on visual acuity tests. Close-range parallax barriers create collimated
beams of light to scan through sub-apertures, scattering light as it strikes a cataract. User
feedback generates maps for opacity, attenuation, contrast, and local point-spread functions. The
goal is to allow a general audience to operate a portable, high-contrast, light-field display to gain a
meaningful understanding of their own visual conditions. The compiled maps are used to
reconstruct the cataract-affected view of an individual, offering a unique approach for capturing
information for screening, diagnostic, and clinical analysis.
(AIF) image from a single photograph. Coded focal stack photography is a significant step towards
a computational camera architecture that facilitates high-resolution post-capture refocusing,
flexible depth of field, and 3D imaging.
328. Compressive
Light-Field Camera:
Next Generation in 3D
Photography
329. Eyeglasses-Free
Displays
330. Health-Tech
Innovations with Tata
Trusts, Mumbai
NEW LISTING
Consumer photography is undergoing a paradigm shift with the development of light field cameras.
Commercial products such as those by Lytro and Raytrix have begun to appear in the marketplace
with features such as post-capture refocus, 3D capture, and viewpoint changes. These cameras
suffer from two major drawbacks: a significant drop in resolution (converting a 20 MP sensor to a 1 MP
image) and large form factor. We have developed a new light-field camera that circumvents
traditional resolution losses (a 20 MP sensor turns into a full-sensor resolution refocused image) in
a thin form factor that can fit into traditional DSLRs and mobile phones.
Millions of people worldwide need glasses or contact lenses to see or read properly. We introduce
a computational display technology that predistorts the presented content for an observer, so that
the target image is perceived without the need for eyewear. We demonstrate a low-cost prototype
that can correct myopia, hyperopia, astigmatism, and even higher-order aberrations that are
difficult to correct with glasses.
We believe that tough global health problems require an innovation pipeline. We must bring
together the people and providers facing health challenges to form what we call an innovation
continuum: inventors building new low-cost technologies; developers capable of rapidly iterating
on these inventions for use in the real world; clinicians and end users to validate our creations; and
entrepreneurs, philanthropists, and development agencies to scale our solutions. We are asking
big questions such as: What billion-dollar ideas could impact a billion lives in health, education,
transportation through digital interfaces, digital opportunities, and applications for physical
systems? Using machine learning, computer vision, Big Data, sensors, mobile technology,
diagnostics, and crowdsourcing, we are conducting research at the Media Lab, and also
collaborating with innovators in three centers in India and in other centers worldwide. Innovations
like this launched the effort to create the Emerging Worlds initiative.
Ramesh Raskar and Anshuman Das
The use of fluorescent probes and the recovery of their lifetimes allow for significant advances in
many imaging systems, in particular medical imaging systems. Here, we propose and
experimentally demonstrate reconstructing the locations and lifetimes of fluorescent markers
hidden behind a turbid layer. This opens the door to various applications for non-invasive
diagnosis, analysis, flowmetry, and inspection. The method is based on a time-resolved
measurement which captures information about both fluorescence lifetime and spatial position of
the probes. To reconstruct the scene, the method relies on a sparse optimization framework to
invert time-resolved measurements. This wide-angle technique does not rely on coherence, and
does not require the probes to be directly in line of sight of the camera, making it potentially
suitable for long-range imaging.
Gordon Wetzstein, Douglas Lanman, Matthew Hirsch, Wolfgang Heidrich, and Ramesh
Raskar
Andreas Velten, Di Wu, Christopher Barsi, Ayush Bhandari, Achuta Kadambi, Nikhil Naik,
Micha Feigin, Daniel Raviv, Thomas Willwacher, Otkrist Gupta, Ashok Veeraraghavan,
Moungi G. Bawendi, and Ramesh Raskar
We are developing tomographic techniques for image synthesis on displays composed of compact
volumes of light-attenuating material. Such volumetric attenuators recreate a 4D light field or
high-contrast 2D image when illuminated by a uniform backlight. Since arbitrary views may be
inconsistent with any single attenuator, iterative tomographic reconstruction minimizes the
difference between the emitted and target light fields, subject to physical constraints on
attenuation. For 3D displays, spatial resolution, depth of field, and brightness are increased,
compared to parallax barriers. We conclude by demonstrating the benefits and limitations of
attenuation-based light field displays using an inexpensive fabrication method: separating multiple
printed transparencies with acrylic sheets.
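The reconstruction step can be sketched as a tiny projected-gradient solver: each ray's log-attenuation is the sum of the non-negative attenuation values it crosses, and we minimize the squared difference from the target light field. The two-layer toy geometry below is an illustrative assumption, far simpler than a full 4D light field:

```python
def reconstruct(rays, target_log, n_vars, iters=2000, lr=0.05):
    """Projected gradient descent: minimize sum over rays of
    (sum of crossed attenuations - target)^2, keeping attenuations >= 0."""
    x = [0.0] * n_vars
    for _ in range(iters):
        grad = [0.0] * n_vars
        for ray, t in zip(rays, target_log):
            err = sum(x[i] for i in ray) - t      # residual for this ray
            for i in ray:
                grad[i] += err
        # gradient step, then project onto the physical constraint x >= 0
        x = [max(0.0, xi - lr * g) for xi, g in zip(x, grad)]
    return x

# toy setup: 2 layers x 3 pixels; variables 0-2 = front layer, 3-5 = back
rays = [(0, 3), (0, 4), (1, 4), (1, 5), (2, 5), (2, 3)]
true_x = [0.2, 0.5, 0.1, 0.3, 0.0, 0.4]
target = [sum(true_x[i] for i in ray) for ray in rays]
est = reconstruct(rays, target, 6)
residual = sum((sum(est[i] for i in r) - t) ** 2
               for r, t in zip(rays, target))
print(residual)    # drives the emitted/target mismatch toward zero
```

The actual displays use SART-style tomographic solvers over millions of rays, but the structure is the same: a linear forward model in log-attenuation space plus a physical non-negativity constraint.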
With networked cameras in everyone's pockets, we are exploring the practical and creative
possibilities of public imaging. LensChat allows cameras to communicate with each other using
trusted optical communications, allowing users to share photos with a friend by taking pictures of
each other, or borrow the perspective and abilities of many cameras.
Using a femtosecond laser and a camera with a time resolution of about one trillion frames per
second, we recover objects hidden out of sight. We measure speed-of-light timing information of
light scattered by the hidden objects via diffuse surfaces in the scene. The object data are mixed
up and are difficult to decode using traditional cameras. We combine this "time-resolved"
information with novel reconstruction algorithms to untangle image information and demonstrate
the ability to look around corners.
Alumni Contributors: Andreas Velten, Otkrist Gupta and Di Wu
Ramesh Raskar, Christopher Barsi, Ayush Bhandari, Anshuman Das, Micha Feigin-Almon
and Achuta Kadambi
Time-of-flight (ToF) cameras are commercialized consumer cameras that provide a depth map of
a scene, with many applications in computer vision and quality assurance. Currently, we are
exploring novel ways of integrating the camera illumination and detection circuits with
Douglas Lanman, Gordon Wetzstein, Matthew Hirsch, Wolfgang Heidrich, and Ramesh
Raskar
Our deformable camera exploits new, flexible form factors for imaging in turbid media. In this study
we enable a brush-like form factor with a time-of-flight camera. This has enabled us to reconstruct
images through a set of 1100 optical fibers that are randomly distributed and permuted in a
medium.
We present a near real-time system for interactively exploring a collectively captured moment
without explicit 3D reconstruction. Our system favors immediacy and local coherency over global
consistency. It is common to represent photos as vertices of a weighted graph. The weighted
angled graphs of photos used in this work can be regarded as the result of discretizing the
Riemannian geometry of the high dimensional manifold of all possible photos. Ultimately, our
system enables everyday people to take advantage of each others' perspectives in order to create
on-the-spot spatiotemporal visual experiences similar to the popular bullet-time sequence. We
believe that this type of application will greatly enhance shared human experiences, spanning from
events as personal as parents watching their children's football game to highly publicized
red-carpet galas.
We introduce polarization field displays as an optically efficient design for dynamic light field
display using multi-layered LCDs. Such displays consist of a stacked set of liquid crystal panels
with a single pair of crossed linear polarizers. Each layer is modeled as a spatially controllable
polarization rotator, as opposed to a conventional spatial light modulator that directly attenuates
light. We demonstrate that such displays can be controlled, at interactive refresh rates, by
adopting the SART algorithm to tomographically solve for the optimal spatially varying polarization
state rotations applied by each layer. We validate our design by constructing a prototype using
modified off-the-shelf panels. We demonstrate interactive display using a GPU-based SART
implementation supporting both polarization-based and attenuation-based architectures.
Everett Lawson, Jason Boggess, Alex Olwal, Gordon Wetzstein, and Siddharth Khullar
The major challenge in preventing blindness is identifying patients and bringing them to specialty
care. Diseases that affect the retina, the image sensor in the human eye, are particularly
challenging to address, because they require highly trained eye specialists (ophthalmologists) who
use expensive equipment to visualize the inner parts of the eye. Diabetic retinopathy,
HIV/AIDS-related retinitis, and age-related macular degeneration are three conditions that can be
screened and diagnosed to prevent blindness caused by damage to the retina. We exploit a
combination of two novel ideas to simplify the constraints of traditional devices, with simplified
optics and clever illumination, in order to capture and visualize images of the retina in a
standalone device easily operated by the user. Prototypes are conveniently embedded in either a
mobile hand-held retinal camera, or wearable eyeglasses.
We demonstrate a new technique that allows a camera to rapidly acquire reflectance properties of
objects "in the wild" from a single viewpoint, over relatively long distances and without encircling
equipment. This project has a wide variety of applications in computer graphics, including image
relighting, material identification, and image editing.
Alumni Contributor: Andreas Velten
Ramesh Raskar, Kenichiro Fukushi, Nikhil Naik, Christopher Schonauer and Jan Zizka
We have created a 3D motion-tracking system with automatic, real-time vibrotactile feedback and
an assembly of photo-sensors, infrared projector pairs, vibration motors, and a wearable suit. This
system allows us to enhance and quicken the motor learning process in a variety of fields such as
healthcare (physiotherapy), entertainment (dance), and sports (martial arts).
Alumni Contributor: Dennis Ryan Miaw
Jaewon Kim
We present a new method for scanning 3D objects through a single-shot, shadow-based method.
We decouple 3D occluders from 4D illumination using shield fields: the 4D attenuation function
which acts on any light field incident on an occluder. We then analyze occluder reconstruction from
cast shadows, leading to a single-shot light-field camera for visual hull reconstruction.
Within the last few years, cellphone subscriptions have spread widely and now cover even the
remotest parts of the planet. Adequate access to healthcare, however, is not widely available,
especially in developing countries. We propose a new approach to converting cellphones into
low-cost scientific devices for microscopy. Cellphone microscopes have the potential to
revolutionize health-related screening and analysis for a variety of applications, including blood
and water tests. Our optical system is more flexible than previously proposed mobile microscopes,
and allows for wide field-of-view panoramic imaging, the acquisition of parallax, and coded
background illumination, which optically enhances the contrast of transparent and refractive
specimens.
The ability to record images with extreme temporal resolution enables a diverse range of
applications, such as time-of-flight depth imaging and characterization of ultrafast processes. Here
we present a demonstration of the potential of single-photon detector arrays for visualization and
rapid characterization of events evolving on picosecond time scales. The single-photon sensitivity,
temporal resolution, and full-field imaging capability enables the observation of light-in-flight in air,
as well as the measurement of laser-induced plasma formation and dynamics in its natural
environment. The extreme sensitivity and short acquisition times pave the way for real-time
imaging of ultrafast processes or visualization and tracking of objects hidden from view.
Skin and tissue perfusion measurements are important parameters for diagnosis of wounds and
burns, and for monitoring plastic and reconstructive surgeries. In this project, we use a standard
camera and a laser source in order to image blood-flow speed in skin tissue. We show results of
blood-flow maps of hands, arms, and fingers. We combine the complex scattering of laser light
from blood with computational techniques found in computer science.
Alumni Contributor: Christopher Barsi
Daniel Saakes, Kevin Chiu, Tyler Hutchison, Biyeun Buczyk, Naoya Koizumi and Masahiko
Inami
How can we show our 16-megapixel photos from our latest trip on a digital display? How can we
create screens that are visible in direct sunlight as well as complete darkness? How can we create
large displays that consume less than 2W of power? How can we create design tools for digital
decal application and intuitive computer-aided modeling? We introduce a display that is
high-resolution but updates at a low frame rate: a slow display. We use lasers and monostable
light-reactive materials to provide programmable space-time resolution. This refreshable,
high-resolution display exploits the time decay of monostable materials, making it attractive in
terms of cost and power requirements. Our effort to repurpose these materials involves solving
underlying problems in color reproduction, day-night visibility, and optimal time sequences for
updating content.
352. SpeckleSense
353. SpecTrans:
Classification of
Transparent Materials
and Interactions
Munehiko Sato, Alex Olwal, Boxin Shi, Shigeo Yoshida, Atsushi Hiyama, Michitaka Hirose
and Tomohiro Tanikawa, Ramesh Raskar
Surface and object recognition is of significant importance in ubiquitous and wearable computing.
While various techniques exist to infer context from material properties and appearance, they are
typically neither designed for real-time applications nor for optically complex surfaces that may be
specular, textureless, and even transparent. These materials are, however, becoming increasingly
relevant in HCI for transparent displays, interactive surfaces, and ubiquitous computing. We
present SpecTrans, a new sensing technology for surface classification of exotic materials, such
as glass, transparent plastic, and metal. The proposed technique extracts optical features by
employing laser and multi-directional, multi-spectral LED illumination that leverages the material's
optical properties. The sensor hardware is small in size, and the proposed classification method
requires significantly lower computational cost than conventional image-based methods, which use
texture features or reflectance analysis, thereby providing real-time performance for ubiquitous
computing.
354. StreetScore
Tensor displays are a family of glasses-free 3D displays comprising all architectures employing (a
stack of) time-multiplexed LCDs illuminated by uniform or directional backlighting. We introduce a
unified optimization framework that encompasses all tensor display architectures and allows for
optimal glasses-free 3D display. We demonstrate the benefits of tensor displays by constructing a
reconfigurable prototype using modified LCD panels and a custom integral imaging backlight. Our
efficient, GPU-based NTF implementation enables interactive applications. In our experiments we
show that tensor displays reveal practical architectures with greater depths of field, wider fields of
view, and thinner form factors, compared to prior automultiscopic displays.
In this visual brainstorming, we present the next 30 years of VR in a set of concept designs.
George Barbastathis, Ramesh Raskar, Belen Masia, Nikhil Naik, Se Baek Oh and Tom
Cuypers
358. Time-of-Flight
Microwave Camera
Ramesh Raskar, Micha Feigin-Almon, Nikhil Naik, Andrew Temme and Gregory Charvat
This work focuses on bringing powerful concepts from wave optics to the creation of new
algorithms and applications for computer vision and graphics. Specifically, ray-based, 4D lightfield
representation, based on simple 3D geometric principles, has led to a range of new applications
that include digital refocusing, depth estimation, synthetic aperture, and glare reduction within a
camera or using an array of cameras. The lightfield representation, however, is inadequate to
describe interactions with diffractive or phase-sensitive optical elements. Therefore we use Fourier
optics principles to represent wavefronts with additional phase information. We introduce a key
modification to the ray-based model to support modeling of wave phenomena. The two key ideas
are "negative radiance" and a "virtual light projector." This involves exploiting higher dimensional
representation of light transport.
Our architecture takes a hybrid approach to microwaves and treats them like waves of light. Most
other work places antennas in a 2D arrangement to directly sample the RF reflections that return.
Instead of placing antennas in a 2D arrangement, we use a single, passive, parabolic reflector
(dish) as a lens. Think of every point on that dish as an antenna with a fixed phase-offset. This
means that the lens acts as a fixed set of 2D antennas which are very dense and spaced across a
large aperture. We then sample the focal-plane of that lens. This architecture makes it possible for
us to capture higher resolution images at a lower cost.
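The "fixed phase-offset" intuition follows from the parabola's equal-path-length property: every axis-parallel ray travels the same total distance to the focus, so the dish focuses a planar wavefront with no active elements. A minimal numeric check of that property (the focal length and aperture values here are illustrative, not the group's actual dish geometry):

```python
import math

def path_length(r, f=0.1, z0=1.0):
    """Path of an axis-parallel ray: plane z=z0 -> dish surface -> focus."""
    z = r**2 / (4 * f)                       # parabolic dish profile z = r^2/4f
    to_dish = z0 - z                         # travel down to the reflector
    to_focus = math.sqrt(r**2 + (f - z)**2)  # reflected leg to the focus at (0, f)
    return to_dish + to_focus

lengths = [path_length(r) for r in (0.0, 0.02, 0.05, 0.08)]
# Every path equals z0 + f, so each dish point acts like an antenna
# with one fixed, position-dependent phase offset:
assert all(abs(L - lengths[0]) < 1e-12 for L in lengths)
```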
April 2016
Page 61
Andreas Velten, Di Wu, Adrin Jarabo, Belen Masia, Christopher Barsi, Chinmaya Joshi,
Everett Lawson, Moungi Bawendi, Diego Gutierrez, and Ramesh Raskar
We have developed a camera system that captures movies at an effective rate of approximately
one trillion frames per second. In one frame of our movie, light moves only about 0.6 mm. We can
observe pulses of light as they propagate through a scene. We use this information to understand
how light propagation affects image formation and to learn things about a scene that are invisible
to a regular camera.
361. Ultrasound
Tomography
Hang Zhao, Boxin Shi, Christy Fernandez-Cull, Sai-Kit Yeung and Ramesh Raskar
363. VisionBlocks
Traditional medical ultrasound assumes that we are imaging ideal liquids. We are interested in
imaging muscle and bone as well as measuring elastic properties of tissues, all of which are
places where this assumption fails quite miserably. Interested in cancer detection, Duchenne
muscular dystrophy, and prosthetic fitting, we use tomographic techniques as well as ideas from
seismic imaging to deal with these issues.
We present Unbounded High Dynamic Range (UHDR) photography, a novel framework that
extends the dynamic range of images using a modulo camera. A modulo camera could theoretically
take unbounded radiance levels by keeping only the least significant bits. We show that with
limited bit depth, very high radiance levels can be recovered from a single modulus image with our
newly proposed unwrapping algorithm for natural images. We can also obtain an HDR image with
details equally well preserved for all radiance levels by merging the least number of modulus
images. Synthetic experiments and experiments with a real modulo camera show the
effectiveness of the proposed approach.
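The recovery idea can be illustrated in one dimension: for a smooth signal, a rollover appears as a large negative jump between neighboring samples, so adding back one wrap value at each jump reconstructs the unbounded radiance. A toy sketch with an invented 8-bit sensor (the project's actual unwrapping algorithm for natural 2D images is considerably more sophisticated):

```python
import numpy as np

# Hypothetical 8-bit modulo sensor: readings are true radiance mod 256.
WRAP = 256
true_radiance = np.linspace(0, 1000, 200)   # scene radiance exceeds sensor range
modulus_img = np.mod(true_radiance, WRAP)   # what the modulo camera records

# Minimal unwrapping for a smooth 1D signal: whenever consecutive samples
# drop by more than half the wrap value, a rollover occurred there.
diffs = np.diff(modulus_img)
rollovers = np.cumsum(np.where(diffs < -WRAP / 2, WRAP, 0))
recovered = modulus_img + np.concatenate([[0], rollovers])

assert np.allclose(recovered, true_radiance)
```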
364. A-pops
NEW LISTING
Jennifer Groff
A design project created in collaboration between the MIT Media Lab and the Laboratorio para la Ciudad
(Laboratory for the City), Mexico City's experimental office for civic innovation and urban creativity,
A-pops is a networked learning experience across Mexico City that supports young learners in
engaging in emergent and playful opportunities in and beyond their local communities. In line with
the "Playful City" goal, this project aims to embed playful learning experiences across Mexico City
that are creative, collaborative, and public, by leveraging existing public spaces throughout
neighborhoods and micro-communities across the city. By embedding a variety of playful learning
experiences across a variety of locations, a wide range of learners have the ability to easily and
socially engage in transformative experiences that support key skills in design, collaboration,
creativity, programming, and learner agency.
Hal Abelson, Andrew McKinney, CSAIL, and Scheller Teacher Education Program
App Inventor is an intuitive, visual programming environment that allows everyone, even those
with no prior coding experience, to build fully functional applications for smartphones and tablets.
Those new to App Inventor can have a simple first app up and running in under 30 minutes. The
tool allows anyone to program more complex, impactful apps in significantly less time than with
more traditional programming environments. The MIT App Inventor project seeks to democratize
software development by empowering all people, especially young people, to transition from being
consumers of technology to becoming creators of it. MIT students and staff, led by Professor Hal
Abelson, form the nucleus of an international movement of inventors. In addition to leading
educational outreach around MIT App Inventor and conducting research on its impacts, this core
team maintains the free online app development environment that serves more than four million
registered users.
Mitchel Resnick, Philipp Schmidt, Natalie Rusk, Grif Peterson, Katherine McConachie,
Srishti Sethi, Alisha Panjwani
Learning Creative Learning (http://learn.media.mit.edu/lcl) is an online course that introduces ideas
and strategies for supporting creative learning. The course engages educators, designers, and
technologists from around the world in applying creative learning tools and approaches from the
MIT Media Lab. We view the course as an experimental alternative to traditional Massive Open
Online Courses (MOOCs), putting greater emphasis on peer-to-peer learning, hands-on projects,
and sustainable communities.
The Lemann Creative Learning Program is a collaboration between the MIT Media Lab and the
Lemann Foundation to foster creative learning in Brazilian public education. Established in
February 2015, the program designs new technologies, support materials, and innovative
initiatives to engage Brazilian public schools, afterschool centers, and families in learning practices
that are more hands-on and centered on students' interests and ideas. For additional information,
please contact lclp@media.mit.edu.
Improving adult learning, especially for adults who are unemployed or unable to financially support
their families, is a challenge that affects the future wellbeing of millions of individuals in the US. We
are working with the Joyce Foundation, employers, learning researchers, and the Media Lab
community to prototype three to five new models for adult learning that involve technology
innovation and behavioral insights.
Philipp Schmidt, Juliana Nazare, Katherine McConachie, Srishti Sethi, and Guy Zyskind
The Media Lab Digital Certificates project develops software and strategies to store and manage
digital credentials. Certificates are registered on the blockchain, cryptographically signed, and
tamper-proof. They can be designed to represent or recognize many different achievements.
Through blockchain-based credentials, we critically explore notions of social capital and
reputation, empathy and gift economies, and social behavior. We issue digital credentials to Media
Lab Director's Fellows and Media Lab alumni, and we are open-sourcing our code so that other
organizations can start experimenting with the idea of digital certificates.
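The tamper-proofing rests on registering a cryptographic digest of the credential: any later edit changes the digest and is therefore detectable. A minimal sketch of that idea (the field names and hash-only scheme are simplifications for illustration; the project's certificates are additionally signed by the issuer):

```python
import hashlib
import json

def fingerprint(cert: dict) -> str:
    """Canonical JSON -> SHA-256 digest; in this sketch, this is the
    value that would be registered on the blockchain."""
    canonical = json.dumps(cert, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

cert = {"recipient": "alice@example.org", "achievement": "Director's Fellow",
        "issuer": "MIT Media Lab", "issued": "2016-04-01"}
registered = fingerprint(cert)  # immutable once registered on-chain

# Any later edit changes the digest, so tampering is detectable:
tampered = dict(cert, achievement="Lab Director")
assert fingerprint(tampered) != registered
assert fingerprint(dict(cert)) == registered
```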
Media Lab Virtual Visit is intended to open up the doors of the Media Lab to people from all around
the world. The visit is hosted on the Unhangout platform, a new way of running large-scale
unconferences on the web that was developed at the Media Lab. It is an opportunity for students
or potential collaborators to talk with current researchers at the Lab, learn about their work, and
share ideas.
377. ML Open
378. Para
Jennifer Jacobs, Mitchel Resnick, Joel Brandt, Sumit Gogia, and Radomir Mech
Procedural representations, enabled through programming, are a powerful tool for digital
illustration, but writing code conflicts with the intuitiveness and immediacy of direct manipulation.
Para is a digital illustration tool that uses direct manipulation to define and edit procedural artwork.
Through creating and altering vector paths, artists can define iterative distributions and parametric
constraints. Para makes it easier for people to create generative artwork, and creates an intuitive
workflow between manual and procedural drawing methods.
381. Scratch
Mitchel Resnick, Natalie Rusk, Kasia Chmielinski, Andrew Sliwinski, Eric Schilling, Carl
Bowman, Saskia Leggett, Christan Balch, Ricarose Roque, Sayamindu Dasgupta, Ray
Schamp, Matt Taylor, Chris Willis-Ford, Tim Mickel, Colby Gutierrez-Kraybill, Juanita
Scratch is a programming language and online community (http://scratch.mit.edu) that makes it
easy to create your own interactive stories, games, animations, and simulations and share your
creations online. As young people create and share Scratch projects, they learn to think creatively,
reason systematically, and work collaboratively, while also learning important mathematical and
computational ideas. Young people around the world have shared more than 10 million projects on
the Scratch website, with thousands of new projects every day. (For information on who has
contributed to Scratch, see the Scratch Credits page: http://scratch.mit.edu/info/credits/).
Alumni Contributors: Amos Blanton, Karen Brennan, Gaia Carini, Michelle Chung, Shane
Clements, Margarita Dekoli, Evelyn Eastmond, Champika Fernando, John H. Maloney, Amon
Millner, Andres Monroy-Hernandez, Eric Rosenbaum, Jay Saul Silver and Tamara Stern
Saskia Leggett, Lisa O'Brien, Kasia Chmielinski, Carl Bowman, and Mitchel Resnick
Scratch Day (day.scratch.mit.edu) is a network of face-to-face local gatherings, on the same day in
all parts of the world, where people can meet, share, and learn more about Scratch, a
programming environment that enables people to create their own interactive stories, games,
animations, and simulations. We believe that these types of face-to-face interactions remain
essential for ensuring the accessibility and sustainability of initiatives such as Scratch. In-person
interactions enable richer forms of communication among individuals, more rapid iteration of ideas,
and a deeper sense of belonging and participation in a community. The first Scratch Day took
place in 2009. In 2015, there were 350 events in 60 countries.
Alumni Contributor: Karen Brennan
others to use Scratch to program hardware devices such as the LEGO WeDo, get data from online
web-services such as weather.com, and use advanced web-browser capabilities such as speech
recognition.
Alumni Contributors: Amos Blanton, Shane Clements, Abdulrahman Y. idlbi and John H. Maloney
385. ScratchJr
Mitchel Resnick, Chris Garrity, Tim Mickel, Marina Bers, Paula Bonta, and Brian Silverman
ScratchJr makes coding accessible to younger children (ages 5-7), enabling them to program their
own interactive stories, games, and animations. To make ScratchJr developmentally appropriate
for younger children, we revised the interface and provided new structures to help young children
learn relevant math concepts and problem-solving strategies. ScratchJr is available as a free app
for iPads, Android, and Chromebook. ScratchJr is a collaboration between the MIT Media Lab,
Tufts University, and Playful Invention Company.
Alumni Contributors: Sayamindu Dasgupta and Champika Fernando
386. Spin
Alisha Panjwani, Natalie Rusk, Jie Qi, Chris Garrity, Tiffany Tseng, Jennifer Jacobs, Mitchel
Resnick
The Lifelong Kindergarten group is collaborating with the Museum of Science in Boston to develop
materials and workshops that engage young people in "maker" activities in Computer Clubhouses
around the world, with support from Intel. The activities introduce youth to the basics of circuitry,
coding, crafting, and engineering. In addition, graduate students are testing new maker
technologies and workshops for Clubhouse staff and youth. The goal of the initiative is to help
young people from under-served communities gain experience and confidence in their ability to
design, create, and invent with new technologies.
Alumni Contributors: David A. Mellis and Ricarose Roque
388. Unhangout
Philipp Schmidt, Drew Harry, Charlie DeTar, Srishti Sethi, and Katherine McConachie
Unhangout is an open-source platform for running large-scale unconferences online. We use
Google Hangouts to create as many small sessions as needed, and help users find others with
shared interests. Think of it as a classroom with an infinite number of breakout sessions. Each
event has a landing page, which we call the lobby. When participants arrive, they can see who
else is there and chat with each other. The hosts can do a video welcome and introduction that
gets streamed into the lobby. Participants then break out into smaller sessions (up to 10 people
per session) for in-depth conversations, peer-to-peer learning, and collaboration on projects.
Unhangouts are community-based learning instead of top-down information transfer.
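The breakout mechanics can be sketched as a simple grouping problem: partition participants by declared interest, capping each session at ten people. A hypothetical helper in that spirit (names and data invented for illustration; Unhangout's real matching is richer than this):

```python
from collections import defaultdict

def breakouts(participants, max_size=10):
    """Group (name, interest) pairs into sessions of at most max_size."""
    by_interest = defaultdict(list)
    for name, interest in participants:
        by_interest[interest].append(name)
    sessions = []
    for interest, names in by_interest.items():
        # Split each interest group into chunks of up to max_size people.
        for i in range(0, len(names), max_size):
            sessions.append((interest, names[i:i + max_size]))
    return sessions

people = [(f"user{i}", "civics" if i % 2 else "maker") for i in range(25)]
sessions = breakouts(people)
assert all(len(names) <= 10 for _, names in sessions)
```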
Deb Roy, Sophie Chou, Pau Kung, Neo Mohsenvand, William Powers
Over the last two decades, digital technologies have flattened old hierarchies in the news business
and opened the conversation to a multitude of new voices. To help comprehend this promising but
chaotic new public sphere, we're building a "social news machine" that will provide a structured
view of the place where journalism meets social media. The basis of our project is a two-headed
data ingest. On one side, all the news published online 24/7 by a sample group of influential US
media outlets. On the other, all Twitter comments of the journalists who produced the stories. The
two streams will be joined through network analysis and algorithmic inference. In future work we
plan to expand the analysis to include all the journalism produced by major news outlets and the
overall public response on Twitter, shedding new light on such issues as bias, originality,
credibility, and impact.
While there are a number of literacy technology solutions developed for individuals, the role of
social--or networked--literacy learning is less explored. We believe that literacy is inherently a
social activity that is best learned within a supportive community network, including peers,
teachers, and parents. By designing an approach that is child-driven and machine-guided, we
hope to empower human learning networks in order to establish an engaging and effective
medium for literacy development while enhancing personal, creative, and expressive interactions
within communities. We are planning to pilot and deploy our system initially in the Boston area,
with eventual focus on low-income families where the need for literacy support is particularly
acute. We aim to create a cross-age peer-tutoring program to engage students from different
communities in socially collaborative, self-expressive, and playful literacy learning opportunities via
mobile devices.
Alumni Contributor: Prashanth Vijayaraghavan
393. Responsive
Communities: Pilot
Project in Jun, Spain
To gain insights into how digital technologies can make local governments more responsive and
deepen citizen engagement, we are studying the Spanish town of Jun (population 3,500). For the
last four years, Jun has been using Twitter as its principal medium for citizen-government
communication. We are mapping the resulting social networks and analyzing the dynamics of the
Twitter interactions, in order to better understand the initiative's impact on the town. Our long-term
goal is to determine whether the system can be replicated at scale in larger communities, perhaps
even major cities.
Deb Roy, Russell Stevens, Soroush Vosoughi, William Powers, Sophie Chou, Perng-Hwa
Kung, Neo (Mostafa) Mohsenvand, Raphael Schaad and Prashanth Vijayaraghavan
Deb Roy, Russell Stevens, Neo (Mostafa) Mohsenvand, Prashanth Vijayaraghavan, Soroush
Vosoughi and Guolong Wang
NEW LISTING
The Electome project is a comprehensive mapping of the content and network connections among
the campaign's three core public sphere voices: candidates (and their advocates), media
(journalists and other mainstream media sources), and the public. This mapping is used to trace
the election's narratives as they form, spread, morph, and decline among these three
groups, identifying who and what influences these dynamics. We are also developing metrics that
measure narrative alignment and misalignment among the groups, sub-groups (political party,
media channel, advocacy group, etc.), and specific individuals/organizations (officials, outlets,
journalists, influencers, sources, etc.). The Electome can be used to promote more responsive
elections by deploying analyses, metrics, and data samples that improve the exchange of ideas
among candidates, the media, and/or the public in the public sphere of an election.
The Foodome addresses how to create deeper understanding and predictive intelligence about the
relationships between how we talk and learn about food, and what we actually eat. Our aim is to
build a food learning machine that comprehensively maps, for any given food, its form, function,
production, distribution, marketing, science, policy, history, and culture (as well as the connections
among all of these aspects). We are gathering and organizing a wide variety of data, including
news/social content, recipes and menus, and sourcing and purchase information. We then use
human-machine learning to uncover patterns within and among the heterogeneous food-related
data. Long term, the Foodome is meant to help improve our understanding of, access to, and trust
in food that is good for us; find new connections between food and health; and even predict
impacts of local and global events on food.
Raphael Schaad, Michael Koehrsen and Deb Roy
Better maps and local knowledge increase the efficiency and effectiveness of community health
workers (CHW) in developing countries. In a post-Ebola world, the World Health Organization and
the United Nations have elevated the priority of developing such CHW systems. However,
commercial map services do not always reach these regions of the world. We are developing
mapping based on satellite analytics of household- and road-level data in rural areas, coupled with
a mobile app with a navigation interface specifically designed for CHWs to help them work better.
Their bottom-up annotations will continuously improve the machine-learning analytics of the
top-down satellite maps.
398. Activ8
Dhruv Jain, Misha Sra, Raymond Wu, Rodrigo Marques, Jingru Guo and Chris Schmandt
SCUBA diving as a sport has enabled people to explore the magnificent ocean diversity of
beautiful corals, striking fish, and mysterious wrecks. However, only a small number of people are
able to experience these wonders, as diving is expensive, mentally and physically challenging,
needs a large time investment, and requires access to large bodies of water. Most existing SCUBA
diving simulations in VR are limited to visual and aural displays. We propose a virtual reality
system, Amphibian, that provides an immersive SCUBA diving experience through a convenient
terrestrial simulator. Users lie on their torso on a motion platform with their outstretched arms and
legs placed in a suspended harness. Users receive visual and aural feedback through the Oculus
Rift head-mounted display and a pair of headphones. Additionally, we simulate buoyancy, drag,
and temperature changes through various sensors.
Alumni Contributor: Justin Chiu
400. Meta-Physical-Space
VR
401. NailO
Experience new dimensions and worlds without limits with friends. Encounter a new physical
connection within the virtual world. Explore virtual spaces by physically exploring the real world.
Interact with virtual objects by physically interacting with real-world objects. Initial physical
sensations include touching objects, structures, and people while we work on adding sensations
for feeling pressure, temperature, moisture, smell, and other sensory experiences.
NailO is a nail-mounted gestural input surface inspired by commercial nail stickers. Using
capacitive sensing on printed electrodes, the interface can distinguish on-nail finger swipe
gestures with high accuracy (>92 percent). NailO works in real time: the system is miniaturized to
fit on the fingernail, while wirelessly transmitting the sensor data to a mobile phone or PC. NailO
allows for one-handed and always-available input, while being unobtrusive and discreet. The
device blends into the user's body, is customizable, fashionable, and even removable.
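The swipe-recognition idea can be illustrated with a capacitance-weighted centroid: the finger's position is estimated each frame, and its displacement over the gesture gives the direction. A toy sketch with an invented 1-D electrode strip (the actual device uses a 2-D printed electrode matrix and a trained classifier):

```python
def centroid(frame):
    """Capacitance-weighted finger position across the electrode strip."""
    total = sum(frame)
    return sum(i * v for i, v in enumerate(frame)) / total

def swipe_direction(frames):
    """Compare first and last finger centroids to classify the gesture."""
    delta = centroid(frames[-1]) - centroid(frames[0])
    if abs(delta) < 0.5:
        return "tap"  # negligible movement
    return "right" if delta > 0 else "left"

# Simulated readings of a finger moving from electrode 0 toward electrode 3:
frames = [[9, 2, 1, 0], [2, 9, 2, 1], [0, 2, 9, 2], [0, 1, 2, 9]]
assert swipe_direction(frames) == "right"
```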
402. OnTheGo
404. Spotz
405. Tattio
Kevin Slavin, Julie Legault, Taylor Levy, Che-Wei Wang, Dalai Lama Center for Ethics and
Transformative Values and Tinsley Galyean
Time perception is a fundamental component in our ability to build mental models of our world.
Without accurate and precise time perception, we might have trouble understanding speech,
fumble social interactions, have poor motor control, hallucinate, or remember events incorrectly.
Slight distortions in time perception are commonplace and may lead to slight dyslexia, memory
shifts, poor eye-hand coordination, and other relatively benign symptoms, but could a diminishing
sense of time signal the onset of a serious brain disorder? Could time perception training help
prevent or reverse brain disorders? This project is a series of experimental tools built to assist and
increase human time perception. By approaching time-perception training from various
perspectives, we hope to find a tool or collection of tools to increase time perception, and in turn
discover what an increase in time perception might afford us.
20 Day Stranger is a mobile app that creates an intimate and anonymous connection between you
and another person. For 20 days, you get continuous updates about where they are, what they are
doing, and eventually even how they are feeling, and they receive the same updates about you.
But you will never know who this person is. Does this change the way you think about other
people you see throughout your day, any one of whom could be your stranger?
411. AutomaTiles
The crystal oscillator inside a quartz wristwatch vibrates 32,768 times per second. This is too
fast for a human to perceive, and it's even more difficult to imagine its interaction with the
mechanical circulation of a clock. 32,768 Times Per Second is a diagrammatic, procedural, and
fully functional sculpture of the electro-mechanical landscape inside a common wristwatch.
Through a series of electronic transformations, the signal from a crystal is broken down over and
over, and then built back up to the human sense of time.
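The signal being "broken down over and over" refers to a binary divider chain: fifteen divide-by-two stages (one flip-flop each) reduce 32,768 Hz to the 1 Hz tick, since 2^15 = 32,768. A sketch of the arithmetic:

```python
# Halve the crystal frequency repeatedly until the 1 Hz tick emerges,
# mirroring the flip-flop divider chain in a quartz watch.
freq = 32_768
stages = []
while freq > 1:
    freq //= 2
    stages.append(freq)

assert len(stages) == 15  # fifteen divide-by-two stages
assert stages[-1] == 1    # final stage ticks once per second
```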
Amino is a design-driven mini-lab that allows users to carry out a bacterial transformation and
enables the subsequent care and feeding of the cells that are grown. Inspired by Tamagotchis,
Amino guides the user through the genetic transformation of an organism's DNA via a series of guided interactions,
resulting in a synthetic organism that can be cared for like a pet. Amino is developed using
low-cost ways of carrying out lab-like procedures in the home, and is packaged in a suitcase-sized
continuous bioreactor for cells.
A tabletop set of cellular automata ready to exhibit complex systems through simple behaviors,
AutomaTiles explores emergent behavior through tangible objects. Individually they live as simple
organisms, imbued with a simple personality; together they exhibit something "other" than the sum
of their parts. Through communication with their neighbors, complex interactions arise. What will
you discover with AutomaTiles?
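The emergence the tiles demonstrate can be sketched with an elementary cellular automaton, where each cell's next state depends only on its two immediate neighbors (Rule 110 here stands in for a tile "personality"; the actual tiles use their own rule sets and tangible interactions):

```python
RULE = 110  # each 3-cell neighborhood indexes one bit of this rule number

def step(cells):
    """One update of a ring of cells from neighbor states alone."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 15 + [1]  # a single live cell
for _ in range(8):
    row = step(row)
assert sum(row) > 1   # simple local rules produce growing structure
```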
Gregory Borenstein
Case and Molly is a prototype for a game inspired by (and in homage to) William Gibson's novel
Neuromancer. It's about the coordination between virtual and physical, "cyberspace" and "meat."
We navigate the tension between our physical surroundings and our digital networks in a state of
continuous partial attention; Case and Molly uses the mechanics and aesthetics of Neuromancer
to explore this quintessential contemporary dynamic. The game is played by two people mediated
by smartphones and an Oculus Rift VR headset. Together, and under time pressure, they must
navigate Molly through physical space using information that is only available to Case. In the
game, Case sees Molly's point of view in immersive 3D, but he can only communicate a single bit
of information to her. Meanwhile, Molly traverses physical obstacles hoping Case can solve
abstract puzzles in order to gain access to the information she needs.
416. Darkball
Che-Wei Wang
Cristiano Ronaldo can famously volley a corner kick in total darkness. The magic behind this
remarkable feat is hidden in Ronaldo's brain, which enables him to use advance cues to plan
upcoming actions. Darkball challenges your brain to do the same, distilling that scenario into its
simplest form: intercept a ball in the dark. All you see is all you need.
417. DeepView:
Computational Tools
for Chess
Spectatorship
419. Dice++
Food offers a rich multi-modal experience that can deeply affect emotion and memory. We're
interested in exploring the artistic and expressive potential of food beyond mere nourishment, as a
means of creating memorable experiences that involve multiple senses. For instance, music can
change our eating experience by altering our emotions during the meal, or by evoking a specific
time and place. Similarly, sight, smell, and temperature can all be manipulated to combine with
food for expressive effect. In addition, by drawing upon people's physiology and upbringing, we
seek to create individual, meaningful sensory experiences. Specifically, we are exploring the
connection between music and flavor perception.
Today, algorithms drive our cars, our economy, what we read, and how we play. Modern-day
computer games utilize weighted probabilities to make games more competitive, fun, and
addicting. In casinos, slot machines--once a product of simple probability--employ similar
algorithms to keep players playing. Dice++ takes the seemingly straight probability of rolling a die
and determines an outcome with algorithms of its own.
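The weighted-outcome idea can be sketched with a biased sampler (the weights below are invented for illustration; Dice++'s actual algorithms are its own):

```python
import random

def weighted_roll(rng, weights):
    """Roll a six-sided die whose faces carry unequal probabilities."""
    return rng.choices(range(1, 7), weights=weights, k=1)[0]

rng = random.Random(42)
weights = [1, 1, 1, 1, 2, 3]  # sixes are three times as likely as ones
rolls = [weighted_roll(rng, weights) for _ in range(9000)]

# Empirical frequencies track the weights rather than uniform chance:
assert rolls.count(6) > rolls.count(1)
```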
421. GAMR
423. Homeostasis
424. MicroPsi: An
Architecture for
Motivated Cognition
Joscha Bach
425. radiO_o
Kevin Slavin, Mark Feldmeier, Taylor Levy, Daniel Novy and Che-Wei Wang
The MicroPsi project explores broad models of cognition, built on a motivational system that gives
rise to autonomous social and cognitive behaviors. MicroPsi agents are grounded AI agents, with
neuro-symbolic representations, affect, top-down/bottom-up perception, and autonomous decision
making. We are interested in finding out how motivation informs social interaction (cooperation and
competition, communication and deception), learning, and playing; shapes personality; and
influences perception and creative problem-solving.
radiO_o is a battery-powered speaker worn by hundreds of party guests, turning each person into
a local mobile sound system. The radiO_o broadcast system allows the DJ to transmit sounds
over several pirate radio channels to mix sounds between hundreds of speakers roaming around
the space and the venue's existing sound system.
428. Storyboards
The boundaries and fabric of human experience are continuously redefined by microorganisms,
interacting at an imperceptible scale. Though hidden, these systems condition our bodies,
environment, and even sensibilities and desires. The proposed works introduce a model of
interaction in which the microbiome is an extension of the human sensory system, accessed
through a series of biological interfaces that enable exchange. Biological Interfaces transfer
discrete behaviors of microbes into information across scales, where it may be manipulated, even
if unseen. In the same way the field of HCI has articulated our exchanges with electronic signals,
Soft Exchange opens up the question of how to design for this other invisible, though present, and
vital material.
Giving opaque technology a glass house, Storyboards present tinkerers and owners of
electronic devices with stories of how their devices work. Just as the circuit board is a story of
star-crossed lovers--Anode and Cathode--with its cast of characters (resistor, capacitor,
transistor), Storyboards have their own characters driving a parallel visual narrative.
429. Troxes
Jonathan Bobrow
The building blocks we grow up with and the coordinate systems we are introduced to at an early
age shape the design space with which we think. Complex systems are difficult to understand
because they often require transition from one coordinate system to another. We could even begin
to say that empathy is precisely this ability to map easily to many different coordinates. Troxes is a
building blocks kit based on the triangle, where kids get to build their own building blocks and then
assemble Platonic and Archimedean solids.
Tal Achituv, Catherine D'Ignazio, Alexis Hope, Taylor Levy, Alexandra Metral, Che-Wei
Wang
In September 2014, 150 parents, engineers, designers, and healthcare practitioners gathered at
the MIT Media Lab for the "Make the Breast Pump Not Suck!" Hackathon. As one of the midwives
at our first hackathon said, "Maternal health lags behind other sectors for innovation." This project
brought together people from diverse fields, sectors, and backgrounds to take a crack at making
life better for moms, babies, and new families.
433. Code4Rights
Joy Buolamwini
Code4Rights promotes human rights through technology education. By facilitating the
development of rights-focused mobile applications in workshops and an online course,
Code4Rights enables participants to create meaningful technology for their communities in
partnership with local organizations. For example, Code4Rights, in collaboration with It Happens
Here, a grassroots organization focused on addressing sexual violence, created the First
Response Oxford App to address sexual violence at Oxford University. Over 30 young women
contributed to the creation of the app, which provides survivors of sexual violence and friends of
survivors with information about optional ways to respond, essential knowledge about support
resources, critical contact details, and answers to frequently asked questions.
436. DataBasic
437. DeepStream
Matthew Stempeck
The Internet has disrupted the aid sector like so many other industries before it. In times of crisis,
donors are increasingly connecting directly with affected populations to provide participatory aid.
The Digital Humanitarian Marketplace aggregates these digital volunteering projects, organizing
them by crisis and skills required to help coordinate this promising new space.
Catherine D'Ignazio
Erase the Border is a web campaign and voice petition platform. It tells the story of the Tohono
O'odham people, whose community has been divided along 75 miles of the US-Mexico border by
a fence. The border fence divides the community, prevents tribe members from receiving critical
health services, and subjects O'odham to racism and discrimination. This platform is a pilot that we
are using to research the potential of voice and media petitions for civic discourse.
441. FOLD
Alexis Hope, Kevin Hu, Joe Goldbeck, Nathalie Huynh, Matthew Carroll, Cesar A. Hidalgo,
Ethan Zuckerman
FOLD is an authoring and publishing platform for creating modular, multimedia stories. Some
readers require greater context to understand complex stories. Using FOLD, authors can search
for and add "context cards" to their stories. Context cards can contain videos, maps, tweets,
music, interactive visualizations, and more. FOLD also allows authors to link stories together by
remixing context cards created by other writers.
Willow Brugh
Anurag Gupta, Erhardt Graeff, Huan Sun, Yu Wang and Ethan Zuckerman
This checklist is designed to help projects that include an element of data collection to develop
appropriate consent policies and practices. The checklist can be especially useful for projects that
use digital or mobile tools to collect, store, or publish data, yet understand the importance of
seeking the informed consent of individuals involved (the data subjects). This checklist does not
address the additional considerations necessary when obtaining the consent of groups or
communities, nor how to approach consent in situations where there is no connection to the data
subject. This checklist is intended for use by project coordinators, and can ground conversations
with management and project staff in order to identify risks and mitigation strategies during project
design or implementation. It should ideally be used with the input of data subjects.
Every country has a brand, negative or positive, and that brand is mediated in part by its global
press coverage. We are measuring and ranking perceptions of the 20 most populous countries
by crowdsourcing those perceptions through a "World News Quiz." Quiz-takers match
geographically vague news stories to the countries they think the stories occurred in, revealing
how positively or negatively they perceive each country. By illustrating the way these biases manifest among
English and Chinese speakers, we hope to help news consumers and producers be more aware of
the incomplete portrayals they have internalized and propagated.
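One way such crowdsourced perception scores might be computed is to compare the sentiment of stories attributed to a country against the sentiment of stories actually set there. The data format below is an assumption for illustration, not the project's actual methodology:

```python
from collections import defaultdict

def perception_bias(answers):
    """answers: list of (actual_country, guessed_country, sentiment),
    where sentiment is -1 for a negative story and +1 for a positive one.
    Returns per-country bias: mean sentiment of stories *attributed* to
    the country minus mean sentiment of stories actually set there."""
    attributed = defaultdict(list)
    actual = defaultdict(list)
    for actual_c, guessed_c, sentiment in answers:
        actual[actual_c].append(sentiment)
        attributed[guessed_c].append(sentiment)
    bias = {}
    for country in set(actual) | set(attributed):
        att = sum(attributed[country]) / len(attributed[country]) if attributed[country] else 0.0
        act = sum(actual[country]) / len(actual[country]) if actual[country] else 0.0
        bias[country] = att - act
    return bias

# Toy example: quiz-takers pin a negative story on Nigeria and a
# positive one on Brazil, each incorrectly.
bias = perception_bias([("Brazil", "Nigeria", -1), ("Nigeria", "Brazil", 1)])
```

A strongly negative bias score would suggest readers assume bad news comes from that country more often than it actually does.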
J. Nathan Matias
How do people who lead communities on online platforms join together in mass collective action to
influence platform operators? Going Dark analyzes a protest against the social news platform
reddit by moderators of 2,278 communities in July of 2015. These moderators collectively disabled
their communities, preventing millions of readers from accessing major parts of reddit and
convincing the company to negotiate over their demands. This study reveals social
factors--including the work of moderators, relations among moderators, relations with platform
operators, factors within communities, and the isolation of a community--that can lead to
participation in mass collective action against a platform.
Mapping the Globe is an interactive tool and map that helps us understand where the Boston
Globe directs its attention. Media attention matters in quantity and quality. It helps determine what
we talk about as a public and how we talk about it. Mapping the Globe tracks where the paper's
attention goes and what that attention looks like across different regional geographies in
combination with diverse data sets like population and income. Produced in partnership with the
Boston Globe.
Ethan Zuckerman, Alexandre Gonçalves, Ronaldo Lemos, Carlos Affonso Pereira de Souza,
Hal Roberts, David Larochelle, Renato Souza, and Flavio Coelho
Media Cloud is a system that facilitates massive content analysis of news on the Web. Developed
by the Berkman Center for Internet and Society at Harvard University, Media Cloud already
analyzes content in English and Russian. Over the past several months, we have been working on
support for Portuguese content. We intend to analyze the online debate on the most controversial
and politically charged topics of the Brazilian Civil Rights Framework for the Internet, namely network
neutrality and copyright reform. At the same time, we are writing a step-by-step guide to Media
Cloud localization. In the near future, we will be able to compare different media ecosystems
around the world.
Ethan Zuckerman, J. Nathan Matias, Matt Stempeck, Rahul Bhargava and Dan Schultz
What have you seen in the news this week? And what did you miss? Are you getting the blend of
local, international, political, and sports stories you desire? We're building a media-tracking
platform to empower you, the individual, and news providers themselves, to see what you're
getting and what you're missing in your daily consumption and production of media. The first round
of modules developed for the platform allow you to compare the breakdown of news topics, and of
bylines by gender, across multiple news sources.
Edward L. Platt
Media Perspective brings a data visualization into 3D space. This data sculpture represents
mainstream media coverage of Net Neutrality over 15 months, during the debate over the FCC's
classification of broadband services. Each transparent pane shows a slice in time, allowing users
to physically move and look through the timeline. The topics cutting through the panes show how
attention shifted between aspects of the debate over time.
452. MoboWater
Ethan Zuckerman, Poseidon Hai-Chi Ho, Nickolaos Savidis and Emilie Reiser
NEW LISTING
Pure networks and pure hierarchies both have distinct strengths and weaknesses. These become
glaringly apparent during disaster response. By combining these modes, their strengths
(predictability, accountability, appropriateness, adaptability) can be optimized, and their
weaknesses (fragility, inadequate resources) can be compensated for. Bridging these two worlds
is not merely a technical challenge, but also a social issue.
453. NetStories
454. NewsPix
Catherine D'Ignazio, Ethan Zuckerman, Matthew Carroll, Emerson College Engagement Lab
and Jay Vachon
NewsPix is a simple news-engagement application that helps users encounter breaking news in
the form of high-impact photos. It is currently a Chrome browser extension (mobile app to come)
that is customizable for small and large news organizations. Normally, when users open a new tab
in Chrome, they see tiles showing recently visited pages. NewsPix
replaces that view with a high-quality photo from a news site. Users interested in more
information about the photo can click through to the news site. News organizations can upload
photos ranging from breaking news to historic sporting events, with photos changing every time a
new tab is clicked.
455. NGO2.0
Adrienne Debigare, Ethan Zuckerman, Heather Craig, Catherine D'Ignazio, Don Blair and
Public Lab Community
The Open Water Project aims to develop and curate a set of low-cost, open source tools enabling
communities everywhere to collect, interpret, and share their water quality data. Traditional water
monitoring uses expensive, proprietary technology, severely limiting the scope and accessibility of
water quality data. Homeowners interested in testing well water, watershed managers concerned
about fish migration and health, and other groups could benefit from an open source, inexpensive,
accessible approach to water quality monitoring. We're developing low-cost, open source
hardware devices that will measure some of the most common water quality parameters, using
designs that make it possible for anyone to build, modify, and deploy water quality sensors in
their own neighborhood.
Sasha Costanza-Chock, Becky Hurwitz, Heather Craig, Royal Morris, with support from
Rahul Bhargava, Ed Platt, Yu Wang
The Out for Change Transformative Media Organizing Project (OCTOP) links LGBTQ, Two-Spirit,
and allied media makers, online organizers, and tech-activists across the United States. In
2013-2014, we conducted a strengths/needs assessment of the media and organizing
capacity of the movement, and offered a series of workshops and skillshares around
transmedia organizing. The project is guided by a core group of project partners and advisers who
work with LGBTQ and Two-Spirit folks. The project is supported by faculty and staff at the MIT
Center for Civic Media, Research Action Design and by the Ford Foundation's Advancing LGBT
Rights Initiative.
459. PageOneX
Ethan Zuckerman, Edward Platt, Rahul Bhargava and Pablo Rey Mazon
Newspaper front pages are a key source of data about our media ecology. Newsrooms spend
enormous time and effort deciding which stories make it to the front page. PageOneX makes coding
and visualizing newspaper front page content much easier, democratizing access to newspaper
attention data. Communication researchers have analyzed newspaper front pages for decades,
using slow, laborious methods. PageOneX simplifies, digitizes, and distributes the process across
the net and makes it available for researchers, citizens, and activists.
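The underlying coding step can be illustrated with a toy calculation. PageOneX itself is a web tool, and the rectangle format below is an assumption for the sketch: coders mark rectangular regions of a front page with topics, and each topic's share of front-page surface area is then tallied:

```python
def topic_shares(page_w, page_h, regions):
    """regions: list of (x, y, w, h, topic) rectangles coded on a
    front page of size page_w x page_h. Returns each topic's share of
    total page area (overlaps are not resolved, as in simple manual
    coding)."""
    total = float(page_w * page_h)
    shares = {}
    for x, y, w, h, topic in regions:
        shares[topic] = shares.get(topic, 0.0) + (w * h) / total
    return shares

# Toy front page: politics fills the left half, sports the top-right quarter.
shares = topic_shares(100, 100, [
    (0, 0, 50, 100, "politics"),
    (50, 0, 50, 50, "sports"),
])
```

Aggregating these shares across days and newspapers yields the kind of attention data the project visualizes.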
After an election, how can citizens hold leaders accountable for promises made during the
campaign season? Promise Tracker is a mobile phone-based data collection system that enables
communities to collect information on issues they consider priorities and monitor the performance
of their local governments. Through an easy-to-use web platform, citizens can visualize
aggregated data and use it to advocate for change with local government, institutions, the press,
and fellow community members. We are currently piloting the project in Brazil and the United
States, and prototyping sensor integration to allow citizens to collect a range of environmental
data.
Alumni Contributors: Alexis Hope and Jude Mwenda Ntabathia
462. Readersourcing
Tal Achituv
Scanner Grabber is a digital police scanner that enables reporters to record, playback, and export
audio, as well as archive public safety radio (scanner) conversations. Like a TiVo for scanners, it's
an update on technology that has been stuck in the last century. It's a great tool for newsrooms.
For instance, a problem for reporters is missing the beginning of an important police incident
because they have stepped away from their desk at the wrong time. Scanner Grabber solves this
because conversations can be played back. Also, snippets of exciting audio, for instance a police
chase, can be exported and embedded online. Reporters can listen to files while writing stories, or
listen to older conversations to get a more nuanced grasp of police practices or long-term trouble
spots. Editors and reporters can use the tool for collaborating, or crowdsourcing/public
collaboration.
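The "TiVo for scanners" idea can be sketched as a rolling buffer. This is an illustrative simplification, not the project's implementation; it assumes one audio chunk is fed per second:

```python
from collections import deque

class RollingRecorder:
    """Keeps the most recent audio chunks in memory so a reporter can
    'rewind' to the start of an incident they missed."""
    def __init__(self, minutes=30):
        # A bounded deque silently drops the oldest chunk once full.
        self.buffer = deque(maxlen=minutes * 60)

    def feed(self, chunk: bytes) -> None:
        """Append one second of scanner audio."""
        self.buffer.append(chunk)

    def export(self, last_seconds: int) -> bytes:
        """Concatenate the most recent seconds for playback or for
        embedding a snippet online."""
        n = min(last_seconds, len(self.buffer))
        return b"".join(list(self.buffer)[-n:]) if n else b""
```

Because the buffer is always recording, a reporter who returns to their desk mid-incident can still export audio from before they arrived.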
Should students be prosecuted for innovative projects? In December 2014, four undergraduates
associated with the Media Lab were subpoenaed by the New Jersey Attorney General after
winning a programming competition with a bitcoin-related proof of concept. We worked with MIT
administration and the Electronic Frontier Foundation to support the students and establish legal
support for informal innovation. In September 2015, MIT announced the creation of a new clinic for
business and cyberlaw.
Terra Incognita is a global news game and recommendation system. Terra Incognita helps you
discover interesting news and personal connections to cities that you haven't read about. Whereas
many recommendation systems connect you on the basis of "similarity," Terra Incognita connects
you to information on the basis of "serendipity." Each time you open the application, Terra
Incognita shows you a city that you have not yet read about and gives you options for reading
about it. Chelyabinsk (Russia), Hiroshima (Japan), Hagåtña (Guam), and Dhaka (Bangladesh) are
a few of the places where you might end up.
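The serendipity rule can be sketched in a few lines. This is a minimal illustration of the selection idea only; Terra Incognita's actual logic is not specified here:

```python
import random

def next_city(all_cities, read_history, rng=random):
    """Serendipity rule: recommend a city the reader has *not* yet read
    about, rather than one similar to their reading history."""
    unseen = [c for c in all_cities if c not in set(read_history)]
    if not unseen:
        return None  # the reader has covered every city
    return rng.choice(unseen)
```

Contrast this with a similarity-based recommender, which would rank cities by closeness to the reader's history; here the history is used only to exclude.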
Leo Burd
VoIP Drupal is an innovative framework that brings the power of voice and Internet-telephony to
Drupal sites. It can be used to build hybrid applications that combine regular touchtone phones,
web, SMS, Twitter, IM, and other communication tools in a variety of ways, facilitating community
outreach and providing an online presence to those who are illiterate or do not have regular
access to computers. VoIP Drupal will change the way you interact with Drupal, your phone, and
the web.
468. Vojo.co
470. ZL Vortice
Daniel Paz de Araujo (UNICAMP), Ethan Zuckerman, Adeline Gabriela Silva Gil, Hermes
Renato Hildebrand (PUC-SP/UNICAMP) and Nelson Brissac Peixoto (PUC-SP)
This project is currently conducting a survey of data from the East Side (Zona Leste) of the city of
São Paulo, Brazil. The aim is to detect landscape dynamics: infrastructure and urban planning,
critical landscapes, housing, productive territory, recycling, and public space. The material will be
made available on a digital platform, accessible by computers and mobile devices: a tool specially
developed to enable local communities to disseminate productive and creative practices that occur
in the area, as well as to enable greater participation in the formulation of public policies. ZL
Vortice is an instrument that will serve to strengthen social, productive, and cultural networks of
the region.