
Published Titles

Rendering History: The Women of ACM-W

By: Gloria Childress Townsend

eBook: 9798400717734  |  Paperback: 9798400717727  |  Hardcover: 9798400717741
DOI: 10.1145/3640508
Table of Contents

eBook: $40.00  |  Paperback: $50.00  |  Hardcover: $70.00
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

The Association for Computing Machinery (ACM) has more than 100,000 members circling the globe, including trailblazing women who created ACM-W (ACM’s Committee on Women in Computing) in 1993. This book, published in celebration of ACM-W’s 30th birthday, divides the history of ACM-W into three parts.

The first section provides a traditional history that details the evolution of ACM-W’s projects. In the next section, Rendering History allows the women of ACM-W to tell their own stories. What motivated them to trade personal time and energy for work that would change the face of computing for women and young girls? Among many others, Sue Black relates a story that spans her escape from two abusive homes to recognition for her computing accomplishments by both the late Queen of England and the current King. Kathy Kleiman describes her contributions to the field, including helping to rescue the wireless spectrum (now used by WiFi) from the (U.S.) Federal Communications Commission’s plan to sell it. Bhavani Thuraisingham writes about her birth in Sri Lanka, an arranged marriage to a man eight years her senior, and cutting-edge research in the integration of cyber security and machine learning. The final section of the book provides an annotated bibliography of the research that launched ACM-W and continued to inform its projects over the next 30 years.

ACM-W advocates internationally for the full engagement of women in all aspects of the computing field, providing a wide range of programs and services to ACM members and working in the larger community to advance the contributions of technical women. The main theme of ACM-W’s 30-year history as detailed in this book is the organization’s maturation from a U.S.-centric organization to a global leader in supporting the advancement of women in computer science.

Pick, Click, Flick! The Story of Interaction Techniques

By: Brad Myers

eBook: 9798400709487  |  Paperback: 9798400709470  |  Hardcover: 9798400709494
DOI: 10.1145/3617448
Table of Contents

eBook: $48.00  |  Paperback: $70.00  |  Hardcover: $90.00

This book provides a comprehensive study of the many ways to interact with computers and computerized devices. An “interaction technique” starts when the user performs an action that causes an electronic device to respond, and it includes the direct feedback from the device to the user. Examples include physical buttons and switches; on-screen menus and scrollbars operated by a mouse; touchscreen widgets and gestures such as flick-to-scroll; text entry on computers and touchscreens; consumer electronic controls such as remote controls; game controllers; input for virtual reality systems like waving a Nintendo Wii wand or your hands in front of a Microsoft Kinect; interactions with conversational agents such as Apple Siri, Google Assistant, Amazon Alexa, or Microsoft Cortana; and adaptations of all of these for people with disabilities.

The author starts with a history of the invention and development of these techniques, discusses the various options used today, and continues on to the future with the latest research on interaction techniques, such as that presented at academic conferences. The book features summaries of interviews with the original inventors of some interaction techniques, including David Canfield Smith (the desktop and icons), Larry Tesler (copy-and-paste), Ted Selker (IBM TrackPoint pointing stick), Loren Brichter (pull-to-refresh), and many others. Sections also cover how to use, model, implement, and evaluate new interaction techniques.

The book is essential reading for designers creating the interaction techniques of tomorrow who need to know the options and constraints and what has been tried, for implementers and consumers who want to get the most out of their interaction techniques, and for anyone interested in why we interact with electronic devices the way we do.

Digital Dreams Have Become Nightmares: What We Must Do

By: Ronald M. Baecker and Jonathan Grudin

eBook: 9798400717697  |  Paperback: 9798400717680  |  Hardcover: 9798400717703
DOI: 10.1145/3640479
Table of Contents

eBook: $32.00  |  Paperback: $40.00  |  Hardcover: $60.00

This book offers a compelling discussion of the digital dreams that have come true, their often unintended side effects (nightmares), and what must be done to counteract the nightmares. It is intended as an impetus to further conversation not only in homes and workplaces, but in academic courses and even legislative debates. Equally importantly, the book is a presentation of what digital technology professionals need to know about these topics and the actions they should undertake individually and in support of other citizens, societal initiatives, and government.

The authors begin by introducing the amazing progress made in digital technologies over the past 80 years. Pioneering engineers dreamed of potential uses of technology through their writing and technical achievements, inspiring thousands of researchers to bring those dreams to life and to dream new dreams as well. The second part of the book describes the myriad adverse side effects and unanticipated challenges that arose as those dreams were pursued and achieved. Examples include rampant misinformation on social media, ransomware, autonomous weapons, and the premature use of AI before it is reliable and safe.

The book closes with a positive call to action, outlining ways to address the challenges through ethical career choices, careful analysis, thoughtful design, research, citizen engagement, legislation/regulation, and careful consideration of how bad actors may use technology. Readers of Digital Dreams Have Become Nightmares should emerge more knowledgeable, wiser, and cautiously optimistic, determined to effect positive change through their design, creation, and use of technology.

Spatial Gems, Volume 2

Edited By: John Krumm, Andreas Züfle, and Cyrus Shahabi

eBook: 9798400709357  |  Paperback: 9798400709340  |  Hardcover: 9798400709364
DOI: 10.1145/3617291
Table of Contents

eBook: $40.00  |  Paperback: $50.00  |  Hardcover: $70.00

This book gives fundamental new techniques for understanding and processing spatial data. With contributors working at the forefront of geospatial processing, each "spatial gem" falls in the gap between something commonly found in textbooks and something that is the focus of a research paper. A spatial gem teaches how to do something useful with spatial data, in the form of algorithms, code, or equations. Unlike a research paper, a spatial gem does not focus on an author saying "Look at what I can do!" but rather says: "Look at what you can do!"

Spatial gems are computational techniques for processing spatial data. This book, a follow-up to the first Spatial Gems volume, is a further collection of techniques contributed by leading research experts. Although these approaches were developed by their authors as part of larger research projects, the gems represent fundamental solutions that are generically applicable to many different problems. Our goal is to bring these useful techniques, which are not yet in textbooks and are often buried inside technical research papers, to software developers, graduate students, professors, and professional researchers.

From Algorithms to Thinking Machines: The New Digital Power

By: Domenico Talia

eBook: 9798400708565  |  Paperback: 9798400708558  |  Hardcover: 9798400708572
DOI: 10.1145/3603178
Table of Contents

eBook: $32.00  |  Paperback: $40.00  |  Hardcover: $60.00

This book introduces and provides an analysis of the basic concepts of algorithms, data, and computation and discusses the role of algorithms in ruling and shaping our world. It provides a clear understanding of the power and impact on humanity of the pervasive use of algorithms.

From Algorithms to Thinking Machines combines a layman’s approach with a well-founded scientific description to discuss both principles and applications of algorithms, Big Data, and machine intelligence. The book provides a clear and deep description of algorithms, software systems, data-driven applications, machine learning, and data science concepts, as well as the evolution and impact of artificial intelligence.

After introducing computing concepts, the book examines the relationships between algorithms and human work, discussing how jobs are being affected and how computers and software programs are influencing human life and the labor sphere. Topics such as value alignment, collective intelligence, Big Data impact, automatic decision methods, social control, and political uses of algorithms are illustrated and discussed at length without excessive technical detail. Issues related to how corporations, governments, and autocratic regimes are exploiting algorithms and machine intelligence methods to influence people, laws, and markets are extensively addressed. Ethics principles in software programming and human value insertion into artificial intelligence algorithms are also discussed.

The Societal Impacts of Algorithmic Decision-Making

By: Manish Raghavan

eBook: 9798400708602  |  Paperback: 9798400708596  |  Hardcover: 9798400708619
DOI: 10.1145/3603195
Table of Contents

eBook: $48.00  |  Paperback: $50.00  |  Hardcover: $70.00

This book demonstrates the need for and the value of interdisciplinary research in addressing important societal challenges associated with the widespread use of algorithmic decision-making. Algorithms are increasingly being used to make decisions in various domains such as criminal justice, medicine, and employment. While algorithmic tools have the potential to make decision-making more accurate, consistent, and transparent, they pose serious challenges to societal interests. For example, they can perpetuate discrimination, cause representational harm, and deny opportunities.

The Societal Impacts of Algorithmic Decision-Making presents several contributions to the growing body of literature that seeks to respond to these challenges, drawing on techniques and insights from computer science, economics, and law. The author develops tools and frameworks to characterize the impacts of decision-making and incorporates models of behavior to reason about decision-making in complex environments. These technical insights are leveraged to deepen the qualitative understanding of the impacts of algorithms on problem domains including employment and lending.

The social harms of algorithmic decision-making are far from being solved. While easy solutions are not presented here, there are actionable insights for those who seek to deploy algorithms responsibly. The research presented within this book will hopefully contribute to broader efforts to safeguard societal values while still taking advantage of the promise of algorithmic decision-making.

Linking the World’s Information: Essays on Tim Berners-Lee’s Invention of the World Wide Web

Edited By: Oshani Seneviratne, James Hendler

eBook: 9798400707933  |  Paperback: 9798400707926  |  Hardcover: 9798400707940
DOI: 10.1145/3591366
Table of Contents

eBook: $32.00  |  Paperback: $40.00  |  Hardcover: $60.00

When Sir Tim Berners-Lee first proposed the foundations of the World Wide Web at CERN in 1989, his manager called it “vague, but exciting.” How things have changed since then! Twenty-six years later, Berners-Lee won the ACM Turing Award “for inventing the World Wide Web, the first Web browser, and the fundamental protocols and algorithms allowing the Web to scale.” This book is a compilation of articles on the original ideas of a true visionary and the subsequent research and development work he has led, helping to realize the Web’s full potential. It is intended for readers interested in the Web’s original technical development, how it has changed over time, and the social impacts of the Web as steered by Berners-Lee since the very beginning.

The book covers Berners-Lee's development of the key protocols, naming schemes, and markup languages that led to his "world wide web" program and ultimately to the Web as we know it today. His early efforts were refined as Web technology spread around the world, and he was further guided by the work of the World Wide Web Consortium, which he founded and still directs. He was instrumental in the conceptualization and realization of the Semantic Web, a field that is gaining momentum in the age of big data and knowledge graphs; was a driving force for the field of Web Science, a new and growing research area dedicated to the study of both the engineering and the impacts of the Web; and he continues to innovate through his research work at MIT on open and decentralized information. Berners-Lee is also known for his contributions to keeping the Web open and ubiquitous via his work with the World Wide Web Foundation, the UK's Open Data Institute, and his recent call for a crowdsourced Magna Carta for the Web. This book will help the reader to understand how Sir Tim's invention of the World Wide Web has revolutionized not just Computer Science, but global society itself.

Geospatial Data Science: A Hands-On Approach for Developing Geospatial Applications

Edited By: Manolis Koubarakis

eBook: 9798400707391  |  Paperback: 9798400707384  |  Hardcover: 9798400707407
DOI: 10.1145/3581906
Table of Contents

eBook: $40.00  |  Paperback: $50.00  |  Hardcover: $70.00

Geospatial data science is the science of collecting, organizing, analyzing, and visualizing geospatial data. The book introduces a new generation of geospatial technologies based on the Semantic Web and the Linked Data paradigms, and shows how data scientists can use them to build environmental applications easily. The book is aimed at researchers and practitioners who would like to know more about this research area and can also be used as a textbook for a final-year undergraduate or graduate course. Every chapter of the book contains exercises that help readers master the material it covers.

The topics covered in detail are: geospatial data modeling, geospatial data and metadata, geospatial data formats and OGC standards, geospatial ontologies and linked geospatial data models, querying geospatial data expressed in RDF, querying evolving linked geospatial data, visualizing linked geospatial data, transforming geospatial data into RDF, interlinking geospatial data sources, geospatial ontology-based data access, and incomplete geospatial information.

Logic, Automata, and Computational Complexity: The Works of Stephen A. Cook

By: Bruce M. Kapron

eBook: 9798400707780  |  Paperback: 9798400707773  |  Hardcover: 9798400707797
DOI: 10.1145/3588287
Table of Contents

eBook: $40.00  |  Paperback: $49.95  |  Hardcover: $69.95

Stephen A. Cook was awarded the ACM Turing Award in 1982. His theory of NP-completeness is one of the most fundamental and enduring contributions in computer science. This volume presents his works on NP-completeness and other contributions that had a significant impact on computing theory and mathematical logic. With additional material, including a biographical chapter, Professor Cook's Turing Award address, and a full bibliography of his work, the volume provides an excellent resource for anyone wishing to understand the foundations of Cook's work as well as its ongoing significance and relevance to current research problems in computing and beyond.

Effective Theories in Programming Practice

By: Jayadev Misra

eBook: 9781450399722  |  Paperback: 9781450399715  |  Hardcover: 9781450399739
DOI: 10.1145/3568325
Table of Contents

eBook: $32.00  |  Paperback: $39.95  |  Hardcover: $59.95

Set theory, logic, discrete mathematics, and fundamental algorithms (along with their correctness and complexity analysis) will always remain useful for computing professionals and need to be understood by students who want to succeed. This textbook explains a number of those fundamental algorithms to programming students in a concise, yet precise, manner. The book includes the background material needed to understand the explanations and to develop such explanations for other algorithms. The author demonstrates that clarity and simplicity are achieved not by avoiding formalism, but by using it properly.

The book is self-contained, assuming only a background in high school mathematics and elementary program writing skills. It does not assume familiarity with any specific programming language. Starting with basic concepts of sets, functions, relations, logic, and proof techniques including induction, the necessary mathematical framework for reasoning about the correctness, termination and efficiency of programs is introduced with examples at each stage. The book contains the systematic development, from appropriate theories, of a variety of fundamental algorithms related to search, sorting, matching, graph-related problems, recursive programming methodology and dynamic programming techniques, culminating in parallel recursive structures.

On Monotonicity Testing and the 2-to-2 Games Conjecture

By: Dor Minzer

eBook: 9781450399678  |  Paperback: 9781450399661  |  Hardcover: 9781450399685
DOI: 10.1145/3568031
Table of Contents

eBook: $48.00  |  Paperback: $59.95  |  Hardcover: $79.95

This book discusses two questions in Complexity Theory: the Monotonicity Testing problem and the 2-to-2 Games Conjecture.

Monotonicity testing is a problem from the field of property testing, first considered by Goldreich et al. in 2000. The input to the algorithm is a function, and the goal is to design a tester that makes as few queries to the function as possible, accepting monotone functions and rejecting far-from-monotone functions with probability close to 1.

The first result of this book is an essentially optimal algorithm for this problem. The analysis of the algorithm heavily relies on a novel, directed, and robust analogue of a Boolean isoperimetric inequality of Talagrand from 1993.

The probabilistically checkable proofs (PCP) theorem is one of the cornerstones of modern theoretical computer science. One area in which PCPs are essential is the area of hardness of approximation. Therein, the goal is to prove that some optimization problems are hard to solve, even approximately. Many hardness of approximation results were proved using the PCP theorem; however, for some problems optimal results were not obtained. This book touches on some of these problems, and in particular the 2-to-2 games problem and the vertex cover problem.

The second result of this book is a proof of the 2-to-2 games conjecture (with imperfect completeness), which implies new hardness of approximation results for problems such as vertex cover and independent set. It also serves as strong evidence towards the unique games conjecture, a notorious related open problem in theoretical computer science. At the core of the analysis of the proof is a characterization of small sets of vertices in Grassmann graphs whose edge expansion is bounded away from 1.

Prophets of Computing: Visions of Society Transformed by Computing

Edited By: Dick van Lente

eBook: 9781450398169  |  Paperback: 9781450398152  |  Hardcover: 9781450398176
DOI: 10.1145/3548585
Table of Contents

eBook: $48.00  |  Paperback: $59.95  |  Hardcover: $79.95

When electronic digital computers first appeared after World War II, they were hailed as a revolutionary force. Business management, the world of work, administrative life, the nation state, and soon enough everyday life were expected to change dramatically with these machines’ use. Ever since, diverse prophecies of computing have continually emerged, through to the present day.

As computing spread beyond the US and UK, such prophecies emerged from strikingly different economic, political, and cultural conditions. This volume explores how these expectations differed, assesses unexpected commonalities, and suggests ways to understand the divergences and convergences.

This book examines thirteen countries, based on source material in ten different languages—the effort of an international team of scholars. In addition to analyses of debates, political changes, and popular speculations, we also show a wide range of pictorial representations of "the future with computers."

The Handbook on Socially Interactive Agents, Volume 2: Interactivity, Platforms, Application

Edited By: Birgit Lugrin, Catherine Pelachaud, David Traum

eBook: 9781450398954  |  Paperback: 9781450398947  |  Hardcover: 9781450398961
DOI: 10.1145/3563659
Table of Contents

eBook: $56.00  |  Paperback: $69.95  |  Hardcover: $89.95

The Handbook on Socially Interactive Agents provides a comprehensive overview of the research fields of Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics. Socially Interactive Agents (SIAs), whether virtually or physically embodied, are autonomous agents able to perceive an environment including people or other agents, reason and decide how to interact, and express attitudes such as emotions, engagement, or empathy. They are capable of interacting with people and with each other in a socially intelligent manner, using multimodal communicative behaviors, with the goal of supporting humans in various domains.

Written by international experts in their respective fields, the book summarizes research in the many important research communities pertinent for SIAs, while discussing current challenges and future directions. The handbook provides easy access to modeling and studying SIAs for researchers and students and aims at further bridging the gap between the research communities involved.

In two volumes, the book clearly structures the vast body of research. The first volume begins by introducing what is involved in SIA research, in particular research methodologies and the ethical implications of developing SIAs. It further examines research on appearance and behavior, focusing on multimodality. Finally, social cognition for SIAs is investigated through different theoretical models and phenomena such as theory of mind and pro-sociality. The second volume starts with perspectives on interaction, examined from different angles such as interaction in social space, group interaction, and long-term interaction. It also includes an extensive overview summarizing research and systems on human-agent platforms, as well as some of the major application areas of SIAs, such as education, aging support, autism, and games.

Democratizing Cryptography: The Work of Whitfield Diffie and Martin Hellman

Edited By: Rebecca Slayton

eBook: 9781450398268  |  Paperback: 9781450398251  |  Hardcover: 9781450398275
DOI: 10.1145/3549993
Table of Contents

eBook: $80.00  |  Paperback: $99.95  |  Hardcover: $119.95

In the mid-1970s, Whitfield Diffie and Martin Hellman invented public key cryptography, an innovation that ultimately changed the world. Today public key cryptography provides the primary basis for secure communication over the internet, enabling online work, socializing, shopping, government services, and much more.

While other books have documented the development of public key cryptography, this is the first to provide a comprehensive insiders’ perspective on the full impacts of public key cryptography, including six original chapters by nine distinguished scholars. The book begins with an original joint biography of the lives and careers of Diffie and Hellman, highlighting parallels and intersections, and contextualizing their work. Subsequent chapters show how public key cryptography helped establish an open cryptography community and made lasting impacts on computer and network security, theoretical computer science, mathematics, public policy, and society. The volume includes particularly influential articles by Diffie and Hellman, as well as newly transcribed interviews and Turing Award Lectures by both Diffie and Hellman.

The contributed chapters provide new insights that are accessible to a wide range of readers, from computer science students and computer security professionals, to historians of technology and members of the general public. The chapters can be readily integrated into undergraduate and graduate courses on a range of topics, including computer security, theoretical computer science and mathematics, the history of computing, and science and technology policy.

Spatial Gems, Volume 1

By: John Krumm, Andreas Züfle, Cyrus Shahabi

eBook: 9781450397285  |  Paperback: 9781450398114  |  Hardcover: 9781450398138
DOI: 10.1145/3548732
Table of Contents

eBook: $40.00  |  Paperback: $59.95  |  Hardcover: $79.95

This book presents fundamental new techniques for understanding and processing geospatial data. These "spatial gems" articulate and highlight insightful ideas that often remain unstated in graduate textbooks, and which are not the focus of research papers. They teach us how to do something useful with spatial data, in the form of algorithms, code, or equations. Unlike a research paper, Spatial Gems, Volume 1 does not focus on "Look what we have done!" but rather shows "Look what YOU can do!" With contributions from researchers at the forefront of the field, this volume occupies a unique position in the literature by serving graduate students, professional researchers, professors, and computer developers in the field alike.

Weaving Fire into Form: Aspirations for Tangible and Embodied Interaction

By: Brygg Ullmer, Orit Shaer, Ali Mazalek, Caroline Hummels

eBook: 9781450397681  |  Paperback: 9781450397674  |  Hardcover: 9781450397698
DOI: 10.1145/3544564
Table of Contents

eBook: $60.00  |  Paperback: $79.95  |  Hardcover: $99.95

This book investigates multiple facets of the emerging discipline of Tangible, Embodied, and Embedded Interaction (TEI). This is a story of atoms and bits. We explore the interweaving of the physical and digital, toward understanding some of their wildly varying hybrid forms and behaviors. Spanning conceptual, philosophical, cognitive, design, and technical aspects of interaction, this book charts both history and aspirations for the future of TEI. We examine and celebrate diverse trailblazing works, and provide wide-ranging conceptual and pragmatic tools toward weaving the animating fires of computation and technology into evocative tangible forms. We also chart a path forward for TEI engagement with broader societal and sustainability challenges that will profoundly (re)shape our children’s and grandchildren’s futures. We invite you all to join this quest.

Edsger Wybe Dijkstra: His Life, Work and Legacy

Edited By: Krzysztof R. Apt, Tony Hoare

eBook: 9781450397728  |  Paperback: 9781450397711  |  Hardcover: 9781450397735
DOI: 10.1145/3544585
Table of Contents

eBook: $72.00  |  Paperback: $89.95  |  Hardcover: $129.95

Edsger Wybe Dijkstra (1930–2002) was one of the most influential researchers in the history of computer science, making fundamental contributions to both the theory and practice of computing. Early in his career, he proposed the single-source shortest path algorithm, now commonly referred to as Dijkstra's algorithm. He wrote (with Jaap Zonneveld) the first ALGOL 60 compiler, and designed and implemented with his colleagues the influential THE operating system. Dijkstra invented the field of concurrent algorithms, with concepts such as mutual exclusion, deadlock detection, and synchronization. A prolific writer and forceful proponent of the concept of structured programming, he convincingly argued against the use of the Go To statement. In 1972 he was awarded the ACM Turing Award for "fundamental contributions to programming as a high, intellectual challenge; for eloquent insistence and practical demonstration that programs should be composed correctly, not just debugged into correctness; for illuminating perception of problems at the foundations of program design."

Subsequently he invented the concept of self-stabilization, relevant to fault-tolerant computing. He also devised an elegant language for nondeterministic programming and its weakest precondition semantics, featured in his influential 1976 book A Discipline of Programming, in which he advocated the development of programs in concert with their correctness proofs. In the later stages of his life, he devoted much attention to the development and presentation of mathematical proofs, providing further support to his long-held view that the programming process should be viewed as a mathematical activity.

In this unique new book, 31 computer scientists, including five recipients of the Turing Award, present and discuss Dijkstra's numerous contributions to computing science and assess their impact. Several authors knew Dijkstra as a friend, teacher, lecturer, or colleague. Their biographical essays and tributes provide a fascinating multi-author picture of Dijkstra, from the early days of his career up to the end of his life.

Circuits, Packets, and Protocols: Entrepreneurs and Computer Communications, 1968–1988

By: James L. Pelkey, Andrew L. Russell, Loring Robbins

eBook: 9781450397285  |  Paperback: 9781450397278  |  Hardcover: 9781450397261
DOI: 10.1145/3502372
Table of Contents

eBook: $48.00  |  Paperback: $59.95  |  Hardcover: $79.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

As recently as 1968, computer scientists were uncertain how best to interconnect even two computers. The notion that within a few decades the challenge would be how to interconnect millions of computers around the globe was too far-fetched to contemplate. Yet, by 1988, that was precisely what was happening. The products and devices developed in the intervening years—such as modems, multiplexers, local area networks, and routers—became the linchpins of the global digital society. How did such revolutionary innovation occur? This book tells the story of the entrepreneurs who were able to harness and join two factors: the energy of computer science researchers supported by governments and universities, and the tremendous commercial demand for internetworking computers. The centerpiece of this history comes from unpublished interviews from the late 1980s with over 80 computing industry pioneers, including Paul Baran, J.C.R. Licklider, Vint Cerf, Robert Kahn, Larry Roberts, and Robert Metcalfe. These individuals give us unique insights into the creation of multi-billion dollar markets for computer-communications equipment, and they reveal how entrepreneurs struggled with failure, uncertainty, and the limits of knowledge.

Probabilistic and Causal Inference: The Works of Judea Pearl

By: Hector Geffner, Rina Dechter, Joseph Y. Halpern

eBook: 9781450395885  |  Paperback: 9781450395878  |  Hardcover: 9781450395861
DOI: 10.1145/3501714
Table of Contents

eBook: $80.00  |  Paperback: $99.95  |  Hardcover: $119.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Professor Judea Pearl won the 2011 Turing Award “for fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning.” This book contains the original articles that led to the award, as well as other seminal works, divided into four parts: heuristic search; probabilistic reasoning; causality, first period (1988–2001); and causality, recent period (2002–2020). Each of these parts starts with an introduction written by Judea Pearl. The volume also contains original, contributed articles by leading researchers that analyze, extend, or assess the influence of Pearl’s work in different fields: from AI, Machine Learning, and Statistics to Cognitive Science, Philosophy, and the Social Sciences. The volume opens with a biography, a transcript of his Turing Award Lecture, two interviews, and a selected bibliography annotated by him.

Applied Affective Computing

By: Leimin Tian, Sharon Oviatt, Michal Muszynski, Brent C. Chamberlain, Jennifer Healey, Akane Sano

eBook: 9781450395922  |  Paperback: 9781450395915  |  Hardcover: 9781450395908
DOI: 10.1145/3502398
Table of Contents

eBook: $60.00  |  Paperback: $79.95  |  Hardcover: $99.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Affective computing is a nascent field situated at the intersection of artificial intelligence with social and behavioral science. It studies how human emotions are perceived and expressed, which then informs the design of intelligent agents and systems that can either mimic this behavior to improve their intelligence or incorporate such knowledge to effectively understand and communicate with their human collaborators. Affective computing research has recently seen significant advances and is making a critical transformation from exploratory studies to real-world applications in the emerging research area known as applied affective computing.

This book offers readers an overview of the state-of-the-art and emerging themes in affective computing, including a comprehensive review of the existing approaches to affective computing systems and social signal processing. It provides in-depth case studies of applied affective computing in various domains, such as social robotics and mental well-being. It also addresses ethical concerns related to affective computing and how to prevent misuse of the technology in research and applications. Further, this book identifies future directions for the field and summarizes a set of guidelines for developing next-generation affective computing systems that are effective, safe, and human-centered.

For researchers and practitioners new to affective computing, this book will serve as an introduction to the field to help them in identifying new research topics or developing novel applications. For more experienced researchers and practitioners, the discussions in this book provide guidance for adopting a human-centered design and development approach to advance affective computing.

Theories of Programming: The Life and Works of Tony Hoare

By: Cliff B. Jones, Jayadev Misra

eBook: 9781450387309  |  Paperback: 9781450387293  |  Hardcover: 9781450387286
DOI: 10.1145/3477355
Table of Contents

eBook: $32.00  |  Paperback: $39.95  |  Hardcover: $59.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Sir Tony Hoare has had an enormous influence on computer science, from the Quicksort algorithm to the science of software development, concurrency and program verification. His contributions have been widely recognised: He was awarded the ACM’s Turing Award in 1980, the Kyoto Prize from the Inamori Foundation in 2000, and was knighted for “services to education and computer science” by Queen Elizabeth II of England in 2000.

This book presents the essence of his various works—the quest for effective abstractions—both in his own words as well as chapters written by leading experts in the field, including many of his research collaborators. In addition, this volume contains biographical material, his Turing award lecture, the transcript of an interview and some of his seminal papers.

The Handbook on Socially Interactive Agents: 20 years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics Volume 1: Methods, Behavior, Cognition

Edited By: Birgit Lugrin, Catherine Pelachaud, David Traum

eBook: 9781450387224  |  Paperback: 9781450387217  |  Hardcover: 9781450387200
DOI: 10.1145/3477322
Table of Contents

eBook: $48.00  |  Paperback: $59.95  |  Hardcover: $79.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

The research areas of Intelligent Virtual Agents (IVAs), Embodied Conversational Agents (ECAs), Socially Intelligent Agents (SIAs) and Social Robotics (SRs) have a common goal: to develop artificial agents (with either a physical or virtual embodiment) that are able to interact with human users and each other in a natural and intuitive manner. While the communities researching these areas are aware of one another in principle, they do not yet benefit from each other’s research as much as they could. This comprehensive handbook on Socially Interactive Agents (SIAs) summarizes the research that has taken place over the last 20 years in the fields of IVAs, ECAs (terms that will be used interchangeably in the following) and SRs, highlighting the similarities and differences between the different types of agents. With it, the editors aim to bring the communities closer together and to close the gap between these closely related research fields.

Software: A Technical History

By: Kim W. Tracy

eBook: 9781450387262  |  Paperback: 9781450387255  |  Hardcover: 9781450387248
DOI: 10.1145/3477339
Table of Contents

eBook: $28.00  |  Paperback: $39.95  |  Hardcover: $59.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Software history has a deep impact on current software designers, computer scientists, and technologists. System constraints imposed in the past and the designs that responded to them are often unknown or poorly understood by students and practitioners, yet modern software systems often include “old” software and “historical” programming techniques. This work looks at software history through specific software areas to develop student-consumable practices, design principles, lessons learned, and trends useful in current and future software design. It also exposes key areas that are widely used in modern software, yet infrequently taught in computing programs. Written as a textbook, this book uses specific cases from the past and present to explore the impact of software trends and techniques.

Building on concepts from the history of science and technology, software history examines such areas as fundamentals, operating systems, programming languages, programming environments, networking, and databases. These topics are covered from their earliest beginnings to their modern variants. There are focused case studies on UNIX, APL, SAGE, GNU Emacs, Autoflow, internet protocols, System R, and others. Extensive problems and suggested projects enable readers to deeply delve into the history of software in areas that interest them most.

Event Mining for Explanatory Modeling

By: Laleh Jalali, Ramesh Jain

eBook: 9781450384858  |  Paperback: 9781450384834  |  Hardcover: 9781450384827
DOI: 10.1145/3462257
Table of Contents

eBook: $24.00  |  Paperback: $29.95  |  Hardcover: $49.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

This book introduces the concept of Event Mining for building explanatory models from analyses of correlated data. Such a model may be used as the basis for predictions and corrective actions. The idea is to create, via an iterative process, a model that explains causal relationships in the form of structural and temporal patterns in the data. The first phase is the data-driven process of hypothesis formation, requiring the analysis of large amounts of data to find strong candidate hypotheses. The second phase is hypothesis testing, wherein a domain expert’s knowledge and judgment are used to test and modify the candidate hypotheses.

The book will be useful for both practitioners and researchers working in different computer science fields. Data miners/scientists and data analysts can benefit from the high-performance event mining techniques introduced in this book. The book is also accessible to readers without strong backgrounds in computer science: public health professionals, epidemiologists, physicians, and social scientists can benefit from its new perspective on harnessing the value of heterogeneous big data to build diverse real-life applications.

Intelligent Computing for Interactive System Design: Statistics, Digital Signal Processing and Machine Learning in Practice

By: Parisa Eslambolchilar, Andreas Komninos, and Mark Dunlop

eBook: 9781450390286  |  Paperback: 9781450390262  |  Hardcover: 9781450390293
DOI: 10.1145/3447404
Table of Contents

eBook: $56.00  |  Paperback: $69.95  |  Hardcover: $89.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Intelligent Computing for Interactive System Design provides a comprehensive resource on what has become the dominant paradigm in designing novel interaction methods, involving gestures, speech, text, touch and brain-controlled interaction, embedded in innovative and emerging human-computer interfaces. These interfaces support ubiquitous interaction with applications and services running on smartphones, wearables, in-vehicle systems, virtual and augmented reality, robotic systems, the Internet of Things (IoT), and many other domains that are now highly competitive, both in commercial and in research contexts.

This book presents the crucial theoretical foundations needed by any student, researcher or practitioner working on novel interface design, with chapters on statistical methods, digital signal processing (DSP) and machine learning (ML). These foundations are followed by chapters that discuss case studies on smart cities, brain computer interfaces, probabilistic mobile text entry, secure gestures, personal context from mobile phones, adaptive touch interfaces and automotive user interfaces. The case study chapters also take an in-depth look at the practical application of DSP and ML methods to the processing of touch, gesture, biometric or embedded sensor inputs. A common theme throughout the case studies is ubiquitous support for humans in their daily professional or personal activities.

In addition, the book provides walk-through examples of different DSP and ML techniques and their use in interactive systems. Common terms are defined, and information on practical resources is provided (e.g., software tools, data resources) for hands-on project work to develop and evaluate multimodal and multi-sensor systems. In a series of in-chapter commentary boxes, an expert on legal and ethical issues explores the professional community’s emerging concerns about how DSP and ML should be adopted and used in socially appropriate ways, to most effectively advance human performance during ubiquitous interaction with omnipresent computers.

This carefully edited collection is written by international experts and pioneers in the field of DSP and ML. It provides a textbook for students, and a reference and technology roadmap for developers and professionals working in interaction design on emerging platforms.

Semantic Web for the Working Ontologist, Third Edition: Effective Modeling in RDFS and OWL

By: James Hendler, Dean Allemang, and Fabien Gandon

eBook: 9781450376167  |  Paperback: 9781450376143  |  Hardcover: 9781450376174
DOI: 10.1145/3382097
Table of Contents

eBook: $50.00  |  Paperback: $59.95  |  Hardcover: $79.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

This book discusses the capabilities of Linked Data and Semantic Web modeling languages, such as RDFS (Resource Description Framework Schema) and OWL (Web Ontology Language), as well as more recent standards based on these. It provides many exercises and worked examples that illustrate the use of Semantic Web technologies to solve common modeling problems.

The book provides an overview of the Semantic Web and the aspects of the Web and its architecture relevant to Linked Data. It then discusses semantic modeling and how it can support the evolution from chaotic information gathering to an environment characterized by information sharing, cooperation, and collaboration. It also explains the use of RDF and linked data to implement the Semantic Web by allowing information to be distributed over the Web or over intranets, along with the use of SPARQL to access RDF data.
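
The RDF-and-SPARQL combination mentioned above boils down to storing data as subject–predicate–object triples and querying them by pattern matching. The following toy sketch in pure Python (the `ex:` names and the `match` helper are invented for illustration; real deployments use an RDF store and actual SPARQL) shows the idea:

```python
# A tiny set of RDF-style (subject, predicate, object) triples.
triples = {
    ("ex:Shakespeare", "ex:wrote", "ex:Hamlet"),
    ("ex:Shakespeare", "ex:wrote", "ex:Macbeth"),
    ("ex:Hamlet",      "rdf:type", "ex:Play"),
    ("ex:Macbeth",     "rdf:type", "ex:Play"),
}

def match(pattern):
    """Match one (s, p, o) pattern; None plays the role of a SPARQL variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Rough analogue of: SELECT ?work WHERE { ex:Shakespeare ex:wrote ?work }
works = sorted(t[2] for t in match(("ex:Shakespeare", "ex:wrote", None)))
print(works)  # ['ex:Hamlet', 'ex:Macbeth']
```

A real SPARQL engine generalizes this to joins over many such patterns, but the triple-matching core is the same.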

Moreover, the reader is introduced to the components that make up a Semantic Web deployment and how they fit together, the concept of inferencing in the Semantic Web, and how RDFS differs from other schema languages. The 2015 “Linked Data Platform” standard is also explored. The book also considers the use of SKOS (Simple Knowledge Organization System) to manage vocabularies by taking advantage of the inferencing structure of RDFS-Plus. It also presents SHACL, a language for checking graph constraints in linked data systems, and a number of useful ontologies, including schema.org, the most successfully deployed Semantic Web technology to date.

This book is intended for the linked data and semantic Web practitioner looking for clues on how to add more expressivity to allow better linking and use both on the web and in the enterprise, and for the working ontologist who is trying to create a domain model on the Semantic Web.

Code Nation: Personal Computing and the Learn to Program Movement in America

By: Michael J. Halvorson

eBook: 9781450377560  |  Paperback: 9781450377577  |  Hardcover: 9781450377584
DOI: 10.1145/3368274
Table of Contents

eBook: $32.00  |  Paperback: $39.95  |  Hardcover: $59.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Code Nation is a popular history of programming and software culture from the first years of personal computing in the 1970s to the early commercial infrastructure of the World Wide Web. This illustration-rich book offers profiles of ACM members and luminaries who have had an important influence on programming practices, as well as the formative experiences of students, power users, and tinkerers who learned to code on early PCs and built captivating games and applications.

Central to this history is the learn to program movement, an educational agenda that germinated in government labs, gained momentum through business and counterculture experiments, and became a broad-based computer literacy movement in the 1970s and 80s.

Despite conflicts about languages, operating systems, and professional practices, the number of active programmers in America jumped from tens of thousands in the late 1950s to tens of millions by the early 1990s. This surge created a groundswell of popular support for programming culture, resulting in a “Code Nation”—a globally-connected society saturated with computer software and enchanted by its use.

Computing and the National Science Foundation, 1950–2016: Building a Foundation for Modern Computing

By: Peter A. Freeman, W. Richards Adrion, William Aspray

eBook: 9781450372756  |  Paperback: 9781450372763  |  Hardcover: 9781450372770
DOI: 10.1145/3336323
Table of Contents

eBook: $32.00  |  Paperback: $39.95  |  Hardcover: $59.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

This organizational history relates the role of the National Science Foundation (NSF) in the development of modern computing. Drawing upon new and existing oral histories, extensive use of NSF documents, and the experience of two of the authors as senior managers, this book describes how NSF’s programmatic activities originated and evolved to become the primary source of funding for fundamental research in computing and information technologies.

The book traces how NSF’s support has provided facilities and education for computing usage by all scientific disciplines, aided in institution and professional community building, supported fundamental research in computer science and allied disciplines, and led the efforts to broaden participation in computing by all segments of society.

Today, the research and infrastructure facilitated by NSF computing programs are significant economic drivers of American society and industry. For example, NSF supported work that led to the first widely-used web browser, Netscape; sponsored the creation of algorithms at the core of the Google search engine; facilitated the growth of the public Internet; and funded research on the scientific basis for countless other applications and technologies. NSF has advanced the development of human capital and ideas for future advances in computing and its applications.

This account is the first comprehensive coverage of NSF’s role in the extraordinary growth and expansion of modern computing and its use. It will appeal to historians of computing, policy makers and leaders in government and academia, and individuals interested in the history and development of computing and the NSF.

Providing Sound Foundations for Cryptography: On the Work of Shafi Goldwasser and Silvio Micali

By: Oded Goldreich

eBook: 9781450372688  |  Paperback: 9781450372671  |  Hardcover: 9781450372664
DOI: 10.1145/3335741
Table of Contents

eBook: $80.00  |  Paperback: $99.95  |  Hardcover: $119.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Cryptography is concerned with the construction of schemes that withstand any abuse. A cryptographic scheme is constructed so as to maintain a desired functionality, even under malicious attempts aimed at making it deviate from its prescribed behavior. The design of cryptographic systems must be based on firm foundations, whereas ad hoc approaches and heuristics are a very dangerous way to go. These foundations were developed mostly in the 1980s, in works that are all co-authored by Shafi Goldwasser and/or Silvio Micali. These works have transformed cryptography from an engineering discipline, lacking sound theoretical foundations, into a scientific field possessing a well-founded theory, which influences practice as well as contributes to other areas of theoretical computer science.

This book celebrates these works, which were the basis for bestowing the 2012 A.M. Turing Award upon Shafi Goldwasser and Silvio Micali. A significant portion of this book reproduces some of these works, and another portion consists of scientific perspectives by some of their former students. The highlight of the book is provided by a few chapters that allow the readers to meet Shafi and Silvio in person. These include interviews with them, their biographies and their Turing Award lectures.

Concurrency: The Works of Leslie Lamport

By: Dahlia Malkhi

eBook: 9781450372725  |  Paperback: 9781450372718  |  Hardcover: 9781450372701
DOI: 10.1145/3335772
Table of Contents

eBook: $64.00  |  Paperback: $79.95  |  Hardcover: $99.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

This book is a celebration of Leslie Lamport’s work on concurrency, interwoven in four-and-a-half decades of an evolving industry: from the introduction of the first personal computer to an era when parallel and distributed multiprocessors are abundant. His works lay formal foundations for concurrent computations executed by interconnected computers. Some of the algorithms have become standard engineering practice for fault-tolerant distributed computing – distributed systems that continue to function correctly despite failures of individual components. He also developed a substantial body of work on the formal specification and verification of concurrent systems, and has contributed to the development of automated tools applying these methods.

Part I consists of the book’s technical chapters and a biography. The technical chapters present a retrospective on Lamport’s original ideas from experts in the field. Through this lens, the book portrays their long-lasting impact. The chapters cover timeless notions Lamport introduced: the Bakery algorithm, atomic shared registers and sequential consistency; causality and logical time; Byzantine Agreement; state machine replication and Paxos; the temporal logic of actions (TLA). The professional biography tells of Lamport’s career, providing the context in which his work arose and broke new ground, and discusses LaTeX – perhaps Lamport’s most influential contribution outside the field of concurrency. This chapter gives a voice to the people behind the achievements, notably Lamport himself, and the colleagues around him who inspired him, collaborated with him, and helped him drive worldwide impact. Part II consists of a selection of Leslie Lamport’s most influential papers.

This book touches on a lifetime of contributions by Leslie Lamport to the field of concurrency and on the extensive influence he has had on people working in the field. It will be of value to historians of science, and to researchers and students who work in the area of concurrency and who are interested in reading about the work of one of the most influential researchers in this field.

The Essentials of Modern Software Engineering: Free the Practices from the Method Prisons

By: Ivar Jacobson, Harold “Bud” Lawson, Pan-Wei Ng, Paul E. McMahon, Michael Goedicke

eBook: 9781947487260  |  Paperback: 9781947487246  |  Hardcover: 9781947487277
DOI: 10.1145/3277669
Table of Contents

eBook: $56.00  |  Paperback: $79.95  |  Hardcover: $99.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

The first course in software engineering is the most critical. Education must start from an understanding of the heart of software development, from familiar ground that is common to all software development endeavors. This book is an in-depth introduction to software engineering that uses a systematic, universal kernel to teach the essential elements of all software engineering methods.

This kernel, Essence, is a vocabulary for defining methods and practices. Essence was envisioned and originally created by Ivar Jacobson and his colleagues, developed by Software Engineering Method and Theory (SEMAT) and approved by The Object Management Group (OMG) as a standard in 2014. Essence is a practice-independent framework for thinking and reasoning about the practices we have and the practices we need. Essence establishes a shared and standard understanding of what is at the heart of software development. Essence is agnostic to any particular method, lifecycle independent, programming language independent, concise, scalable, extensible, and formally specified. Essence frees the practices from their method prisons.

The first part of the book describes Essence: the essential elements to work with, the essential things to do and the essential competencies you need when developing software. The other three parts describe progressively more advanced uses of Essence. Using real but manageable examples, the book covers the fundamentals of Essence and the innovative use of serious games to support software engineering. It also explains how current practices such as user stories, use cases, Scrum, and micro-services can be described using Essence, and illustrates how their activities can be represented using the Essence notions of cards and checklists. The fourth part of the book offers a vision of how Essence can be scaled to support large, complex systems engineering.

Essence is supported by an ecosystem developed and maintained by a community of experienced people worldwide. From this ecosystem, professors and students can select what they need and create their own way of working, thus learning how to create ONE way of working that matches the particular situation and needs.

Data Cleaning

By: Ihab Ilyas

eBook: 9781450371544  |  Paperback: 9781450371537  |  Hardcover: 9781450371520
DOI: 10.1145/3310205
Table of Contents

eBook: $56.00  |  Paperback: $69.95  |  Hardcover: $89.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Data quality is one of the most important problems in data management, since dirty data often leads to inaccurate analytics results and incorrect business decisions. Poor data across businesses and the U.S. government is reported to cost trillions of dollars a year. Multiple surveys show that dirty data is the most common barrier faced by data scientists. Not surprisingly, developing effective and efficient data cleaning solutions is challenging and rife with deep theoretical and engineering problems.

This book is about data cleaning, a term used to refer to all of the tasks and activities involved in detecting and repairing errors in data. Rather than focus on a particular data cleaning task, we give an overview of the end-to-end data cleaning process, describing various error detection and repair methods, and attempt to anchor these proposals with multiple taxonomies and views. Specifically, we cover four of the most common and important data cleaning tasks, namely, outlier detection, data transformation, error repair (including imputing missing values), and data deduplication. Furthermore, due to the increasing popularity and applicability of machine learning techniques, we include a chapter that specifically explores how machine learning techniques are used for data cleaning, and how data cleaning is used to improve machine learning models.
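
Two of the tasks named above, outlier detection and deduplication, can be illustrated with minimal self-contained sketches. These are illustrative toy versions (the 1.5×IQR threshold and the name-normalization rule are common conventions chosen here for the example, not the book's algorithms):

```python
def iqr_outliers(values):
    """Flag values falling outside 1.5 * IQR beyond the quartiles."""
    xs = sorted(values)
    n = len(xs)
    q1, q3 = xs[n // 4], xs[(3 * n) // 4]   # crude quartile estimates
    spread = 1.5 * (q3 - q1)
    lo, hi = q1 - spread, q3 + spread
    return [x for x in values if x < lo or x > hi]

def dedupe(records):
    """Keep the first record for each normalized name, drop later duplicates."""
    seen, kept = set(), []
    for r in records:
        key = r["name"].strip().lower()     # toy matching rule
        if key not in seen:
            seen.add(key)
            kept.append(r)
    return kept

print(iqr_outliers([10, 11, 12, 11, 10, 95]))        # [95]
print(dedupe([{"name": "Ada"}, {"name": " ada "}]))  # [{'name': 'Ada'}]
```

Production data cleaning replaces these heuristics with statistically grounded detectors and learned similarity functions, which is precisely the territory the book surveys.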

This book is intended to serve as a useful reference for researchers and practitioners who are interested in the area of data quality and data cleaning. It can also be used as a textbook for a graduate course. Although we aim at covering state-of-the-art algorithms and techniques, we recognize that data cleaning is still an active field of research and therefore provide future directions of research whenever appropriate.

Hardness of Approximation Between P and NP

By: Aviad Rubinstein

eBook: 9781947487222  |  Paperback: 9781947487208  |  Hardcover: 9781947487239
DOI: 10.1145/3241304
Table of Contents

eBook: $72.00  |  Paperback: $89.95  |  Hardcover: $109.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Nash equilibrium is the central solution concept in Game Theory. Since Nash’s original paper in 1951, it has found countless applications in modeling strategic behavior of traders in markets, (human) drivers and (electronic) routers in congested networks, nations in nuclear disarmament negotiations, and more. A decade ago, the relevance of this solution concept was called into question by computer scientists, who proved (under appropriate complexity assumptions) that computing a Nash equilibrium is an intractable problem. And if centralized, specially designed algorithms cannot find Nash equilibria, why should we expect distributed, selfish agents to converge to one? The remaining hope was that at least approximate Nash equilibria can be efficiently computed.

Understanding whether there is an efficient algorithm for approximate Nash equilibrium has been the central open problem in this field for the past decade. In this book, we provide strong evidence that even finding an approximate Nash equilibrium is intractable. We prove several intractability theorems for different settings (two-player games and many-player games) and models (computational complexity, query complexity, and communication complexity). In particular, our main result is that under a plausible and natural complexity assumption (“Exponential Time Hypothesis for PPAD”), there is no polynomial-time algorithm for finding an approximate Nash equilibrium in two-player games.

The problem of approximate Nash equilibrium in a two-player game poses a unique technical challenge: it is a member of the class PPAD, which captures the complexity of several fundamental total problems, i.e., problems that always have a solution; and it also admits a quasipolynomial time algorithm. Either property alone is believed to place this problem far below NP-hard problems in the complexity hierarchy; having both simultaneously places it just above P, at what can be called the frontier of intractability. Indeed, the tools we develop in this book to advance on this frontier are useful for proving hardness of approximation of several other important problems whose complexity lies between P and NP: Brouwer’s fixed point, market equilibrium, CourseMatch (A-CEEI), densest k-subgraph, community detection, VC dimension and Littlestone dimension, and signaling in zero-sum games.

Conversational UX Design: A Practitioner's Guide to the Natural Conversation Framework

By: Robert J. Moore, Raphael Arar

eBook: 9781450363037  |  Paperback: 9781450363020  |  Hardcover: 9781450363013
DOI: 10.1145/3304087
Table of Contents

eBook: $56.00  |  Paperback: $69.95  |  Hardcover: $89.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

With recent advances in natural language understanding techniques and far-field microphone arrays, natural language interfaces, such as voice assistants and chatbots, are emerging as a popular new way to interact with computers. They have made their way out of industry research labs and into the pockets, desktops, cars, and living rooms of the general public. But although such interfaces recognize bits of natural language, and even voice input, they generally lack conversational competence: the ability to engage in natural conversation. Today’s platforms provide sophisticated tools for analyzing language and retrieving knowledge, but they fail to provide adequate support for modeling interaction. The user experience (UX) designer or software developer must figure out how a human conversation is organized, usually relying on common sense rather than on formal knowledge. Fortunately, practitioners can rely on conversation science.

This book adapts formal knowledge from the field of Conversation Analysis (CA) to the design of natural language interfaces. It outlines the Natural Conversation Framework (NCF), developed at IBM Research, a systematic framework for designing interfaces that work like natural conversation. The NCF consists of four main components: 1) an interaction model of “expandable sequences,” 2) a corresponding content format, 3) a pattern language with 100 generic UX patterns, and 4) a navigation method of six basic user actions. The authors introduce UX designers to a new way of thinking about user experience design in the context of conversational interfaces, including a new vocabulary, new principles, and new interaction patterns. User experience designers and graduate students in the HCI field, as well as developers and conversation analysis students, should find this book of interest.

Heterogeneous Computing: Hardware and Software Perspectives

By: Mohamed Zahran

eBook: 9781450360982  |  Paperback: 9781450362337  |  Hardcover: 9781450360975
DOI: 10.1145/3281649
Table of Contents

eBook: $32.00  |  Paperback: $39.95  |  Hardcover: $59.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

If you look around, you will find that all computer systems, from portable devices to the most powerful supercomputers, are heterogeneous in nature. The most obvious heterogeneity is the presence of computing nodes of different capabilities (e.g., multicore CPUs, GPUs, FPGAs). But other forms of heterogeneity exist in computing systems as well, such as in memory system components and interconnects. The main reason for these different types of heterogeneity is to achieve good performance with power efficiency.

Heterogeneous computing results in both challenges and opportunities. This book discusses both. It shows that we need to deal with these challenges at all levels of the computing stack: from algorithms all the way to process technology. We discuss the topic of heterogeneous computing from different angles: hardware challenges, current hardware state-of-the-art, software issues, how to make the best use of the current heterogeneous systems, and what lies ahead.

The aim of this book is to introduce the big picture of heterogeneous computing. Whether you are a hardware designer or a software developer, you need to know how the pieces of the puzzle fit together. The main goal is to bring researchers and engineers to the forefront of the research frontier in the new era that started a few years ago and is expected to continue for decades. We believe that academics, researchers, practitioners, and students will benefit from this book and will be prepared to tackle the big wave of heterogeneous computing that is here to stay.

Making Databases Work: The Pragmatic Wisdom of Michael Stonebraker

By: Michael Lawrence Brodie

eBook: 9781947487185  |  Paperback: 9781947487161  |  Hardcover: 9781947487192
DOI: 10.1145/3226595
Table of Contents

eBook: $80.00  |  Paperback: $99.95  |  Hardcover: $119.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

At the ACM Awards banquet in June 2017, during the 50th-anniversary celebration of the A.M. Turing Award, ACM announced the launch of the ACM A.M. Turing Book Series, a sub-series of ACM Books, to honor the winners of the A.M. Turing Award, computing’s highest honor and its “Nobel Prize.” The series aims to celebrate the accomplishments of awardees and to explain their major contributions of lasting importance in computing. This first book in the series celebrates the contributions and impact of Michael R. Stonebraker, the 2014 A.M. Turing Award winner: experts should value the book for its comprehensiveness, and non-experts for its account of his impact. What accomplishments warranted computing’s highest honor? How did Stonebraker do it? Who is Mike Stonebraker? Researcher, professor, CTO, lecturer, innovative product developer, serial entrepreneur, decades-long leader, and, as Phil Bernstein has said, research evangelist for the database community. This book gathers the enormous amount of published work on Mike and his contributions, evaluates it in light of the Turing Award, and places it in context.

The Handbook of Multimodal-Multisensor Interfaces, Volume II

By: Sharon Oviatt, Bjorn Schuller, Philip R. Cohen, Daniel Sonntag, Gerasimos Potamianos, Antonio Kruger

eBook: 9781970001709  |  Paperback: 9781970001686  |  Hardcover: 9781970001716
DOI: 10.1145/3107990
Table of Contents

eBook: $80.00  |  Paperback: $99.95  |  Hardcover: $119.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces—user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces that often include biosignals. This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This second volume of the handbook begins with multimodal signal processing, architectures, and machine learning. It includes recent deep learning approaches for processing multisensorial and multimodal user data and interaction, as well as context-sensitivity. A further highlight is processing of information about users’ states and traits, an exciting emerging capability in next-generation user interfaces. These chapters discuss real-time multimodal analysis of emotion and social signals from various modalities, and perception of affective expression by users. Further chapters discuss multimodal processing of cognitive state using behavioral and physiological signals to detect cognitive load, domain expertise, deception, and depression. This collection of chapters provides walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this rapidly expanding field. In the final section of this volume, experts exchange views on two timely and controversial challenge topics, including interdisciplinary approaches to optimizing strategic fusion and to multimodal deep learning. These discussions focus on how multimodal-multisensor interfaces are most likely to advance human performance during the next decade.


The Handbook of Multimodal-Multisensor Interfaces, Volume III

By: Sharon Oviatt, Bjorn Schuller, Philip R. Cohen, Daniel Sonntag, Gerasimos Potamianos, Antonio Kruger

eBook: 9781970001747  |  Paperback: 9781970001723  |  Hardcover: 9781970001754
DOI: 10.1145/3233795
Table of Contents

eBook: $80.00  |  Paperback: $119.95  |  Hardcover: $139.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces—user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces. This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This volume focuses on state-of-the-art multimodal language and dialogue processing, including semantic integration of modalities. The development of increasingly expressive embodied agents and robots has become an active test-bed for coordinating multimodal dialogue input and output, including processing of language and nonverbal communication. In addition, major application areas are featured for commercializing multimodal-multisensor systems, including automotive, robotic, manufacturing, machine translation, banking, communications, and others. These systems rely heavily on software tools, data resources, and international standards to facilitate their development. For insights into the future, emerging multimodal-multisensor technology trends are highlighted for medicine, robotics, interaction with smart spaces, and similar topics. Finally, this volume discusses the societal impact of more widespread adoption of these systems, such as privacy risks and how to mitigate them. The handbook chapters provide a number of walk-through examples of system design and processing, information on practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field. 
In the final section of this volume, experts exchange views on a timely and controversial challenge topic, and how they believe multimodal-multisensor interfaces need to be equipped to most effectively advance human performance during the next decade.

Declarative Logic Programming: Theory, Systems, and Applications

By: Michael Kifer and Yanhong Annie Liu

eBook: 9781970001983  |  Paperback: 9781970001969  |  Hardcover: 9781970001990
DOI: 10.1145/3191315
Table of Contents

eBook: $64.00  |  Paperback: $99.95  |  Hardcover: $119.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Logic Programming (LP) is at the nexus of Knowledge Representation, AI, Mathematical Logic, Databases, and Programming Languages. It allows programming to be more declarative, by specifying "what" to do instead of "how" to do it. This field is fascinating and intellectually stimulating due to the fundamental interplay among theory, systems, and applications brought about by logic.

Several books cover the basics of LP, but they focus mostly on the Prolog language. There is generally a lack of accessible collections of articles covering the key aspects of LP, such as the well-founded vs. stable semantics for negation, constraints, object-oriented LP, updates, probabilistic LP, and implementation methods, including top-down vs. bottom-up evaluation and tabling.

For systems, the situation is even less satisfactory, lacking expositions of LP inference machinery that supports tabling and other state-of-the-art implementation techniques. There is also a dearth of articles about systems that support truly declarative languages, especially those that tie into first-order logic, mathematical programming, and constraint programming. Also rare are surveys of challenging application areas of LP, such as Bioinformatics, Natural Language Processing, Verification, and Planning, as well as analyses of LP applications based on language abstractions and implementation methods.

The goal of this book is to help fill in the void in the literature with state-of-the-art surveys on key aspects of LP. Much attention was paid to making these surveys accessible to researchers, practitioners, and graduate students alike.

The Continuing Arms Race: Code-Reuse Attacks and Defenses

By: Per Larsen, Ahmad-Reza Sadeghi

eBook: 9781970001822  |  Paperback: 9781970001808  |  Hardcover: 9781970001839
DOI: 10.1145/3129743
Table of Contents

eBook: $72.00  |  Paperback: $79.95  |  Hardcover: $99.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

As human activities moved to the digital domain, so did all the well-known malicious behaviors, including fraud, theft, and other trickery. There is no silver bullet, and each security threat calls for a specific answer. One specific threat is that applications accept malformed inputs, and in many cases it is possible to craft inputs that let an intruder take full control over the target computer system. The nature of systems programming languages lies at the heart of the problem. Rather than rewriting decades of well-tested functionality, this book examines ways to live with the (programming) sins of the past while shoring up security in the most efficient manner possible. We explore a range of different options, each making significant progress towards securing legacy programs from malicious inputs. The solutions explored include enforcement-type defenses, which exclude certain program executions because they never arise during normal operation. Another strand explores the idea of presenting adversaries with a moving target that unpredictably changes its attack surface thanks to randomization. We also cover tandem execution ideas, where the compromise of one executing clone causes it to diverge from another, thus revealing adversarial activities. The main purpose of this book is to provide readers with some of the most influential works on run-time exploits and defenses. We hope that the material in this book will inspire readers and generate new ideas and paradigms.

The Sparse Fourier Transform: Theory & Practice

By: Haitham Hassanieh

eBook: 9781947487062  |  Paperback: 9781947487048  |  Hardcover: 9781947487079
DOI: 10.1145/3166186
Table of Contents

eBook: $64.00  |  Paperback: $79.95  |  Hardcover: $99.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

The Fourier transform is one of the most fundamental tools for computing the frequency representation of signals. It plays a central role in signal processing, communications, audio and video compression, medical imaging, genomics, and astronomy, as well as many other areas. Because of its widespread use, fast algorithms for computing the Fourier transform can benefit a large number of applications. The fastest algorithm for computing the Fourier transform is the Fast Fourier Transform (FFT), which runs in near-linear time, making it an indispensable tool for many applications. Today, however, the runtime of the FFT algorithm is no longer fast enough, especially for big data problems where each dataset can be a few terabytes. Hence, faster algorithms that run in sublinear time, i.e., that do not even sample all the data points, have become necessary.
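The two regimes contrasted above can be seen in miniature in the following sketch (a generic illustration, not code from the book): a direct DFT costs O(n^2) operations, the radix-2 FFT costs O(n log n), and a signal that is sparse in the frequency domain concentrates its spectrum in a few coefficients, which is the structure a sparse Fourier transform exploits to avoid reading all n samples.

```python
# Generic illustration: O(n^2) direct DFT vs. O(n log n) radix-2 FFT,
# applied to a frequency-sparse signal (all spectral energy at one frequency).
import cmath

def dft(x):
    """Direct DFT: n output coefficients, each summing over n inputs."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])

# A pure tone at frequency 3 is "sparse" in the Fourier domain:
# only one of the n output coefficients is nonzero.
n = 8
signal = [cmath.exp(2j * cmath.pi * 3 * t / n) for t in range(n)]
spectrum = fft(signal)
assert all(abs(a - b) < 1e-9 for a, b in zip(spectrum, dft(signal)))
print([round(abs(c)) for c in spectrum])  # → [0, 0, 0, 8, 0, 0, 0, 0]
```

Both routines compute the same spectrum; the FFT merely reuses shared subproblems. For signals like the one above, where only k of the n coefficients are significant, sublinear-time sparse algorithms can recover the spectrum without touching every sample, which is the regime this book develops.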

This book addresses the above problem by developing the Sparse Fourier Transform algorithms and building practical systems that use these algorithms to solve key problems in six different applications: wireless networks, mobile systems, computer graphics, medical imaging, biochemistry, and digital circuits.

This is a revised version of the thesis that won the 2016 ACM Doctoral Dissertation Award.

Frontiers of Multimedia Research

By: Shih-Fu Chang

eBook: 9781970001068  |  Paperback: 9781970001044  |  Hardcover: 9781970001075
DOI: 10.1145/3122865
Table of Contents

eBook: $72.00  |  Paperback: $89.95  |  Hardcover: $109.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

The field of multimedia is unique in offering a rich and dynamic forum for researchers from "traditional" fields to collaborate and develop new solutions and knowledge that transcend the boundaries of individual disciplines. Despite the prolific research activities and outcomes, however, few efforts have been made to develop books that serve as an introduction to the rich spectrum of topics covered by this broad field. A few books are available that either focus on specific subfields or basic background in multimedia. Tutorial-style materials covering the active topics being pursued by the leading researchers at frontiers of the field are currently lacking.

In 2015, ACM SIGMM, the special interest group on multimedia, launched a new initiative to address this void by selecting and inviting 12 rising-star speakers from different subfields of multimedia research to deliver plenary tutorial-style talks at the 2015 ACM Multimedia conference. Each speaker discussed the challenges and state-of-the-art developments of their respective research areas in a general manner for the broad community. The covered topics were comprehensive, including multimedia content understanding, multimodal human-human and human-computer interaction, multimedia social media, and multimedia system architecture and deployment.

Following the very positive responses to these talks, the speakers were invited to expand the content covered in their talks into chapters that can be used as reference material for researchers, students, and practitioners. Each chapter discusses the problems, technical challenges, state-of-the-art approaches and performance, open issues, and promising directions for future work. Collectively, the chapters provide an excellent sampling of major topics addressed by the community as a whole. This book, capturing some of the outcomes of such efforts, is well positioned to fill the aforementioned needs by providing tutorial-style reference materials for frontier topics in multimedia.

Computational Prediction of Protein Complexes from Protein Interaction Networks

By: Sriganesh Srihari, Chern Han Yong, and Limsoon Wong

eBook: 9781970001549  |  Paperback: 9781970001525  |  Hardcover: 9781970001556
DOI: 10.1145/3064650
Table of Contents

eBook: $64.00  |  Paperback: $89.95  |  Hardcover: $109.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Complexes of physically interacting proteins constitute fundamental functional units that drive biological processes within cells. A faithful identification of the entire set of complexes (the ‘complexosome’) is therefore essential to understand not only complex formation but also the functional organization of cells. Advances over the last several years, particularly through the use of high-throughput yeast two-hybrid and affinity purification-based experimental (proteomics) techniques, have extensively mapped interactions (the ‘interactome’) in model organisms, including Saccharomyces cerevisiae (budding yeast), Drosophila melanogaster (fruit fly), and Caenorhabditis elegans (roundworm). These interaction data have enabled systematic reconstruction of complexes in these organisms, thereby revealing novel insights into the constituents, assembly, and functions of complexes. Computational methods have played a significant role in these advancements by contributing more accurate, efficient, and exhaustive ways to analyse the enormous amounts of data, and by compensating for several of their limitations, including the presence of biological and technical noise and the lack of credible interactions (sparsity) arising from experimental protocols. In this book, we systematically walk through all the important computational methods devised to date (approximately between 2003 and 2015) for identifying complexes from the network of protein interactions (the PPI network).

We present a detailed taxonomy of these methods and comprehensively evaluate them for their ability to accurately identify complexes across a variety of scenarios, including the presence of noise in PPI networks and the inference of sparse complexes. By covering challenges these methods have faced more recently, for instance in identifying sub-complexes or small complexes and in discerning overlapping complexes, we reveal how a combination of strategies is required to accurately reconstruct the entire complexosome. The experience gained from model organisms is now paving the way for the identification of complexes from higher-order organisms, including Homo sapiens (human). In particular, with the increasing use of ‘pan-omics’ techniques spanning genomics, transcriptomics, proteomics, and metabolomics to map human cells across multiple layers of organization, the need to understand the rewiring of the interactome between conditions (e.g., between normal development and disease) and, consequently, the dynamic reorganization of complexes across these conditions is gaining immense importance. Towards this end, more recent computational methods have integrated these pan-omics datasets to decipher complexes in diseases, including cancer, which in turn has revealed novel insights into disease mechanisms and highlighted potential therapeutic targets. Here, we cover several of these latest methods, emphasizing how a fundamental problem such as complex identification can have far-reaching applications in understanding the biology underlying sophisticated functional and organizational transformations in cells.

Shared-Memory Parallelism Can be Simple, Fast, and Scalable

By: Julian Shun

eBook: 9781970001907  |  Paperback: 9781970001884  |  Hardcover: 9781970001914
DOI: 10.1145/3018787
Table of Contents

eBook: $80.00  |  Paperback: $89.95  |  Hardcover: $109.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Parallelism is the key to achieving high performance in computing. However, writing efficient and scalable parallel programs is notoriously difficult, and often requires significant expertise. To address this challenge, it is crucial to provide programmers with high-level tools to enable them to develop solutions easily, and at the same time emphasize the theoretical and practical aspects of algorithm design to enable the solutions developed to run efficiently under various settings. This book, a revised version of the thesis that won the 2015 ACM Doctoral Dissertation Award, addresses this challenge using a three-pronged approach consisting of the design of shared-memory programming techniques, frameworks, and algorithms for important problems in computing. The book provides evidence that with appropriate programming techniques, frameworks, and algorithms, shared-memory programs can be simple, fast, and scalable, both in theory and in practice. The results serve to ease the transition into the multicore era.

The book starts by introducing tools and techniques for deterministic parallel programming, including means for encapsulating nondeterminism via powerful commutative building blocks, as well as a novel framework for executing sequential iterative loops in parallel, which lead to deterministic parallel algorithms that are efficient both in theory and in practice. The book then introduces Ligra, the first high-level shared-memory framework for parallel graph traversal algorithms. The framework enables short and concise implementations that deliver performance competitive with that of highly-optimized code and up to orders of magnitude better than previous systems designed for distributed memory. Finally, the book bridges the gap between theory and practice in parallel algorithm design by introducing the first algorithms for a variety of important problems on graphs and strings that are efficient both in theory and in practice.

The Handbook of Multimodal-Multisensor Interfaces, Volume I

By: Sharon Oviatt, Bjorn Schuller, Philip R. Cohen, Daniel Sonntag, Gerasimos Potamianos, Antonio Kruger

eBook: 9781970001662  |  Paperback: 9781970001648  |  Hardcover: 9781970001679
DOI: 10.1145/3015783
Table of Contents

eBook: $80.00  |  Paperback: $99.95  |  Hardcover: $119.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces—user input involving new media (speech, multi-touch, gestures, writing) embedded in multimodal-multisensor interfaces. These interfaces support smartphones, wearables, in-vehicle, robotic, and many other applications that are now highly competitive commercially.

This edited collection is written by international experts and pioneers in the field. It provides a textbook for students, and a reference and technology roadmap for professionals working in this rapidly emerging area.

Volume 1 of the handbook presents relevant theory and neuroscience foundations for guiding the development of high-performance systems. Additional chapters discuss approaches to user modeling, interface design that supports user choice, synergistic combination of modalities with sensors, and blending of multimodal input and output. They also provide an in-depth look at the most common multimodal-multisensor combinations: for example, touch and pen input, haptic and non-speech audio output, and speech co-processed with visible lip movements, gaze, gestures, or pen input. A common theme throughout is support for mobility and individual differences among users, including the world’s rapidly growing population of seniors.

These handbook chapters provide walk-through examples and video illustrations of different system designs and their interactive use. Common terms are defined, and information on practical resources is provided (e.g., software tools, data resources) for hands-on project work to develop and evaluate multimodal-multisensor systems. In the final chapter, experts exchange views on a timely and controversial challenge topic, and how they believe multimodal-multisensor interfaces should be designed in the future to most effectively advance human performance.

Communities of Computing: Computer Science and Society in the ACM

By: Thomas J. Misa

eBook: 9781970001860  |  Paperback: 9781970001846  |  Hardcover: 9781970001877
DOI: 10.1145/2973856
Table of Contents

eBook: $48.00  |  Paperback: $69.95  |  Hardcover: $89.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Communities of Computing is the first book-length history of the Association for Computing Machinery (ACM), founded in 1947 and with a membership today of 100,000 worldwide. It profiles ACM’s notable SIGs, active chapters, and individual members, setting ACM’s history into a rich social and political context. The book’s 12 core chapters are organized into three thematic sections. “Defining the Discipline” examines the 1960s and 1970s when the field of computer science was taking form at the National Science Foundation, Stanford University, and through ACM’s notable efforts in education and curriculum standards. “Broadening the Profession” looks outward into the wider society as ACM engaged with social and political issues—and as members struggled with balancing a focus on scientific issues and awareness of the wider world.

Chapters examine the social turbulence surrounding the Vietnam War, debates about the women’s movement, efforts for computing and community education, and international issues including professionalization and the Cold War. “Expanding Research Frontiers” profiles three areas of research activity where ACM members and ACM itself shaped notable advances in computing, including computer graphics, computer security, and hypertext.

Featuring insightful profiles of notable ACM leaders, such as Edmund Berkeley, George Forsythe, Jean Sammet, Peter Denning, and Kelly Gotlieb, and honest assessments of controversial episodes, the volume deals with compelling and complex issues involving ACM and computing. It is not a narrow organizational history of ACM committees and SIGs, although much information about them is given. All chapters are original works of research. Many chapters draw on archival records of ACM’s headquarters, ACM SIGs, and ACM leaders. This volume makes a permanent contribution to documenting the history of ACM and understanding its central role in the history of computing.

Text Data Management and Analysis

By: ChengXiang Zhai, Sean Massung

eBook: 9781970001181  |  Paperback: 9781970001167  |  Hardcover: 9781970001198
DOI: 10.1145/2915031
Table of Contents

eBook: $45.00  |  Paperback: $99.95  |  Hardcover: $119.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Recent years have seen a dramatic growth of natural language text data, including web pages, news articles, scientific literature, emails, enterprise documents, and social media such as blog articles, forum posts, product reviews, and tweets. This has led to an increasing demand for powerful software tools to help people analyze and manage vast amounts of text data effectively and efficiently. Unlike data generated by a computer system or sensors, text data are usually generated directly by humans and are accompanied by semantically rich content. As such, text data are especially valuable for discovering knowledge about human opinions and preferences, in addition to many other kinds of knowledge that we encode in text. In contrast to structured data, which conform to well-defined schemas (and thus are relatively easy for computers to handle), text has less explicit structure, requiring computer processing to understand the content encoded in it. The current technology of natural language processing has not yet reached the point of enabling a computer to precisely understand natural language text, but a wide range of statistical and heuristic approaches to the analysis and management of text data have been developed over the past few decades. These approaches are usually very robust and can be applied to analyze and manage text data in any natural language, about any topic.

This book provides a systematic introduction to all these approaches, with an emphasis on covering the most useful knowledge and skills required to build a variety of practically useful text information systems. The focus is on text mining applications that can help users analyze patterns in text data to extract and reveal useful knowledge. Information retrieval systems, including search engines and recommender systems, are also covered as supporting technology for text mining applications. The book covers the major concepts, techniques, and ideas in text data mining and information retrieval from a practical viewpoint, and includes many hands-on exercises designed with a companion software toolkit (i.e., MeTA) to help readers learn how to apply techniques of text mining and information retrieval to real-world text data and how to experiment with and improve some of the algorithms for interesting application tasks. The book can be used as a textbook for a computer science undergraduate course or a reference book for practitioners working on relevant problems in analyzing and managing text data.

Reactive Internet Programming: State Chart XML in Action

By: Franck Barbier

eBook: 9781970001785  |  Paperback: 9781970001761  |  Hardcover: 9781970001792
DOI: 10.1145/2872585
Table of Contents

eBook: $61.00  |  Paperback: $89.95  |  Hardcover: $109.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Is Internet software so different from “ordinary” software? This book answers this question in practical terms by presenting a software design method based on the State Chart XML (SCXML) W3C standard along with Java. Web enterprise, Internet-of-Things, and Android applications, in particular, are seamlessly specified and implemented from “executable models.”

Internet software puts forward the idea of event-driven or reactive programming, as pointed out in Bonér et al.’s “Reactive Manifesto” (http://www.reactivemanifesto.org). It tells us that reactiveness is a must. However, beyond concepts, software engineers require effective means with which to put reactive programming into practice. This book’s purpose is to outline and explain such means.

The lack of professional examples in the literature that illustrate how reactive software should be shaped can be quite frustrating. Therefore, this book helps to fill in that gap by providing in-depth professional case studies that contain comprehensive details and meaningful alternatives. Furthermore, these case studies can be downloaded for further investigation.

Internet software requires greater adaptability, particularly at run time. After reading Reactive Internet Programming, the reader will therefore be ready to enter the forthcoming Internet era.

Reviews

If I may harken back to my initial disclaimer, namely, that I’m not an Internet programmer, I wish to say that in 1988 (in an aerospace venue) I urged my employer to purchase Harel’s StateMate tool. The proverbial bean counters did their job in rejecting the request. I’m happy to see this excellent book’s arrival, along with its open-source superstructure. It is very much worth reading, both for theory and for real-world application to complex event-driven systems.
George Hacken - Computing Reviews

An Architecture for Fast and General Data Processing on Large Clusters

By: Matei Zaharia

eBook: 9781970001587  |  Paperback: 9781970001563  |  Hardcover: 9781970001594
DOI: 10.1145/2886107
Table of Contents

eBook: $44.00  |  Paperback: $39.95  |  Hardcover: $59.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

The past few years have seen a major change in computing systems, as growing data volumes and stalling processor speeds require more and more applications to scale out to clusters. Today, a myriad of data sources, from the Internet to business operations to scientific instruments, produce large and valuable data streams. However, the processing capabilities of single machines have not kept up with the size of data. As a result, organizations increasingly need to scale out their computations over clusters.

At the same time, the speed and sophistication required of data processing have grown. In addition to simple queries, complex algorithms like machine learning and graph analysis are becoming common. And in addition to batch processing, streaming analysis of real-time data is required to let organizations take timely action. Future computing platforms will need to not only scale out traditional workloads, but support these new applications too.

This book, a revised version of the 2014 ACM Dissertation Award winning dissertation, proposes an architecture for cluster computing systems that can tackle emerging data processing workloads at scale. Whereas early cluster computing systems, like MapReduce, handled batch processing, our architecture also enables streaming and interactive queries, while keeping MapReduce's scalability and fault tolerance. And whereas most deployed systems only support simple one-pass computations (e.g., SQL queries), ours also extends to the multi-pass algorithms required for complex analytics like machine learning. Finally, unlike the specialized systems proposed for some of these workloads, our architecture allows these computations to be combined, enabling rich new applications that intermix, for example, streaming and batch processing.

We achieve these results through a simple extension to MapReduce that adds primitives for data sharing, called Resilient Distributed Datasets (RDDs). We show that this is enough to capture a wide range of workloads. We implement RDDs in the open source Spark system, which we evaluate using synthetic and real workloads. Spark matches or exceeds the performance of specialized systems in many domains, while offering stronger fault tolerance properties and allowing these workloads to be combined. Finally, we examine the generality of RDDs from both a theoretical modeling perspective and a systems perspective.

This version of the dissertation makes corrections throughout the text and adds a new section on the evolution of Apache Spark in industry since 2014. In addition, editing, formatting, and links for the references have been added.

Verified Functional Programming in Agda

By: Aaron Stump

eBook: 9781970001266  |  Paperback: 9781970001242  |  Hardcover: 9781970001273
DOI: 10.1145/2841316
Table of Contents

eBook: $64.00  |  Paperback: $79.95  |  Hardcover: $99.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Agda is an advanced programming language based on Type Theory. Agda’s type system is expressive enough to support full functional verification of programs, in two styles. In external verification, we write pure functional programs and then write proofs of properties about them. The proofs are separate external artifacts, typically using structural induction. In internal verification, we specify properties of programs through rich types for the programs themselves. This often necessitates including proofs inside code, to show the type checker that the specified properties hold. The power to prove properties of programs in these two styles is a profound addition to the practice of programming, giving programmers the power to guarantee the absence of bugs, and thus improve the quality of software more than previously possible.

Verified Functional Programming in Agda is the first book to provide a systematic exposition of external and internal verification in Agda, suitable for undergraduate students of Computer Science. No familiarity with functional programming or computer-checked proofs is presupposed. The book begins with an introduction to functional programming through familiar examples like booleans, natural numbers, and lists, and techniques for external verification. Internal verification is considered through the examples of vectors, binary search trees, and Braun trees. More advanced material on type-level computation, explicit reasoning about termination, and normalization by evaluation is also included. The book also includes a medium-sized case study on Huffman encoding and decoding.

Reviews

Verified Functional Programming in Agda is an excellent introduction to the field of dependently typed programming. Stump does a great job of making the subject accessible to beginners without shying away from the more advanced topics.
Ulf Norell, Chalmers University of Technology, Sweden

Previously, when someone asked me how to learn dependently typed programming, I'd point them to various tutorials, papers, and blog posts about Agda. Now, I just give them this book.
Jesper Cockx, KU Leuven, Belgium

The VR Book: Human-Centered Design for Virtual Reality

By: Jason Jerald

eBook: 9781627051143  |  Paperback: 9781970001129  |  Hardcover: 9781970001150
DOI: 10.1145/2792790
Table of Contents

eBook: $64.00  |  Paperback: $79.95  |  Hardcover: $99.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Virtual reality (VR) potentially provides our minds with direct access to digital media in a way that at first seems to have no limits. However, creating compelling VR experiences is an incredibly complex challenge. When VR is done well, the results are brilliant and pleasurable experiences that go beyond what we can do in the real world. When VR is done badly, not only is the system frustrating to use, but sickness can result. The reasons for bad VR are numerous; some failures come from the limitations of technology, but many come from a lack of understanding of perception, interaction, design principles, and real users. This book discusses such issues, focusing on the human element of VR rather than on technical implementation, for if we do not get the human element correct, then no amount of technology will make VR anything more than an interesting tool confined to research laboratories. Even when VR principles are fully understood, first implementations are rarely novel and never ideal, due to the complex nature of VR and the countless possibilities. However, the VR principles discussed within enable us to intelligently experiment with the rules and iteratively design toward innovative experiences.

Chapter 1. Introduction

Chapter 2. Perception

Chapter 3. Cybersickness

Chapter 4. Interaction

Chapter 5. Content Creation

Chapter 6. Iterative Design

Chapter 7. Conclusions and the Future

Reviews

The definitive guide for creating VR user interactions.
Amir Rubin - CEO, Sixense


Conceptually comprehensive, yet at the same time practical and grounded in real-world experience.
Paul Mlyniec - President of Digital ArtForms and father of MakeVR

The summative guidelines provide quick access with back references for further understanding.
Chris Pusczak - Creative Director of SymbioVR

I was able to briefly preview it at SIGGRAPH this last summer and it seemed like it would be a good fit for my class. Now that I have had a chance to take a deeper dive, I am even more certain I will be using the book. I am looking forward to using this as a required text when I teach the course again.
Dr. David Whittinghill, Purdue University

Ada’s Legacy: Cultures of Computing from the Victorian to the Digital Age

By: Robin Hammerman and Andrew L. Russell

eBook: 9781970001501  |  Paperback: 9781970001488  |  Hardcover: 9781970001518
DOI: 10.1145/2809523
Table of Contents

eBook: $32.00  |  Paperback: $49.95  |  Hardcover: $69.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Ada's Legacy illustrates the depth and diversity of writers, thinkers, and makers who have been inspired by Ada Lovelace, the English mathematician and writer. The volume, which commemorates the bicentennial of Ada's birth in December 1815, celebrates Lovelace's many achievements as well as the impact of her life and work, which has reverberated widely since the late nineteenth century. The 21st century has seen a resurgence in Lovelace scholarship, due to the growth of interdisciplinary thinking and the expanding influence of women in science, technology, engineering, and mathematics. Ada's Legacy is a unique contribution to this scholarship, thanks to its combination of papers on Ada's collaboration with Charles Babbage, her position in the Victorian and Steampunk literary genres, her namesake programming language, her representation in and inspiration of contemporary art, and her continued relevance in discussions about gender and technology in the digital age.

Because of its broad focus on subjects that reach far beyond the life and work of Ada herself, Ada's Legacy will appeal to readers who are curious about her enduring importance in computing and the wider world.

Edmund Berkeley and the Social Responsibility of Computer Professionals

By: Bernadette Longo

eBook: 9781627051389  |  Paperback: 9781970001365  |  Hardcover: 9781970001396
DOI: 10.1145/2787754
Table of Contents

eBook: $49.00  |  Paperback: $79.95  |  Hardcover: $99.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Edmund C. Berkeley (1909–1988) was a mathematician, insurance actuary, inventor, publisher, and a founder of the Association for Computing Machinery (ACM). His book Giant Brains, or Machines That Think (1949) was the first explanation of computers for a general readership. His journal Computers and Automation (1951–1973) was the first journal for computer professionals. In the 1950s, Berkeley developed mail-order kits for small, personal computers such as Simple Simon and the Brainiac. In an era when computer development was on a scale barely affordable by universities or government agencies, Berkeley took a different approach and sold simple computer kits to average Americans. He believed that digital computers, using mechanized reasoning based on symbolic logic, could help people make more rational decisions. The result of this improved reasoning would be better social conditions and fewer large-scale wars. Although Berkeley’s populist notions of computer development in the public interest did not prevail, the events of his life exemplify the human side of ongoing debates concerning the social responsibility of computer professionals.

This biography of Edmund Berkeley, based on primary sources gathered over 15 years of archival research, provides a lens to understand social and political decisions surrounding early computer development, and the consequences of these decisions in our 21st century lives.

Candidate Multilinear Maps

By: Sanjam Garg

eBook: 9781627055482  |  Paperback: 9781627055376  |  Hardcover: 9781627055499
DOI: 10.1145/2714451
Table of Contents

eBook: $41.00  |  Paperback: $49.95  |  Hardcover: $69.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

Cryptography, to me, is the “black magic” of cryptographers, enabling tasks that often seem paradoxical or simply impossible. Like space explorers, we cryptographers often wonder: what are the boundaries of this world of “black magic”? This work lays one of the founding stones in furthering our understanding of these edges.

We describe plausible lattice-based constructions with properties that approximate the sought-after multilinear maps in hard-discrete-logarithm groups. The security of our constructions relies on seemingly hard problems in ideal lattices, which can be viewed as extensions of the assumed hardness of the NTRU function. These new constructions radically enhance our tool set and open a floodgate of applications, of which we present a survey. This book is based on my PhD thesis, which was an extended version of the paper “Candidate Multilinear Maps from Ideal Lattices,” co-authored with Craig Gentry and Shai Halevi and originally published at EUROCRYPT 2013.

Smarter Than Their Machines: Oral Histories of Pioneers in Interactive Computing

By: John Cullinane

eBook: 9781627055529  |  Paperback: 9781627055505  |  Hardcover: 9781627055536
DOI: 10.1145/2663015
Table of Contents

eBook: $48.00  |  Paperback: $69.95  |  Hardcover: $89.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

The oral histories of the pioneers who led the way to interactive computing have much to offer today's leaders in business, industry, and academia about how to get complex things done. After all, industry, government, and academia working together created the computer industry, which led to the Internet as we know it today. To do so, the pioneers had to get around the various “systems” of the day that are always impediments to implementing new ideas; in their case, it was the voice-dominated communications industry. Packet switching, for example, was invented to get around such an issue. This was key to allowing incompatible computers to “talk” to each other across academia, and later industry, which in turn would be the key to the Internet. Cullinane Corporation, the computer industry's first successful software products company, benefited from this technology as it focused on database software as the foundation for interactive computer systems for industry, government, and academia. This book is thus a personal walk through the history that led to interactive computing, as John Cullinane witnessed and participated in it, told with the help of the oral histories of some key pioneers, organized and introduced in a way that illustrates the close interaction of the individuals and organizations involved in the evolution of interactive computing. These oral histories, including John's, were drawn from an archive of over 300 such histories at the Charles Babbage Institute, University of Minnesota.

Embracing Interference in Wireless Systems

By: Shyamnath Gollakota

eBook: 9781627054768  |  Paperback: 9781627054744  |  Hardcover: 9781627055444
DOI: 10.1145/2611390
Table of Contents

eBook: $49.00  |  Paperback: $59.95  |  Hardcover: $79.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

The wireless medium is a shared resource. If nearby devices transmit at the same time, their signals interfere, resulting in a collision. In traditional networks, collisions cause the loss of the transmitted information. For this reason, wireless networks have been designed with the assumption that interference is intrinsically harmful and must be avoided.

This book takes an alternate approach: instead of viewing interference as an inherently counterproductive phenomenon that should be avoided, we design practical systems that transform interference into a harmless, and even beneficial, phenomenon. To achieve this goal, we consider how wireless signals interact when they interfere, and we use this understanding in our system designs. Specifically, when interference occurs, the signals get mixed on the wireless medium. By understanding the parameters of this mixing, we can invert the mixing and decode the interfered packets, thus making interference harmless. Furthermore, we can control the mixing process to create strategic interference that allows decodability at a particular receiver of interest but prevents decodability at unintended receivers and adversaries. Hence, we can transform interference into a beneficial phenomenon that provides security.

Trust Extension as a Mechanism for Secure Code Execution on Commodity Computers

By: Bryan Parno

eBook: 9781627054799  |  Paperback: 9781627054775  |  Hardcover: 9781627055451
DOI: 10.1145/2611399
Table of Contents

eBook: $48.00  |  Paperback: $59.95  |  Hardcover: $79.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

As society rushes to digitize sensitive information and services, it is imperative to adopt adequate security protections. However, such protections fundamentally conflict with the benefits we expect from commodity computers: consumers and businesses value commodity computers because they provide good performance and an abundance of features at relatively low cost. Meanwhile, attempts to build secure systems from the ground up typically abandon these goals, and hence are seldom adopted.

In this book we describe how to resolve the tension between security and features by leveraging the trust a user has in one device to enable her to securely use another commodity device or service, without sacrificing the performance and features expected of commodity systems. At a high level, we support this premise by developing techniques to allow a user to employ a small, trusted, portable device to securely learn what code is executing on her local computer. Rather than entrusting her data to the mountain of buggy code likely running on her computer, we construct an on-demand secure execution environment which can perform security-sensitive tasks and handle private data in complete isolation from all other software (and most hardware) on the system. Meanwhile, non-security-sensitive software retains the same abundance of features and performance it enjoys today.

A Framework for Scientific Discovery through Video Games

By: Seth Cooper

eBook: 9781627055062  |  Paperback: 9781627055048  |  Hardcover: 9781627055437
DOI: 10.1145/2625848
Table of Contents

eBook: $42.00  |  Paperback: $49.95  |  Hardcover: $69.95
ACM Members receive a 25% discount on all books, and Student Members receive a 30% discount. See the selections in our shopping cart by clicking Buy Individual Copy.

As science becomes increasingly computational, the limits of what is computationally tractable become a barrier to scientific progress. Many scientific problems, however, are amenable to human problem-solving skills that complement computational power. Leveraging these skills on a larger scale – beyond the relatively few individuals currently engaged in scientific inquiry – creates the potential for new scientific discoveries.

This book presents a framework for mapping open scientific problems into video games. The game framework combines computational power with human problem solving and creativity to work toward solving scientific problems that neither computers nor humans could previously solve alone. To maximize the potential pool of contributors to scientific discovery, the framework designs a game to be played by people with no formal scientific background and incentivizes long-term engagement with a myriad of collaborative and competitive reward structures. The framework also allows for the continual coevolution of the players and the game: as players gain expertise through gameplay, the game changes to become a better tool.

The framework is validated by being applied to proteomics problems with the video game Foldit. Foldit players have contributed to novel discoveries in protein structure prediction, protein design, and protein structure refinement algorithms. The coevolution of human problem solving and computer tools in an incentivized game framework is an exciting new scientific pathway that can lead to discoveries currently unreachable by other methods.

View Titles in Development