En-Law Book 6
Research Methods in Criminal Justice and
Criminology
Callie Marie Rennison
University of Colorado Denver
Timothy C. Hart
Griffith University
FOR INFORMATION:
SAGE Publications, Inc.
2455 Teller Road
Thousand Oaks, California 91320
E-mail: order@sagepub.com
SAGE Publications Ltd.
1 Oliver’s Yard
55 City Road
London EC1Y 1SP
United Kingdom
SAGE Publications India Pvt. Ltd.
B 1/I 1 Mohan Cooperative Industrial Area
Mathura Road, New Delhi 110 044
India
SAGE Publications Asia-Pacific Pte. Ltd.
3 Church Street
#10–04 Samsung Hub
Singapore 049483
Copyright © 2019 by SAGE Publications, Inc.
All rights reserved. No part of this book may be reproduced or utilized in any
form or by any means, electronic or mechanical, including photocopying,
recording, or by any information storage and retrieval system, without
permission in writing from the publisher.
All trademarks depicted within this book, including trademarks appearing as
part of a screenshot, figure, or other image are included solely for the purpose
of illustration and are the property of their respective holders. The use of the
trademarks in no way indicates any relationship with, or endorsement by, the
holders of said trademarks. SPSS is a registered trademark of International
Business Machines Corporation.
Printed in the United States of America
Library of Congress Cataloging-in-Publication Data
Names: Rennison, Callie Marie, author. | Hart, Timothy C., author.
Title: Research methods in criminal justice and criminology / Callie M.
Rennison, University of Colorado, Denver, Timothy C. Hart, Griffith
University.
Description: First Edition. | Thousand Oaks : SAGE Publications, [2018] |
Includes bibliographical references and index.
Identifiers: LCCN 2017042992 | ISBN 9781506347813 (pbk. : alk. paper)
Subjects: LCSH: Criminal justice, Administration of—Research—
Methodology. | Criminology—Research—Methodology.
Classification: LCC HV7419.5 .R46 2018 | DDC 364.072/1—dc23 LC record
available at https://lccn.loc.gov/2017042992
This book is printed on acid-free paper.
Acquisitions Editor: Jessica Miller
Content Development Editor: Laura Kirkhuff
Editorial Assistant: Rebecca Lee
Marketing Manager: Jillian Ragusa
Production Editor: Veronica Stapleton Hooper
Copy Editor: Sheree Van Vreede
Typesetter: C&M Digitals (P) Ltd.
Proofreader: Wendy Jo Dymond
Indexer: Jeanne R. Busemeyer
Cover Designer: Anupama Krishnan
Brief Contents
Preface
Acknowledgments
About the Authors
Part 1 Getting Started
Chapter 1 Why Study Research Methods?
Part 2 Setting the Stage for Your Research
Chapter 2 Identifying a Topic, a Purpose, and a Research Question
Chapter 3 Conducting a Literature Review
Part 3 Designing Your Research
Chapter 4 Concepts, Conceptualizations, Operationalizations,
Measurements, Variables, and Data
Chapter 5 Sampling
Part 4 Collecting Your Data
Chapter 6 Research Using Qualitative Data
Chapter 7 Survey Research
Chapter 8 Experimental Research
Chapter 9 Research Using Secondary Data
Chapter 10 GIS and Crime Mapping
Chapter 11 Evaluation Research
Part 5 Analysis, Findings, and Where to Go From There
Chapter 12 Analysis and Findings
Chapter 13 Making Your Research Relevant
Chapter 14 Research Methods as a Career
Appendix
Glossary
References
Index
Detailed Contents
Preface
Acknowledgments
About the Authors
Part 1 Getting Started
Chapter 1 Why Study Research Methods?
Learning Objectives
Introduction
Why Are Research Methods Important?
Knowledge and Ways of Knowing
Information From Everyday Life
College Student Victimization
Violent Crime in the United States
Other Sources of Knowledge
Typical Stages of Research
Developing a Research Question
▶ Research in Action: Postrelease Behavior: Does
Supermax Confinement Work?
Conducting a Literature Review
Designing the Research
Collecting Data
Selecting an Analytic Approach
Generating Findings, Conclusions, and Policy
Implications
Essential Role of Ethics in Research
Unethical Research Examples
Nazi Research on Concentration Camp Prisoners
Tuskegee Syphilis Experiment
Milgram’s Obedience to Authority
Stanford Prison Experiment
Foundational Ethical Research Principles and
Requirements
Role of Institutional Review Boards (IRBs)
Researcher Case Studies and a Road Map
Featured Researchers
Rachel Boba Santos, PhD
Rod Brunson, PhD
Carlos Cuevas, PhD
Mary Dodge, PhD
Chris Melde, PhD
Heather Zaykowski, PhD
Road Map to the Book
Chapter Wrap-Up
▶ Applied Assignments
Key Words and Concepts
Key Points
Review Questions
Critical Thinking Questions
Part 2 Setting the Stage for Your Research
Chapter 2 Identifying a Topic, a Purpose, and a Research Question
Learning Objectives
Introduction
Wheel of Science
Why Identify a Topic, a Purpose, and a Research Question?
How to Identify a Research Topic
Published Research
Data
Theory
Requests for Proposals (RFPs)
Personal Experiences
Reading
▶ Research in Action: Mental Illness and
Revictimization: A Comparison of Black and White Men
Viewing
Listening
Working on Research Projects With Professors
Internet
How to Identify the Purpose/Goal of Research
Exploratory Research
Descriptive Research
Explanatory Research
Evaluation Research
Gathering More Information and Refining the Topic
How to Construct the Research Question
Why Have a Research Question?
Evaluating the Research Question to Avoid Common
Pitfalls
Research Questions From Our Case Studies
Common Pitfalls When Developing Topics, Purposes, and
Research Questions
Ethical Considerations When Developing Your Topic,
Purpose, and Research Question
The Contemporary Role of IRB
Vulnerable Populations
Pregnant Women, Human Fetuses, and Neonates
Prisoners
Children
Potentially Vulnerable Populations
Exempt, Expedited, or Full Panel Review at the IRB
Training in Protecting Human Subjects
IRB Expert—Sharon Devine, JD, PhD
Chapter Wrap-Up
▶ Applied Assignments
Key Words and Concepts
Key Points
Review Questions
Critical Thinking Questions
Chapter 3 Conducting a Literature Review
Learning Objectives
Introduction
Why Conduct a Literature Review?
A Road Map: How to Conduct a Literature Review
About Sources
What Are the Best Sources?
Empirical Peer-Reviewed Journal Articles
Theoretical Journal Articles
Literature Review Journal Articles
Government Research and Reports and Policy Briefs
Avoiding Predatory Publishers and Predatory Journals
Inappropriate Sources
▶ Research in Action: Police Impersonation in the United
States
Finding Primary or Original Sources
Develop Search Terms
Search Using Boolean Operators and Filters
Identify Initial Primary Sources
Read Abstracts to Narrow the List of Sources
The Anatomy of an Empirical Research Article
Writing the Literature Review
Summarize Each Original Source
Someone Has Already Focused on My Topic!
Create a Summary Table
Preparing for the First Rough Draft
Organizational Approaches
A Writing Strategy: MEAL
Write the First Draft
Edit, Proof, and Polish
Common Pitfalls of Literature Reviews
Not Allowing Enough Time
Failing to Focus on Themes
Lack of Organization and Structure
Quoting Problems
Miscellaneous Common Errors
Failure to Justify the Need for the Proposed Research
Ethics Role and the Literature Review
Plagiarism
Accurate Portrayal of Existing Research
Literature Review Expert—Sean McCandless, PhD
Chapter Wrap-Up
▶ Applied Assignments
Key Words and Concepts
Key Points
Review Questions
Critical Thinking Questions
Part 3 Designing Your Research
Chapter 4 Concepts, Conceptualizations, Operationalizations,
Measurements, Variables, and Data
Learning Objectives
Introduction
Two Primary Types of Data: Quantitative and Qualitative
Why Focus on Concepts, Conceptualizations,
Operationalizations, Measurements, Variables, and Data?
What Are Concepts?
Examples of Concepts
What Is Conceptualization?
Example: Conceptualizing College Student
What Is Operationalization?
▶ Research in Action: Victim Impact Statements, Victim
Worth, and Juror Decision Making
What Are Variables?
Revisiting Research Questions With a Focus on Variation
Type of Variables: Dependent, Independent, and Control
Variables
Dependent Variables
Independent Variables
Control Variables
Memorizing IVs, DVs, or CVs—It Doesn’t Work
What Are Measures?
Example: Measuring College Student
How Many Measures?
What Are Data?
Attributes
Mutual Exclusiveness and Exhaustiveness
Levels of Measurement
Collect Data at the Highest Level of Measurement
Possible
Discrete and Continuous Variables
The Role of Validity
Face Validity
Content Validity
Criterion Validity
The Role of Reliability
Reliability and Validity—Don’t Necessarily Exist
Together
Overview of the Road From Concepts to Variables
Common Pitfalls in Concepts, Conceptualizations,
Operationalizations, Measurements, Variables, and Data in
Research Design
Ethics Role Associated With Concepts, Conceptualizations,
Operationalizations, Measurements, Variables, and Data
Concepts, Conceptualizations, Operationalizations,
Measurements, Variables, and Data Expert—Brenidy Rice
Chapter Wrap-Up
▶ Applied Assignments
Key Words and Concepts
Key Points
Review Questions
Critical Thinking Questions
Chapter 5 Sampling
Learning Objectives
Introduction
Why Is Sampling Important?
What Is Sampling?
Populations and Samples
Census or a Sample?
Census Advantages and Disadvantages
Sample Advantages and Disadvantages
Sampling Error
Bias
Unit of Analysis
Individual
▶ Research in Action: Low Self-Control and Desire for
Control: Motivations for Offending?
Groups or Organizations
Geographic Regions
Social Artifacts and Interactions
Unit of Analysis Versus Unit of Observation
Ecological Fallacy
Individualist Fallacy
Choosing a Sampling Approach
Probability Sampling
Simple Random Sampling
Systematic Sampling
Stratified Sampling
Cluster Sampling
Multistage Sampling
Nonprobability Sampling
Convenience, Accidental, Availability, or
Haphazard Sampling
Quota Sampling
Purposive or Judgmental Sampling
Snowball Sampling
How Large Should My Sample Be?
Purpose of the Research
Type of Research
Nature of the Population
Resources Available
Common Pitfalls Related to Sampling
Ecological and Individualist Fallacies
Generalizing Findings That Are Not Generalizable
Ethics Role Associated With Sampling
Sampling Expert—Sam Gallaher, PhD
Chapter Wrap-Up
▶ Applied Assignments
Key Words and Concepts
Key Points
Review Questions
Critical Thinking Questions
Part 4 Collecting Your Data
Chapter 6 Research Using Qualitative Data
Learning Objectives
Introduction
Why Conduct Research Using Qualitative Data?
What Is Research Using Qualitative Data?
Stages of Research Using Qualitative Data
Benefits and Limitations of Research Using Qualitative
Data
Considerations: Research Using Qualitative Data
Inductive Reasoning
Sampling Considerations
Sample Size
Approaches Used to Gather Qualitative Data
Interviews
Individual Interviewing
Focus Groups
Observation and Fieldwork
▶ Research in Action: Public Health Problem or Moral
Failing? That Might Depend on the Offender
Complete Participant
Participant as Observer
Observer as Participant
Complete Observer
Documents
Content Analysis
Recording Qualitative Data
Organizing and Analyzing Qualitative Data
Examples of Qualitatively Derived Themes: Brunson and
Weitzer’s (2009) Research
Computer-Assisted Qualitative Data Analysis
(CAQDAS)
Advantages and Disadvantages of CAQDAS
Common Pitfalls in Research Using Qualitative Data
Loss of Objectivity—Going Native
Ethics Role Associated With Research Using Qualitative Data
Qualitative Data Research Expert—Carol Peeples
Chapter Wrap-Up
▶ Applied Assignments
Key Words and Concepts
Key Points
Review Questions
Critical Thinking Questions
Chapter 7 Survey Research
Learning Objectives
Introduction
Why Conduct Survey Research?
What Are Surveys?
General Steps in Survey Research
Surveys Across Research Purposes
Surveys and Exploratory Research
Surveys and Descriptive Research
Surveys and Explanatory Research
Surveys and Evaluation Research
How Are Surveys Distributed?
Mail/Written Surveys (Postal Surveys)
Advantages
Disadvantages
Online and Mobile Surveys
Advantages
Disadvantages
Telephone Surveys
Advantages
Disadvantages
Face-to-Face (In-Person) Interviews
Advantages
Disadvantages
Designing Your Own Survey
Survey Questions
Design and Layout
▶ Research in Action: Understanding Confidence in the
Police
Pretesting of Survey Instruments
Survey Administration
Notification Letters
Fielding the Survey
Follow-Up to Nonresponders
Survey Processing and Data Entry
Easy-to-Use Survey Software
SurveyMonkey
Qualtrics
LimeSurvey
Common Pitfalls in Survey Research
Ethical Considerations in Survey Research
Surveying Expert—Bridget Kelly, MA
Chapter Wrap-Up
▶ Applied Assignments
Key Words and Concepts
Key Points
Review Questions
Critical Thinking Questions
Chapter 8 Experimental Research
Learning Objectives
Introduction
Why Conduct Experimental Research?
What Is Causation?
Association Is Not Causation
What Is Experimental Research?
True Experiments
Experimental and Control Group
Random Assignment
Matching in True Experiments
Researcher Manipulation of Treatment
True Experimental Designs
Two-Group Posttest-Only Design
Two-Group Pretest–Treatment–Posttest Design
Solomon Four Group Design
Validity
Internal Validity Threats
Experimental Mortality
History
Instrumentation
Maturation
Selection Bias
Statistical Regression
Testing
External Validity Threats
Interaction of Selection Biases and Experimental
Variables
Interaction of Experimental Arrangements and
Experimental Variables
▶ Research in Action: Acupuncture and Drug Treatment:
An Experiment of Effectiveness
Interaction of Testing and Experimental Variables
Reactivity Threats
Reliability
Beyond True Experiments
Pre-Experimental Research
One-Shot Case Design
One-Group Pretest–Posttest Design
Static-Group Comparisons
Quasi-Experimental Research
Nonequivalent Groups Design
Before-and-After Design
Natural Experiments
Common Pitfalls in Experimental Research
Ethics Role and Experimental Research
Experimental Research Expert—Chris Keating, PhD
Chapter Wrap-Up
▶ Applied Assignments
Key Words and Concepts
Key Points
Review Questions
Critical Thinking Questions
Chapter 9 Research Using Secondary Data
Learning Objectives
Introduction
Why Conduct Research Using Secondary Data?
What Are Secondary Data?
Frequently Used Secondary Data
ICPSR
Federal Statistical System (FedStats)
U.S. Department of Justice
Federal Bureau of Investigation (FBI)
▶ Research in Action: Do Offenders “Forage” for Targets
to Victimize?
Bureau of Justice Statistics (BJS)
U.S. Department of Commerce
Census Bureau
Geospatial Data
Centers for Disease Control and Prevention
State Statistical Agencies
Local Statistical Agencies
Disadvantages of Using Secondary Data
Reporting Findings From Secondary Data Analysis
Common Pitfalls in Secondary Data Analysis
Ethics Role Associated With Secondary Data Analysis
Secondary Data Expert—Jenna Truman, PhD
Chapter Wrap-Up
▶ Applied Assignments
Key Words and Concepts
Key Points
Review Questions
Critical Thinking Questions
Chapter 10 GIS and Crime Mapping
Learning Objectives
Introduction
Why Conduct Research Using GIS and Crime Mapping
Techniques?
What Is GIS?
Data
Technology
Application
People
What Is Crime Mapping and Analysis?
Administrative Crime Analysis
Tactical Crime Analysis
Strategic Crime Analysis
Crime Analysis in Academic Research
Crime Hot Spot Mapping
Predictive Policing
Risk Terrain Modeling
Aoristic Analysis
Repeat Victimization and Near Repeats
▶ Research in Action: Using Risk Terrain Modeling to
Predict Child Maltreatment
Geographic Profiling
Reporting Findings From GIS and Crime Mapping Studies
Common Pitfalls in GIS and Crime Mapping Analysis
Geocoding
Modifiable Areal Unit Problem (MAUP)
Ethical Considerations in GIS and Crime Mapping Research
GIS and Crime Mapping Expert—Henri Buccine-Schraeder,
MA
Chapter Wrap-Up
▶ Applied Assignments
Key Words and Concepts
Key Points
Review Questions
Critical Thinking Questions
Chapter 11 Evaluation Research
Learning Objectives
Introduction
Why Use Evaluation Research?
What Is Evaluation Research?
Guiding Principles of Evaluation Research
Seven Steps of Evaluation Research
The Policy Process and Evaluation Research
Types of Evaluation Research
Formative Evaluation
Needs Assessment
Process Evaluation
Summative Evaluation
Outcome Evaluation
Impact Evaluation
Distinctive Purposes of Evaluation and Basic Research
Knowledge for Decision Making
Origination of Research Questions
Comparative and Judgmental Nature
Working With Stakeholders
Challenging Environment
Findings and Dissemination
Characteristics of Effective Evaluations
Utility
Feasibility
Propriety
Accuracy
▶ Research in Action: Evaluating Faculty Teaching
Common Pitfalls in Evaluation Research
Evaluations as an Afterthought
Political Context
Trust
Overselling Your Skills as an Evaluator
Ethics Role Associated With Evaluation Research
Failure to Be Nimble
Confidentiality
Politics
Losing Objectivity and Failure to Pull the Plug
Evaluation Research Expert—Michael Shively, PhD
Chapter Wrap-Up
▶ Applied Assignments
Key Words and Concepts
Key Points
Review Questions
Critical Thinking Questions
Part 5 Analysis, Findings, and Where to Go From There
Chapter 12 Analysis and Findings
Learning Objectives
Introduction
Why Analysis?
How Should Data Be Analyzed?
Analysis of Quantitative Data
Describing Your Data
Distributions
Measures of Central Tendency
Mean
Median
Mode
Measures of Dispersion
Range
Interquartile Range
Variance
Standard Deviation
Beyond Descriptives
Associations
Differences
Qualitative Data Analysis
Data Analysis Software
Software Applications Used in Quantitative Research
Excel
SPSS
Other Commercial Packages
Software Applications Used in Qualitative Research
QDA Miner
▶ Research in Action: Reducing Bullying in Schools
NVivo
ATLAS.ti
HyperRESEARCH
Alternative Analytic Approaches
Conjunctive Analysis of Case Configurations (CACC)
SPSS
STATA
SAS
R
Geostatistical Approaches
Introduction to Spatial Statistics
Spatial Description
Mean Center
Standard Distance
Convex Hull
Spatial Dependency and Autocorrelation
Spatial Interpolation and Regression
Reporting Findings From Your Research
Tables
Figures
Common Pitfalls in Data Analysis and Developing Findings
Ethics Role Associated With Analyzing Your Data and
Developing Your Findings
Analysis and Findings Expert—Sue Burton
Chapter Wrap-Up
▶ Applied Assignments
Key Words and Concepts
Key Points
Review Questions
Critical Thinking Questions
Chapter 13 Making Your Research Relevant
Learning Objectives
Introduction
Why Conduct Policy-Relevant Research?
What Is Policy-Relevant Research?
What Is Policy?
Who Are Policy Makers?
The Policy Process
Problem Identification/Agenda Setting
Policy Formulation
Policy Adoption
Policy Implementation
Policy Evaluation
Challenges of Getting Research to Policy Makers
Relationship and Communication Barriers
Nonaccessible Presentation of Research
Competing Sources of Influence
Media
Fear
Advocacy and Interest Groups
Ideology
Budget Constraints
▶ Research in Action: Type of Attorney and Bail
Decisions
Maximizing Chances of Producing Policy-Relevant Research
Plan to Be Policy Relevant From the Start
Relationship
Translating Your Research
Common Pitfalls in Producing Policy-Relevant Research
Producing Research That Is Not Policy Relevant
Failing to Recognize How Your Research Is Relevant
Failure to Know Relevant Policy Makers
Going Beyond Your Data and Findings
Ethics Role and Conducting Policy Relevant Research
Policy Expert—Katie TePas
Chapter Wrap-Up
Making a Policy Brief
▶ Applied Assignments
Key Words and Concepts
Key Points
Review Questions
Critical Thinking Questions
Chapter 14 Research Methods as a Career
Learning Objectives
Introduction
Why Research Methods as a Career?
Where Do I Get Started?
Career Search Documents
Cover Letters
▶ What Makes a Great Résumé?
Résumés
▶ Research in Action: Unexpected Career Choices for
Criminal Justice Graduates
Sample Résumé
Letters of Recommendation
Public- Versus Private-Sector Jobs
Working in the Public Sector
USAJOBS
Applying to State and Local Positions
Working in the Private Sector
Where to Look for a Career
Online Job Searches
Social Media
Facebook
LinkedIn
Twitter
Top Twitter Job Search Hashtags
Internships
Tips on Turning an Internship Into a Full-Time Job
Interviewing Well
Before the Interview
Preparing Questions by (and for) the Interviewer
During the Interview
Asking the Right Questions
Your Professional Portfolio
After the Interview
Professional Interviewing Tips
Pitfalls and Career Searches
Ethics Role and Career Searches
Research Methods as a Career Expert—Nora Scanlon, MA
Chapter Wrap-Up
▶ Applied Assignments
Key Words and Concepts
Key Points
Review Questions
Critical Thinking Questions
Appendix
Glossary
References
Index
Preface
Our Approach
The most widely used research methods texts tend to use an approach that
convinces students that research methods are irrelevant to their lives and
future careers (unless they are going into academia), and that the course they are
sitting in is one they must suffer through and complete to graduate.
Current texts tend to introduce research as an abstract set of concepts that are
used according to a rigid set of rules that leave students questioning the
relevance of the material for them. These books fail to ignite imagination,
motivate, and engage students. They fail to share the creative and exciting
process of generating knowledge—a process that students can engage in.
Existing texts are too often like a recipe that results in dry, crumbly,
flavorless cookies. In reality, all cookie recipes should result in irresistible,
tasty, buttery, and delicious cookies! Our text shows students how they can
create equally amazing research.
We also believe that current textbooks miss the opportunity to illustrate to
students the very real importance of understanding research methods. This is
true for all students, including those seeking a wide range of careers.
Research methods matter for those whose careers will generate research,
those whose jobs may be affected by research findings, and those who will be
consuming research to create policy. Simply, current texts focus too much on
the nuts and bolts of “how” and fail to bring to life the exciting and vivid
“why” of research methods.
After years of teaching research methods, we have developed interactive,
informative, and intellectually stimulating ways to engage and excite students
about research methods. We have homed in on ways that demonstrate the
challenges, creativity, enjoyment, and overall value of conducting research on
relevant and current criminal justice topics. One way we do this is by
illustrating and providing a narrative focused repeatedly on the “why” of
research, which is a theme carried throughout the text. We emphasize
strategies for students to generate exciting questions, ways to go about
gathering data to address their questions, challenges inherent in conducting
studies that will answer those questions, and how to deal with research when
things inevitably go wrong. We emphasize to students how their research can
influence policy (in positive and negative ways) and how they absolutely can
conduct research that can address important issues.
We find that our approach resonates with the learning styles of today’s
students who flourish in team environments, value applied research, and want
to understand how these research methods skills will enable them to make a
positive difference in society. With this text, students will leave class excited
about what they have learned. In many cases, their motivation and passion
fuel them to continue with additional methodological and statistical courses
as well as to consider career paths that use these valuable skills.
In short, this text provides an appealing and competitive alternative to the
existing texts by using a narrative approach to first introduce students to
exciting research and then get them engaged with the topic. They can learn
how researchers from a variety of backgrounds tackled real-life issues
associated with research. Students hear about successes and failures. They
also see the connection between current issues in the news and research (in
terms of both being critical consumers of existing work as well as engaging
in that research themselves in their future careers). By using this book,
students are able to see themselves in the role of a researcher and the
relevance of research for them.
Coverage
The text provides broad, comprehensive, and contemporary material about
research methods, especially as they pertain to criminology and the criminal
justice system. This includes introductory materials as well as practical
strategies for how to develop a research question, write a literature review,
identify the framework for designing a research study, gather data (via a
variety of strategies, including survey research, qualitative data, and the use
of secondary data), and present findings. In addition, we go beyond
traditional approaches and include a chapter on making research relevant, and
how to use these skills to begin a career. A way we diverge from existing
texts is that the topic of ethics is not confined to a single chapter. We feel
strongly that in the context of studying criminology and the criminal justice
system, ethics are extremely important and should be a discussion that begins
in Chapter 1 and ends only when the book ends.
Relevance
Perhaps the greatest limitation of current research methods texts is their
failure to convey the relevance of research methods, especially for students
whose career paths may not be clearly associated with research presented in a
more academic-centered format. As noted, extant books are heavy on the
“how” element of research methods while almost neglecting the “why”
element. When material is presented as many texts currently present it, students
see research methods as limited to publishing scholarly articles rather than as a
process of posing questions and systematically answering them to create a body
of knowledge that can be disseminated to a variety of audiences. Recognizing they are
answering an important and relevant question helps students to understand
why the research endeavor is important. Furthermore, it illustrates why the
approaches used in methods are important. This helps students understand the
need to be knowledgeable consumers of research. A wide variety of careers
are affected by research directly (as analysts or policy makers) or indirectly
(via policy initiatives that influence those working in the criminal justice
system as well as related fields). A text that identifies the variety of ways
research can be used and that reinforces the need to discern quality research
from problematic studies will engage a range of students. This approach
makes clearer the “why” of research methods.
Voice
Our text is not akin to an encyclopedia full of information. Rather, our text is
speaking directly to the students about research and how it is conducted. By
talking directly to students, we can show them why parts and approaches
associated with research methods matter. By talking directly to them, we can
tell them that they too can do this. We can encourage them to try and show
them many examples of people just like them who are using these skills every
day. We tell students directly that this material can help them build a fulfilling
career and life. Research methods skills are not magic. They are
something that all students can master. Our text and our directly speaking to
students are designed to show them this.
Distinctive Characteristics of This Text
Case Studies
One way our text engages students is by telling the story of the research
process as it unfolds. This is done by following six researchers who describe
actual research they have conducted. The text reveals elements of each piece
of research slowly as the student moves throughout the chapters. This
approach demonstrates how the material in the chapters is interconnected as a
system versus being discrete unrelated elements. We find in the classroom
that the slow revelation of research case studies holds students’ attention and
facilitates learning more readily. The information we share about each of
these six pieces of research is based on hours of interviews that cover the full
research process from the initial steps of identifying a research question to the
completion of the research project and how it has influenced policy, society,
or both. The case studies address issues of interest to students, such as the
relationship among gang membership, violence, and fear; police interactions
with young males in the St. Louis area; Latino adolescent victimization;
high-intensity policing strategies; help-seeking behavior among violent crime
victims; and policewomen posing as prostitutes. We recognize
that contemporary students are public-service minded, and applied research
resonates with them. We show them how six researchers feel the same, and
how they can join the ranks of researchers making a difference.
Equal Treatment of Qualitative and Quantitative
Data
Most research methods books focus on gathering quantitative data, with a
single chapter devoted to qualitative data. Our book differs. We discuss the
elements of research, such as sampling, research questions, ethics, and pitfalls,
for both quantitative and qualitative data throughout. We do devote a specific
chapter to qualitative research and data gathering, just as we do for other
data-gathering approaches. But make no mistake: qualitative research and data
are not an afterthought in our text.
Careers
A theme in this text is that research methods skills can lead to a variety of
rewarding careers. To highlight this, we have included career sections
highlighting an expert using these skills in each chapter. Not only do these
sections clearly show how these skills are used in a variety of rewarding ways
(in careers that pay good salaries), they also show that those in these careers
lead interesting and fun lives. The goal of this feature is for these professionals to
demonstrate why research methods are vital to the work they do, as well as to
describe exactly how they use these skills. To this end, the book will educate
students on the myriad of careers these skills can help them obtain. Plus, we
devote the final chapter in the text to finding a career with these skills. This
includes material on how to effectively find work and conduct interviews
with those working in these types of jobs. This approach will better allow
students to visualize themselves in these careers.
Ethics
Hopefully, this chapter prompts you to recognize that differences in the
research methodology used by each federal agency to generate these crime
rates account for the differences in estimates. This is exactly the case.
BJS violent crime estimates are based on data from the National Crime
Victimization Survey (NCVS) on nonfatal violent criminal victimization.
This survey defines violent crime as including rape, sexual assault, robbery,
aggravated assault, and simple assault. It does not include murder because
NCVS data are gathered directly from the victims of violence (victims of
murder cannot be interviewed). The NCVS gathers data from a sample of
people (not every person in the United States is interviewed), and the
published NCVS violent crime rate tells us how many violent victimizations
(not how many victims or how many incidents or offenses) occurred per
1,000 people age 12 or older.2 That the NCVS crime rate focuses on
victimizations (and not on victims or incidents) is an important piece of
methodological information in understanding these figures because there can
be multiple victims in each incident, and each victim could experience
multiple victimizations in each incident or offense.
2. The NCVS data do allow for the calculation of incident and prevalence
rates; nevertheless, most BJS reports are focused on victimizations.
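The per-1,000 rate described above is straightforward arithmetic. As a minimal sketch, the figures below are hypothetical illustration values, not actual NCVS estimates:

```python
# Hypothetical sketch of the victimization rate calculation described above.
# The inputs below are illustrative values, NOT actual NCVS estimates.

def victimization_rate_per_1000(victimizations: int, population_age_12_plus: int) -> float:
    """Victimizations per 1,000 persons age 12 or older."""
    return victimizations * 1000 / population_age_12_plus

# Illustration: 5,400,000 victimizations among 270,000,000 eligible persons
rate = victimization_rate_per_1000(5_400_000, 270_000_000)
print(rate)  # prints 20.0 (victimizations per 1,000 persons age 12 or older)
```

Note that the numerator counts victimizations, not victims or incidents: as explained above, one incident can contribute several victimizations when it has multiple victims.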
The FBI produces violent crime estimates using a completely different
approach or methodology. First, FBI crime data are gathered from police
agencies who submit crime information to the FBI on a voluntary basis. This
means that if a victim of a crime does not report the incident to the police, or if the police do not record an incident or do not report crime data to the FBI, it will
not be reflected in the FBI numbers or estimates. The FBI defines violent crime as including murder, rape, robbery, and aggravated assault. Note
that sexual assault along with simple assault (the most common form of
violence in the United States according to the NCVS) are not included in FBI
violent crime estimates. Also, as noted, murder, the least common form of
violence in the United States according to the FBI, is included in FBI violent
crime estimates. Moreover, the FBI includes arson, which is not recorded in
the NCVS. FBI crime estimates describe crime committed against all people
in the United States, regardless of their age (the NCVS focuses only on
victims age 12 or older), regardless of where they live (recall that the NCVS
focuses only on persons in a housing unit—no people living in institutional
housing or the homeless are included), and include commercial crimes (the
NCVS does not count crimes against a business). Also, FBI crime rates refer
to offenses, and offenses are counted differently depending on the specific
violent crime considered. According to the FBI, when considering assault and
rape, an offense is equal to the number of victims. For robbery, an offense
equals the number of incidents. This means that a single robbery counted by
the FBI could include numerous victims. These are only some of the
differences in the methodologies used to generate violent crime estimates by
the NCVS and the FBI. See Table 1.1 for some UCR and NCVS
methodological differences.
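The FBI counting rules just described can be sketched in a few lines of code. The incident records and helper function here are hypothetical, for illustration only:

```python
# Hypothetical incident records: each notes its crime type and victim count.
incidents = [
    {"type": "aggravated assault", "victims": 2},
    {"type": "robbery", "victims": 3},
    {"type": "robbery", "victims": 1},
]

def fbi_offense_count(incidents, crime_type):
    """Sketch of the counting rule described above: for assault and rape,
    offenses equal the number of victims; for robbery, one offense per
    incident regardless of how many victims were present."""
    matching = [i for i in incidents if i["type"] == crime_type]
    if crime_type == "robbery":
        return len(matching)                        # one offense per incident
    return sum(i["victims"] for i in matching)      # one offense per victim

print(fbi_offense_count(incidents, "aggravated assault"))  # -> 2
print(fbi_offense_count(incidents, "robbery"))             # -> 2
```

Here one assault incident with two victims yields two offenses, whereas two robbery incidents with four victims between them yield only two offenses, which is why counting rules matter when comparing estimates.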
The NCVS and FBI estimates are based on different approaches, definitions,
measurements, and ways to count violent crime. Now with a better
understanding of differences in methodologies used by BJS and the FBI, do
you have a more informed understanding about the extent of violent crime in
the United States? Can you now articulate some reasons why the NCVS
suggests stability in crime rates over the years, whereas the FBI measures a
decline? Can you offer evidence for why you know this? Can you better
understand why NCVS estimates might be higher than FBI estimates? Can
you see why stating that statistics and researchers are biased or that “statistics
lie” is not only intellectually lazy but also incorrect? Given all of these
differences, is it at all surprising to you that the two pieces of research reach
different conclusions? It should not be.
Adapted from http://www.bjs.gov/index.cfm?ty=pbdetail&iid=5112;
and Rand, Michael R. and Callie Marie Rennison. (2002). True crime
stories? Accounting for differences in our national crime indicators.
Chance, 15(1), 47–51.
Other Sources of Knowledge
Chances are that each of us has knowledge we hold dear that is not based on
a scientific approach. In fact, having only knowledge from scientific inquiry
would make day-to-day living impossible. Imagine requiring scientific
evidence to determine how to best cook, eat, bathe, drive, read, study, interact
with others, converse, or any number of other daily activities. This section
identifies common sources of nonscientific knowledge and notes some
limitations of these sources.
Tradition, customs, and norms. Tradition, customs, and norms are
used to pass on knowledge or beliefs from person to person over time.
This knowledge is thought to be true and valuable because people have always believed it to be true and valuable. Examples include how
strangers are to be greeted, treatment of the U.S. flag, manners used
while eating, and what is viewed as appropriate food sources (e.g., no
horse or dog for dinner [or breakfast either]). A limitation of this type of knowledge is that it is subjective, not research based, and not derived from systematically collected data. Also, this information is often not falsifiable or reproducible. The knowledge is simply accepted as fact.
Personal experience. Personal experience is a powerful source of
nonscientific and nonresearch-based knowledge. This knowledge is
believed to be valuable and true because you have personally
experienced it. For example, you may have the personal experience that
babies in restaurants are loud and messy (based on seeing one or two
loud and messy babies; you probably didn’t notice the clean and quiet
babies, although they were there too). Or you may hold particular
stereotypes of others based on a few personal experiences. This may be
your experience; your experience is subjective, however, and does not
necessarily accurately reflect the larger truth. This knowledge also suffers from not being derived from systematically collected data. It is not falsifiable, and it is not reproducible.
Some find babies to be poor restaurant patrons based on personal
experience. Does this mean that all babies are unruly at restaurants, or
might that knowledge be based on personal experience only?
© kali9/iStockphoto.com
Authoritative sources. Knowledge also can include information taken
from authoritative sources including parents, clergy, news sources,
bloggers, social media, professors, or others. For some, if a source is
trusted, the information it shares is trusted. Examples can include
knowledge such as that prayer is useful, the president is a jerk, crime is
out of control, and research methods are useful. Although knowledge
gathered in these ways can be valuable (and even be based on scientific
research), you should research that information to assess its value
because it too can be imperfect or incorrect.
Intuition. Knowledge based on intuition is believed and valued because
you have a feeling, sense, or gut instinct it is “good” information.
Examples include initial perceptions about others or feelings about
particular places or situations. Imagine meeting an individual at a party
where your intuition immediately suggests he or she is a shyster and not
to be trusted. This is your intuition talking, and what it is saying may or
may not be true. It is also the case that this information is not research
based or scientific (even if you have been correct in other initial
assessments). Like the other categories described here, this knowledge is
not falsifiable, not reproducible, and not scientifically based.
Tradition, customs, and norms: knowledge or beliefs passed on
from person to person over time. This knowledge is thought to be
true and valuable because people have always believed it to be
true and valuable.
Personal experience: Knowledge accepted based on one’s own
observations and experiences.
Authoritative sources: Knowledge based on information
accepted from people or sources that are trusted such as parents,
clergy, news sources, bloggers, social media, or professors.
Intuition: Knowledge developed based on a feeling or gut
instinct.
Typical Stages of Research
Subsequent chapters provide greater detail about the foundational elements
and stages important for conducting scientific research. For now, this section briefly outlines these major steps, including developing a research question, conducting a literature review, selecting
appropriate research methods (such as samples and ways of gathering data
such as surveys and observations), selecting analytic techniques, and
developing and disseminating findings and conclusions.
Developing a Research Question
Research begins with, and is guided by, a research question. The research
question when answered increases our understanding and knowledge about a
topic. Research is never guided by a statement of fact. As basic as this
distinction between question and statement seems, students new to research
methodology frequently offer a statement rather than a question when asked
to pose possible research questions. Every step that is taken to accomplish
research is informed by that research question. There is an endless number of
possible research questions. Some examples include
Research question: Question that guides research designed to
generate knowledge. This question guides the research endeavor.
What is the effect of gang membership on self-reported violent
victimization among adolescents?
Is violence against college students more likely to occur on or off
campus?
What are the differences between the perceptions and experiences of
Black and White youth with police officers?
How do female police officers serving as prostitution decoys view this
work?
Will an offender-focused, high-intensity policing strategy in a hot spot
lead to a reduction in crime?
What role does reporting violence to the police play on a victim’s
likelihood to access victim services?
What are the rates of dating violence by Latino victim gender?
Do you have a research question in mind that you would like to explore? If
not, you are not alone. Many students are anxious and feel that they cannot
possibly think of a research question. Happily, all students can pose research questions once they recognize that questions can be developed in many ways,
including listening to others speak, reading a text and research literature,
learning about theories, reading and watching the media, going to
professional meetings, and so on. Sometimes it takes practice to see that
one’s innate curiosity or a desire to develop information can lead to research
questions. Imagine you are in class listening to a police officer identifying the
ways you can distinguish a police impersonator from a legitimate police
officer. This officer discusses subtle characteristics in an officer’s uniform or
personal appearance (e.g., facial hair) that can be used to identify an
impersonator. The officer’s presentation gets you wondering more broadly
about police impersonators, which leads to several questions:
Research in Action: Postrelease Behavior: Does Supermax
Confinement Work?
From 1970 to the early 2000s, the incarceration rates in the United
States exploded, as did the amount of research focused on
incarceration. That research overwhelmingly shows that
incarceration does not effectively reduce a person’s odds of
recidivating compared with other approaches that include
probation or shorter prison sentences. What has not received
much attention, however, is how the type of incarceration affects
recidivism and other postrelease behaviors. Specifically, it is
unclear how supermax confinement influences odds of recidivism
and other postrelease behaviors. Supermax facilities are costly to
operate, and confinement typically involves confinement to a cell
for 23 hours a day with few or no opportunities for socialization
with staff or other inmates. Butler, Steiner, Makarious, and Travis
(2017) found two studies that examined the influence of supermax
confinement on recidivism. Neither study found that supermax
confinement had an effect on offenders’ odds of recidivism. To
build on that research, the current examination by Butler et al.
compares recidivism rates of offenders exposed to supermax
confinement in Ohio with those derived from a matched sample of
offenders not exposed to supermax confinement. In addition, this
research considers both short- (1 year) and long-term (7 years)
effects of exposure to supermax confinement. Finally, Butler et al.
consider the influence of supermax confinement on other
postrelease behaviors such as employment and treatment
completion.
To address these research purposes, the researchers used a
randomly selected sample of 1,569 men taken from a list of all
men released under postrelease supervision in Ohio from about
2003 to 2005. Data about each offender were collected from
several official sources such as case files, and offenders were
followed for a full year after release. Measures of whether each
offender was reincarcerated for any reason or reincarcerated for a
new crime within 7 years of his release were also obtained and
used in the analysis. The analysis consisted of finding a similar
sample of incarcerated men who differed from the supermax
sample only in terms of having not spent time in a supermax
facility. By comparing their outcomes with the supermax sample,
the researchers can now point to the effects of supermax
exposure.
Findings from Butler et al. show that exposure to supermax
confinement had no effect on recidivism in the short term. Similarly, the effect of supermax confinement on the odds of recidivism over the long term was nonsignificant.
Thus, supermax confinement did not affect offenders’ odds of
recidivism over the long term either. When considering other
postrelease behaviors, findings show that exposure to supermax
confinement does not affect other postrelease outcomes for
offenders released under postrelease supervision. In sum,
supermax exposure leads to equivalent outcomes when compared
with offenders not exposed to supermax.
The policy implications of this work by Butler et al. point to the
costs versus the benefits of supermax. The cost of operating a
supermax facility is far greater than the cost of operating a typical
maximum security prison. If outcomes are equivalent, this
suggests the need to consider the feasibility of running expensive
supermax operations. It appears cheaper and equally effective
alternatives exist.
Butler, H. D., Steiner, B., Makarious, M. D., & Travis, L. F.
(2017). Assessing the effects of exposure to supermax
confinement on offender postrelease behaviors. The Prison
Journal, 97(3), 275–295.
What is the gender, race, and age of most police impersonators?
What types of victims do most police impersonators target?
What are the main motivations of police impersonators?
Are police impersonators more likely than other types of criminals to
brandish or use a weapon? To injure the victim?
These are all interesting questions, and all are suitable research questions.
Imagine now that you are taking a university-required training about college
student sexual violence. You find that the material in the training is focused
on sexual violence against female students only. This raises several questions
in your mind:
Are college women victims of nonsexual violence such as robbery? If
so, to what extent?
What is the extent to which male college students are sexually and
nonsexually victimized?
Does violence against college students differ from violence against
noncollege students in terms of rates and characteristics?
Are bystanders more likely to be present during a college versus a
noncollege student victimization? Is this the same for male versus
female college students?
These are all suitable research questions. With the skills learned in this book, you can answer each of these research questions yourself. Once a research
question (or questions) has been identified, the next step is to learn what is
already known about that topic. That is accomplished via a literature review.
Conducting a Literature Review
A research question is the foundation of proposed research. Once you have
identified a research question (or questions), the next step is to conduct a
review of scientific literature on that research topic. A literature review
serves many purposes. It
summarizes and synthesizes existing understanding on the topic of
interest,
identifies limitations and gaps in existing research,
offers justification for the proposed study, and
places the new study in context of the existing literature.
Literature review: Review, summary, and synthesis of extant
knowledge on a topic. Literature review sections in journal
articles review, present, organize, and synthesize existing
understanding on a topic at the time the research was conducted.
They are used to place the published research into context and to
demonstrate how it adds to our understanding of a topic.
Although you may develop a creative and fascinating research question, it
might be that others have already addressed it. That is okay! Reading about
existing studies focused on the same question will assist in refining your
research question. Understanding details about the methodology used in prior
research offers the opportunity to identify possible improvements on that
methodology in your project. Perhaps the existing studies are very old. This
means a new look at this old question using newer or improved data can
increase our understanding of the topic. Or perhaps the older study used very
basic analytic approaches because computer power was not available at the
time that research was conducted. It may be that the research question can be
reexamined with more powerful analytic approaches and technology
available today. This means the new study can provide an enhanced
understanding of the issue.
Designing the Research
Designing the research study is the next major step in conducting research.
Designing the research is where you identify the precise steps that will be
used to answer the research question. Some of those steps may be identifying
concepts of interest, making them measurable (operationalization), measuring
those concepts, and selecting a sample. It is imperative that the precise steps
taken to conduct research be thoroughly considered and documented.
Documentation of your methodology is needed for consumers of your work
to critically assess it, and so future researchers who want to replicate your
study can do so exactly.
Crime mapping is the source of much criminology and criminal justice
knowledge. How do you think it added to our understanding about where
crime occurs?
© Mikael Karlsson/Alamy Stock Photo
Collecting Data
Once the research methodology has been identified, the next step is to gather
the data or information that will be analyzed to answer the research question.
Researchers gather data that are most effective, efficient, and affordable to
answer the research question. Data may be gathered in any number of ways,
including survey research, in-person interviews, focus groups, observations,
experiments, quasi-experiments, document analysis, and so on. At the
conclusion of data gathering, a researcher has the data needed to answer his
or her research question. The next step is to analyze those data.
Selecting an Analytic Approach
To answer the research question posed, a researcher takes the systematically gathered data and analyzes them. How the data are analyzed depends on the
nature of the research question and the data gathered. If a researcher wishes
to explore or describe a topic using numeric data, he or she might use
percentages or rates. If a researcher is working with non-numeric data in the
form of text, interviews, or observations, he or she might use an approach
that identifies themes, concepts, or core meanings. If a researcher wishes to
identify associations among variables, then correlations might be an
appropriate approach. If the researcher is interested in investigating causal
relationships using numeric data, a statistical technique such as regression
might be the most suitable approach. Many considerations including whether
the data sought are numeric or non-numeric in nature go into selecting the
appropriate analytic technique, but all ultimately are selected based on the
best way to answer the research question.
Generating Findings, Conclusions, and Policy
Implications
Answering the research question to create knowledge is the goal of the
research. Nevertheless, answering the question is not enough. You as a
researcher must also make sense of the findings. This can be accomplished by
placing the findings in the context of the existing literature (again, the
literature review is useful during this step). Do your findings support what
was found in other literature? Do findings in the current research deviate
from the findings in the literature? What are possible reasons for this support
of, or deviation from, existing literature? How might this new knowledge be
used to affect policy and improve everyday life? This step requires thinking
about the research and what it means in the larger context of the issue.
Essential Role of Ethics in Research
© Everett Collection/Newscom
At the beginning of the experiment, 399 of the 600 participants were known
to be infected with syphilis (the remaining 201 were considered the
comparison group). None of the infected men were told they had syphilis,
however. When the Tuskegee Syphilis Experiment began in 1932, there was
no cure for syphilis. In an astoundingly unethical turn of events, when
penicillin was identified as a cure to treat syphilis in 1947, penicillin was
withheld from the infected participants in the study. In fact, efforts were
made to obstruct study participants from receiving penicillin anywhere so as
not to jeopardize the study. In contrast, although these men were denied
available treatment for syphilis, the study’s sponsor was establishing “Rapid
Treatment Centers” to treat syphilis in the general population.
This ghastly experiment was finally halted in 1972, more than 20 years after
the discovery of penicillin. Many had called for the study’s termination
earlier given its unethical nature, but those demands were ignored. Only
when a whistle-blower, Peter Buxtun, leaked information regarding the
experiment to journalists was the research halted. At the time the experiment
was stopped, only 74 of the original 399 infected men were still living.
Twenty-eight had died from syphilis, 100 had died from related
complications, and 40 spouses and 19 children had been infected. In 1997,
President Bill Clinton formally apologized for this government-sponsored
study. In attendance at this formal presidential apology were five of the eight
surviving experimental research subjects.
Milgram’s Obedience to Authority
Unethical research in the United States has not been confined to federal
government sponsors either. Consider the infamous work of Stanley Milgram
(1963). The purpose of his 1961 research was to identify the willingness of
people to obey authority figures even when that requested behavior conflicts
with a person’s conscience. The impetus for the study was the death of
millions of people killed in gas chambers in concentration camps during the
Holocaust. Milgram (1975, p. 1) noted that “[t]hese inhumane policies may
have originated in the mind of a single person, but they could only have been
carried out on a massive scale if a very large number of people obeyed
orders.” To conduct the study, Milgram advertised for volunteers to
participate in an experiment about learning. Forty male volunteers were
selected to participate. The experiment began when two men showed up as
volunteers. The first order of business was to determine who would be the
teacher and who would be the learner in the research on “learning.” These
roles were determined by drawing slips of paper out of a hat. Both men drew
a slip of paper, and each announced his role. Both slips of paper in the hat
had “teacher” written on them. In reality, one volunteer was a confederate
working with the research team. Although the confederate drew a slip that read “teacher,” he announced that he was the learner. The true volunteer would serve
as the teacher, and his behavior was the focus of the actual experiment.
In one iteration of the Milgram experiments, the teacher and learner sat next
to one another. In this case, the learner had to place his own hand on the
shock plate. When he refused to do so at 150 volts, the teacher was ordered to
physically force the learner’s hand on the shock plate. Thirty percent of
teachers forced the learner’s hand onto the plate all the way up to the
maximum 450 volts. Would you have done so? Why or why not?
To illustrate the dynamic and fun nature of research and research methods,
several engaging researchers will be sharing stories about their own research
experiences in this text. These accomplished researchers have studied a range
of criminal justice and criminology topics, gathering a variety of types of
data, using a variety of research methodologies. Throughout the remainder of
the book, they share, often in their own words, stories about successes and
hurdles they have encountered as they have attempted to answer important
and interesting research questions. This information was gathered in a series
of personal interviews conducted via videoconference, many of which were
recorded.
The next section introduces each researcher we follow throughout the
research process in the text. This section also introduces one piece of research
each researcher published in an academic journal. We will discuss the process
each engaged in throughout the text to illustrate research methods in action.
Table 1.2 follows and offers the full citation for each researcher’s work. For
additional information about each researcher, consult the corresponding
academic webpages that are available in the “Web Resources” section at the
end of this chapter.
Featured Researchers
Rachel Boba Santos, PhD
Courtesy of Rachel Boba Santos
Rachel Boba Santos, PhD, is a professor in the Criminal Justice Department
at Radford University. Her research interests include conducting practice-
based research, which is implementing and evaluating evidence-based
practices in the “real world” of criminal justice. In particular, her research
seeks to improve crime prevention and crime reduction efforts by police in
areas such as crime analysis, problem solving, accountability, as well as
leadership and organizational change. Her research has used a variety of
methods, including experimental research and program evaluation. In
addition, she has recently completed the fourth edition of her book, Crime
Analysis With Crime Mapping (2016), the only sole-authored textbook for
crime analysis.
Santos was first exposed to research when she was an undergraduate in
college. She was enrolled in a research methods class where she designed her
own survey. Her professor asked her what her future plans were and
encouraged her to pursue a PhD. As she progressed through her education,
she recognized that she had a talent for identifying and conducting pragmatic
and relevant research that mattered to police agencies and the community
they serve. Throughout the text, we learn more about Santos’s research and
the very practical application of it. Her work offers a useful blend of
experimental and evaluation research that can be quickly translated to the
field to assist police officers on the street. The particular piece of research we
focus on is an examination of the influence of offender-focused, high-
intensity police intervention to identify whether this approach reduces
property crime in known hot spots in a suburban environment. This research was conducted with her colleague Roberto Santos, PhD (Santos & Santos, 2016). Although we are focusing on Rachel Santos in this text, it is important to recognize Roberto Santos’s collaborative role in this work.
Rod Brunson, PhD
Rod Brunson, PhD, is a professor and dean at the Rutgers University School
of Criminal Justice. His research examines youth experiences in
neighborhood contexts, with a specific focus on the interactions of race,
class, and gender, and their relationship to criminal justice practices. He has
authored or co-authored more than 50 articles, book chapters, and essays
using a variety of research methods, including gathering and analysis of
qualitative data and evaluation research.
Brunson was first exposed to research when he was in graduate school and
got invited to participate on a professor’s research project. In this capacity, he
went into a juvenile detention center and spoke to juveniles involved with
gangs. Brunson was surprised and intrigued by the willingness of people to
share intimate details of their lives. The focus of that project was gangs,
but Brunson learned so much about the living conditions that these young
people faced every day that it made him more curious. From this experience,
a researcher emerged. Since then, Brunson has studied myriad topics in the
field (and behind the computer) that demonstrate the practical benefits of his
work.
Courtesy of Rod Brunson
To learn more about Brunson’s research, we consider a piece of research he
was once told by a senior faculty member had no relevance: police relations
with urban disadvantaged males. It turns out that critic was wrong and that
understanding how disadvantaged male adolescents view and interact with
police is timely and relevant. In particular, Brunson examines whether there
are differences between those experiences and perceptions between Black
and White youth living in similarly disadvantaged urban neighborhoods. Like
others, Brunson collaborated on this research. His collaborator was Ronald
Weitzer, PhD, a sociologist at George Washington University (Brunson &
Weitzer, 2009). Although Weitzer’s contribution is important (in this piece
and in his body of research more generally), we focus on Brunson throughout
this text.
Carlos Cuevas, PhD
Carlos Cuevas, PhD, is an associate professor in the Criminology and
Criminal Justice Department at Northeastern University. His research
interests are in the area of victimization and trauma, sexual violence and
sexual offending, family violence, and psychological assessment. He focuses
on victimization among Latino women, youth, and understudied populations,
and how it relates to psychological distress and service utilization, as well as
the role cultural factors play on victimization. In addition, he is studying the
impact of psychological factors on the revictimization of children and how it
helps explain the connection between victimization and delinquency. He uses
a variety of methodologies to conduct his research, including secondary data
analysis and original data collection.
Many fail to recognize that they are surrounded by data. Data take on many
forms, including words, actions, interactions, speech, text, and numbers. That
was the case for the youth and police relations research conducted by
Brunson. Brunson recognized he had access to young African American
males in a high-crime, disadvantaged neighborhood, as well as to young
Whites in a similarly disadvantaged place. Being able to interview these
individuals to collect data on their experiences offered a key opportunity to
conduct research that adds to our collective knowledge. The access and the
trust Brunson established with the neighborhood youth in multiple
neighborhoods over time allowed him to undertake this important study.
Theory
Criminal justice and criminological theories are sources of potential research
topics. A theory comprises statements that explain a phenomenon. Theory is
an explanation about how things work. Many argue that theory and research
go hand in hand, and one is not of value without the other. For example, Cao
(2004, p. 9) states that “[o]bservation without theory is chaotic and wasteful,
while a theory without the support of observation is speculative.” To some,
theory is the starting point of the traditional research model, and that theory
guides the entire research process. Conducting research using this model is
respected, but there are practical reasons that nontheory testing research is
conducted. For instance, some theories are so vague as to be difficult to
confirm or disconfirm using research. In some cases, theories include
elements for which there are simply no data available (or gathering those data
is not feasible). In addition, the purpose of some research is to build theory
versus to be guided by established theory.
Many people falsely believe that data come in the form of numbers only.
The truth is that data are everywhere you look. For Rod Brunson, gathering
information from individuals in an African American barbershop in St.
Louis—a type of data—formed the basis for multiple research publications.
Yes, interviewing people in their natural settings doing fun things can be
research and provide valuable data. Where around you do you see data?
© EyeJoy/iStockphoto.com
Valuable research has been conducted outside of the pure theory testing
approach. In fact, none of our case study researchers engage primarily in
theory testing. Their understanding of criminal justice and criminology is
informed by theoretical perspectives, but each researcher describes him- or
herself as a more applied and practical researcher. As Brunson notes, “it is
important that scholars be well-versed in theory. However, I don’t think that
theory–or lack thereof–should restrict intellectual pursuits. Theory should be
used as a framework and guide and not discourage further exploration of
ideas.” Theory and research are never fully divorced, however. Research that
is not focused on theory testing is used to develop or enhance extant theory.
Similarly, theory testing has yielded understanding that guides additional
research.
Whether you are interested in traditional theory testing or other types of
research, understanding theory leads to ideas about research topics. For
example, routine activity theory identifies three necessary, but not sufficient,
conditions for crime to occur: a motivated offender, a suitable target, and
the absence of a capable guardian (see Figure 2.2). Although you may not test this theory
directly, understanding the theory may lead you to ask, “What makes a
guardian capable?” This may lead to a fruitful line of inquiry on this topic
that intrigues you.
Requests for Proposals (RFPs)
In some cases, an organization will advertise the need for research on a
specified topic. These types of requests frequently come in the form of a
request for proposals (RFP). Requests for proposals are formal statements
asking for research proposals on a particular topic. Qualified researchers, or
teams of researchers, submit an in-depth proposal for conducting research on
that topic. The proposals are reviewed by experts, and none, one, or several
of the proposals are funded. The federal government as well as state and local
governments regularly publish requests for proposals. Carlos Cuevas, another
one of our featured researchers, and his colleagues (Sabina, Cuevas, &
Cotignola-Pickens, 2016) were able to secure a grant to fund the data
collection that led to their study on Latino teen dating violence in this way.
Although the RFP that had been posted requested research about teen dating
violence only, Cuevas’s team wanted to examine it alongside other types of
violence and the role that cultural influences play in victimization. Even though the
topic of teen dating violence among Latinos was an area of interest to Cuevas
prior to the government’s RFP, the RFP offered Cuevas and his colleagues an
opportunity to collect data that enabled them to conduct research that simply
could not have happened without the grant funding. Together, an interest in
the topic, familiarity with the fact that the literature had little information
about experiences of Latinos, and the presence of an RFP led to a research
topic and an opportunity to conduct research on this topic by Cuevas and his
collaborators.
Personal experience was partially responsible for the hot spots research topic
focused on by Santos (Santos & Santos, 2016). Santos’s colleague, Roberto, is
a former law enforcement officer who is now an assistant professor at the
same university. As an officer, he developed a new policing approach. As a
team, they knew that some research takes into account certain crime patterns
and places but that the research frequently fails to consider the role of
offenders. Santos and her colleague recognized (from literature gaps) that
there was a need to focus on offenders. Given these personal experiences,
knowledge of the literature, and a well-timed RFP, Santos and her
collaborator were able to develop a research topic on hot spots that was
practical and applicable to law enforcement agencies.
Reading
Reading is an excellent source for finding interesting research topics.
Consider Coming Out from Behind the Badge, which chronicles Greg
Miraglia’s (2007) experiences as a gay police officer. Given the
hypermasculine culture of law enforcement, Miraglia recognized that coming
out as a gay law enforcement officer would be ill advised while he was an
officer. Reading about Miraglia’s experiences may lead to a curiosity about
the presence of LGBTQ officers in law enforcement or about changes in
acceptance of LGBTQ officers over time.
Greg Miraglia served as a law enforcement officer before taking a role in
higher education. Reading his book titled Coming Out from Behind the
Badge offers many possible research topics.
Courtesy of Greg Miraglia
Or consider reading about the colorful Art Winstanley. Art was a burglar
specializing in safe cracking in the 1960s. What makes his story unusual is
that he was also a Denver police officer, and he operated with a ring of
burglars, most of whom were also Denver police officers. As told in Burglars
in Blue, Winstanley (2009) began burglarizing Denver area businesses as a
rookie. He and the other burglars would case a business during the day,
chatting up store owners who freely shared information such as where the
safe was located. By taking advantage of trusting business owners and the
lack of technology available at the time, Winstanley would return later to
burglarize the business. His partner would wait in the patrol car to monitor
the radio (portable radios did not exist). If a call came over the radio about a
burglar alarm going off where Winstanley was safe cracking, his partner
would radio that he and Winstanley would investigate. By doing this, no one
else showed up who could apprehend these crafty burglars. Winstanley was
eventually caught and sentenced to prison after a safe fell out of the trunk of
his vehicle as he fled a scene. Reading about Winstanley and his co-
conspirators may prompt you to learn more about police officers who commit
crimes, officers who serve time in prison, or changes and improvements in
technology in law enforcement and the criminal justice system.
Research in Action: Mental Illness and Revictimization: A
Comparison of Black and White Men
Although research has focused primarily on the violent offending
of individuals with mental illness, recently attention has been
given to the violent victimization of this group. In addition to
examining victimization of the persons with mental illness,
attention is being given to the revictimization of these individuals.
Research shows that persons with mental illness have a high risk
of being victimized and that this increased risk may extend to
revictimization as well. The researchers point to a single study
that has investigated the revictimization of persons with mental
illness that concluded that revictimization trajectories vary by
diagnosis, symptomology, and alcohol abuse. What remains
unknown is how other individual characteristics influence the
chances of revictimization. Policastro, Teasdale, and Daigle
(2015) fill these gaps in our understanding by exploring
differences, if any, in the trajectories of recurring victimization by
victim’s race. To do this, the research addresses the following
research questions:
1. What types of within-person characteristics influence
recurring victimization over time for each racial group?
2. Do the trajectories of recurring victimization of Black
persons diagnosed with serious mental illness differ from the
trajectories of White persons diagnosed with serious mental
illness?
The sample used in this research by Policastro et al. (2015) was
gathered via a stratified random sample of eligible patients
discharged from in-patient psychiatric facilities at sites in
Pittsburgh, Pennsylvania; Worcester, Massachusetts; and Kansas
City, Missouri. Those eligible to be drawn into the sample were
between the ages of 18 and 40, English-speaking, and White or
African American. Participants had to be civil (not criminal)
admissions to each facility and have a medical diagnosis of at
least one of the following disorders: schizophrenia, depression,
mania, schizophreniform disorder, schizoaffective disorder,
dysthymia, brief reactive psychosis, delusional disorder,
substance abuse/dependence, or personality disorder. The
outcome variable of interest is whether the individual had been
threatened, or had been victimized physically, with or without a
weapon. The independent variables in this research include drug
use, alcohol use, social network, symptomology, use of violence,
level of daily functioning, stress, homelessness, marital status,
employment, socioeconomic status, and diagnosis.
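The stratified random sampling design Policastro et al. describe can be sketched in a few lines of Python. This is a minimal illustration only, not the study’s actual procedure: the three site names come from the text, but the patient records, the helper function, and the per-stratum sample size are invented for demonstration.

```python
import random

def stratified_sample(population, strata_key, n_per_stratum, seed=42):
    """Draw a simple stratified random sample: group units by stratum,
    then sample independently within each stratum."""
    rng = random.Random(seed)
    # Group population units by their stratum (here, the discharge site)
    strata = {}
    for unit in population:
        strata.setdefault(strata_key(unit), []).append(unit)
    # Sample within each stratum so every stratum is represented
    sample = []
    for stratum, units in strata.items():
        k = min(n_per_stratum, len(units))
        sample.extend(rng.sample(units, k))
    return sample

# Hypothetical records loosely modeled on the study's eligibility criteria
patients = [
    {"id": 1, "site": "Pittsburgh", "age": 25},
    {"id": 2, "site": "Pittsburgh", "age": 31},
    {"id": 3, "site": "Worcester", "age": 22},
    {"id": 4, "site": "Worcester", "age": 39},
    {"id": 5, "site": "Kansas City", "age": 28},
    {"id": 6, "site": "Kansas City", "age": 34},
]
sample = stratified_sample(patients, lambda p: p["site"], n_per_stratum=1)
print(len(sample))  # one patient drawn from each of the three sites
```

Unlike a simple random sample of the whole population, this design guarantees that each site contributes cases to the final sample.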
After describing the sample, the researchers identified two major
findings. First, the effect of alcohol abuse differed by victim’s
race. Alcohol abuse is associated with increased risk of
revictimization for White persons with serious mental illness but
not for Black persons with serious mental illness. The second
major finding is that the trajectories of revictimization differed
between Whites and Blacks. Specifically, the trajectory of
revictimization declines for Whites but remains flat for Blacks.
This research indicates that the lived experiences of mental illness
are different among the Black population compared with among
the White population.
The policy implications of this work point to the need for mental
health care services for diverse populations and cultural
sensitivity in their delivery. The results also suggest the need to
ensure that mental health treatment is accessible to underserved
populations. With more mental health care providers
in Black communities, the trajectory of revictimization may be
changed.
Policastro, C., Teasdale, B., & Daigle L. E. (2015). The recurring
victimization of individuals with mental illness: A comparison of
trajectories for two racial groups. Journal of Quantitative
Criminology, 32, 675–693.
Viewing
Viewing the evening news, documentaries, or movies, combined with
curiosity, can lead to excellent research topics. In the previous chapter, there
was a discussion about media depictions of violence against college students.
These media portrayals focus primarily on sexual and relational violence, as
well as primarily on female victims. This may lead you to wonder about other
types of violence and other types of college student victims. Alternatively, it
would not be unusual to see the evening news on one channel report
that each year more Whites are killed by police than are Blacks. Yet
on another channel, you may see reports that Blacks are more likely to be
killed by police than are Whites. These seemingly contradictory statements
may lead to an interest in police killings, changes of police killings over time,
or the role of race in the criminal justice system. Each of these, and countless
others, would make an interesting research topic.
Lectures and events where others speak offer a goldmine of research topics.
Thinking back to the last lecture you attended, what topics come to mind?
© monkeybusinessimages/iStockphoto.com
Listening
Listening to presentations, speakers, and even general conversation can lead
to interesting research topics. You may be in a lecture where the speaker is
presenting information on a general topic such as policing. What is conveyed
in the lecture may prompt a specific interest in that topic. This is precisely
how the idea to explore police impersonation was developed by Rennison and
Dodge (2012). Rennison was teaching an Introduction to Criminal Justice
class and had invited a police officer in to talk about his job. The officer
shared traditional policing information but also informed students how they
could distinguish a police impersonator from a legitimate officer. After
hearing this, Rennison began wondering what the literature had to say about
police impersonation. Ultimately, this curiosity led to a publication by
Rennison and Dodge on this topic.
Casual conversations are also excellent sources of ideas. One of the best
things about annual professional conferences of criminologists and criminal
justice professionals is the research projects that originate from casual
conversations. Zaykowski, in a video interview conducted for this book,
related how she was discussing victimization research with a colleague who
studies capital punishment. This exchange led to a discussion about any
potential overlap that may exist between the two areas. What came of this
was the topic of how the changing role of victim rights (i.e., variation in
victim impact statements in court) may lead to disparity in capital punishment
sentencing across trials. A new research topic was born.
Working on Research Projects With Professors
There are often formal and informal opportunities to work with professors
engaged in research. Recall our introduction of our featured researchers in
Chapter 1. Cuevas and Dodge both included students to assist on the research
we are following in this text. Both of Dodge’s collaborators were students
(Starr-Gimeno and Williams), and Cuevas’s collaborator Cotignola-Pickens
was a student. This is a valuable opportunity, and one you should jump at
given the chance. At times there may be a posted request for student research
assistants. Or it may be that a student contacts a professor and asks whether
he or she can assist on a research project. Although these assistant positions
may or may not be paid, and they may or may not allow you to earn course
credits, the greater value of them is the research experience. In volunteering
to assist with research, the student gains a deeper understanding of the
process of research, and from that experience, he or she may discover several
research topics. This is part of the way that Zaykowski (2014) developed a
topic that focused on victim consciousness. As an undergraduate student, she
was intrigued by a call for research assistants for a study of youth perceptions
of the police. She had recently completed a service learning class where she
and others tutored inner-city youth about the police and their communities.
Eventually she reached out to a professor to discuss some research, and they
began studying how youth interpret potentially violent encounters in the inner
city.
Internet
The Internet offers endless opportunities to discover a research topic of
interest. If perusing the Internet does not lead to a research topic of interest,
you could simply search on the phrase “research paper topics in criminal
justice” or “research paper topics in criminology,” and thousands of
possibilities will be returned. If you opt for this approach, remember to
choose a topic you both care about and are curious about.
How to Identify the Purpose/Goal of Research
Once a research topic is identified, you, the researcher, must work to narrow
a very broad and general topic. The goal is to shape it
into a feasible research question. Topics such as recidivism, sentencing,
victimization, reentry, and policing are very broad, and to construct a
practical research question, narrowing is required. One step toward narrowing
a broad research topic is to identify the purpose of the research you
propose. The purpose of research can be thought of as the “goal” of the
research. It is common to find in research articles, “the purpose of this
research is to . . .” or “the goal of this research is to . . .” or “the aim of this
research is to . . .” This section identifies four major purposes or goals of
research. Do you wish to explore something about a topic? Describe
something about the topic? Explain something about the topic? Evaluate
something about a topic? Or are several of these aims of interest?
Purpose of the research: Overarching goal of the research.
Broadly, there are four categories of purposes of research:
exploratory, descriptive, explanatory, and evaluation.
Exploratory Research
Let’s consider some purposes and research questions found in the literature
from our featured researchers. In Melde and colleagues’ research (Melde et
al., 2009), they sought to explain something about gangs. In particular, they
wished to explain how gang membership is related to (a) self-reported
victimization, (b) perceptions of victimization risk, and (c) fear of
victimization. Melde and colleagues’ stated purposes and three guiding
research questions are therefore very clear:
1. What is the effect of gang membership on self-reported victimization?
2. What is the effect of gang membership on perceptions of victimization
risk?
3. What is the effect of gang membership on the fear of victimization?
These three research questions address an interesting and contradictory set of
findings in the literature. By investigating these questions, Melde and his
collaborators moved our understanding of gangs and fear forward.
Brunson and his colleague (Brunson & Weitzer, 2009) published descriptive
research focused on accounts of police relations with young White and Black
males in St. Louis, Missouri, from the young males’ perspectives. The
purpose of this research is clearly indicated by the authors’ statements that
[t]his article examines the accounts of young Black and White males
who reside in one of three disadvantaged St. Louis, Missouri,
neighborhoods—one predominantly Black, one predominantly White,
and the other racially mixed.
Although there is no explicitly stated research question, the implied question
is, “How do young Black and White males describe police relations across
disadvantaged neighborhoods?” Brunson has long studied the topic of police
relations with young males in disadvantaged communities. Given recent
events in the United States, the public is more aware of the importance of this
research topic and research questions.
Zaykowski’s (2014) research is based on the knowledge that victims’ access
to and use of services is poor. Her research seeks to understand why that is.
This explanatory work is clearly outlined in her purpose statement:
Brunson’s research frequently focuses on the experiences and views of young
African Americans regarding police relations. What are some interesting
research questions on this topic you can pose?
To date, almost two dozen agencies and departments of the federal
government, including the Department of Justice, have codified subpart A—
the Common Rule—of the HHS regulations to protect against human subject
abuses in research. The Department of Justice, a source of research funding
for criminal justice and criminology research, is one of those adopting the
Common Rule. Like many other federal agencies, however, the Department
of Justice did not adopt subparts B, C, or D, which provide additional
protections for three groups of vulnerable populations. Nonetheless, in
choosing a research topic and developing a research question, it is important
that a researcher take into account the use of vulnerable populations, whether
codified or not.
Vulnerable Populations
Subparts B, C, and D of the HHS regulations identify three vulnerable
populations that receive an additional layer of review when proposed to
participate in research. Vulnerable populations outlined in the HHS
regulations are pregnant women, human fetuses and neonates, prisoners, and
children. These populations are considered vulnerable in that they may be
more susceptible to coercion or undue influence.
Vulnerable populations: Those that receive an additional layer
of review when proposed to participate in research. According to
Department of Health and Human Services (HHS) regulations,
pregnant women, human fetuses and neonates, prisoners, and
children are vulnerable populations. These populations are
considered vulnerable in that they may be more susceptible to
coercion or undue influence.
Pregnant Women, Human Fetuses, and Neonates
In general, review of research using pregnant women, fetuses, and neonates is
designed to provide extra scrutiny to ensure no harm comes from the
research. For example, for research involving pregnant women and neonates
(i.e., newborns; a nonviable neonate is a newborn who, although living
after delivery, is not viable) (Health and Human Services, 2009), the oversight
outlined in the Common Rule is required; in addition, all ten requirements
must be met to allow for their participation in research. A few of the
requirements that must be met are “(a) [w]here scientifically appropriate,
preclinical studies, including studies on pregnant animals, and clinical
studies, including studies on nonpregnant women, have been conducted and
provide data for assessing potential risks to pregnant women and fetuses” and
“(c) [a]ny risk is the least possible for achieving the results of the research”
(Health and Human Services, 2009). Other requirements include the following:
“(h) No inducements, monetary or
otherwise, will be offered to terminate a pregnancy”; “(i) [i]ndividuals
engaged in the research will have no part in any decisions as to the timing,
method, or procedures used to terminate a pregnancy”; and “(j) [i]ndividuals
engaged in the research will have no part in determining the viability of a
neonate.” Research involving neonates is required to prevent the termination
of the heartbeat and respiration and must ensure that there is no added risk to
the neonate participating in the research (among other requirements).
Neonate: Newborn.
This image from the Holmesburg Prison skin experiments illustrates the
unethical nature of this work. Prisoners were unable to provide informed,
voluntary consent given the way the study was conducted. One product of
this research was Retin-A, which is widely used today to treat acne, among
other conditions. How might you have developed this same product without
taking advantage of impoverished and imprisoned men?
© AP Images/ASSOCIATED PRESS
Prisoners
Prisoners are specifically identified as a vulnerable population in HHS’s
regulations given their confinement and inability to express free choice. A
prisoner is defined by these regulations as follows (Health and Human
Services, 2009, §46.303 Definitions, para. 3):
[A]ny individual involuntarily confined or detained in a penal
institution. The term is intended to encompass individuals sentenced to
such an institution under a criminal or civil statute, individuals detained
in other facilities by virtue of statutes or commitment procedures which
provide alternatives to criminal prosecution or incarceration in a penal
institution, and individuals detained pending arraignment, trial, or
sentencing.
Ex-prisoners are not considered a vulnerable population under subpart B. The
additional scrutiny given research proposing to use prisoners results from a
concern that because of incarceration, their ability to make a truly voluntary
decision without coercion to participate as subjects in research is
compromised. Additional requirements stipulate that the IRB committee
include at least one prisoner or prisoner representative and that advantages
gained by participating in the research not be so great that they impair a
prisoner’s ability to weigh the risks against the advantages. In addition, the research
proposed must provide assurances that parole boards will not take into
account research participation when making decisions about parole. Prisoners
must be informed prior to consent that their participation will not have any
effect on the probability of parole.
To protect prisoners even more, contemporary research may involve only one
of four categories of research related to prisoners. The first category requires
that a study examine the possible causes, effects, and processes of
incarceration, as well as of criminal behavior. The second category of
research involves investigations of prisons as institutions. Third, research
focusing on conditions that particularly affect prisoners (e.g., diseases,
victimization, and drug/alcohol abuse) is possible. And finally, research on
policies or programs that would improve the health or well-being of prisoners
is authorized.
Unfortunately, the additional layers of review have proven necessary given
that prisoners have been used in unethical studies in the past. In Chapter 1,
some of the experiments conducted on prisoners in concentration camps were
described. In the United States, there have been a shameful number of
unethical experiments on prisoners as well. “All I saw before me were acres
of skin. It was like a farmer seeing a field for the first time.” These are the
words of Dr. Albert Kligman, who performed skin experiments on prisoners
—primarily poorly educated people of color—confined at the Holmesburg
Prison in Pennsylvania from 1951 to 1974. The prisoners referred to the
experimentation as perfume tests, but Kligman was conducting experiments
using toothpaste, shampoos, eye drops, hair dye, detergents, mind-altering
drugs, radioactive isotopes, dioxin (an exceedingly toxic compound), herpes,
staphylococcus, and athlete’s foot. Some prisoners had objects placed in
incisions in their skin over time to determine the effects of these foreign
bodies. This experimentation, lasting more than two decades, included
injections and frequent painful biopsies. Kligman paid his subjects from $10
to $300 a day. Most prisoners at that time earned about 15 cents a day.
Offering compensation of this magnitude made it questionable whether
prisoners could truly consent voluntarily. Given how little information the
prisoners were given about the experiments, and their circumstances as
prisoners, it is clear no voluntary consent was given, or even possible.
Children, by definition, can only assent to participate in research. Why can’t
they consent? What characteristics are taken into account to ascertain if
assent is necessary? Would you handle children in research differently?
How?
© baona/iStockphoto.com
Children
Subpart D of the HHS regulations identifies children as a vulnerable
population. Children are defined as persons younger than 18 years of age
(local laws may vary and may take precedence over the HHS definition of
children; for definitions related to this vulnerable population, see 45 CFR
46.402[a-e]; HHS, 2009). Reviews of research proposing to include children
must meet all requirements in the Common Rule as well as others. For
example, in general, research using children must offer no more than a
minimal risk. If the risk is greater, it must be demonstrated that the research
will lead to generalizable knowledge about a disorder or condition. Because
they are younger than 18, children are not able to consent to participation in
any research. Rather, children are frequently asked to assent, or agree, to
participate in research that will likely benefit them, provided that
the child can comprehend and understand what it means to be a participant in
research. Assent requirements can be waived as per 45 CFR 46.408(a) by
IRB committees if a consideration of the child’s age, maturity, and
psychological state indicate assent is not possible, and if the research offers
direct benefit of health and well-being to the child. In addition to considering
the need for a child’s assent, an IRB committee will determine whether
parental permission for participation is required. In most cases where
parental consent is needed, permission from one parent or guardian is
sufficient. In some situations, consent from both parents is required unless
one parent has died, is incompetent, is not available, or there is not a second
parent who has custody of the child.
Assent: Agreement by a child to participate in research that will
likely benefit him or her. To gain assent, the child must be able to
comprehend and understand what it means to be a participant in
research.
Permission: Frequently, but not always, required by at least one
parent when a child participates in research.
A repeated theme is that these additional requirements have proven necessary
given unethical research using children in the past. Consider the research of
Farfel and Chisolm, researchers at Johns Hopkins University, conducted with
the approval of the university’s IRB committee (Farfel & Chisolm, 1991). This
research, sponsored by the Department of Health and Human Services,
compared the effectiveness of traditional lead abatement procedures
with that of modified abatement procedures during the 1990s. Interior
lead paint was outlawed in 1978 by Congress, yet years later, many dwellings
still had this paint, including the older inner-city rental housing in Baltimore
where low-income tenants lived, and where this research took place. Lead
from deteriorating paint is easily ingested or inhaled as dust, which is especially
problematic around doors and windows. Exposure to even a small amount
results in permanent damage, especially to children less than six years of age.
Lead toxicity has been associated with speech delays, aggressiveness,
ADHD, learning disabilities, and criminal offending.
The five groups of housing units used in the study by Farfel and Chisolm
(1991) were identified by the Baltimore City Health Department, which had
records of dwellings in which children suffering from lead toxicity resided.
One group of dwellings had never had lead paint. A second group included
dwellings in which all lead paint hazards had been removed. The three other
groups of housing units were contaminated with lead paint but differed in
terms of the approach taken to remove the lead. The first group was treated
with traditional abatement techniques including the scraping and repainting
of peeling areas, as well as the addition of a doormat at the main entrance.
The second group of homes was treated in the traditional way in addition to
placing doormats at all entrances, installing easy-to-clean floor coverings,
and covering collapsing walls with plasterboard. The third group of
abatement dwellings received the treatments the other groups
received, plus replacement of all windows.
Researchers encouraged owners of these housing units to rent to parents with
children between the ages of six months and four years, since brain
development is very sensitive to lead exposure during this time. Parents
living in these housing units were offered a financial incentive to participate
in the study, which was portrayed as a study about how well different
methods of renovation protected children from lead poisoning. Efforts were
made to dissuade renters from leaving the units during the experiment.
During the two-year experiment, testing of the presence of lead in the
housing units and testing of blood lead levels in the children were taken
periodically. Some children who moved into these units were exposed to
dangerous levels of lead, and their blood tests showed enormous increase in
their blood level. In others, lead toxicity was reduced but remained at
unhealthy levels. Yet the experiment continued, and children continued being
exposed to lead.
When the details of this experiment finally came to light, lawsuits were filed,
and judgment was harsh. One judge compared this study to the Tuskegee
Syphilis Experiment, as well as to the Nazi medical experiments. Others
continue to maintain that there were no ethical lapses with this research and
that some children in the study experienced decreases in their lead toxicity
over the course of the study.
Potentially Vulnerable Populations
Although subparts B, C, and D of the HHS regulations specifically identify
pregnant women, prisoners, and children as vulnerable populations, it is
incumbent on the researcher to consider what other groups may be potentially
vulnerable. Vulnerable populations include any population that requires
additional consideration and augmented protections in research. Consider the
gathering of data for the National Crime Victimization Survey (NCVS). Like
many social surveys, its methodology considers potentially vulnerable
populations given the nature of data gathered. The NCVS identifies
vulnerable populations as pregnant women, prisoners, and children, as well as
“the elderly, minorities, economically or educationally disadvantaged
persons, and other ‘institutionalized’ groups such as students recruited as
research subjects by their teachers and employees recruited as research
subjects by their employer/supervisor” (Winstanley, 2009, p. 40). For these
groups, it is important to contemplate ethical considerations such as whether
consent is informed and autonomous, research participation is free of
coercion, and the language and presentation fit the needs of the particular
population. What is considered a vulnerable population may differ depending
on the topic and purpose of the research.
One potentially vulnerable population not identified in the NCVS
documentation or HHS regulations is veterans. Although veterans are not a
vulnerable population according to the HHS, the Veterans Administration
considers them potentially vulnerable. The reasons include veterans'
rigorous training to obey orders, their willingness to sacrifice, and the
potential for veterans to suffer from post-traumatic stress and other
disorders related to their status.
Potentially vulnerable populations: Any population that may be
more vulnerable to coercion or undue influence.
HHS regulations do not identify veterans as a vulnerable population, but the
Veterans Administration views them as potentially vulnerable. What makes
them potentially vulnerable? What other populations might be vulnerable in
research settings? Why?
U.S. Army photo by Sgt Scott Lamberson, 4IBCT PAO
Given this, would it be ethical to conduct an investigation of suicide ideation
among active military who are currently receiving help after an attempted
suicide? This research was proposed by a graduate student, who noted that
it could offer important information on this topic, including
the role that demographics (e.g., age, income, race, ethnicity, and branch of
service) play in suicide ideation and attempted suicides. Would this type
of research be wise? Ethical? Would asking these particular individuals to
share their experiences yield information whose value outweighed the potential
trauma it might invoke? In this real-world example, the professors overseeing
this student's work did not think it was an ethical line of inquiry for this
population, and the student was encouraged to find another topic.
Exempt, Expedited, or Full Panel Review at the IRB
There are three categories of review when submitting research to an IRB:
exempt, expedited review, and full board review. Research is exempt when it
conforms to one of the categories in section 46.101(b)
of 45 CFR 46 (HHS, 2009). This includes research using existing data,
documents, or records in which the data offer no means of identifying subjects.
Work with the NCVS, like that conducted by Zaykowski (2014) in her victim
services research, is an example of exempt research. Expedited review
does not mean that an IRB review will be conducted quickly.
Rather, it means that the review of the research protocol will be done by the
IRB chair and possibly other committee members. Expedited review can only
be used when the research involves no more than minimal risk, does not
include vulnerable populations, and does not use deception. Full board
review is required for all other research. In this instance, the research
protocol is reviewed and discussed by the full IRB committee, and important
elements, such as informed consent, are considered and voted on by the full
committee.
Exempt research: Research that offers no identifying information
about respondents, uses publicly available data, and involves no
more than minimal risk.
Expedited review: Review by the institutional review board
(IRB) committee chair (and perhaps one or two other members).
This is not necessarily a fast review but one that does not require
the full board.
Full board review: All research that is not exempt or expedited
and requires the review of the full institutional review board
(IRB).
Much of the research featured in this text went to full board review, and some
required additional scrutiny under Subpart D regulations.
Brunson, Melde, and Cuevas and their colleagues gathered data from youth
that required additional scrutiny. The researchers report generally good
experiences with IRB committees and approval. Many also note that others
are frequently negative about IRB but that in general, a negative interaction
involves a lack of preparation, organization, clarity, or thoroughness on the
part of the researcher. The challenges described generally focused on
individual IRB committees (which can vary from institution to institution). An
often-repeated challenge was the lack of consistency in IRB protocol
consideration, as well as a lack of institutional memory. A researcher can
submit nearly identical protocols at two points in time and have two very
different experiences. This may be a result of changing committee members
or of a lack of expertise on a committee (e.g., the lack of a researcher with
expertise in gathering and using qualitative data, a lack of familiarity with a
particular methodology, or a lack of familiarity with the area of interest). With
regard to working with juveniles, some researchers note that some
committees make it easier to access delinquent or at-risk juveniles but make
accessing so-called good kids very challenging.
Training in Protecting Human Subjects
If you are conducting research that will be reviewed by an IRB committee,
your institution will likely require you to take training on the topic of human
subjects research. Please note that you do not have to wait until you are
submitting proposed research to the IRB to get trained. Most universities
offer free training to faculty, staff, and students. In addition, anyone can
access free human subjects online training through the National Institutes of
Health (NIH), a federal collection of 27 institutes and centers engaged in
biomedical research. Although the NIH training focuses primarily on medical
and health research, this training is applicable to research in the social
sciences given the similarities of research in these areas. For example,
victimization, injury, and other topics of interest to criminologists and
criminal justice professionals are also of great interest to health scientists.
This overlap is further demonstrated as this training is based on the HHS
regulations described earlier (HHS, 2009). To access the English version of
this free training, go here: https://phrp.nihtraining.com/users/login.php. The
Spanish version can be accessed here:
https://pphi.nihtraining.com/users/login.php. To take this free course, a brief
registration is required. The entire course is estimated to take three hours to
complete and will cover some of the topics presented in this text. You can
save your work and return later, so it is not necessary to carve out a three-hour
block of time. Once the course is completed, a certificate is made available
online to demonstrate your successful training in protecting human research
subjects!
IRB Expert—Sharon Devine, JD, PhD
Sharon Devine, PhD, is a research assistant professor and chair of the
Exempt/Expedited Panel and chair of the Social and Behavioral Panel for the
Colorado Multiple Institutional Review Board (COMIRB) at the University
of Colorado Denver Anschutz Medical Campus. Her research focuses on
research ethics and evaluation of public health projects. She is currently
examining the evaluations of various public health projects, specifically,
training professionals engaged in clinical and behavioral interventions around
HIV and STDs (funded by the Centers for Disease Control and Prevention),
facilitating integration of family planning into STD clinics (funded by the
Office of Population Affairs), and reducing teen pregnancy (funded by the
Office of Adolescent Health).
Devine was a successful practicing attorney for years when she made the
decision to pursue a master’s degree in anthropology. During her master’s
course work, a professor identified her research skills and encouraged her to
continue for a PhD, noting she would want to conduct her own research.
Recognizing this opportunity, Devine closed her law practice and returned to
school full time to pursue a PhD. Her fondness for research grew when she
recognized that she could create new knowledge and make actionable
recommendations that would have a positive impact on society. In the course
of seeking her PhD, a professor was looking for a student volunteer to sit on
the university IRB, so she volunteered. Since then, she earned her PhD and
continued her work on the university IRB committee. Eventually, she was
asked to sit on the newly devised Social and Behavioral Panel. Previously,
protocols had to go through the medical school IRB, which created great
difficulties for social science researchers. Although she never thought, “Hey, I
want to be on an IRB committee,” the opportunity presented itself and has
taken her on a great path. Today she oversees all protocols coming through
the system. This offers her a unique perspective on research and ethics.
© MaskaRad/iStockphoto.com
Why Conduct a Literature Review?
How can reviewing the literature aid in refining a research question? First,
reading about existing studies focused on the same research question should
reveal gaps in our understanding that require additional attention. Perhaps the
research question has been considered broadly but not for women, juveniles,
Latinos, the poor, or single individuals. Second, understanding the details
about the methodology used in prior work may indicate an opportunity to
revisit the topic using improved or an alternative methodology. For example,
someone may have studied a topic using a small, local sample, which means
the findings may not accurately describe a larger population. It may be that
the same research question can be addressed using a large national survey
that has recently become available. Third, a review of the literature on a
particular topic may reveal that our existing understanding about the topic is
dated. This may indicate that a new examination of this old question can
increase our current understanding of the topic. A new study focused on the
same question may be possible using newer or improved data. Or the new
study may take place in a different context than the extant work (e.g.,
following a major policy change such as the legalization of marijuana in
several states). In addition, existing work may have used simplistic analytic
approaches because greater computing power was not available at the time of
the original research. It may be that reexamining this research question using
more powerful analytic software available today will lead to an enhanced
understanding of the issue. In short, if your topic has been studied, look for
gaps in our understanding or ways that the work can be improved.
Create a Summary Table
At this point, all primary sources have been summarized individually. Many
new researchers make the terrible mistake of stringing their summary
paragraphs together and calling it a literature review. It is not one. Aside
from being absolutely brutal to read, this style of review makes it difficult
for others to identify the major themes, gaps in the literature, agreement and
disagreement in the literature, and other important information. Identifying
this information requires a thematically focused synthesis of the material, not
an individual-source or author focus. It is worth repeating that stringing together
summaries is not a literature review. Don't do it!
6. Create a thematically focused table of summarized information
This example highlights the very important use of main point statements
leading off each section. This example also demonstrates the synthesized use
of evidence that is presented based on the topic of interest (versus author
focused). Although the text at the beginning of this section on cultural factors
is not shown here, the concluding sentence in this section ties back to the
earlier stated purpose of that section.
As noted, a well-constructed literature review uses MEAL in two ways. It
structures the full literature review, and it is used to structure each subsection.
When we return to an earlier example of a descriptively organized literature
review, we see that a reader would expect to see MEAL in the following
places:
Writing literature reviews takes time, focus, and patience. A part of the
process is to verify that you have avoided some common errors. This section
identifies several pitfalls that are found in literature reviews. These include
not allowing enough time, constructing the review around authors and not
themes, and a lack of organization and writing strategies.
Not Allowing Enough Time
A common pitfall encountered is not allowing enough time to write your
literature review. You can see from this chapter that no step in writing a
literature review is difficult. What is apparent, though, is that each step and
the whole process take time. Writing a literature review cannot be done in a
night or even two. You must set aside a good amount of time to search for
and through potential primary sources. Time is needed to summarize each
article. Additional time and effort are required to create a summary table and
to consider the material so that main points can be identified. Finally, writing
the review takes time. Writing literature reviews is not a task that can be done
well when rushed. To avoid this pitfall, allow adequate time to do a thorough
and excellent job.
Failing to Focus on Themes
Although we warned against it earlier, and although professors warn students
in classes, it is exceedingly common for students to construct a literature
review focused on authors instead of on themes. A literature review that
describes the work of one author after another, after another, is not a
literature review. When each paragraph focuses on one piece of research and
its author instead of on the substance of the topic, not only is it inhumane to
ask someone to read it, but it is also extremely difficult to identify the overall
state of the literature. A literature review that fails to synthesize the materials
and present main points is not a literature review. To avoid this pitfall, ensure
the review is thematically based, not author or individual research article
based.
Lack of Organization and Structure
A third pitfall of writing literature reviews is to fail to organize and structure
the material in a meaningful way. This chapter presented two useful formats
for the review: descriptive and chronological. A well-constructed literature
review will use one of these approaches. This chapter also presented
information on the importance of MEAL as a writing strategy. MEAL is
useful for the overall review, and it is useful for subsections in the review. If
a review fails to identify main points, offer evidence, analyze the material,
and link it to other sections of the review, the review is incomplete and
poorly executed.
Quoting Problems
Common pitfalls in writing a literature review involve the use of quotes. As
noted in this chapter, a literature review should focus on the ideas in the
literature, not on what any specific author has written. For that reason, the
excessive use of direct quotations should be minimized. Zaykowski shares,
“Avoid using direct quotations–unless there is one or two that really are
important in their original form. Too many quotations make it difficult to
read and also isn’t convincing to the reader that you have an original
argument.” Taken to an extreme, some writers over-quote by quoting
multiple paragraphs and multiple pages of text. Zaykowski notes, “It is not
okay to do this. Even though technically the writer is giving the author credit
(assuming that the author is recognized), it is not enough of the writer’s own
thoughts. The writer didn’t actually write anything, put into their own words,
or in many cases interpret the quote’s significance.” When writing, the writer
needs to include his or her own thoughts, and offering pages of quoted
material fails in that regard.
Sean has seen some common errors when working with students writing
literature reviews. A major error is that students often do not know what the
purpose of their writing is or what the purpose of a literature review is. Not
surprisingly, Sean notes that without clarity in purpose, a writer will never
have clarity in writing. A second common error is that students frequently
cannot identify a main point at the level of the whole paper or at the level of a
paragraph. It is not enough to offer a series of uncoordinated details in a
paper; the writer must identify the main point. Sean notes that this skill of
summarizing complex information is one we engage in daily in other contexts.
Think for example of someone asking, “How was your day?” Most people
respond with a summary of the main points of their day: “Had a wonderful
meeting with a new client and celebrated my anniversary out at a nice
restaurant.” This summary offers the main points of the day versus a litany of
each activity, no matter how small, of the day.
To become a better writer, Sean recommends reading other literature reviews
and dissecting them. Can the reader find the main points? The evidence?
Linkages? Furthermore, Sean encourages new writers to practice
summarizing complex material in a few sentences (i.e., practicing developing
main points). This practice can be done by looking at other literature reviews
or any type of writing such as movie reviews and newspaper stories. Identify
how MEAL approaches work and how a failure to offer a framework does
not.
Chapter Wrap-Up
By building on the material presented in the first two chapters, this chapter
presents the steps and skills needed to write a literature review.
Understanding these skills removes or minimizes the anxiety of writing
and constructing a literature review. As Cuevas notes, writing a good
literature review gets easier over time as the skills become more engrained and
the literature becomes more familiar. The steps covered in this chapter include
identifying what appropriate sources for a review are, where to find them, and
how to avoid predatory journal pieces. Steps for summarizing the original
sources were offered, culminating in a table (Table 3.4) that contains all of the
information needed to write the review. In addition, two organizational
approaches—descriptive and chronological—were presented. A very useful
writing strategy—MEAL—was introduced and described to assist writers
with writing the review. The chapter also provides information on common
pitfalls to avoid when writing a literature review. In addition, the ethics
associated with writing a literature review were highlighted, including
plagiarism in its many forms and misrepresenting existing research. Finally,
the chapter concludes with an interview with Sean McCandless, an expert
literature review writer and academic resources coach. In the interview, Sean
discussed his experience as a writing coach, through which he has learned that
everyone has the potential to be a great writer (because of rewriting) when
exposed to the appropriate skills and environment. Although we have
presented a fair amount of text from our case studies in regard to literature
reviews, Table 3.6 offers each study’s abstract. Notice the information that
each abstract offers and how useful it is in identifying whether the article
would be useful in a literature review you are writing.
In the next chapter, we shift gears and begin discussing the information
necessary to design a study. This includes a discussion on concepts,
definitions, measurements, and variables. The chapter discusses measurement
as well as the advantages and disadvantages of different approaches to
measurement. Like the previous chapters, a section is devoted to common
pitfalls in the hopes that they can be avoided. And, of course, ethical
considerations during the nuts-and-bolts planning of research are emphasized.
Applied Assignments
1. Homework Applied Assignment:
Conducting a Literature Search
Using the same two peer-reviewed journal articles you used for
your homework in Chapter 2, conduct a search for literature
related to those articles. Be sure to use Boolean operators and
filters. Present your findings in a series of tables like those (e.g., Table 2.2)
shown in this chapter. Given that searching is an iterative process, be
sure to show all tables and results for each iteration. Be prepared
to justify why you stopped your search when you did. Be prepared
to discuss your findings in class.
2. Group Work in Class Applied
Assignment: Summarizing Research
Literature
As a group, select two articles from the following case studies:
Dodge, Cuevas, Brunson, and Santos. Once your group has
selected the article, summarize each article following the
approach described in step 5 (see p. 78). Remember to use the
bulleted tips provided in step 5, and write in complete sentences.
Be able to speak to why this approach would be useful in
constructing a literature review. Next, create a thematically based
table using the two selected articles based on the thematically
based table presented in step 6 in the chapter. Be prepared to
discuss and share your summaries in class.
3. Internet Applied Assignment: The Results
of Plagiarism
Search the Internet to find three examples of people who lost their
job, or were denied a high-profile position, because they
plagiarized. Provide a summary of who they are, the jobs they had
(or were seeking), and any reason they gave for the plagiarism.
Provide details on the type of plagiarism they engaged in and why
it was wrong. Describe the outcome of their unethical act and how
you think it may affect them in the future. Please provide a paper
addressing these topics to your professor/instructor.
Key Words and Concepts
Abstract 74
Boolean operators 71
Chronologically organized literature review 84
Cloning 90
Conclusions 77
Descriptive literature review 83
Discussion 77
Empirical 65
Empirical peer-reviewed journal articles 65
Filters 71
Findings 77
Impact factors 68
Introduction section of a journal article 76
Key words 76
Literature review journal articles 66
MEAL 85
Method 77
Mosaic plagiarism 90
Original sources 64
Peer-reviewed journal articles 64
Phrase 71
Plagiarism 89
Predatory journals 67
Predatory publishers 67
Primary sources 64
References 77
Saturation 73
Term 71
Thematically constructed literature review 81
Theoretical journal articles 66
Key Points
A literature review presents an understanding of the overall state of the
literature by surveying, summarizing, and synthesizing existing
literature. Reviews identify major themes, demonstrate where there is
agreement and disagreement, identify limitations of prior research, and
expose gaps in our understanding about a topic. A well-constructed
literature review places the proposed research in the context of extant
literature, and it identifies how the proposed research will create and
enhance existing knowledge.
Writing a literature review can be intimidating, but with the appropriate
skills, and a clear set of steps toward that end, anyone can write an
excellent literature review.
Sources used to construct a literature review, known as original sources
or as primary sources, primarily come in the form of peer-reviewed
journal articles, including empirical pieces, theoretical pieces, and
review pieces. In addition, local and federal governmental reports,
conference papers, and information from conference presentations are
useful sources.
Recent years have seen a proliferation of predatory publishers and
predatory journals that are inappropriate sources for writing an academic
literature review. Predatory publishers and journals are illegitimate
entities that extort fees from unsuspecting authors.
Searching for original or primary sources is easily accomplished using
search tools, terms, phrases, Boolean operators, and filters. Searching is
an iterative process that should begin with the narrowest search. In
addition, searches commonly should be restricted to journal articles
published in the last five to seven years.
Empirical journal articles are published using predictable sections,
making it easy to find important information. Those sections include an
abstract, introduction, literature review, the method, findings,
discussion, and conclusion. All academic journal articles include
references with full citation information.
Summarizing each primary source and then disaggregating statements
from the summaries into a summary table where main themes are
identified and stated are important steps toward creating a thematically
based literature review.
Prior to writing the first rough draft, it is important that the writer
identify which organizational approach will be taken (descriptive or
chronological) and be familiar with the writing strategy identified as
MEAL. By using MEAL, one offers a main point, evidence, analysis,
and linkage across the literature review, as well as within each
subsection.
Two primary ethical concerns while writing a literature review are
plagiarism and misrepresentation/unfair criticism of others’ work.
Plagiarism comes in many forms, but all have in common that a writer
has passed off another person’s work as his or her own without crediting
the originator of that material. In a literature review, the work is being
critiqued, not the researcher. Should a person level criticism about
extant research, it is his or her duty to ensure it is fair criticism given the
context and available tools at the time the work was conducted.
Review Questions
1. What are the purposes of a literature review?
2. What are the nine basic steps in writing a literature review? Why
are they important?
3. What are appropriate and inappropriate sources for use in writing
a literature review?
4. What are predatory publishers and journals, and how can you
know you are not using one?
5. What are Boolean operators and filters, and why are they useful?
6. What questions should you address when summarizing an original
source?
7. What is a main point, and how is one developed? What role does a
main point play in the construction of a literature review?
8. What is the anatomy of an empirical research journal article, and
what information does each section offer?
9. What types of organizational approaches are useful in writing a
literature review? What is MEAL, and why is it important?
10. What are the two types of plagiarism discussed in this chapter,
and why are they unethical?
In contrast, Dodge’s work needed to identify what the women who worked as
prostitution decoys identified as important concepts (Dodge et al., 2005).
Similarly, Brunson started his exploratory research on youths’ views and
experiences with police with little notion as to what these youths thought
were important concepts (Brunson & Weitzer, 2009). Beyond knowing that
they wanted to compare experiences and perceptions of the youths in three
neighborhoods, the researchers allowed many important concepts to be
identified by the respondents during the interviews.
Figure 4.3 Several Case Study Concepts
There is an endless number of concepts in criminal justice and criminology.
Consider the examples of criminology and criminal justice concepts shown in
Table 4.3. What characteristics do all concepts have in common? First, all
concepts are abstract. Second, people in society have a general agreement
regarding what concepts refer to. This general agreement about meaning
allows individuals to communicate with one another about ideas and
thoughts. Third, because they are abstract, concepts are not directly
measurable.
Having generally agreed-upon ideas about what abstract concepts refer to may
be adequate for getting along successfully in everyday life. It is not adequate when
conducting research. Why? First, because concepts are not directly
measurable, it is impossible to conduct research using concepts. Researchers
who want to measure concepts must use something measurable to
represent the concepts. Second, although individuals may have a general
agreement about what is meant by a concept, people do not share specific,
detailed agreement about what is meant by each concept. You and those you
know do not share the same specific notion about what constitutes an injury,
intelligence, violence, an elder, kindness, happiness, a victim, or recidivism.
Therefore, when conducting research using quantitative data, it is critical that
the researcher clearly define and identify the specific measurable elements of
each concept prior to gathering the data. When conducting research using
qualitative data, it is critical to allow subjects to define the concepts they
identify as important. The researchers’ ideas about definitions may not match
or even be close to how subjects view them. A great example of learning
about definitions while gathering data (versus clarifying definitions prior to
gathering data) is found in Dodge’s research (Dodge et al., 2005). During
Dodge’s interviews, it was revealed how decoy officers defined a successful
decoy: one who went barefoot, had messy or dirty hair, and adopted a
particular way of standing and walking. Other
concepts and definitions were also revealed in Dodge’s data gathering
including sexual concepts such as “half-an-half” and “around the world” (see
Dodge et al., 2005, for more information). For more understanding of
research using quantitative and qualitative data, see Figure 4.4.
What Is Conceptualization?
As noted, a researcher cannot directly measure concepts because they are
abstract. Yet, researchers are interested in understanding, exploring,
examining, or investigating concepts and how they are associated or related
to one another. To do so, definitions of the concepts must be identified. For
the researcher using qualitative data, this means gathering data that provide
insight into their subjects’ definitions of important concepts. For example, in
Brunson’s research, the definition of “crooked cops” was revealed (and
supported in other literature) and defined:
Most Barksdale and Hazelcrest youth said that their communities were
plagued by some “crooked cops”—a term frequently used (see also Carr,
Napolitano, and Keating 2007). Martez observed, “They’ll sell you
crack, weed, X [ecstasy], and then will turn around and have you locked
up.” And David explained, “If somebody got pulled over and they had
drugs and money on them, a lot of times the cops will search ’em and
take the drugs and money and then lock ’em up. . . . They don’t turn it in
for evidence; they keep it for they self.” “The undercover ones is
crooked,” Kyle observed. “The undercover ones get away with it
because we can call the station up and report the [officers] in uniform,
but we can’t report the undercover ones” because their identities are
often masked. (Brunson & Weitzer, 2009, p. 873)
Consider the concept of “college student.” How might you conceptualize this
concept? In general, people have broad agreement as to who a college student
is. Nevertheless, as should now be clear, there is no universally specific and
shared definition about who a college student is. For that reason, as a
researcher, you must conceptualize or define this concept to conduct research
on it.
How would you conceptualize “college student?” Here are a few examples:
Someone living on a university campus at a four-year university.
People currently enrolled in any number of college courses whether
online or in person.
Individuals, 18 to 24 years of age, who have attended a university either
online or in person, public or private, within the last six months.
People who have graduated from high school and are obtaining
additional education at a vocational school, community college, junior
college, or four-year university.
As these few examples indicate, there are many ways to conceptualize
“college student.” Each of these conceptualizations has ramifications for the
population from which data will be gathered. These four examples demonstrate that a researcher
must think about several elements that make up a college student. First, you
must consider what is meant by a college or university. Does it refer to only a
traditional brick-and-mortar four-year university? Or does it include
community colleges and vocational schools? Must it be a public university?
Or can it be private? Do religious universities count? Second, you must
consider what is meant by student. Is there an age limit of who is a college
student? Must someone be currently enrolled in courses, or can he or she
have been enrolled in the last six months (or a year)? Must a person be
enrolled at least part time, or must he or she be enrolled full time only? What
if the student takes only online courses—is he or she a college student? Can a
senior citizen taking an online course in ornithology from a university in a
state different from the one in which he or she resides be considered a college student? Is where a
person lives related to one’s status as a college student? Must a person live on
a college campus in a dormitory to count as a student? What if someone lives
at home with his or her parents? Which conceptual definition would you use
to accurately define the concept of college student? Given this discussion, are
you wondering how “college student” has been conceptualized in the
research on college student sexual misconduct that is so prominent in the
news these days? You should be because it might surprise you.
Much research focused on college students defines them as persons age 18 to
24. Is this the best approach? Why or why not?
© Sportstock/iStockphoto.com
Keep in mind that there may be frequently used conceptual definitions of
concepts identified in the literature. If appropriate, it is acceptable to build off
this existing work and use that conceptualization. Nevertheless, the
researcher must be careful to ensure that a selected existing conceptualization
is appropriate for the research at hand. Just because a conceptual definition
has been used does not make it the best conceptualization for every research
application. Figure 4.5 offers an illustration of the conceptualization step.
Must a person live on campus to be considered a college student? Do
online students count? What of those taking a class at a nearby community
college?
© monkeybusinessimages/iStockphoto.com
Providing conceptual definitions is challenging. Yet, conceptualization is an
important process, and a researcher must take the time to do it well. In the
end, your research is only as good as its weakest element, and using poor
conceptual definitions weakens the whole product. It also makes subsequent
steps in the research endeavor more challenging. Having engaged in
conceptualization, a researcher is one step closer to the next step:
operationalization.
What Is Operationalization?
For most researchers planning to use quantitative data, the next step in the process is to
specify exactly how you will measure the concept based on the conceptual
definition. Identifying how you will measure each concept, being guided by
the conceptual definition, is a process known as operationalization.
Whereas conceptualization focuses on developing specific definitions,
operationalization focuses primarily on how a researcher will indirectly
measure the concept. Although operationalization is engaged in primarily by
researchers who will use quantitative data, researchers using qualitative data
are also interested in operationalization, in terms of how their respondents view the
appropriate measurement of concepts. For instance, a respondent may note
that he or she identifies a bad police officer by the way the officer drives or the
language the officer uses. How respondents “measure” underlying concepts is of interest
to researchers gathering qualitative data.
Operationalization: Process in which a researcher identifies how
each concept will be measured based on the conceptual definition.
Figure 4.5 Conceptualization
For researchers who will be using quantitative data, operationalization details
the precise way in which they will measure the underlying concept. Will they
use a survey question? Will they weigh something? Will they count
something? Each of these is a way in which a researcher can operationalize a
concept (or part of it). There are many ways to operationalize concepts, and
the researcher must identify exactly how he or she will accomplish it.
Cuevas offers an excellent example regarding concepts and
operationalization: someone says they are going to “travel to work.” The
concept of “travel to work” has shared meaning among people. Yet Cuevas
notes that “traveling to work” can be accomplished in many ways, by taking
different modes of transportation, by using different pathways, and so on.
Operationalization refers to the specific way that person will “travel to work”
by identifying the streets and roads she takes, every turn she makes, the
type of vehicle used, the speed traveled, and so on. Santos offers an excellent
example in terms of the concept of security. One way a researcher may
conceptualize security is by lighting in an area. Operationalizing it means
addressing the question, “What does this conceptualization mean in terms of
gathering data?” For example, would you gather data on how bright lights
are? Or would you measure where the lights are located? Perhaps you would
measure how many lights there are? Or maybe simply whether there are any
lights? These practical questions must be clarified during operationalization,
so the researcher knows exactly what type of data to gather.
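Santos’s lighting example can be sketched in code. Each candidate operationalization is simply a different function applied to the same raw observations about lights in an area. This is an illustrative sketch only; the field names (`lumens`, `working`) are hypothetical and not drawn from any real dataset.

```python
# Hypothetical records describing lights observed in an area.
# Field names are illustrative, not from any real study.
lights = [
    {"lumens": 800, "working": True},
    {"lumens": 1200, "working": True},
    {"lumens": 400, "working": False},
]

# Four different operationalizations of the same "security via lighting" concept:
count_of_lights = len(lights)                                     # how many lights are there?
any_lights = len(lights) > 0                                      # are there any lights at all?
working_lights = sum(1 for l in lights if l["working"])           # how many lights work?
mean_brightness = sum(l["lumens"] for l in lights) / len(lights)  # how bright, on average?

print(count_of_lights, any_lights, working_lights, mean_brightness)  # 3 True 2 800.0
```

Each function answers a different practical question from the text (how many, whether any, how bright), which is exactly the choice operationalization forces the researcher to make before gathering data.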
For Melde, the concepts in his research questions indicate what he will gather
data about. In contrast, for Brunson, identifying and comparing
concepts of interest across the three neighborhoods studied is itself the goal of the
research. Research seeks to identify or understand the importance of
concepts, including relationships among concepts. Yet, the interest goes
much deeper than that. Researchers are fundamentally interested in the
variation in these concepts and in how the variation in one concept
influences or is associated with the variation in another concept. For
example, researchers are not interested in victimization per se; researchers are
interested in variation in victimization experiences among those in the public.
Why are some people victimized and others are not? Researchers are not
interested in perceptions of the police per se; they are interested in variation
in perceptions of police among Black and White youth.
Why place emphasis on variation? Variation is a fundamental element of
research. The presence of variation can be used to establish a relationship
among concepts. If concepts vary together, a researcher has evidence that the
concepts may be related. If the concepts do not vary together, a researcher
does not have evidence of a relationship among them. Because all people
reside on planet Earth (to the best of our knowledge), it would not make sense to
propose research asking whether the planet of one’s residence is related to
attitudes about police. Why? Because everyone lives on Earth, planet of
residence is a constant and therefore cannot possibly account for any
variation in attitudes toward police. If everyone has the same level of
education, it would not make sense to study whether educational level is
related to income. Why? Because educational level is a constant, meaning it
cannot influence the variation in people’s income. Given this newly identified
emphasis on variation in and among concepts, it is useful to consider some
research questions and what they really are asking.
Research question: Zaykowski (2014) asks: How does reporting to the
police, the victim’s demographic characteristics, the victim’s injury,
offender’s use of a weapon, the victim’s relationship to the offender, and
the victim’s mental and physical distress influence victim service use?
What is really being asked: How does variation in whether victims
report to the police, variation in victims’ demographic characteristics,
variation in whether victims were injured, variation in whether offenders
used a weapon, variation in victim–offender relationships, and variation
in victims’ mental and physical distress influence variation in seeking
victim services?
Research question: Cuevas and colleagues (Sabina et al., 2016) ask: (a)
What are the rates of dating violence by victim gender? (b) What is the
risk of experiencing dating violence over time? (c) Is dating violence
victimization associated with other forms of victimization? (d) What
cultural factors (e.g., immigrant status, familial support) are associated
with dating violence over time?
What is really being asked: (a) How do the rates of dating violence
vary by whether a victim is male or female? (b) Does the risk of
experiencing dating violence vary over time? (c) Is variation in dating
violence victimization associated with variation in other forms of
victimization? (d) Is variation in people’s cultural factors (e.g.,
immigrant status, familial support) associated with variation in the
amount of dating violence experienced over time?
Research question: Melde and collaborators (Melde et al., 2009) ask:
(a) What is the effect of gang membership on self-reported
victimization? (b) What is the effect of gang membership on perceptions
of victimization risk? (c) What is the effect of gang membership on the
fear of victimization?
What is really being asked: (a) How does variation in whether one is a
gang member or not affect variation in amount of self-reported
victimization? (b) How does variation in whether one is a gang member
or not affect variation in perceptions of victimization risk reported? (c)
How does variation in whether one is a gang member or not influence
variation in the level of fear of victimization expressed?
Research question: Brunson and his collaborator (Brunson & Weitzer,
2009) ask: What are differences in views of police relations of Black
and White youth based on where they reside: a Black disadvantaged
neighborhood, a White disadvantaged neighborhood, or a racially mixed
disadvantaged neighborhood?
What is really being asked: How do the views of police relations
among Black and White youths vary based on variation in neighborhood
where they reside (a Black disadvantaged neighborhood, a White
disadvantaged neighborhood, or a racially mixed disadvantaged
neighborhood)?
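The emphasis on variation can also be made concrete with a small calculation: a constant has zero variance, so it cannot covary with, and therefore cannot help explain, any outcome. A minimal sketch (the numbers are made up for illustration):

```python
def variance(xs):
    """Population variance: the average squared deviation from the mean."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def covariance(xs, ys):
    """Population covariance between two equally long lists of values."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

planet = [1, 1, 1, 1]      # everyone lives on Earth: a constant
attitudes = [2, 5, 3, 4]   # attitudes toward police vary

print(variance(planet))               # 0.0 -- no variation at all
print(covariance(planet, attitudes))  # 0.0 -- a constant cannot covary with anything
```

This is the arithmetic behind the text’s point that planet of residence, being a constant, cannot account for any variation in attitudes toward police.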
Note that the arrows in Figure 4.7 point to the outcome or the dependent
variable. You might be wondering: What are these other variables, such as
victim distress or demographics, if they are not dependent variables? What
role do they play? Variables in this role are independent variables.
Independent Variables
Independent variables are variables thought to influence, be associated
with, or cause the variation in an outcome or a dependent variable. The
purpose of research is not to understand what causes variation in independent
variables (also referred to as IVs). Rather, the purpose of research is to
understand whether and how an independent variable is related to or causes
the variation in the dependent variable.
Independent variable: Type of variable that is believed or
hypothesized to influence, be associated with, or cause the
variation in an outcome or dependent variable.
What are the independent variables in the two examples? In the first example,
focused on education level and recidivism, the independent variable is
education level (shown as a square in Figure 4.8). This illustration indicates
that the research seeks to understand or identify how variation in educational
level (IV) influences, or is associated with, variation in recidivism (DV).
Figure 4.8 Effect Education Level Has on Recidivism
As an example, how might you measure the concept of a gang member? The
conceptual definition of gang member for the purposes of this example is a
person who is actively involved in a street gang, and is recognized by
members of the gang as a member. How might you operationalize this? One
option is to ask individuals whether they are gang members. This is simple
and elegant, and is the approach used by Melde and colleagues (2009).
Nevertheless, research shows that this approach may lead to false positives as
many wannabe gang members claim membership when they actually are not
gang members.
A second option is to ask known gang members who other gang members
are. This is also simple and elegant, but it can be problematic if that gang
member does not know all other members. A third approach could be to ask
law enforcement who the known gang members are. This too is imperfect as
law enforcement may only know of gang members who have tangled with the
law. And a fourth way might be to identify gang members via observation of
tattoos or articles of clothing depicting their membership. Again, this is not a
perfect approach, but it should identify those displaying such signs. This
example demonstrates that any one of these approaches measures some, but
misses other, elements of the underlying concept of gang membership.
A fifth approach to measuring gang membership is to use multiple measures.
Why not plan to use all four of these approaches to measure the underlying
concept of gang membership? You could operationalize gang membership
such that an individual must be identified as a gang member by at least two
of the four approaches to be considered a gang member. This offers an
improved measurement of the concept. Figure 4.13 illustrates how
operationalization may lead to the plan to use multiple measures to best
measure a single concept.
Figure 4.13 Operationalization Using Multiple Measures
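The “at least two of the four approaches” rule described above can be written out directly. This is a sketch, not code from the study; the parameter names are hypothetical stand-ins for the four measurement approaches discussed in the text.

```python
def is_gang_member(self_report, peer_nomination, police_list, visible_signs):
    """Classify someone as a gang member only if at least two of the
    four separate measures identify him or her as one."""
    indicators = [self_report, peer_nomination, police_list, visible_signs]
    return sum(indicators) >= 2

# A youth who claims membership but is confirmed by no other measure
# (a possible "wannabe") is not classified as a gang member:
print(is_gang_member(True, False, False, False))  # False
# Someone identified by both self-report and police records is:
print(is_gang_member(True, False, True, False))   # True
```

The threshold of two is the operationalization decision: raising it to three would make the measure stricter, and lowering it to one would reduce it to the single-measure approaches already critiqued.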
What Are Data?
Throughout the chapter, we have relied on a general understanding about the
meaning of data. Let’s now offer a more specific definition of data. Data are
the individual pieces of information—numeric or non-numeric—gathered and
later analyzed to answer a research question. The word data is plural; datum
is singular; therefore, a researcher states that “data are” or “data were” rather
than “data is” or “data was.” As previously noted, data should vary to be
useful in research. Consider the concept “Trust in Law Enforcement.” To
measure this, 16 people were asked whether they had contacted the police
after a property or a violent crime in their lifetime. The data gathered are
recorded in Table 4.1. The variable name used to reflect that measure is
“Reporting to Police.”
Notice how these data (yes/no responses) vary. Among the 16 individuals
interviewed, 10 stated they had reported the victimization to the police.
Notice also that in this case, the variable and the concept it represents do not
share the same name. The concept is “trust in law enforcement,” and the
variable name is “reporting to police.” In other cases, concepts and variables
share the same name.
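Table 4.1 is not reproduced here, but the kind of yes/no data it records can be summarized in a few lines. The point is that the responses vary, which is what makes the variable usable in analysis. (The particular ordering of responses below is invented; the text reports only that 10 of the 16 respondents said yes.)

```python
# Illustrative data for the variable "Reporting to Police":
# 10 of 16 respondents said they had reported a victimization.
responses = ["yes"] * 10 + ["no"] * 6

yes_count = responses.count("yes")
no_count = responses.count("no")

print(yes_count, no_count)      # 10 6
print(len(set(responses)) > 1)  # True: the responses vary, so analysis is possible
```

Had all 16 respondents answered yes, the final line would print `False`: the measure would be a constant and, as discussed earlier, useless for studying variation in trust in law enforcement.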
Attributes
A part of operationalization when one is gathering quantitative data is to
identify the categories of data that will be gathered. The chapter has made
clear that data must vary. The nature of the variation is determined by the
attributes of the measures. Attributes are the categories of the data collected.
These are also known as response categories. Examples of attributes make
this concept clearer.
Attributes: Categories or grouping of the data collected for a
particular measure.
Response categories: List of options available from which a
respondent selects an answer.
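As the chapter wrap-up emphasizes, response categories should be mutually exclusive and exhaustive. For numeric attributes such as age, both properties can be checked mechanically. The brackets below are hypothetical, chosen only to illustrate the check:

```python
# Hypothetical age brackets expressed as (low, high) ranges, inclusive.
brackets = [(0, 17), (18, 24), (25, 34), (35, 120)]

def classify(age, brackets):
    """Return every bracket an age falls into."""
    return [b for b in brackets if b[0] <= age <= b[1]]

# Mutually exclusive: no age falls into more than one bracket.
# Exhaustive: every plausible age falls into at least one bracket.
for age in range(0, 121):
    matches = classify(age, brackets)
    assert len(matches) == 1, f"age {age} matched {len(matches)} brackets"

print("categories are mutually exclusive and exhaustive")
```

If the brackets overlapped, say (18, 25) and (25, 34), some respondents would fit two categories; if a bracket were missing, some would fit none. Either flaw would surface immediately in the loop above.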
Imagine that a researcher has decided to conduct research on parolees. As a
part of that research, the researcher is interested in the age of the parolee.
During the operationalization stage of the research, the researcher identifies a
variable named “age” and is trying to figure out what attributes or response
categories he will use. One option is to use two attributes for the variable age:
“juvenile” and “adult.” Let’s assume the researcher is using a survey
administered to a group of parolees. On that survey, the measure used to
gather data on age may look like this:
In this example, there are two response categories: juvenile and adult. A
second option is to collect the data for parolee’s age in this way:
In this second example, there are ten attributes or response categories from
which the parolee can choose. A third option is to gather data on parolee age in
this way:
A second critical skill a researcher must have is strong writing skills. She
notes that too many college students with very poor writing skills apply for
work and are turned away for that reason alone. Brenidy commonly sees
grammar and editing errors in applications, and these immediately make
those in a position to hire suspicious of the candidate’s entire body of work
and skills in general. Like verbal skills, written skills must reflect an
understanding of the audience the work is intended for. A researcher must
know whether the audience will read one page, the first sentence of each
paragraph, or require the finer details in a longer piece.
A third critical skill that applicants must have is the ability to engage in group
projects. Brenidy notes that everything done in her office is a group project.
Every member must pull his or her own weight, know his or her role, respect
deadlines, and work with a diverse group of individuals who may use
different styles. Those who cannot do this are not useful employees and will
likely not be considered for hire.
Brenidy concludes by noting that understanding research, and translating
research into programs and policies, is challenging. She recognizes that these
are unique skills, but they are extremely valuable. In fact, to be
hired in her office, these skills are required. During interviews, candidates are
given data and asked what recommendations can be made based on the data.
This exercise seeks to ascertain whether candidates understand basic research
methods, including conceptualization, operationalization,
and other topics described in this chapter.
Chapter Wrap-Up
This chapter presents the stages and skills needed to move from abstract
concepts to measurable variables. These steps are crucial, and great attention
must be given to them to conduct strong and valuable research. Failure to
address any of these foundational elements will result in weak research of
questionable value. Variables are used to conduct analysis and answer
research questions. The stages covered in this chapter include identifying
concepts and how they vary. Next, the chapter presented what
conceptualization is, why it is performed, and why it is important. By using the
conceptual definitions resulting from conceptualization, operationalization
was described and several examples were provided. These examples included
identifying precisely how a researcher can indirectly measure—using one or
multiple measures—each concept. Additionally, the chapter covered
variables, including types of variables such as independent, dependent, and
control variables. Attributes/response categories were also introduced, and
the importance of mutual exclusiveness and exhaustiveness in relation to
attributes was described. Relatedly, levels of measurement including
nominal, ordinal, interval, and ratio were described as well as why these are
important. In addition, validity and reliability, including types of each, were
introduced and described. Each stage, and each element found in each stage,
requires dedicated attention to maximize the value of the research conducted.
How did our case studies deal with the road from concepts to variables?
Table 4.4 presents those items for each featured article. As shown, each piece
of research dealt with different concepts, variables, and even the way they
were handled during research. Finally, the chapter concludes with an
interview with Brenidy Rice, a problem-solving court coordinator, who
identifies the importance of these steps in her daily work. In the courts,
understanding research and thinking deeply about concepts such as success
and recidivism, and how to best operationalize them, is critical. In Chapter 5,
the different types of samples and a variety of approaches for gathering
samples are introduced and described. This chapter takes you one step closer
to gathering data for your research.
Applied Assignments
1. Homework Applied Assignment:
Conceptualization and Operationalization
Select three concepts used by Cuevas, Santos, Melde, or
Zaykowski in their research. After identifying those concepts,
describe how each researcher conceptualized, and then
operationalized, those concepts. What variables did they
ultimately end up with? What measures were used to measure
each concept? What are other ways that the researcher could have
done this for each concept? Do you think that each measure used
is valid? Why or why not? Be prepared to justify why you stopped
when you did in your search. Be prepared to discuss your findings
in class.
2. Group Work in Class Applied
Assignment: Concepts to Variables in a
Survey
As a group, you need five concepts to work with for this
assignment. The first two are maturity and happiness. As a group,
select three additional criminal justice or criminological concepts
that interest you. As a group, describe in detail how you will
conceptualize and operationalize each of those five concepts for
use in a survey that will be printed and distributed. How will you
ultimately measure each concept? What will your survey question
look like exactly? What will the response categories for each
survey question include? Be sure to focus on mutual
exclusiveness and exhaustiveness if applicable. What are the
strengths of your measures? What are the limitations? Be
prepared to discuss and share your summaries in class.
3. Internet Applied Assignment: Poor
Survey Questions
Select a peer-reviewed academic journal article available online
that includes independent and dependent variables. In your
thought paper, you need to summarize the research conducted.
Identify the dependent variable(s) (DVs) and how it is
operationalized/measured. Specify all independent variables (IVs)
used in this research and why they are important. Describe the
way in which the researcher believes the IVs and DVs are related.
Use an illustration to show the purported relationship among the
variables. Discuss whether you think other independent variables
should be included in their model and why. Identify any missing
IVs and why you believe they were not included. What future
research questions would these missing variables suggest to you?
What were the conclusions of the research? Turn in the journal
article with your thought paper.
Key Words and Concepts
Attributes
Categorical variable
Concept
Conceptual definition
Conceptualization
Content validity
Continuous variable
Control variable
Criterion validity
Deductive reasoning
Dependent variable
Discrete variable
Exhaustive
Exhaustiveness
Face validity
Independent variable
Inductive reasoning
Inter-rater reliability
Interval
Level of measurement
Likert-type scale
Measure
Mutually exclusive
Nominal
Operationalization
Ordinal
Qualitative data
Quantitative data
Ratio
Reliability
Response categories
Test–retest reliability
Validity
Variable
Key Points
Concepts are abstract notions residing in our minds about which society
has general agreement. A researcher poses a research question using
concepts. Ultimately, the researcher must be able to measure those
abstract concepts, but he or she must go through several steps to do so
indirectly. Those stages include conceptualization and operationalization
that ultimately lead to variables. A researcher uses variables to conduct
the research as proxies for the concepts.
Conceptualization is the process of precisely and comprehensively
defining each concept. Conceptualization focuses on definitions. The
resulting conceptual definitions are used as guides for
operationalization.
Operationalization is the process of identifying how a researcher
indirectly measures the underlying concepts. This includes identifying
the nature of the measures that will be constructed, the number of
measures used for each concept, and the nature of data the measures will
gather. In addition, operationalization identifies the variables that
ultimately represent the concepts.
Variables can be identified in many ways. One way is the role they play
in a piece of research: dependent variables, independent variables, or
control variables. Another way is based on the nature of the data
gathered including categorical variables, discrete variables, and
continuous variables.
Variables and the data they gather must vary. Research cannot be
conducted on constants.
Levels of measurement identify something about variables as well.
Specifically, they identify the nature of the data collected in terms of the
degree of information gathered. These include nominal-, ordinal-,
interval-, and ratio-level measures. In general, a researcher should opt to
gather the highest level of measurement plausible.
Attributes or response categories must be mutually exclusive and
exhaustive for best measurement.
As a researcher, you must strive to provide attributes that neither mask
important characteristics nor steer results toward a desired outcome. As a
researcher, you are interested in what the research demonstrates, not in
guiding the research to a desired outcome.
Review Questions
1. What are concepts, and how are they related to research?
2. What is the goal of conceptualization? Why is it important?
3. What is the goal of operationalization? Why is it important? What
is the difference between conceptualization and
operationalization?
4. What are measures, and why are they important? Why would you
want to use multiple measures?
5. What are variables? What characteristics should they have? Why?
6. What are attributes? What are they also called? How are they
related to research?
7. What are the four levels of measurement? All else being equal,
which level of measurement should you strive for? Why?
8. What are categorical, discrete, and continuous variables? How do
they differ? Why does it matter?
9. What makes response categories or attributes mutually exclusive
and exhaustive? Why is this important?
10. How are concepts related to variables? How is validity related to
this? What does reliability refer to? Why are validity and
reliability important?
© monkeybusinessimages/iStockphoto.com
Populations and Samples
Researchers are interested in understanding something about a population.
Depending on the research question of interest, a population can include
elements such as people, organizations, geographic areas, or other things.
Although the term has been used repeatedly already, a definition of
population has not been offered. A population is the collection of all
elements that, when aggregated, make up that complete collective. Figure 5.2
offers one example of a population. In the example shown, there is a
population of pterodactyls. In total, this population is composed of eight
elements or of eight individual pterodactyls. Elements are the individual
parts that when aggregated form a population. Researchers are ultimately
interested in understanding something about the population they study.
Population: Collection of all elements that, when aggregated,
make up that collective.
Elements: Individual parts that when aggregated form a
population.
Zaykowski (2014) was interested in understanding what influences victims to
seek out victim services in a population of criminal victimizations. Cuevas
and his collaborators were interested in understanding something about
dating violence among the population of Latino teens (Sabina et al., 2016).
Brunson and his colleague wished to examine the views of police among a
population of male teens in disadvantaged neighborhoods (Brunson &
Weitzer, 2009). And Santos and her colleague sought to understand something about
how police intervention influences crime in a geographic region (Santos &
Santos, 2016). Although hot spots were not sampled (they were constructed
across the full geographic region of interest and divided into experimental
and control groups), Santos and her colleague did sample offenders to target
in these areas. Sampled offenders included those who were convicted and on
active felony probation with a prior burglary arrest, and nonviolent convicted
offenders on felony probation for drug offenses. Zaykowski, Brunson, Santos,
and Cuevas—like most researchers—do not have the time, money, and other
necessary resources to gather data from every element of a population when
conducting research. The population may be too large. The population
elements may be dispersed across an enormous geographic region. Or it
could be that a full accounting of each element in the population is unknown
or unavailable. As Santos notes, “[t]here is frequently no way you can look at
everything in a population. It is not realistic. This is the reality of research.”
When gathering data from each element of the population is not feasible,
which is most of the time, researchers use a sample. The research is then
conducted on the units that compose that sample. The findings from the
sample are then frequently used to infer back to the population from which
the sample was drawn.
Census or a Sample?
Researchers use data from a sample of units to understand the population. At
times, however, a researcher is working with a population that is small
enough, or easy enough to access, that he or she can gather data about every
element in the population. When the researcher is gathering data from every
element in the population, a census (rather than a sample) is being conducted.
A census is the gathering of data from a collective that includes every
element in the population. Because data are taken from every element in the
population, a census is also referred to as a full enumeration of the
population. Using the data gathered from the census, a researcher can address
a research question. Whether you are using a sample of population elements,
or a census, the purpose of each is the same: gathering data to answer a
research question and better understand the population.
Census: Gathering of data from a collective that includes every
element of the population.
Full enumeration: Another term for a census, used because data
are gathered from every member of the population.
Figure 5.2 Population of Pterodactyls in this Chapter
Bias
Another way a sample can be imperfect is through bias. Bias, or a biased
sample, indicates that the sample fails to include a particular type of
individual or some groups found in the population. Figure 5.4 illustrates this
problem. In this example, there is a population of 21 kids in which 14 kids
have crew cuts and 7 have ponytails. A sample of 7 kids was drawn, all of
whom have crew cuts. This sample is biased because none of
the kids with ponytails is included in the sample. The findings from research
using this biased sample would not generalize well to the population and
would be characterized by more sampling error than expected. In reality, very
few samples are completely unbiased. A more realistic goal for researchers is
to select a sample in which any bias is so small that the sample closely
approximates the population.
Bias: Describes a sample that fails to include a particular type of
individual or particular groups found in the population.
Biased sample: Indicates that the sample fails to include a
particular type of individual or some groups found in the
population.
Figure 5.4 Population of 21: 14 Kids With Crew Cuts and 7 With Ponytails
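The contrast between the biased sample in Figure 5.4 and a probability sample can be sketched in code. Drawing 7 kids at random gives every element of the population the same known, nonzero chance of selection, so the ponytailed kids are unlikely to be shut out entirely:

```python
import random

# The population in Figure 5.4: 14 kids with crew cuts, 7 with ponytails.
population = ["crew cut"] * 14 + ["ponytail"] * 7

# The biased sample described in the text: all 7 sampled kids have crew cuts.
biased_sample = ["crew cut"] * 7

# A simple random sample gives every kid an equal chance of being selected.
random.seed(1)  # fixed seed so the example is reproducible
random_sample = random.sample(population, 7)

print(biased_sample.count("ponytail"))   # 0: ponytailed kids completely excluded
print(random_sample.count("ponytail"))   # usually greater than 0 with a random draw
```

Random selection does not guarantee a perfectly representative sample on any single draw, but it makes the severe exclusion shown in Figure 5.4 improbable, which is why researchers prefer a small, quantifiable amount of sampling error to systematic bias.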
Unit of Analysis
An important concept to consider when sampling is the unit of analysis. A
unit of analysis is the what, or who, that is being studied and analyzed for a
particular piece of research. It is the unit being studied and the level of social
life on which the research question is focused. For example, in featured researcher
Mary Dodge and colleagues’ exploration of undercover women working john
stings, their unit of analysis—the who or what being studied—was an
individual (i.e., policewomen; Dodge, Starr-Gimeno, & Williams, 2005). In
Zaykowski’s (2014) research on accessing victim services, the who or what
being studied was victimizations. For Cuevas and colleagues, an individual
was the who or what being studied (i.e., Latino youth; Sabina et al., 2016).
Santos’s unit of analysis comprised the offenders being targeted for
intervention (Santos & Santos, 2016). As these examples suggest, several
categories of units of analysis are used in criminology and criminal justice
research. In the broadest sense, they are individuals, groups and
organizations, geographic areas, and social artifacts and interactions.
A researcher can use many sampling methods to obtain a sample, and which
method is best is contingent on a variety of factors, including the research
question and knowledge about the population. Resource availability is
also a consideration because different approaches require more or less time,
money, staff, and other resources. In general, there are two major
approaches to sampling: probability and nonprobability. Also, a variety of
specific types of probability and nonprobability sampling approaches are
used. Figure 5.5 illustrates the variety of probability and nonprobability
sampling approaches used in criminology and criminal justice.
Probability Sampling
Probability sampling refers to the process of selecting a sample from a
population in which every population element has a known, nonzero chance
of being selected. Probability sampling requires the presence of a sampling
frame, which is a comprehensive list that includes all elements of the
population. The elements listed in the sampling frame are sampling
elements. Depending on the research question, a sampling frame could be a
list of all students in a school, all governors in the nation, all prisons in the
state of Texas, all prisoners in a prison, and so on. From this sampling frame,
a researcher draws the desired number of sampling elements to construct the
sample. Access to a 100% complete and accurate sampling frame may not
always be possible, but getting a sampling frame as close to perfect as
possible is the goal. Researchers must at times be creative about finding
sampling frames. Unfortunately, there is no “sampling frames r us” online
warehouse. Cuevas and colleagues’ research needed a sample of Latino youth
(Sabina et al., 2016). To gather their sample, they used two probability
approaches. First, through random-digit dialing (RDD), telephone numbers
were generated in what were known to be high-density Latino
neighborhoods. Second, additional subjects were drawn into the sample by
using a sampling frame of Latino surnames.
Probability sampling: Process of selecting a subset of elements
from a comprehensive list of the population. This approach
requires that every population element have a known, nonzero
chance of being selected.
Sampling frame: Comprehensive list that includes all elements of
the population. It is from the sampling frame that probability
sampling selects the sampling elements.
Sampling elements: Individual items found on sampling frames.
Specific sampling elements are selected for samples.
Drawing names out of a hat is one simple random sampling approach.
© ClassicStock/Alamy
A researcher can use several types of probability sampling approaches,
including simple random sampling, stratified sampling, systematic sampling,
and cluster sampling. These approaches are not mutually exclusive and may
be used in combination. In all cases, the probability sampling described
below is done without replacement. Sampling without replacement
means that any sampling element can appear only once in a sample.
Should an element be selected a second (or third, or fourth, etc.) time
during sample selection, the researcher should not add it to the sample
again but should simply ignore that selection and continue selecting
elements. Each of the commonly
used approaches to probability sampling is described next.
Sampling without replacement: Any sampling element can
appear in a sample only once. Should, during the selection of a
sample, an element be selected a second (or more) time, a
researcher should simply move on with the sample selection.
Simple Random Sampling
Simple random sampling is a type of probability sampling in which each
element in the population has a known and equal probability of being drawn
or selected into the sample. This is the only type of probability sampling with
that characteristic, as the sampling elements in other approaches do not have
equal probability of selection. To gather a sample using simple random
sampling, you first need to identify the desired sample size. Then, you need
to select sampling elements at random until the desired sample size is
achieved.
Simple random sampling: Type of probability sampling in which
each element in the population has a known, and equal,
probability of being drawn or selected into the sample.
The classic example of simple random sampling is putting the name of every
sampling element in a population on a slip of paper and then drawing the
slips of paper out of a hat one at a time until the sample size needed is
reached. Names drawn from the hat are those selected for the sample.
Another simple random sampling approach would be to assign a unique
identification number to every sampling element on the sampling frame and
then use a random number generator available on the Internet (e.g.,
http://stattrek.com/statistics/random-number-generator.aspx), or a table of
random numbers, to select elements into the sample.
For example, a random digit generator may first randomly generate the
number “37,” which means the 37th element listed on the sampling frame
would be in the sample. The random digit generator may next generate the
number “173,” meaning the 173rd element on the frame would be included in
the sample. Disregard any number randomly generated multiple times and
proceed. This process continues until the desired sample size is reached.
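The drawing procedure just described can be sketched in a few lines of Python. This is an illustration only; the sampling frame of 200 labeled elements below is hypothetical. Note that Python’s `random.sample()` draws without replacement, so the rule about disregarding repeated selections is handled automatically:

```python
import random

# Hypothetical sampling frame: 200 uniquely labeled elements
sampling_frame = ["element_{}".format(i) for i in range(1, 201)]

random.seed(42)  # seeded only so the illustration is reproducible

# random.sample() draws without replacement, so no element appears twice
sample = random.sample(sampling_frame, k=12)

print(len(sample))       # 12
print(len(set(sample)))  # 12 -> confirms there are no duplicates
```

Because every element has the same chance of selection, this sketch is the computational equivalent of drawing names out of a hat.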
Simple random sampling produces samples with several desirable qualities.
First, this approach is simple to use given a sampling frame. Second, simple
random sampling works with large and small samples. Third, simple random
sampling results in samples that nearly perfectly represent the population
from which they were drawn. Simple random sampling is also limited in some
ways. One disadvantage of simple random sampling is that complete
sampling frames are frequently not available. For many research questions, it
is difficult or impossible to identify every element of a population. Second,
simple random sampling is often expensive and not feasible in practice. For
example, imagine you have a complete sampling frame from which to draw a
sample but the population elements are dispersed over a huge geographic area
such as the United States. If you need to access each element in the sample
for an in-person interview, using simple random sampling would not only be
time-consuming, but it would also be costly (travel expenses) and resource
intensive in other ways. As a result, although simple random sampling has
desirable qualities, it may not be the best choice for large populations,
especially when they are dispersed widely.
Systematic Sampling
Another probability sampling approach is systematic sampling, which some
argue is the most used type of probability sampling. Systematic sampling
constructs a sample by systematically selecting sampling elements from a
sampling frame based on a sampling interval. A sampling interval is the
number of positions, or the distance, between each sampling element selected. Stated
differently, a researcher picks every nth element from a sampling frame for
inclusion in the sample. Although several types of systematic sampling
approaches are available, this text focuses on the most commonly used
approach referred to as the 1-in-k method. The 1-in-k method is based on the
following basic formula:
Systematic sampling: Probability sampling approach in which
the sample is constructed by selecting sampling elements from a
sampling frame using a sampling interval.
Sampling interval: The number of positions, or the distance, between
sampling elements selected to be in a systematically drawn
sample. Stated differently, a researcher picks every nth element
from a sampling frame for inclusion in the sample.
1-in-k method: Most commonly used systematic sampling
approach that is based on the formula k = N / n.
k=N/n
k = sampling interval
N = population size; total sampling elements on the sampling frame
n = total sample size desired; number of sampling units required
Imagine a researcher who has a population size (N) of 144. He or she requires
a sample size (n) of 12. To determine the sampling interval, the researcher
uses the formula to find:
k=N/n
k = 144 / 12
k = 12
This basic calculation indicates a sampling interval of 12 is required. Next,
the researcher must identify a starting point in the sampling frame. This is
done by randomly selecting a starting point from 1 to k, or in this example,
from 1 to 12. Assume the researcher randomly selected the number 4 as the
starting point. To draw the sample, the researcher selects the 4th element into
the sample. They next select the 16th element (the 12th element from the
starting point, which is 4 + 12 = 16), and then the 28th element (16 + 12 =
28), the 40th element (28 + 12 = 40), and so on. This continues until the
researcher has the desired sample size of 12.
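The 1-in-k procedure from the worked example can be expressed as a short Python sketch. The frame of 144 numbered elements is hypothetical and mirrors the example above:

```python
import random

def systematic_sample(frame, n):
    """Draw a 1-in-k systematic sample of size n from a sampling frame."""
    k = len(frame) // n              # sampling interval: k = N / n
    start = random.randint(1, k)     # random starting point from 1 to k
    # take the start-th element, then every kth element after it
    return [frame[i - 1] for i in range(start, start + n * k, k)]

frame = list(range(1, 145))          # N = 144, as in the worked example
sample = systematic_sample(frame, n=12)
# with starting point s, the sample is s, s + 12, s + 24, ..., s + 132
```

With a starting point of 4, this returns the 4th, 16th, 28th, … elements, exactly as in the text.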
Systematic sampling offers the advantage of being easy to use. A
disadvantage of this approach is that there may be a pattern hidden—or
periodicity—in the sampling frame. Any hidden pattern in the frame might
lead to a biased sample. An example of periodicity is if a researcher were
selecting a sample of apartments in a high rise for use in a study. The
researcher is given a complete list of all apartment numbers that serves as a
sampling frame. The numbering of apartments on the frame indicates both
the floor on which the apartment is found and the location (and type) of the
unit. For example, apartments 410, 420, 430, and 440 are located on the
fourth floor, as designated by the first digit (the 4). Because these
apartment numbers end in a zero, a researcher knows they are
corner (i.e., more expensive and much larger) units in the building. If the
researcher worked with a sampling interval such that only these larger, and
more expensive, corner units were drawn into the sample, bias would be an
issue. Similarly, if the sampling interval selected was such that a unit number
ending with zero was never drawn into the sample, the sample would be
similarly biased. One approach to dealing with this issue would be to ensure
the sampling frame list itself is in random order.
Periodicity: Potential issue when using systematic sampling
approaches that refers to a pattern hidden in the sampling frame.
Periodicity occurs when a characteristic relevant to the study
being conducted is found in the sampling frame and when that
characteristic appears on a cyclical basis that matches the interval
used in systematic sampling.
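Periodicity can be demonstrated with a small sketch. Assume a hypothetical high rise of 12 floors with 10 units per floor, where units whose numbers end in 0 are the corner units. An interval of 10 matches the hidden cycle exactly, so every selected apartment sits in the same position on its floor:

```python
import random

# Hypothetical frame: 12 floors x 10 units, numbered 101-110, 201-210, ...;
# units whose numbers end in 0 are the larger, pricier corner units
frame = [floor * 100 + unit for floor in range(1, 13) for unit in range(1, 11)]

k = 10                         # interval that happens to match the hidden cycle
start = random.randint(1, k)
sample = [frame[i - 1] for i in range(start, len(frame) + 1, k)]

# Every selected apartment occupies the same position on its floor, so the
# sample is ALL corner units (if start == 10) or contains NONE of them
positions = {apt % 100 for apt in sample}  # always a single value
```

Randomizing the order of the sampling frame before drawing would break this cycle and remove the bias.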
Stratified Sampling
Another approach to probability sampling is the use of stratified sampling.
Stratified sampling is an approach in which the sampling frame is first
divided into mutually exclusive and exhaustive subgroups meaningful to the
research being conducted. From each stratum or subgroup, random sampling is
then conducted. This approach ensures that the groups or characteristics on which the
strata are based are accurately represented in the sample. Note that the
word stratum is singular and that the word strata is plural—the stratum is; the
strata are. Figure 5.6 offers an illustration of stratified sampling.
Stratified sampling: Probability sampling approach that requires
the sampling frame to be first divided into mutually exclusive and
exhaustive subgroups meaningful to the research. The second step
is to independently and randomly sample from each stratum.
Strata: Subdivisions or subgroups of the sampling frame created
in stratified sampling.
A classic example of stratified sampling involves drawing a sample from a
population of high school students. Imagine a researcher wishes to interview
a sample of 100 students from a local high school, and it is vitally important
for that particular piece of research that she has a sample that accurately
represents the four classes (freshman, sophomore, junior, and senior). Her
first step would be to construct strata of the four classes. The strata indicate
that 30% of the population of students are freshman, 25% are sophomores,
25% are juniors, and 20% are seniors. The researcher next randomly selects
30 students from the freshman stratum (which corresponds to 30% of the
population of students), 25 students from the sophomore stratum (which
corresponds to 25% of the population of students), 25 students from the junior
stratum, and 20 students from the senior stratum. After using this approach,
the researcher has a sample of 100 students in which the class level of
students perfectly reflects the population on that characteristic.
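The high school example can be sketched in Python. The roster below is hypothetical but uses the same class-level shares (30/25/25/20); each stratum is sampled in proportion to its share of the population:

```python
import random

# Hypothetical roster of 1,000 students with the class-level shares from the
# example: 30% freshmen, 25% sophomores, 25% juniors, 20% seniors
levels = (["freshman"] * 300 + ["sophomore"] * 250
          + ["junior"] * 250 + ["senior"] * 200)
students = [(level, i) for i, level in enumerate(levels)]

def stratified_sample(population, n):
    """Divide the frame into strata, then randomly sample from each stratum
    in proportion to its share of the population."""
    strata = {}
    for level, sid in population:
        strata.setdefault(level, []).append((level, sid))
    sample = []
    for members in strata.values():
        share = len(members) / len(population)   # stratum's population share
        sample.extend(random.sample(members, round(share * n)))
    return sample

sample = stratified_sample(students, n=100)
counts = {lvl: sum(1 for s, _ in sample if s == lvl)
          for lvl in ("freshman", "sophomore", "junior", "senior")}
# counts -> {'freshman': 30, 'sophomore': 25, 'junior': 25, 'senior': 20}
```

The resulting sample of 100 reflects the population perfectly on class level, whatever random draws occur within each stratum.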
Stratified sampling produces samples with a desirable characteristic: a sample
that offers representation of the population on key characteristics. In fact,
stratified sampling produces samples that are more representative than
samples generated using simple random sampling. In this way, the use of
stratified sampling minimizes sample bias on key characteristics as those
subgroups in the population are neither over- nor underrepresented. Like all
things, however, stratified sampling has some disadvantages. It is not an
approach that can be used for all research projects. First, a researcher must
have a sampling frame in order to use it. Second, stratified sampling requires
the sampling frame to have enough information that allows each sampling
element be placed in only one stratum. Third each element must be placed in
only one stratum. Although some population characteristics make placing
each element in one stratum easy (e.g., class in high school), others
characteristics are not as easily subdivided.
Cluster Sampling
In many cases, researchers do not have a sampling frame of the sampling
elements required. Even though they may not have the needed sampling
frame, they may have access to a sampling frame where groups or clusters of
these elements are located. For instance, researchers may wish to gather data
from people in a city. They may have a sampling frame of all housing unit
addresses in a city but not a list of all people. When using cluster sampling,
they first draw a probability sample (i.e., simple random, systematic, etc.) of
all housing units in the city. Second, the researchers might interview one
person in each sampled housing unit to gather data. Cluster sampling is a
probability sampling approach where groups or clusters (where researchers
will find the actual sources of data needed for the research) are first sampled,
and then each unit within each cluster is used to gather data. Cluster, in this
context, refers to a sampling element where one or more of the desired
sources of data are found or associated. Clusters take on many forms such as
states, cities, universities, schools, housing units, census blocks, counties, and
so on.
Cluster sampling: Probability sampling approach where groups
or clusters (where a researcher will find the units of observation
needed for the research) are first sampled, and then each unit
within each cluster is used to gather data.
Cluster: Part of cluster sampling that refers to a sampling element
where one or more of the desired units of observation are
found or associated.
Figure 5.6 Stratified Sampling Produces a Sample That Represents the
Population on Key Variables
Cluster sampling can also be used in cases where a researcher may have the
sampling frame, but gathering data from each unit drawn into the sample is too
costly or impossible for other reasons such as geographic dispersion. By
using cluster sampling, the units from which the data are taken are clustered,
meaning the time and money needed to access these units are diminished. It
would be faster, cheaper, and easier to draw a sample of cities (clusters of
people) and then to interview individuals in person in a restricted number of
cities than it would be to interview people randomly dispersed throughout the
nation.
Cluster sampling has wide applicability and many benefits, and for that
reason it is widely used. One benefit is that it saves time, money, travel, and
other research-related expenses. Second, it can be used in situations where
you do not have a sampling frame of the units from which the data will be
gathered. This scenario is not uncommon, especially when you need data
from individuals. A third benefit is that it is easily used for large populations,
especially those dispersed over a broad geographic region. Like all
approaches, cluster sampling has some drawbacks. Each unit from which data
will be taken does not have an equal chance of being drawn in the sample
(although their chance of selection is greater than zero, and known). This
may lead to issues of representativeness of the population. For this reason, a
larger sample is advisable (and cost effective) when using cluster sampling.
Figure 5.7 provides an illustration of cluster sampling.
Figure 5.7 Example of Cluster Sampling of Housing Units to Gather Data
From Individuals
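The housing-unit example can be sketched in Python. Everything below is hypothetical: a frame of 500 addresses (the clusters), each housing one to four residents (the actual sources of data):

```python
import random
random.seed(7)  # seeded only so the illustration is reproducible

# Hypothetical frame of clusters: 500 housing-unit addresses; each unit
# houses 1-4 residents, who are the actual sources of data for the study
housing_units = {
    "unit_{}".format(i): ["person_{}_{}".format(i, j)
                          for j in range(random.randint(1, 4))]
    for i in range(500)
}

# Stage 1: draw a probability sample of the clusters (simple random, here)
sampled_units = random.sample(sorted(housing_units), k=50)

# Stage 2: gather data from every resident of each sampled cluster
respondents = [person for unit in sampled_units
               for person in housing_units[unit]]
```

Notice that no sampling frame of people is ever required, only a frame of the housing units where people are found.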
Multistage Sampling
At the beginning of the section on probability sampling, it was noted that a
researcher can use probability sampling techniques separately or in
conjunction with one another. Multistage sampling is not really a probability
sampling approach but instead refers to the use of multiple probability
sampling techniques to draw a sample. For example, a researcher may first
randomly sample housing units (first stage; sampling a cluster of housing
units) and then randomly sample those living in each selected housing unit
(second stage). Another example of a multistage approach would be to collect
a random sample of clusters (schools; stage 1), then stratify the students in
selected schools based on class level (stage 2), and then randomly
select students to interview in each class level (stage 3).
Multistage sampling: Not really a probability sampling approach,
but instead it is the use of multiple probability sampling
techniques to draw a sample. For example, a researcher may first
randomly sample housing units (first stage; sampling a cluster of
housing) and then randomly select people to interview in each of
the selected clusters.
National Crime Victimization Survey (NCVS) data, the data used by
Zaykowski (2014) in her research article, are gathered using a stratified,
multistage cluster sampling approach. In the first stage, geographic regions in
the United States are stratified to ensure those living in rural and urban areas
are represented in the data. Failure to engage in this stratification may lead to
an overrepresentation of urban areas. In the second stage, strata are further
subdivided into four frames that designate the areas: housing units, housing
permits, group quarters, and other areas. In the third stage of sampling,
clusters of approximately four housing units (or their equivalents) are
selected from each stratum. And finally, all people 12 years of age or older
are interviewed in each selected household about their experiences with crime
victimization. Use of a stratified, multistage cluster sampling approach leads
to a representative sample and generalizable findings (after postsampling
adjustments), and it makes the NCVS data collection feasible and much more
affordable than other available approaches.
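A multistage design like the schools example above can be sketched in Python. The district of 20 schools below is hypothetical (this is an illustration of the three-stage logic, not the actual NCVS design):

```python
import random

# Hypothetical school district: 20 schools (clusters), each with 40 students
# in each of four class levels (the strata)
schools = {
    "school_{}".format(s): {
        level: ["s{}_{}_{}".format(s, level, i) for i in range(40)]
        for level in ("freshman", "sophomore", "junior", "senior")
    }
    for s in range(20)
}

# Stage 1: randomly sample clusters (schools)
stage1_schools = random.sample(sorted(schools), k=5)

# Stage 2: stratify students in each sampled school by class level
# Stage 3: randomly sample students from every stratum
final_sample = []
for school in stage1_schools:
    for level, roster in schools[school].items():
        final_sample.extend(random.sample(roster, k=5))

# 5 schools x 4 strata x 5 students = 100 students in the final sample
```

Each stage uses an ordinary probability technique; it is the chaining of cluster, stratified, and simple random sampling that makes the design multistage.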
The benefits of this approach are that the sampling elements are selected
based on desirable characteristics and that there is diversity in the sample on
the identified characteristics. In this way, quota sampling is better than
convenience or haphazard sampling. Other advantages are those found in
convenience sampling: Quota sampling is cheap, fast, and easy. The
disadvantages are also the same as those characterizing convenience
sampling. Even with the added diversity in the sample on the selected
characteristics, the sample remains biased and nonrepresentative of the
population from which it was selected. Just because the sample has racial
diversity (in this example), you cannot know with certainty that the subjects
used in the quota sample are representative of all people with the same race in
the population. The findings of research based on a quota sample lack
generalizability to any larger population.
Purposive or Judgmental Sampling
Purposive sampling is a nonprobability sampling approach in which the
sample is selected based solely on a particular characteristic of the case.
Samples gathered using this approach are selected because the respondents
have particular data needed to conduct the research. In other words,
respondents or subjects are selected because they have a particular
characteristic relevant to the research such as occupation or where they live.
Certain steps are required to conduct purposive sampling. First, a researcher
must identify the characteristic that makes a respondent eligible to be in the
sample. Second, the researcher must attempt to identify all possible subjects
with that characteristic. Third, the researcher must strive to contact all
individuals with that characteristic. Finally, the researcher should gather the
needed data from those individuals. If a researcher merely reaches out to
convenient cases of respondents with the named characteristic, he or she is
engaging in convenience sampling.
Purposive sampling: Nonprobability sampling approach in which
the sample is selected based solely on a particular characteristic of
the case. Samples gathered using this approach are selected
because the respondents have particular information needed to
conduct the research.
This was the sampling approach taken by Brunson and colleague in their
study on adolescent males living in three study neighborhood sites (Brunson
& Weitzer, 2009). They approached counselors working in community
organizations and asked that they identify young males for participation in
interviews for the study. It was not enough for the researchers to find any
young male; they required young males who lived in one of three
disadvantaged neighborhoods who were at risk of delinquent activities. These
characteristics made these young men more likely to have contact with
police. Using this strategy, they were introduced to 45 youth for the study.
Second, this formula and Table 5.2 demonstrate that the sample size needed
for a particular margin of error is not dependent on the population size. Look
again at the formula: The population size is not a part of it. That means that a
sample size of 1,100 individuals, based on random sampling (and no
refusals), will provide statistics within ±3 percentage points of the true
population value. Similarly, a randomly drawn sample of 1,600 is enough for findings
with minimal error (±2.5%). A researcher would need a randomly drawn
sample of 1,600 people in Denver to provide a statistic within ±2.5% of the
true population parameter. Alternatively, a researcher would need a randomly
drawn sample of 1,600 people in the state of Colorado to provide a statistic
within ±2.5% of the true population parameter for Colorado. Likewise, a
researcher would need a randomly drawn sample of 1,600 people in the
United States to provide a statistic within ±2.5% of the true population
parameter for the United States.
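The sample sizes quoted above can be checked with the common shorthand for the margin of error at a 95% confidence level, MOE ≈ 1/√n. This is an approximation of the full formula presented earlier in the chapter, but it makes the key point visible in code: the population size never enters the calculation.

```python
import math

def margin_of_error(n):
    """Approximate 95% margin of error for a proportion estimated from a
    simple random sample of size n. The population size does not appear."""
    return 1 / math.sqrt(n)

print(round(margin_of_error(1100), 3))  # 0.03  -> about +/-3 percentage points
print(round(margin_of_error(1600), 3))  # 0.025 -> +/-2.5 percentage points
```

The same n = 1,600 yields ±2.5 points whether the population is Denver, Colorado, or the entire United States.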
In contrast, the sample size needed to conduct research examining rarer
events is much larger. This type of research is more common in the criminal
justice and criminology disciplines. For example, if one’s purpose is to
estimate the amount of violent victimization occurring in the nation in one
year, as is done with the NCVS, a far larger sample than 1,600 is required.
Why? Because violent victimization is rare, meaning not everyone drawn in a
random sample will have been a victim of violence. The sample size needed
must take into account the majority of those who have not been victimized.
Therefore, a researcher would need to gather an enormous sample in order to
have enough victimization cases to offer reasonable estimates of
victimization that would be generalizable to the larger population. Consider
that in 2015, NCVS data were based on data from 95,760 households and
163,880 people. Even with such large sample sizes, there were 1,023 violent
victimizations revealed by the survey during the year, most of which were
simple assaults (635). If a researcher wanted to investigate only female
victims, young victims, or inner-city males, the needed sample size of
victimizations would need to be far larger. Although this text cannot go into
the intricacies of sample size, it is important to recognize that the topic of the
research is highly related to the sample size needed.
Type of Research
Related to the purpose of the research is the type of research conducted. The
use of a probability versus a nonprobability sample will affect the needed
sample size. Gathering a sample using probability methods tends to require
larger sample sizes compared with gathering a sample using nonprobability
methods. If a researcher seeks to generate findings that are generalizable back
to the population, a larger sample is needed compared with when
generalizability is not a goal. Consider the work of Dodge and collaborators
(Dodge et al., 2005). Their research used a sample size of 25 subjects for
their work on women decoys. How did Dodge et al. know to stop their
sample with 25 subjects? According to Dodge, she stopped pursuing
additional subjects because of saturation. Saturation in this context refers to
the point at which no new data are gathered from new subjects.
Saturation also indicates that replication of the research should not reveal
additional information. Replication refers to the repeating of a piece of
research. Research findings should remain the same when the study is
replicated.
Saturation: Has several related meanings. One involves
searching for sources for a literature review, where it
indicates the search is complete because one finds no
new information on a topic and the same studies and authors are
repeatedly discussed. In sampling, it refers to the point at which
gathering data from new subjects reveals no new information.
Replication: Repeating of a piece of research.
Brunson and his collaborator gathered a sample of 45 adolescents for the
study on youth perceptions and interactions with police in disadvantaged
neighborhoods (Brunson & Weitzer, 2009). Their sample of 45 is a bit larger
than the one used by Dodge and colleagues (sample size = 25; Dodge et al.,
2005), but given their need to have an adequate number of subjects (15) from
each of the three neighborhoods, a larger sample was required. The
researchers capped the number of participants for each neighborhood at 15
because they found they had reached saturation and no new information was
being uncovered in the later interviews. Had Brunson and his colleague only
been interested in what was happening in one neighborhood, a sample of 15
would have been sufficient.
Although this chapter has demonstrated that identifying the perfect sample
size is not always possible, a researcher must make every effort to take into
account the issues described in this chapter. Consider that using too large a
sample is unethical because it wastes resources (and often people’s time) with
no gain in data. A researcher should engage in research that burdens
respondents and participants as minimally as possible. This falls under the
“do not harm” umbrella of human subjects. It is also incorrect to stop
gathering subjects for a sample prior to reaching saturation if that is the goal
of the research. A researcher must commit to gathering as many subjects as is
required to reach that point.
Other, more obvious ethical considerations include using the sampling
approach approved by the institutional review board (IRB). If a researcher
gains IRB approval to sample individuals 18 years of age and older in a
setting but does not ensure that juveniles are kept out of the sample, that
researcher is engaging in unethical research. As noted in an earlier chapter,
certain groups have additional human subjects review, and yet other groups
should be approached carefully, even if additional IRB scrutiny is not
mandated.
Sampling Expert—Sam Gallaher, PhD
Sam Gallaher, PhD, is the data analytics and methodology specialist at the
Auditor’s Office of the city and county of Denver, Colorado. His work at the
Auditor’s Office includes three primary responsibilities. First, through audits,
he adds assurance that taxpayer money is being spent wisely. Second,
Gallaher also uses these audits to search for risks to the city (risks include the
potential for being sued) as well as to assess whether those groups being
audited are meeting their stated objectives (generally defined by the City
Charter or outlined in strategic plans). Finally, his role includes searching for
and setting up processes to minimize the risk of fraud or abuse. Each of these
responsibilities requires an understanding of sampling.
© vaeenma/Canstockphoto
Why Conduct Research Using Qualitative Data?
Why would a researcher use qualitative data to conduct research? In short,
because qualitative data are ideal for answering many research questions that
seek to reveal a comprehensive understanding about a topic. By using
qualitative data, you can glean an understanding about what is important and
the meaning of symbols, body language, and rituals related to a particular
topic. By using qualitative data, you can identify important patterns, themes,
feelings, and core meanings associated with a topic. This type of rich,
nuanced understanding cannot be garnered with all the quantitative data in
the world. Much of what a researcher may be interested in cannot be understood
in quantifiable terms. If you are seeking knowledge about more than the
quantifiable elements of a topic, research using qualitative data is ideal (see
Figure 6.1).
One of our featured researchers, Carlos Cuevas, revealed to us during a video
interview conducted for this book that he finds that many students believe
that conducting qualitative research is faster and easier to do. Cuevas wants
to dissuade everyone from this notion. Qualitative inquiry is not faster, and it
is not easier. As Cuevas notes, it is “differently” time-consuming and
“differently” challenging. The biggest reason to conduct
research using qualitative data is that it is the best way to answer one’s
research question. Although Cuevas and colleagues’ highlighted research did
not include gathering qualitative data (Sabina, Cuevas, & Cotignola-Pickens,
2016), he recognizes the value of this approach, and the challenges associated
with it.
Second, research using qualitative data allows you to study how things work.
By using qualitative data, a researcher can gain an understanding about how
a human phenomenon occurs, as well as about its effect on those participating
in it. Dodge et al.’s (2005) interviews of female decoy officers provided
insight into how decoys needed to look, what they needed to wear (or not
wear), and how they needed to posture themselves. Some officers noted they
had to change their stances because they could not stand like an officer.
Decoys had to change the way they made eye contact, chewed gum, and did
other seemingly mundane things. In addition, decoy officers learned they were more
effective decoys if they did not bathe and instead showed up dirty with ratty
clothes on. In some cases, decoys found great success at attracting johns
when they wore no shoes. Johns did not trust prostitutes who appeared
neat, clean, and pretty.
Fourth, Patton (2015) stated that research using qualitative data can provide
insight into how systems function and into the consequences of those systems
on people. Indeed, this was at the core of Brunson and Weitzer’s (2009)
research. Through interviews with youth in similarly disadvantaged
neighborhoods, the researchers learned how the system worked differently
for Blacks compared with Whites, both in interactions and in perceptions.
Blacks overwhelmingly received negative police attention (being detained,
searched, or arrested) even when engaging in law-abiding behavior.
Fifth, research using qualitative data can provide rich understanding of
context, including how and why the context matters. In addition, context
provides deeper meaning about actions or beliefs. In Brunson and Weitzer’s
(2009) work, the researchers attempted to hold the environmental context
constant by selecting three neighborhoods that were equally disadvantaged.
Doing so held constant the role that a disadvantaged neighborhood plays in
the outcome, allowing the influence of race to be better identified. The
researchers found evidence that as the
context of the neighborhoods changed, so did the way police engaged
members of the community. Respondents noted that as a neighborhood
gained a larger percentage of Black residents, policing became more
aggressive and abusive.
Sixth, research using qualitative data offers an understanding of unanticipated
consequences. The open-ended nature of qualitative investigations provides
an understanding of both the planned and unplanned consequences of an
event or phenomenon. Brunson and Weitzer’s (2009) examination of youth
and police relations identified important consequences. Because of negative
interactions with officers, those in Black communities expressed a desire that
police stay out of their neighborhoods unless they were specifically called.
Black youth, given their experiences and perceptions, were cynical regarding
police and had no confidence in law enforcement.
Based on the research on female decoys described earlier, one may
hypothesize that policewomen acting as decoys have higher job satisfaction
than policewomen not engaged in this activity.
And finally, our featured researchers such as Carlos Cuevas, Rachel Boba
Santos, and Heather Zaykowski point to the complementary nature of
qualitative and quantitative inquiry. All agree that studies using both
qualitative and quantitative approaches at the same time are ideal. Use of
both types of inquiry—a mixed-methods approach—yields statistics, as
well as information that gives those statistics context and meaning. The
two are not competing approaches but complementary ones.
What Is Research Using Qualitative Data?
Broadly, qualitative data are data that are non-numeric in nature. This
includes text from transcribed interviews with people, narratives, published
documents, videos, music, photos, observations, body language, and voice
intonation to name a few. These types of data convey the qualities of the
topic of interest. Qualitative data are gathered with an open mind. During
interviews, it means using open-ended questions. An open-ended question is
one in which the respondent is not asked to pick from a predetermined set of
response categories but can instead answer using his or her own words. In
contrast, a closed-ended question is one in which the response categories are
provided and the respondent must choose only from the options given. In
closed-ended questions, respondents are often asked to “check a box.” An
analogy that helps clarify open- and closed-ended questions is found on
exams. Imagine an exam composed of 10 essay-type questions. Students are
asked to use their own words to answer each question in a paragraph or more.
This is an open-ended exam question. In contrast, a closed-ended question is
similar to a multiple-choice question on an exam. In this case, students must
pick from the options provided. A student cannot provide additional detail
but must check a box or fill in a bubble (see Figure 6.2).
Open-ended questions: Designed to give the respondent the
opportunity to answer in his or her own words. These are similar
to essay questions on exams. Open-ended questions are ideal for
gathering qualitative data.
Closed-ended question: Type of survey question that requires
respondents to select an answer from a list of response categories.
Figure 6.2 Examples of Open-Ended and Closed-Ended Questions
Source: http://housemadisonedm310.blogspot.com/2015/06/blog-
assignment-4.html
Stages of Research Using Qualitative Data
The remaining part of the chapter discusses methods used to gather
qualitative data in greater detail. Before moving on, it is instructive to think
of the big picture when it comes to gathering qualitative data. Like all
research, this begins with a research question. Consider both Dodge et al.’s
(2005) and Brunson and Weitzer’s (2009) work. After developing a research
question, both groups turned to the literature to gather an understanding of
what was currently known regarding their topics. Dodge and colleagues
learned there was virtually nothing in the literature about the topic of
policewomen serving as decoys beyond conjectures about how they might
feel. Brunson and colleague found a richer existing literature that suffered
from an important gap: no detailed comparisons of Black and White youths
living in similarly disadvantaged neighborhoods. This knowledge helped
guide them to pursue interviews to fill this gap and enrich our knowledge of
the topic.
Their next need was to identify a sampling strategy to gather the qualitative
data required to answer their research questions. The most feasible sampling
strategy depended on several things. Will the data be gathered directly from a
person? From observations? From participation in the topic? From existing
documents? Answering these questions provided direction regarding the next
steps in their research. In both examples, the researchers opted to interview
individuals. Knowing this allowed them to move forward with their sampling
strategy. The remaining steps used to conduct qualitative research included
specifying their sampling strategy, gathering their data, organizing the data,
analyzing the data, and making conclusions that are shared (see Table 6.2).
The postsampling steps are described in the remainder of this chapter.
Alternatively, an interview can be semistructured, which better enables the
researcher to compare responses across individuals or groups, as well as to
address specific hypotheses since every respondent is asked some of the same
questions. Semistructured interviews are ones in which all respondents are
asked the same set of questions. By asking the same questions across
respondents, the researcher can compare responses. Although questions are
posed to every respondent, a semistructured interview still allows for and
requires follow-up questions, as well as probing on interesting responses
given. How did Dodge and her colleagues gather their data? Dodge and her
collaborators used semistructured interviews to gather data from
policewomen decoys. Although a complete list of questions used during these
interviews is not provided in their published article, it is likely Dodge and
colleagues (2005) asked the undercover officers things such as “Can you tell
me about the most recent time you acted as a decoy prostitute in a sting?” and
“How did acting as a decoy make you feel?” and “How did others view you
when you were acting in this role?” Additional probing follow-up questions
or requests to expand on respondent comments are the norm in
semistructured interviews.
Complete Participant
Research using a complete participant role means that the researcher hides
his or her true identity and purpose from those being observed. The
researcher’s goal is to interact in the natural setting as naturally as possible
to best gather information and meaning. A researcher may dance as a topless
dancer in a club to learn about dancers, customers, and the environment in
which they interact. Or a researcher may join a gang. Alternatively, another
researcher may obtain a job as a probation officer. In these cases, the
researcher would not reveal the real purpose of his or her being in the club,
the gang, or the probation office. This approach is unmatched in providing
access to the information desired, and the people of interest, in their natural
environment of interest. Although excellent at providing rich, in-depth
information, acting as a complete participant has limitations. First, the risk of
going native, in which researchers actually become members of, and
identify with, the individuals and group being observed, is greatest with
this approach. If going native occurs, the researchers’ objectivity is
compromised, and the research findings become questionable. Protecting
oneself from going native is challenging.
Gold (1958, p. 220) stated it well when he noted that “[w]hile the complete
participant role offers possibilities of learning about aspects of behavior that
might otherwise escape a field observer, it places him in pretended roles
which call for delicate balances between demands of role and self.” The
second concern with this approach is ethics. As noted, you as the
researcher must weigh the ethical implications of the approach beyond
what the IRB allows or disallows. Those being observed are being duped. Is
this ethical? Discussions with experienced mentors who have confronted
these issues are recommended.
Complete participant: Role conception in which the researcher
keeps hidden his or her true identity and purpose from those being
observed. The researcher’s goal is to interact in this natural setting
as naturally as possible to gather data and meaning.
Participant as Observer
Research in which the researcher is a participant as observer occurs when
the researcher and the primary contact(s) in the field are the only ones aware
of the researcher’s actual role and purpose for the observation. The researcher
then engages with the groups as a member or as a colleague. Acting as
participant as observer, the researcher must first make contact with and
develop a good relationship with an informant. This informant assists the
researcher in terms of gaining access to the group, individuals of interest, and
group’s environment. Given this approach, most of the individuals
encountered are unaware of the researcher’s purpose and real identity.
Participant as observer: One of four role conceptions in which
the researcher and his or her primary contact(s) in the field are the
only ones aware of the researcher’s actual role and purpose for the
observation. While the researcher’s role is known to a select few,
the researcher engages with the group as a member or a colleague.
This approach is effective in gaining access to information, people, and
environment of interest, yet it has challenges. First, it requires lying,
misleading, and betraying the trust of those being observed. This raises the
question of whether participants can freely and voluntarily consent to
the research (they cannot). Yet should the researcher be open and honest
about his or her role and purpose, the behaviors of those being observed will
likely change. The Hawthorne Effect, identified in 1953, refers to the possible
impact on the behavior of those who are aware they are being observed and
studied. The Hawthorne Effect suggests that people will react by hiding or
exaggerating behaviors when they are aware they are being observed.
Researchers continue to study and debate all aspects of the Hawthorne Effect
(McCambridge, Witton, & Elbourne, 2014). Some consider the Hawthorne
Effect as nothing more than a well-known and glorified anecdote. Others
argue there is no single Hawthorne Effect but multiple effects. Others debate
the degree to which the Hawthorne Effect influences behavior. Although the
scientific examination of this effect rages on, it is prudent for a researcher to
at least be cognizant of the possibility that observing behavior will lead to a
subject’s reaction.
Although field notes are a necessary way to record qualitative data, they can
and should be augmented with other means of recording data. For instance, a
researcher benefits greatly by also using a video or an audio recorder.
Recorders are useful for capturing details a researcher may miss in writing
field notes. Going over recorded data can also reveal patterns, important
elements, or themes
that escaped the researcher’s attention initially. Nevertheless, like all things,
audio and video recordings have some disadvantages, including being
distracting to the people being studied, having the potential to break down, or
otherwise requiring the researcher’s attention (e.g., changing a battery or
tape) when that attention should be squarely on the research topic. Even with
these potential technical difficulties, audio or video recordings are necessary,
especially when one is conducting long, complex interviews in which the
participants’ exact words are important.
Organizing and Analyzing Qualitative Data
Once you have gathered your data in the form of field notes, audio tapes,
video tapes, documents, or some combination thereof, what do you do next?
Organize those data. This means different things depending on the different
types of data. With field notes, organization means ensuring your field notes
are legible, edited, and corrected. Recorded data should be transcribed,
corrected, and edited. All data must be organized such that the researcher can
move through the voluminous data swiftly and efficiently to glean meaning.
Once data are organized, analysis becomes the primary goal of the research.
To be clear, analyzing qualitative data begins as soon as the first data are
collected because the researcher is watching for patterns or relationships even
then. It is at the point when no additional qualitative data are being
collected, however, that organizing and analyzing the data become the focus.
Analyzing requires the researcher to reduce, condense, or code the large
volume of data into broader categories or themes. In other words, you as a
researcher must take a lot of specific information and from it develop broad
categories and themes. This is an iterative process where coding leads to
themes that are then reviewed for additional refinement of codes. Ideally,
multiple researchers should engage in the coding process of qualitative data
independently to look for regularities and patterns in the data. A specific
procedure for analyzing qualitative data is not easily prescribed given
the many types of qualitative data available. Even a review of published
qualitative research tends to offer little more than a very brief description of
how the researcher analyzed his or her data. This is the case for both Dodge
et al.’s (2005) and Brunson and Weitzer’s (2009) research. They described
their data analysis approaches, in part as follows:
Dodge and colleagues: “Each interview was taped with the subject’s
consent and then transcribed verbatim for qualitative data analysis.
Major themes were extracted and grouped into conceptual domains
based on generalised statement content (Glaser & Strauss, 1967;
Schatzman & Strauss, 1973).” (p. 76) . . .
Brunson and colleague: Youth were interviewed and recorded. The tapes
of the interviews were transcribed and analyzed by the authors. The
analysis included selecting statements that illustrated themes
consistently found throughout the data. The quotes used were not
atypical. Each author independently coded the data and subsequently
categorized it into themes and subthemes (see methods section).
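Both teams note that each author independently coded the data. Although neither article reports a numeric agreement statistic, the logic of checking independent coders against each other can be sketched in a few lines of Python. The excerpt labels below are invented for illustration; they are not the authors’ actual codes.

```python
# Sketch: simple percent agreement between two independent coders.
# All labels below are hypothetical, not drawn from the cited studies.

def percent_agreement(coder_a, coder_b):
    """Share of excerpts to which two coders assigned the same label."""
    assert len(coder_a) == len(coder_b), "coders must label the same excerpts"
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return matches / len(coder_a)

# Hypothetical labels assigned to five interview excerpts
coder_a = ["verbal_abuse", "unwarranted_stop", "corruption", "verbal_abuse", "bias"]
coder_b = ["verbal_abuse", "unwarranted_stop", "bias", "verbal_abuse", "bias"]

print(percent_agreement(coder_a, coder_b))  # 4 of 5 excerpts match -> 0.8
```

In practice, researchers often prefer chance-corrected statistics such as Cohen’s kappa, but simple percent agreement conveys the basic idea of comparing independent codings.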
A specific way to analyze qualitative data is using grounded theory.
Grounded theory is a systematic methodology that leads to the construction
of theory through the coding and analysis of qualitative data. In addition to
using qualitative data to develop themes, you can use these data to develop
theory via grounded theory. This is the approach taken by Brunson and
Weitzer (2009) in their exploration of adolescent and police relations in urban
neighborhoods as noted in their text:
The data analysis was conducted with great care to make certain that the
themes we discovered accurately represented young men’s descriptions.
This was accomplished using grounded theory methods, by which
recurrent topics were identified along with less common but important
issues (Strauss, 1987). Each author independently coded the data and
subsequently categorized it into themes and subthemes. (p. 864)
Grounded theory: Systematic methodology that leads to the
construction of theory through the coding and analysis of
qualitative data.
Grounded theory analysis involves three coding steps according to Strauss
(1987): (1) open coding, (2) axial coding, and (3) selective coding. Open
coding refers to a researcher reading the complete set of raw data multiple
times to organize and summarize the data (whether they are words, sentences,
paragraphs, illustrations, etc.) into preliminary groupings of analytic
categories. The goal of open coding is to focus on the specific data and assign
labels or codes identifying major themes. For example, it is plausible that
some of Brunson and Weitzer’s preliminary analytic categories centered on
verbal and physical interactions between law enforcement and youth, given
that these are the focus of subsections in their findings.
Open coding: Refers to a researcher reading the complete set of
raw data multiple times to organize and summarize the data into
preliminary groupings of analytic categories.
The second stage of coding using grounded theory is axial coding. During
axial coding, the researcher focuses on the preliminary analytic categories or
labels developed during open coding to identify relationships between the
categories. The attention during this stage is not on the raw data but is
focused on the summarized labels of the raw data.
Axial coding: In this step, the researcher focuses on these
preliminary analytic categories or labels to identify relationships
between the categories. The attention during this stage is not on
the raw data but on the summarized labels of the data.
Finally, the third stage of coding in grounded theory is selective coding.
During selective coding, the researcher reviews all raw data and the previous
codes or labels with a few purposes in mind. First, the researcher makes
comparisons and contrasts among themes or labels developed. Returning to
Brunson and Weitzer’s (2009) research, it is probable that during the
selective coding stage, the researchers selected typical quotes illustrating
differences in experiences and perceptions between the White and Black
youth interviewed. A second purpose of selective coding is to identify
overarching and broad variables that describe connections and relationships
among some of the labels or themes. An illustration of these coding steps
used in grounded theory is provided in Figure 6.8.
Selective coding: In this coding step, the researcher reviews all raw
data and the previous codes or labels for several purposes. First,
the researcher makes contrasts and comparisons among themes or
labels. Second, the researcher identifies overarching and broad
variables that describe connections and relationships among some
of the labels or themes.
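As a rough illustration of the three coding stages, the open, axial, and selective steps can be sketched as nested data structures in Python. Every excerpt, label, and theme below is invented for illustration; these are not the authors’ actual categories.

```python
# Sketch of the three grounded-theory coding stages as data structures.
# All excerpts, labels, and themes are hypothetical illustrations.

# Open coding: raw excerpts are read and assigned preliminary labels.
open_codes = {
    "He yelled at us for no reason": "verbal_abuse",
    "They searched me while I was walking home": "unwarranted_stop",
    "An officer shoved my friend": "physical_abuse",
    "They called us names": "verbal_abuse",
}

# Axial coding: preliminary labels are related to one another and
# grouped into broader analytic categories.
axial_categories = {
    "negative_verbal_interactions": ["verbal_abuse"],
    "negative_physical_interactions": ["unwarranted_stop", "physical_abuse"],
}

# Selective coding: categories are compared and tied to an overarching
# variable or core theme that connects them.
core_theme = {
    "aggressive_policing": list(axial_categories),
}

# A label can be traced from a raw excerpt up to the core theme:
label = open_codes["They called us names"]
category = next(c for c, labels in axial_categories.items() if label in labels)
print(category in core_theme["aggressive_policing"])  # True
```

The point of the sketch is the direction of movement: from many raw excerpts, to fewer labels, to broader categories, to an overarching theme.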
Once the data are condensed or reduced into broader themes, interpretation is
required. Many options of interpretation are available, and which option is
used is dependent on the specific research goals of the work. Interpreting
qualitative data is a process that “involves abstracting out beyond the codes
and themes to the larger meaning of the data” (Creswell, 2013, p. 187). It can
be accomplished by asking, “What do these data mean?” “What is the big
picture here?” and “What does this tell me about the topic being studied?”
Figure 6.8 Simplified Example of the Three Stages to Coding Qualitative
Data When Using Grounded Theory
One way of interpreting the data is to identify relationships and patterns
among the codes and themes. This also includes the researcher offering
explanations and drawing conclusions. In other cases, a researcher will count
the frequency of codes found in the data analyzed. This is frequently the
approach taken when a researcher is engaged in content analysis. For others,
the goal is focused more on meaning than on the frequency of any code
found. A researcher can also contextualize the themes and relationships
identified given the existing literature. A researcher may ask him- or herself,
“Are these codes and themes found in the literature?” and “Do these codes
and themes offer greater understanding to what is in the literature?” A
researcher can also interpret the data by making contrasts and comparisons by
asking, “How do the codes and themes compare to one another?” and “Are
there meaningful contrasts?” In addition, the researcher could suggest some
hypotheses that come from the data such as suggesting one particular group is
more (or less) likely to support police than another group. Finally,
interpretation is accomplished by visualizing the data using tables, figures,
and illustrations. Graphics are excellent ways to understand and convey
information. These additional steps in analyzing, interpreting, and visualizing
qualitative data make clear that coding data and identifying themes are not
necessarily the only, or the last, steps of research based on qualitative data.
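The frequency-counting style of interpretation described above is straightforward to sketch. Assuming a hypothetical list of code labels already assigned to excerpts (not data from any cited study), Python’s standard library can tally them:

```python
# Sketch: counting the frequency of codes, as in content analysis.
# The coded excerpts below are hypothetical, for illustration only.
from collections import Counter

coded_excerpts = [
    "unwarranted_stop", "verbal_abuse", "verbal_abuse",
    "physical_abuse", "unwarranted_stop", "verbal_abuse",
]

code_counts = Counter(coded_excerpts)
print(code_counts.most_common())
# [('verbal_abuse', 3), ('unwarranted_stop', 2), ('physical_abuse', 1)]
```

A frequency table like this can then feed the tables, figures, and illustrations mentioned above, though for many qualitative researchers the meaning of a code matters more than its count.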
Examples of Qualitatively Derived Themes:
Brunson and Weitzer’s (2009) Research
Brunson and his colleague used grounded theory to identify themes in their
research on male youth experiences and perceptions of police relations. Six
themes emerged from their research that indicate that when “holding
neighborhood socioeconomic context constant, race makes a difference in
how youth are treated by police and in their perceptions of officers” (p. 879).
The six themes that led the researchers to offer this conclusion were the
following:
1. Unwarranted stops. Several respondents noted they had been stopped
without cause. Black teens noted this happened regardless of their
behavior (suspicious or not) and felt that they were stopped without
reason frequently. White youth respondents noted they were stopped
less and were stopped only under three specific conditions: (1) while in
the company of young Black males, (2) when in racially mixed or
majority-Black neighborhoods, or (3) while dressed in hip-hop apparel.
White teens recognized that Black teens were treated differently and
more harshly.
2. Verbal abuse. All teens noted that officers frequently were discourteous
to them, used inflammatory language or racial slurs, and name-called
those they stopped. Both Black and White teens were well aware that
Black teens bore the brunt of this poor police behavior. All teens
recognized that Black teens endured verbal abuse more often and in
harsher ways.
3. Physical abuse. Although most police interactions did not result in any
serious physical harm, the teens reported that excessive force was used
on numerous occasions. This physical abuse included shoving,
punching, kicking, and the use of mace. In addition, some teens were
taken to other neighborhoods and released—behavior by the police that
can place the teens in very dangerous or lethal circumstances.
4. Corruption. All the teens knew of some corrupt cops engaging in
bribery, stealing from the teens, planting drugs on teens, and other
reprehensible activities. No teen portrayed the entire police department
as corrupt, however. The teens recognized that only a few individual
officers conducted themselves in this way.
5. Biased policing. The idea that policing was biased (based on race) was
held by all of the teens. Moreover, most teens believed that residents of
White communities fared much better as a result of this bias.
6. Police accountability. The teens uniformly believed that there was little
accountability among poorly behaving officers in the department. The
young men noted that supervisors and other officers knew of police
misconduct but consistently failed to address it.
Researchers must decide, even beyond what the IRB allows, if the parameters
of the relationship established with the participant are ethical. Should a
researcher take a fully covert role in their research, he or she must consider
the ethics associated with the deception inherent in this approach. The
researcher must consider that engaging in a 100% covert role means that
respondents cannot voluntarily participate. This is also a consideration when
a researcher has opted to be less than fully honest and open with participants
who may not know that the person they are dealing with is a researcher.
Researchers must also keep the well-being of their subjects at the forefront of
their research. As ethical rules stipulate, a researcher must minimize any
harm done to subjects. Respondents are freely giving of themselves for the
benefit of research, and they deserve a researcher’s full respect, zero
judgment, complete care, and consideration. Judgments cannot be made
regarding behavior or thoughts shared by respondents, no matter how
personally challenging they may be to the researcher. Participants must be
made comfortable and not traumatized as a result of the researcher’s
activities. This includes questioning of subjects. For instance, a researcher
may ask about sexual violence, intimate partner violence, homicide,
commission of child abuse, or any myriad of very sensitive topics. Should the
researcher’s questioning become harmful to the respondent, it is the
researcher’s obligation to stop the process and do what is needed to assist the
respondent. For many research topics, it is advisable that an advocate be
made available (or be present). No research is worth damaging another
person (or being).
In addition, as a researcher, you must make every effort to protect
respondents’ privacy and confidentiality. This means stopping an interview to
erase identifying data from tape recordings (as Brunson and colleague, 2009,
had to do during their research) if needed. It means destroying or protecting
records that could lead to respondent identification. It is the researcher’s
responsibility and ethical obligation to take great care with records so that the
data gathered and respondents’ identities are always protected.
The call to do no harm also extends to researchers. Researchers must not only
consider the well-being of the respondents but of those engaging in any
portion of the research. Honoring this was evident in Santos and Santos’s
(2016) research, although it was not based on qualitative inquiry. Given
potential danger in making repeated contact with known felons in Santos
and Santos’s research, only certain members of the research team interviewed
these targeted offenders. It was deemed a potential safety issue for some
members of the research team to make these contacts.
Finally, it is the researcher’s responsibility to present findings that are
accurate. It may be tempting to fail to share unflattering information about a
topic or respondents. It may be tempting to emphasize only positive findings.
Nevertheless, failure to offer a complete description is unethical. The findings
presented must be complete and accurate.
Qualitative Data Research Expert—Carol Peeples
Carol Peeples is the founder and CEO of Remerg, a social cause company
that produces remerg.com, a website for people coming out of jail and prison
in Colorado. The organization’s goal is to reduce recidivism by connecting
people to the resources and information they need to succeed. Peeples
developed Remerg with the vision of using the Internet to fill the
informational gaps between systemic reentry efforts and community brick-
and-mortar organizations.
Before founding Remerg, Peeples was a K–12 teacher, mostly in private
international schools. After moving to Colorado in 1999, she took a part-time
job as a GED teacher in a prison for a community college program. Later, she
began teaching college English composition classes in the same facility. She
notes that “getting to know prisoners as their teacher was transformative as
I’d really never thought about the issue of incarceration before this time. The
greatest impact was the feeling that our prisons were full of a wasted resource
—human beings—and I wanted to do something about it.” Given that, she
founded a voting project designed to educate people with a criminal record
about their voting rights, researched and wrote a statewide guide for people
coming out of prison for a criminal justice advocacy organization, worked as
an advocate, and returned to the university and earned her MPA in public
policy. When she began her MPA, she had no idea about the important role
that research using qualitative data would play in the success of her career
and organization.
The next chapter, Chapter 7, focuses on data gathered using surveys. Survey
research is a widely used approach in academic, public, and private settings.
Although it may seem easy to construct and field a survey, Chapter 7 will
offer best practices to assist you in developing and fielding an excellent
survey that will result in quality data.
Applied Assignments
The materials presented in this chapter can be used in applied
ways. This box presents several assignments to help in
demonstrating the value of this material by engaging in
assignments related to it.
1. Homework Applied Assignment:
Assessing Research Using Qualitative Data
Select two peer-reviewed journal articles that describe research
using qualitative data. Using Table 6.4 describing our case studies
as a guide, create a table for your two articles. Be sure to clearly
articulate the purpose and research question in each piece.
Identify the type of sampling used, the sample size, and other
qualities of the sample. And in the table, identify the methodology
used to collect the data and the type of data analysis used. After
completing your table of the two articles, write a paper that
describes each, as well as the advantages and disadvantages of each
approach. What other approaches can you envision to answer the
research question? What future research does each article suggest
to you? Be prepared to discuss your findings in class.
2. Group Work in Class Applied
Assignment: Field Observation as a Group
Your group needs to enter the field as “complete observers.” As a
group, and with the permission of your professor, you need to go
to an open area (park), coffee shop, diner, shopping mall, bus
station, or somewhere you can observe people interact. Ideally,
the site you choose will be somewhere your group is interested in
and easily accessible. Your goal is to observe anything and
everything at your location, assuming you know nothing.
Data you can gather can include (it is useful to delegate
responsibilities among the group members):
Describing the interactions occurring in the setting, including
who talks to whom, whose opinions are respected, how
decisions are made
Noting where participants stand or sit
Examining interactions from a perspective of comparing
those with and without power, men versus women, young
versus old, majority versus minority, and so on
Counting persons or incidents of observed activity that
would be useful in helping one recollect the situation,
especially when viewing complex events or events in which
there are many participants
Listening carefully to conversations, trying to remember as
many verbatim conversations, nonverbal expressions, and
gestures as possible
Noting how things at your location are organized and
prioritized, as well as how people interrelate and what the
cultural parameters are
Providing a description of what the cultural members
deemed to be important in manners, leadership, politics,
social interaction, and taboos
Describing what is happening and why
Drawing a map of the place
Sorting out the regular from the irregular activities
Looking for variation to view the event in its entirety from a
variety of viewpoints
These data should be gathered using field notes. As a group,
summarize your field notes. What do the data mean? What
surprised you when doing this assignment? Does your observation
suggest any research questions for later analysis? Be able to report
your findings to the class.
3. Internet Applied Assignment: Gathering
and Analyzing Online Qualitative Data
You have been hired as a researcher to explore university mission
statements from 10 universities. Find at least 5 public and 5
private universities for inclusion in your sample. In a paper, you
need to identify your purpose and research question. You must
identify how you will select your ten universities for study. Next,
after using and analyzing these qualitative data, identify themes.
What findings emerge? Are there differences between public and private
universities? Any other patterns you can discern? What future
research does your analysis suggest? Write a qualitative research
paper that describes what you did, your methodology, your
findings, and your conclusion. Be prepared to share your findings
with the class.
Key Words and Concepts
Axial coding 197
Closed-ended question 179
Coding 180
Complete observer 193
Complete participant 192
Computer-assisted qualitative data analysis (CAQDAS) 199
Content analysis 194
Document analysis 193
Ethnography 190
Field notes 195
Focus group 189
Going native 190
Grounded theory 196
Hawthorne Effect 192
Interviews 187
Maximum variation sampling 183
Observation and fieldwork 190
Observer as participant 193
Open coding 197
Open-ended question 178
Participant as observer 192
Participant observation 190
Probing 181
Qualitative research 174
Role conceptions 190
Selective coding 197
Semistructured interviews 188
Theoretical sampling 183
Triangulation 184
Unstructured interviews 188
Key Points
Qualitative research uses non-numeric data and, frequently, inductive reasoning to answer exploratory research questions and to provide a detailed and nuanced understanding of a topic.
Research using qualitative data is especially good at providing a deep,
nuanced understanding of a topic with an emphasis on the context. In
general, it provides information on everything nonquantifiable about a
topic.
Qualitative data come from interviews, observations and field research,
and document analysis. In each of these categories, there are additional
variants of qualitative research.
Qualitative data are non-numerical in nature and include things such as
narratives, published documents, videos, music, photos, recordings,
observations, body language, voice intonation, and other aspects of a
topic.
Triangulation is the use of multiple methods, researchers, theory, or data
to conduct a study. In the words of Denzin, research that has been
triangulated offers “different lines of action.” By using triangulation, a
researcher can offer strengthened evidence supporting the conclusions.
Interviews can be face to face or over video or audio technology. They
can be conducted one on one, in small groups, or with focus groups.
Regardless of the style, a researcher should pose broad questions to the
subjects and be prepared to use follow-up probes to gather more data.
As per Gold (1958), role conceptions represent the four roles a researcher can take on during field research, indicating the degree of participation and observation assumed by the researcher. They include complete observer, participant as observer, observer as participant, and complete participant. As a researcher, you must pay attention to the possibility of the Hawthorne Effect, as well as to the ethical considerations that come along with participant observation.
Document analysis refers to a systematic collection, review, evaluation,
synthesizing, and interpretation of documents to gain meaning and
understanding, regardless of whether the document is printed or
available in electronic form. Content analysis is one type of document
analysis that focuses on the analysis of non-numeric data to develop
inferences pertinent to the research question. There is no agreement on
the precise definition of content analysis, but most agree it has
characteristics of both qualitative and quantitative research (i.e., counts
or quantifies non-numerical data).
Sampling for qualitative research generally uses nonprobability
approaches such as convenience, purposive, or snowball sampling. The
sample should be large enough to reasonably represent the population of
interest and result in saturation. The size of the sample is less important
than the quality of the sample.
After gathering qualitative data, the researcher must spend time organizing the data. After that, analysis can take place. Although analysis is widely varied (and frequently not detailed), it generally consists of coding that involves three stages according to Glaser and Strauss (2012): open coding, axial coding, and selective coding.
Even though qualitative data analysis software is available, it is only a
tool. Whether software is used or not, the researcher must still make sense of, organize, synthesize, and condense the data into meaningful
generalizations.
Review Questions
1. What is qualitative research, and how does it differ from
quantitative research?
2. What are qualitative data, and how do they differ from
quantitative data?
3. What are the advantages and disadvantages of qualitative research
approaches? What research questions are most appropriate for
qualitative research?
4. What are the three primary types of qualitative research? What are
the advantages and disadvantages of each?
5. When would a researcher use unstructured versus structured
interviews? When would a researcher opt for one-on-one
interviews versus focus groups?
6. What are the responsibilities of a moderator?
7. What are the four types of role conceptions? What are the
advantages and disadvantages of each? What ethical
considerations and research issues are associated with each?
8. What is content analysis? Is it quantitative or qualitative in
nature? Why? What sorts of documents can be analyzed in
document examination?
9. What are the ways a researcher can record qualitative data? Which is the most important? Why? Is it best to use only one recording
method? Why or why not?
10. What are the three general steps involved in qualitative data
analysis? What occurs at each of these stages?
General Steps in Survey Research
Before we move on to more detailed information related to survey research, it
is useful to consider the survey research process. An examination of this
process shows how in each step there is information that informs one’s
decision about how to gather data. Will it be a questionnaire or an interview? Will it be administered in person, via e-mail, or by handing out surveys? The first step in all research, including survey research, is to develop a research purpose and research question. This step is important because the purpose and question help to inform whether one will use a questionnaire or an
interview. If research is purely exploratory, an unstructured or a
semistructured interview may be the way to go. This allows the flexibility to
follow up with respondents immediately when new information is learned. If
the purpose is explanatory, it may be appropriate to use a questionnaire with
a fixed number of items. In the following sections, we will provide greater
information about how survey research can be used in the four purposes we
consider in this text: exploratory, descriptive, explanatory, and evaluative.
The purpose and research question also inform how one will handle
concepts, conceptualizations, and operationalization. For example, if one is
seeking to gather data on a topic that is associated with a rich literature, then
using a questionnaire may be best. This is because the researcher can identify
the concepts of interest, conceptualize them, and operationalize them into
survey items on a questionnaire. The questions or items on the questionnaire
should correspond to concepts of interest. On the other hand, if the research is
fully exploratory on a topic in which little is known, then an unstructured or
semistructured interview may be best. In this way, the researcher can learn
from his or her respondents about key concepts, how respondents define these
concepts, and the best ways they might be measured in the future.
Developing a sampling plan is the next major step. Depending on resources,
and on the nature of the sample (is there a sampling frame available?), one
may adjust the way the survey is distributed. For example, if there is
a sampling frame of e-mail addresses available, then using an online survey
might be the best approach. If the sample desired is hard to find and no
sampling frame is available, however, the approaches available may be more
limited. Interviews may be best in these situations. Alternatively, if this
population gathers in a single place, a researcher may be able to distribute
surveys. There are many ways a survey can be distributed that will be
discussed in this chapter. After distribution of the survey, the data must be
gathered and analyzed, and the research question must be answered.
Throughout the development of the research methodology, the researcher uses
information to decide on the best approach. Let’s now turn to the many
purposes of research and how survey research can be used for each.
Surveys Across Research Purposes
As indicated, surveys are useful across all research purposes. This is in part
why they are the most popular form of gathering data in the social sciences.
In this section, we consider how surveys are used across purposes with an
emphasis on your case studies to demonstrate this flexibility.
Surveys and Exploratory Research
Recall from Chapter 2 that exploratory studies are useful when researchers
know little or nothing about a topic. Exploratory research is especially useful
when the researchers seek to answer “what,” “how,” or “where” questions.
Throughout the text, we have described how featured researcher Rod
Brunson and his colleague’s study of young Black and White males residing
in disadvantaged St. Louis, Missouri, neighborhoods was an example of an
exploratory study (Brunson & Weitzer, 2009). What we have not yet shared
is that this research also benefited from survey research as one means to
collect their data. Brunson explained that the use of face-to-face interviewing
allowed the researchers to build confidence and trust with the respondents.
After this trust was established, the researchers used self-administered survey
questionnaires to obtain general information from participants. Then during
the face-to-face conversations, researchers were able to delve deeper into
important issues and topics, allowing for more detailed and qualitatively rich
data to be collected.
The Law Enforcement Management and Administrative Statistics
(LEMAS) survey is another national survey that is widely used for
descriptive research. The Bureau of Justice Statistics (BJS) sponsors the
LEMAS survey, which is conducted, or fielded, every 3 to 4 years. Fielded
means that the survey has been distributed or launched and data are being
collected. The LEMAS survey is used to gather data from state and local law
enforcement agencies that employ 100 or more sworn officers (i.e., large law
enforcement agencies). Self-administered survey questionnaires are mailed to
a representative from each law enforcement agency who provides
organizational and administration data on the LEMAS surveys. Data
collected during the survey administration include agency responsibilities,
operating expenditures, job functions of sworn and civilian employees,
officer salaries and special pay, demographic characteristics of officers,
weapons and armor policies, education and training requirements, computers
and information systems, vehicles, special units, and community policing
activities.
Law Enforcement Management and Administrative Statistics
(LEMAS): National survey sponsored by the Bureau of Justice
Statistics (BJS) that gathers data from state and local law
enforcement agencies that employ 100 or more sworn officers
(i.e., “large” law enforcement agencies).
Fielded: Term used to mean that a survey has been distributed or
administered and data are being collected.
Surveys and Explanatory Research
Explanatory research, also introduced in Chapter 2, is an approach that
provides explanations about a topic and builds off knowledge gained from
exploratory and descriptive research. Explanatory research is used to identify
what characteristics are related to a topic, as well as to identify what impacts,
causes, or influences a particular outcome or topic of interest. In addition,
through explanatory research, a researcher may try to understand how to
predict outcomes or topics of interest. Surveys are a valuable tool for
gathering data for explanatory purposes.
Although the NCVS is widely used to describe topics related to criminal
victimization in the United States, NCVS data are also used widely for
explanatory research. For example, featured researcher Heather Zaykowski’s
(2014) research analyzed NCVS data to estimate the likelihood that crime
victims would access victim services. Specifically, she focused on better
understanding the role that police reporting plays in influencing victims
accessing services. These NCVS data, gathered via surveys administered on
the phone and during face-to-face interviews, allowed her to do so. Featured
researcher Chris Melde and colleagues’ study is also explanatory, given their
goal to explain associations (or lack thereof) between gang membership, fear
of victimization, perceptions of risk of victimization, and self-reported
victimization (Melde, Taylor, & Esbensen, 2009). Data used to answer Melde
et al.’s research questions were collected from self-report surveys that
involved students at sampled schools answering questions as they were read
aloud by a researcher. Consent forms were sent to the students’ parents, and
only those students whose parents gave consent were allowed to complete the
questionnaires. The ethical protocol was in place to protect the children,
especially their privacy, and all students’ data collected during the study
remained confidential. Overall, 72% of students sampled participated in the
surveys, which took about 40 to 45 minutes to finish.
How Are Surveys Distributed?
We opened this chapter by noting that surveys are a popular research
methodology used in criminology and criminal justice research in part
because they are versatile. Part of that versatility relates to the variety of ways
a researcher can distribute surveys. Stated differently, surveys are versatile
given the variety of survey modes. Four primary survey modes or means of
distribution are available for surveys: mail or written, phone, face-to-face,
and online or mobile surveys. The choice of which type of survey mode is
best to use in any research project is influenced by many factors. Influential
factors include the research purpose and goal, the amount of time a researcher
has to field the survey, the amount of money the researcher can spend, and
other resources (such as availability of Internet or laptops used). In addition,
each survey mode has particular strengths and weaknesses that must be
considered. Each is discussed next.
Survey modes: Ways in which surveys can be administered.
There are four primary types of survey modes or means of
distribution: mail/written, phone, face-to-face, and the Internet.
Mail/Written Surveys (Postal Surveys)
When you think of a survey, you probably think of answering a series of
questions on a form or document and returning the completed form to
the person or organization that distributed it. Mail and written surveys are a
mode of surveying that involves providing a written survey to an individual.
This survey can be sent via the mail or handed out in person. Since the
respondent is the person responsible for completing the survey (i.e., no one
else is with the respondent to assist in completing it), the survey is considered
a self-administered questionnaire. A self-administered questionnaire is a
survey administered either in paper or in electronic form that a survey
participant completes on his or her own.
Mail and written surveys: Mode of surveying that involves
providing a written survey to an individual. This survey can be
sent via the mail or handed out in person.
Self-administered questionnaire: Type of survey, administered
either in paper or in electronic form, which a survey participant
completes on his or her own.
Advantages
There are many advantages of a mail/written survey. First, because an
interviewer is not present during a self-administered survey, there is no
chance of interviewer bias. Interviewer bias can occur if the survey
participant is influenced by the presence or actions of an interviewer or field
representative. Imagine you are filling out a survey in the presence of an
interviewer, and you check a box noting you had taken a drug. As a response
to that, the (unprofessional and poorly trained) field representative shakes his
or her head in disgust. This may cause you to go back and change your
answer or to answer later questions differently to avoid the representative’s
judgment. Alternatively, imagine you are filling out a survey focused on
whether you had hired a prostitute. Imagine the field representative or
interviewer looking at you during this survey is a woman who looks like your
mother. Might that affect your answer if you had hired a prostitute? If
something about the field representative, whether appearance or behavior,
affects your responses on the survey, interviewer bias is present. Interviewer bias
distorts the outcome of the interview, affects the data gathered, and ultimately
affects the results of the study.
Interviewer bias: When the survey participant is influenced by
the presence or actions of an interviewer.
Another advantage of mail/written surveys is that they can be distributed across a massive geographic study area at a low (to moderate) cost. The cost
of a mail survey can increase, however, based on the length of the
questionnaire (i.e., the number of survey pages), whether return postage is
prepaid and return envelopes are provided, and whether other strategies
designed to reduce survey nonresponse are implemented.
Disadvantages
Mail or written surveys are imperfect and characterized by disadvantages as
well. A disadvantage of a mail or written survey is that it requires the
researcher or someone on the research team to take the completed surveys,
code the responses, and enter the codes for every completed survey into a
database. Coding, as introduced in Chapter 6, refers to converting a
respondent’s answers into a numerical value that can be entered into a
database. In addition, when a mail or written survey includes open-ended
questions with short answers provided by respondents, their answers must be
typed into a database for later analysis. These steps are not only time-
consuming, but they also represent a point in the research process where error
can be introduced through inaccurate coding and text entry. A second
disadvantage associated with mail or written surveys is that they are vulnerable to
survey nonresponse. Survey nonresponse occurs when sample respondents
who choose to complete and return surveys differ in meaningful ways from
those who are selected in the sample, but do not complete and return the
survey. When this occurs, the data may be biased, and ultimately the findings
of the study may be compromised. Survey nonresponse is measured using
response rates. Response rates are the proportion of surveys returned relative
to the total number of surveys fielded. The response rate is generally described as a percentage and is calculated using this basic formula: Response Rate (%) = (Number of Completed Surveys ÷ Number of Surveys Fielded) × 100.
Survey nonresponse: When sample respondents who choose to
complete and return surveys differ in meaningful ways from those
who are chosen but do not participate in the survey.
Response rate: Way to measure the success of a survey and,
subsequently, how well the participants represent the larger
population they are intended to reflect. Rates are typically
reported as the percentage of all eligible subjects asked to
complete a survey and are computed by dividing the number of
participants by the total number of eligible participants and by
multiplying that proportion by 100.
For example, if you mail out 100 surveys, but only 10 are completed and
returned, your response rate is 10%. Historically, mail or written survey
response rates are low (e.g., 10% to 15%). This is markedly less than the
response rates associated with other survey modes. As a researcher, you need
to keep this in mind if you plan to mail out surveys to your sample. You need
to draw a large enough sample to ensure you have enough data taking into
account the likely survey nonresponse.
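The arithmetic behind this kind of planning can be sketched in a few lines of code. The function names and numbers below are illustrative only, not part of any survey package:

```python
import math

def response_rate(completed, fielded):
    """Response rate as a percentage: completed surveys divided by
    surveys fielded, multiplied by 100."""
    return 100 * completed / fielded

def surveys_to_field(completed_needed, expected_rate):
    """How many surveys to distribute to expect a target number of
    completed returns, given an expected response rate."""
    return math.ceil(completed_needed / expected_rate)

# 10 returns out of 100 mailed is a 10% response rate.
print(response_rate(10, 100))        # 10.0

# At a historically low 10% rate, 200 completed surveys
# require mailing roughly 2,000 questionnaires.
print(surveys_to_field(200, 0.10))   # 2000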
Low response rates for mail surveys are not a foregone conclusion.
Researchers have spent decades studying ways to increase mail survey
response rates (i.e., decreasing nonresponse). No other researcher is more
closely associated with survey research—and the strategies used to maximize
response rates—than Don A. Dillman. As an expert in survey methodology at
Washington State University, Professor Dillman has written extensively on
ways to minimize survey nonresponse. His book The Tailored Design
Method is considered the “go-to book” for those interested in conducting
high-quality self-administered questionnaires. By following his advice, you
may be able to get response rates as high as 80% (Dillman, Smyth, &
Christian, 2009).
Besides survey nonresponse, other sources of nonresponse bias exist. Researchers have control over some of these sources but not others. By using many of the ideas developed by Dillman and
colleagues, Daly, Jones, Gereau, and Levy (2011) identified several of the
most common reasons why survey nonresponse occurs. One major reason is
that the intended respondent never received the survey sent to him or her.
Alternatively, it may be the respondent received the survey, but for a variety
of reasons, he or she did not complete or return the instrument. An
accounting for these nonresponse influences is presented in Table 7.1.
In addition to the threat of nonresponse bias caused by low participation
rates, other important shortcomings are associated with self-administered
questionnaires. In the case of mail surveys, the researcher never truly knows
who completes the questionnaire. Although the mail survey might have been
addressed to a particular household respondent, nothing can guarantee that
the addressee is the person who completed it. Similarly, even though the
absence of an interviewer eliminates the chances of interviewer bias, not
having an interviewer present while the respondent completes a survey means
that the respondent cannot get help if he or she is confused by a question,
needs aid in understanding or interpreting instructions, or seems to lose
interest in completing or returning the questionnaire. All of these
circumstances can affect data quality and ultimately the results of the study.
When using self-administered mail surveys, researchers must try to avoid
other threats to data quality. Recall bias and recency effects are two examples
of problems that can occur when researchers gather data using mail surveys.
Both of these forms of error threaten the validity and reliability of data, which
were discussed in earlier chapters. Recall bias occurs when the survey
participant’s responses to questions lack accuracy or completeness because
they are unwilling or unable to recall events or information from the past.
Recall bias is not uncommon. Think back to something that has occurred to
you such as having a car stolen, your house broken into, a marriage
dissolved, or a heart broken. Have you ever found yourself describing this as
having happened more recently than it did? Maybe you told someone that
your car was stolen two months ago. When you stop and think about it, you
may realize that it was stolen six months ago (time flies). Recency effects, on
the other hand, stem from question or item ordering on a survey. This
question-order effect occurs when the ordering of questions on a survey
influences responses given to later questions. Perhaps you are taking a survey
that asks about a time you felt discriminated against. You answer and
describe such an event. The next question on the survey is as follows: “Do
you believe discrimination is a problem?” Chances are this question ordering
affects responses to the second question. Researchers must pay attention to
the ordering of their questions to avoid recency effects.
Recall bias: When the survey participant’s responses to questions
lack accuracy or completeness because they are unwilling or
unable to “recall” events or information from the past.
Recency effects: Stem from question ordering on a survey. This
question-order effect occurs when the ordering of questions on a
survey influences responses given to later questions.
Question-order effect: When the ordering of questions on a
survey influences responses given to later questions.
Online and Mobile Surveys
Increasingly, researchers are disseminating self-administered questionnaires
online, as well as through mobile applications (i.e., apps). Cuevas goes so far as to suggest that the days of traditional phone surveys are numbered and
that methods such as online and mobile surveys do a better job of maximizing
the success of survey research. Online surveys and mobile surveys are
offered to respondents online via a desktop computer, laptop, Apple® iPad™,
smartphone, or other mobile device. Many researchers design their surveys to
ensure they can be taken using any of these devices.
Online survey: Mode of survey research whereby a questionnaire
is delivered to an eligible participant via the Internet and
completed on a personal computer, usually through a Web
browser.
Mobile survey: Mode of survey research whereby a questionnaire
is delivered to an eligible participant via the Internet and
completed on a smartphone or tablet, usually through a Web
browser or a mobile application (i.e., app).
Advantages
Web and smartphone technologies offer a faster (and often less expensive)
alternative to collecting survey data than do traditional mail surveys. As with
self-administered mail or written surveys, the absence of an interviewer
eliminates the threat of interviewer bias.
In addition, constructing questionnaires for online/mobile surveys is much
faster, cheaper, and easier than is creating a physical questionnaire. With an
online or mobile survey, there is no need to print forms, stuff envelopes, pay
for postage, or sit around waiting for questionnaires to be returned. In
addition, online and mobile surveys do not require anyone to code or enter
data from the actual survey into a database. This means data-entry errors are
eliminated because survey participants’ responses are recorded directly into a
data file once the participant submits his or her survey. In addition, the data
file can easily be exported from the survey software company’s website, or
directly onto the researcher’s computer, at any time during the study. Online
and mobile surveys are also convenient survey modes as they allow
respondents to answer questions according to their own pace, chosen time,
and preferences. And in some cases, finding respondents is made easier
because many companies provide access to samples, or survey panels of
respondents, at an additional cost.
Survey panels: Existing samples that a researcher can pay to get
access to.
An additional advantage of online and mobile surveys is that they can easily
be tailored for specific subgroups of respondents using skip-and-fill patterns
or conditional logic statements. A skip-and-fill question is a type of survey
question in which questions asked are contingent on answers to previous
questions. Imagine you program your online or mobile survey to begin by
asking the participant whether he or she has any siblings (i.e., Question 1). If
the answer to Question 1 is “No,” then you do not need that respondent to be
asked Question 2 (how many siblings do you have?). Given that, the
researcher creates a skip-and-fill routine in survey software to “skip”
Question 2 if the answer to Question 1 is “No.” The software will then
automatically record (i.e., “fill”) a “Not Applicable” answer for Question 2
for people who noted in Question 1 that they have no siblings.
Skip-and-fill patterns: Also known as “conditional logic
statements.” A type of survey question in which the questions asked are
contingent on answers to previous questions.
Example of Skip-and-Fill Pattern in a Survey
Question 1: Do you have any siblings that are currently alive?
Yes
No
Question 2: How many?
1
2
3
4
More than 4
Question 3: What is your current zip code? (Enter your 5-digit zip
code.)
Question 2 is asked only if the answer to Question 1 is “Yes”;
otherwise, Question 2 is skipped, and “Not applicable” is
automatically recorded as the answer, based on a “skip-and-fill”
routine set up in the online/mobile survey software by the
researcher.
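The skip-and-fill routine described above can be sketched in a few lines of Python. The question labels and the "Not applicable" fill value mirror the example; the function itself is a hypothetical illustration, not the interface of any real survey software:

```python
def skip_and_fill(q1_answer, q2_answer=None):
    """Record Question 1; ask and record Question 2 only when the
    answer to Question 1 is "Yes". Otherwise, skip Question 2 and
    automatically fill "Not applicable"."""
    record = {"Q1": q1_answer}
    if q1_answer == "Yes":
        # The respondent sees Question 2 and the answer is recorded.
        record["Q2"] = q2_answer
    else:
        # Question 2 is never shown; the software fills the answer.
        record["Q2"] = "Not applicable"
    return record

print(skip_and_fill("Yes", "2"))  # {'Q1': 'Yes', 'Q2': '2'}
print(skip_and_fill("No"))        # {'Q1': 'No', 'Q2': 'Not applicable'}
```

Tailoring the questionnaire this way spares respondents irrelevant questions while keeping the resulting data file complete.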
In addition to all of the advantages of online and mobile surveys described
earlier, there are certain characteristics specific to mobile surveys that set
them apart from online surveys. Perhaps the most notable difference is that
surveys administered on mobile devices can be used in Ecological
Momentary Assessment (EMA) studies. An Ecological Momentary
Assessment (EMA) is a unique type of survey methodology designed to
collect data that have greater ecological validity. In this context, ecological
validity refers to data that more accurately reflect the real world. For
example, historically, studies on fear of crime use self-administered mail
questionnaires to gauge individuals’ perceptions of crime, especially violent
crime like assault or robbery. Ironically, however, participants are often
asked to complete these types of surveys in the comfort, safety, and security
of their own homes. Surveys on fear of crime can collect data with greater
ecological validity if they are administered to participants during their
everyday routine activities, in other words, while they are out and about in
the “real world.”
Ecological momentary assessments (EMA): Research that
comprises a unique type of methodology, designed for real
time/real place data collection.
Ecological validity: Extent to which data reflect the “real world.”
EMA and other EMA-like techniques are used in research that focuses on the
daily processes of individuals, and they share four common characteristics.
First, EMAs are used to collect data from individuals in real-world
environments. Second, they focus on the individual’s actions, behaviors, or
attitudes that are temporally proximate (i.e., feelings the research participant
has “in the moment”). Third, they typically are triggered by predetermined
events, at predetermined times, or at random times during the day, using
complex triggers available in many mobile survey software solutions. Finally,
EMAs are administered to subjects repeatedly over an extended period.
Compared with online surveys, EMAs delivered as brief mobile surveys
gather data from individuals while they are in their natural settings. Therefore,
they can produce information with greater ecological validity than data
collected via other survey modes.
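The "random times during the day" trigger mentioned above can be approximated with a short script. A sketch in Python (the waking-hours window and number of prompts are assumptions; commercial mobile survey software exposes such scheduling options directly):

```python
import random
from datetime import datetime, timedelta

def random_prompt_times(day, n_prompts=4, start_hour=9, end_hour=21, seed=None):
    """Pick n random moments between start_hour and end_hour on `day`
    at which the mobile survey will prompt the participant."""
    rng = random.Random(seed)
    window_start = day.replace(hour=start_hour, minute=0,
                               second=0, microsecond=0)
    window_seconds = (end_hour - start_hour) * 3600
    times = [window_start + timedelta(seconds=rng.randrange(window_seconds))
             for _ in range(n_prompts)]
    return sorted(times)

# Four EMA prompts scattered across one day's assumed waking hours.
for t in random_prompt_times(datetime(2019, 3, 1), seed=42):
    print(t.strftime("%H:%M"))
```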
Disadvantages
For all that is good about online and mobile surveys, many disadvantages are
associated with these survey modes. As with self-administered mail surveys,
the absence of an interviewer means that no one is able to assist the
participant if he or she needs help or encouragement to complete the survey.
Another disadvantage of online and mobile surveys is that they are not suited
to studying certain segments of the population. For example, it would be
inappropriate to choose an online survey mode if you were interested in
collecting data from senior citizens regarding elder abuse or victimization. A
mobile or online survey may be a questionable mode if the population of
interest includes those in rural communities who may not have access to
Internet services. Problems with access to the Internet, familiarity with using
web browsers, and issues related to text size might adversely influence
response rates.
An Ecological Momentary Assessment (EMA) is a unique type of survey
methodology designed to collect data that have greater ecological validity.
© AndreyPopov/iStockphoto.com
Finally, survey fraud is something that researchers must consider before
conducting an online or mobile self-administered survey. Survey fraud
occurs during online and mobile surveys when participants sign up to be part
of a survey panel just for the incentives offered. Those committing this fraud
have no interest in the study and may not even meet the eligibility criteria set
forth by the researcher. These fakes complete questionnaires with little to no
interest in the topic. As a result, their responses
can bias results. Despite these limitations, online and mobile surveys
represent powerful survey modes that are growing in popularity and use.
Survey fraud: When participants, during online and mobile
surveys, sign up to be part of a survey panel just for the incentives
offered.
Telephone Surveys
Telephone surveys are a common interview modality used to collect original
survey data. They are relatively inexpensive to administer and allow quick
data collection, making them efficient when you need to gather data from a
large number of people or from a sample of individuals living across a large
geographic area. In addition, telephone surveys are characterized by
relatively high response rates.
Telephone surveys: Common interview modality used to collect
original survey data over the telephone.
Random digit dialing (RDD), as discussed in Chapter 5, is a commonly used
method of accessing potential respondents in a telephone survey. Prior to
RDD, a researcher had to access a sampling frame of phone numbers in the
area of interest. This sampling frame had limitations: unlisted numbers and
cell phone numbers did not appear on the frame, some homes had multiple
phone lines, making their chances of being selected for the survey higher
than others, and some numbers may have been disconnected since the
construction of the sampling frame. With random digit dialing, researchers
no longer have to rely on tangible sampling frames like phone
books. Rather, RDD generates telephone numbers at random that are called to
find respondents to take the survey. RDD is increasingly important as more
and more households have stopped using landlines and use only cell numbers
for communication (see Figure 7.1). According to the Centers for Disease
Control and Prevention’s National Health Interview Survey, about half of
U.S. households no longer have a landline in their home (Blumberg & Luke,
2015). Despite this rising figure, and especially with the use of RDD,
telephone surveys still offer researchers a powerful way to collect survey
data. This is especially the case when a researcher needs data from groups of
the population that may not have access to the Internet or who live in remote
areas.
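In principle, RDD amounts to attaching random digits to known area-code and exchange combinations, which is why unlisted and cell-only numbers can be reached. A toy sketch in Python (the area code and prefixes are invented for illustration):

```python
import random

def generate_rdd_numbers(area_code, prefixes, n, seed=None):
    """Generate n random telephone numbers by attaching four random
    digits to known area-code/prefix combinations, so unlisted and
    cell-only numbers have the same chance of selection as listed ones."""
    rng = random.Random(seed)
    numbers = []
    for _ in range(n):
        prefix = rng.choice(prefixes)
        line = rng.randrange(10000)          # random line number 0000-9999
        numbers.append(f"({area_code}) {prefix}-{line:04d}")
    return numbers

# Draw a small RDD sample for a hypothetical study area.
sample = generate_rdd_numbers("303", ["555", "556"], n=3, seed=7)
print(sample)
```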
Regardless of the way the participant was reached on the phone, historically,
telephone surveys were conducted by a field representative who would ask
questions and record responses on their paper-and-pencil survey instruments.
Increasingly, however, telephone surveys are conducted using computer-
assisted telephone interviewing (CATI) systems. CATI operates such that
the interviewer on the phone is guided through the survey by prompts on a
computer. Responses are entered directly into the computer. CATI systems
allow survey data to be collected even more quickly and easily from a sample
population than do traditional telephone survey techniques, especially when
they involve random digit dialing. In addition, the step of coding and entering
data later, and the associated errors, is removed from the process.
Computer-assisted telephone interviewing (CATI):
Computerized system that guides the interviewer through the
survey on a computer. The system prompts the interviewer about
exactly what to ask.
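The logic of a CATI system can be sketched as a loop over a survey script: the computer shows the interviewer exactly what to ask, validates the entry, and stores it immediately, so no separate coding and data-entry step is needed. A simplified illustration in Python (the script items and prompts are hypothetical):

```python
def run_cati_interview(script, get_answer):
    """Walk an interviewer through a survey script: each prompt is
    read aloud, the response is validated and stored directly --
    removing the later coding/data-entry step. `get_answer`
    abstracts the keyboard (in the field it would be `input`)."""
    responses = {}
    for item in script:
        answer = get_answer(item["prompt"])
        # Re-ask until the entry matches the allowed response set, if any.
        while "valid" in item and answer not in item["valid"]:
            answer = get_answer("Invalid entry, re-ask: " + item["prompt"])
        responses[item["name"]] = answer
    return responses

script = [
    {"name": "reported", "prompt": "Did you report the incident to the police?",
     "valid": {"Yes", "No"}},
    {"name": "zip", "prompt": "What is your 5-digit zip code?"},
]

# Simulate an interview with canned answers instead of a keyboard.
canned = iter(["Yes", "80204"])
print(run_cati_interview(script, lambda prompt: next(canned)))
# {'reported': 'Yes', 'zip': '80204'}
```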
Figure 7.1 Graphs of Cell Phone Usage by U.S. Adults and Corresponding
Increase in Surveys Conducted via Cell Phone by the Pew Research Center
© Maica/iStockphoto
Disadvantages
The biggest downside of in-person surveying is cost. Training interviewers
requires a considerable amount of money. Once interviewers are trained, they
will be spending a considerable amount of time in the field collecting data.
Time is money, which means the cost of completing a single face-to-face
interview is higher than in any other survey mode. In addition, data quality
might be an issue, although this is highly dependent on the quality of the
interviewer. Some people have a natural ability to conduct face-to-face
interviews, whereas others do not. It is unlikely that an entire interviewing
staff will have great interviewing skills. If some interviewers are poor, then data
quality may suffer (not to mention a good bit of the research budget will have
been wasted on that interviewer’s training).
Another problem associated with face-to-face interviews is that unless you
have a gigantic budget, they can only be administered if a study involves
collecting data from a smaller sample. It would simply take too much time
(and money) to conduct interviews otherwise. Related to this limitation is
data entry. Unlike some other survey modes, data collected during face-to-
face interviews must either be manually entered into a data file or transcribed
from tapes or other recording media. Transcription services and paying
someone to enter data is an additional cost related to face-to-face interviews.
If a research project requires collecting high-quality and detailed data from a
smaller group of people and the research team has the time and money to
train an effective interview team, then conducting face-to-face interviews
might be the best survey mode option.
With this foundational information, we next present information on designing
your own survey.
Designing Your Own Survey
Perhaps you have decided that survey research is the best way to gather
original data to answer your research question. A key component of creating
a survey includes the survey questions and the survey layout.
Survey Questions
Survey questions must be clear, relevant, and unbiased. Using clear and
concise survey questions can mean the difference between success and
failure. Survey questions are used to gather the specific data needed to
answer your research question. Zaykowski (2014) wanted to understand the
role that police reporting had on accessing victim services. Had she designed
her own survey, she would have needed to include a question asking whether
the respondent had reported the victimization to the police, questions
identifying whether someone had been a victim, as well as a question asking
whether the respondent had accessed victim services (among others). Every
question on a survey should correspond with a specific concept for which
you need data. Asking about more than is necessary increases the
respondent burden. Respondent burden refers to the effort needed in terms
of time and energy required by a respondent to complete a survey. As a
researcher, you must minimize respondent burden in all ways possible.
Survey Administration
The instrument is designed and tested; now it is time to administer the
survey. Administering your survey requires three major steps. First, you need
to contact potential survey participants and alert them that a survey is
coming. Next, you need to field the survey. And finally, you need to follow
up with those who have not completed and returned it. The next sections
focus on these three steps.
Notification Letters
In general, once the survey is ready to be fielded, you must engage the
potential participants with a notification letter. Their first contact for a
written/mail survey should come in the form of a notification letter.
Notification letters are simply letters mailed to those in the sample to alert
them that they will be receiving a survey. Notification letters are effective
tools for increasing response rates. In the 2010 U.S. Census example, a
notification letter explaining why the survey was being conducted, why it
needed to be completed, and how to complete and return it was sent to all
residential households a few days prior to the questionnaire arriving. Dillman
(2000) recommends that mail/written survey notification letters be sent a few
days to a week before the survey is fielded.
Notification letters: Letters mailed to those in the sample to alert
them that they will be receiving a survey.
Notification letters should include several pieces of information, including
the following:
1. Title of and purpose of the research project
2. Name and contact information of the primary researcher or chief
investigator
3. Name of the agency or organization sponsoring the project
4. Whether incentives (i.e., rewards or inducements for participating) are
being offered for participation
5. Information about how the data will be used, how the results of the
study can be accessed or obtained once the project is over, and the
safeguards in place to protect participants’ information
Incentives: Used in survey research when eligible participants are
given something of value in return for their participation.
Notification letters should disclose whether participants’ identities and data
would be kept confidential or anonymous. Recall that confidentiality refers
to when the researcher knows and can identify individual respondents but
promises to keep that data private. In contrast, anonymity refers to situations
where the researcher cannot link the data gathered to the respondent or does
not gather identifying data about the respondent. Survey data cannot be both
confidential and anonymous. In some cases, different people on the research
team have access to confidential data, whereas others have access to anonymous
data. For example, Zaykowski worked on a project focused on gathering data
from young people. Individuals at the organization administered the surveys
and could tie respondents to the data (i.e., confidential). Then the
organization stripped all identifying information out of the data so that when
Zaykowski received it to work with, it was anonymous.
One issue that is especially relevant for survey research is the decision to
incentivize participants. There is nothing that prohibits the use of incentives
in survey research. When incentives are incorporated into the research
protocol, however, their value must not be so great as to be “the deciding
factor” for the participant. In other words, the incentive cannot be so valuable
that the participant would essentially suffer a financial harm or undue burden
if he or she decided not to participate. For example, during the first phase of
the Arrestee Drug Abuse Monitoring Program (ADAM; 1998–2003), free
candy bars were offered to jail inmates for participating in a survey about
their past drug and alcohol use. The candy bars were allowed to be used to
reward volunteers for their participation in the ADAM project because they
were considered not to be so valuable that the inmates would “suffer” if they
did not participate in the survey. Instead, they were valuable enough to
increase the number of volunteers likely to participate without creating a
harm or burden for the participants/nonparticipants.
Proper care and control over data collected from respondents is another
aspect of survey research that requires careful consideration. For example, if
a survey is confidential and individual identifiable data are obtained from
participants, then those data must be protected so that the respondent’s
information is never disclosed to unauthorized persons. Procedures in place
for the care and custody of participants’ data must be described in detail in
the researcher’s institutional review board application. In addition, the
researcher must describe how all survey data will be stored securely after it
has been collected, the length of time it will be stored, and how it will be
destroyed once the study has ended. The widespread use of laptops makes it
very easy today to carry around survey data, but doing so is inappropriate
and unethical given the risk of losing the data. Data must be stored in a safe
and secure place, and a researcher’s laptop is not a safe place unless the data
are encrypted. Encryption is a process in which data and information are
encoded, making unauthorized access infeasible without the decryption key.
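As a purely conceptual illustration of what encryption does (a one-time-pad XOR chosen because it uses only the standard library; real survey data should be protected with vetted tools such as full-disk encryption or an established cryptography library, never hand-rolled code):

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR each byte of the data with a random key of equal length
    (a one-time pad). Without the key, the ciphertext is unreadable."""
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

# Hypothetical survey record for demonstration only.
record = b"Respondent 0412: reported victimization to police"
key = secrets.token_bytes(len(record))     # store the key separately!
ciphertext = encrypt(record, key)

assert ciphertext != record                # unreadable without the key
assert decrypt(ciphertext, key) == record  # authorized access restored
```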
Table 7.5 shows how widely used survey research was for our case studies.
As the table demonstrates, only Santos did not use survey research for her
study (Santos & Santos, 2016). Among those who did use survey research, a
variety of approaches were taken. Brunson used both interviews (face-to-
face) and questionnaires (self-administered) to gather data for his study on
youth experiences and perceptions of police relations (Brunson & Weitzer,
2009). Cuevas and his colleagues used data gathered via telephone interviews
in the Dating Violence Among Latino Adolescents survey (DAVILA; Sabina
et al., 2016). Dodge relied solely on face-to-face interviews to gather data
from her subjects (Dodge et al., 2005). Melde and his collaborators gathered
data from their adolescent sample using self-administered questionnaires
(Melde et al., 2009). Finally, Zaykowski’s (2014) research used data gathered
for the National Crime Victimization Survey, which is administered using
two modes of interviewing: in person and over the phone. As our case studies
show, survey research is versatile and allows many approaches for gathering
data. This means you can personalize your approach to best suit the needs of
your research as well as your sample.
In the next chapter, we shift gears and discuss experimental research. This is
the type of research that students enter the class feeling they know the most
about. Much of that understanding comes from so-called experiments they have
conducted over time or have seen in the movies. In that chapter, we discuss
what makes an experiment an experiment and how they can inform our
understanding of the world.
Applied Assignments
The materials presented in this chapter can be used in applied
ways. This box presents several assignments to help in
demonstrating the value of this material by engaging in
assignments related to it.
1. Homework Applied Assignment: Bad
Survey Questions
Find a survey used in a peer-reviewed, published academic
journal article in criminal justice or criminology. In your thought
paper, describe the purpose of this survey. What is your overall
assessment of the instrument? Identify two poorly asked
questions, and identify why you believe they are poorly
constructed, given the material presented in this chapter. Describe
how you would improve these questions. What value would you
place on data gathered using these two questions? Given what you
have learned in this chapter, how would you improve the
instrument? Why would your way be better? Turn in the
instrument, the article that used it, and your thought paper. Be
prepared to discuss your findings in class.
2. Group Work in Class Applied
Assignment: Constructing Excellent Survey
Questions
As a group, select three of our case study articles. Pretend your
group has been hired to write a short survey to gather data that
would be useful for these articles. As a group, write 10 survey
questions including response categories that you believe would
gather additional data that our case study researchers could use to
answer their research question(s). Be sure to use the best practices
described in this chapter to write questions that will likely
produce valid and reliable data. Make sure you also include
appropriate response categories for your questions. Be able to
point to the concepts that each survey question is measuring. Be
able to describe what type of data each survey question would
gather (quantitative or qualitative) and the levels of measurement
that each question would produce (e.g., nominal, ordinal, interval,
or ratio). Discuss the strengths of your measures, as well as their
limitations. Be prepared to discuss and share your work with the
class.
3. Internet Applied Assignment:
Constructing a Survey
Use the survey questions that your groups developed in the
second assignment to build a questionnaire on SurveyMonkey. Be
sure to provide the informational boxes described in this chapter,
and make the survey attractive. After you have created your
survey, pretest it on a friend to work out any bugs. Once your
survey is complete, print it out to turn in. Include with your
printed survey a paper describing your 10 questions and the
concepts they are measuring. In addition, describe the type of data
each question gathers in terms of qualitative and quantitative, as
well as the levels of measurement of the data you would gather
with the questions. Describe any issues you faced and how you
overcame them. Include any changes you made to your questions
and response categories after the survey was pretested.
Key Words and Concepts
Anonymity
Computer-assisted telephone interviewing (CATI)
Confidentiality
Demographic questions
Double-barreled questions
Ecological momentary assessments (EMA)
Ecological validity
Encryption
Face-to-face interviews
Fielded
Incentives
Interviewer bias
Interviews
Items
Law Enforcement Management and Administrative Statistics (LEMAS)
Loaded questions
Longitudinal data
Mail and written surveys
Matrix question
Mobile survey
Notification letters
Online survey
Questionnaires
Question-order effect
Recall bias
Recency effects
Respondent burden
Respondents
Response rate
Self-administered questionnaire
Skip-and-fill patterns
Survey
Survey fraud
Survey modes
Survey nonresponse
Survey panels
Survey processing
Survey research
Telephone surveys
Key Points
Survey research is the most common approach used in criminology and
criminal justice to collect original data because it can be an incredibly
effective and efficient method for collecting vast amounts of data from
large groups of individuals, and because surveys can be administered
using a variety of techniques.
In survey research, data are collected either from self-administered
questionnaires or during interviews. Mail, online, and mobile surveys
are types of self-administered surveys, whereas interviews are conducted
over the telephone or face to face. These different survey modes can be
used to collect data for exploratory, descriptive, explanatory, or
evaluation research projects.
There is no “one-size-fits-all” approach to designing an effective survey
instrument. Nevertheless, all questionnaires should be designed and
administered in such a way as to increase response rates and minimize
inaccurate answers to questions that result from poor question wording
or interviewing. This means that survey researchers must be mindful of
the design and layout of a questionnaire, its administration, and the
processing and entry of the instrument once it is returned.
Surveys should have good “flow,” which can come from asking one
question at a time, minimizing the use of matrix questions, listing
answer categories vertically instead of horizontally, using dark print for
questions and light print for answer choices, and vertically aligning
question subcomponents across questions.
In general, presurvey notification letters should contain the title of the
research project, the name and contact information of the chief
investigator, the name of the agency or organization sponsoring the
project, whether incentives are being offered to participants, information
about how the data will be used, how the results of the study can be
accessed at the end of the project, and the safeguards in place to protect
participants’ information.
Survey questions also must be clear, relevant, and unbiased. Much free
and paid software is available to help those
interested in conducting survey research, including LimeSurvey,
SurveyMonkey, and Qualtrics.
Ethical considerations must be taken into account when conducting survey
research. Of special interest is the use of incentives as well as the
protection of the survey data. Finally, for in-person interviewers,
researchers have an ethical responsibility to ensure the safety and
security of members of the interviewer team.
Review Questions
1. Why is survey research a popular research method in the field of
criminology and criminal justice?
2. What are the different modes in which surveys are conducted?
Describe the strengths and weaknesses of each.
3. How can survey research be used in an exploratory research
project? Explain.
4. How can survey research be used in a descriptive research
project? Explain.
5. How can survey research be used in an explanatory research
project? Explain.
6. How can survey research be used in an evaluation research
project? Explain.
7. What are some ways an interviewer can help improve interview
response rates? What are some ways an interviewer can cause
increases in interview nonresponse?
8. What are some ways a questionnaire can be designed to help
increase completion/response rates?
9. Why are incentives used in survey research? When should they be
avoided, and why?
10. In survey research, what is the difference between anonymity and
confidentiality? What are some other ethical considerations of
survey research that must be considered before administering
surveys or conducting interviews?
© Mark Stivers
Association between variables is another criterion needed to demonstrate
causation. Association means that the values in the independent variable and
the values in the dependent variable move together in a pattern, or they are
correlated. The association between variables may be linear, curvilinear, or
some other pattern. A linear association between variables is found when
increases in the values of the independent variable occur with increases in the
values of the dependent variable. Another linear association between
variables is found when increases in the values of the independent variable
are associated with decreases in the values of the dependent variable. A nonlinear association
is another type of relationship found between variables. Age is frequently
associated with outcomes in a curvilinear fashion. For example, as age
increases, values in a dependent variable initially increase until some point
when values begin decreasing. Patterns of association or correlation come in
many forms, and a comprehensive discussion of all of them is beyond the
scope of this text. The important point is that two variables must move
together in a pattern for there to be an association between them. Some
examples of associated variables are shown in Figure 8.1.
Association: One of three criteria needed to establish causation,
which is found when the values in the independent variable and
the values in the dependent variable move together in a pattern.
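A linear association can be quantified with Pearson's correlation coefficient, computed here directly from its definition (the study-hours and grade figures are hypothetical):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's r: +1 means the variables increase together perfectly,
    -1 means one decreases as the other increases, 0 means no linear
    association."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: hours of coffee-fueled study vs. exam grade.
hours = [1, 2, 3, 4, 5]
grade = [60, 65, 70, 75, 80]
print(round(pearson_r(hours, grade), 2))  # 1.0 (perfect positive linear association)
```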
Like the temporal ordering requirement, establishing an association between
variables alone is not sufficient to demonstrate causality between variables.
In addition to the temporal ordering requirement and association, a researcher
must also rule out the influence of a third variable. In other words, a
researcher must rule out the influence of a spurious relationship.
The third requirement for establishing causality is ruling out the presence of a
spurious relationship between the independent and the dependent variable. A
spurious relationship is an apparent relationship between independent and
dependent variables when in fact they are not related. Spurious relationships
can occur via a confounding factor or an intervening variable. A confounding
factor is a third variable that causes the independent and dependent variables
to vary. Although initially one may think that the independent variable is
making the dependent variable vary, what is happening with a confounding
variable is that it is making the independent and dependent variables vary. In
reality, the independent and dependent variables are unrelated. Failure to
consider the presence of a confounding factor may lead a researcher to
conclude erroneously a causal relationship between the independent and
dependent variables. A spurious relationship may also result when an
intervening variable is at work. An intervening variable is a variable that is
situated in time between the independent and dependent variables. When an
intervening variable is operating, the independent variable does not cause the
dependent variable, but instead the independent variable causes the
intervening variable to vary, and the intervening variable causes the
dependent variable. Figure 8.2 illustrates spurious relationships resulting
from a confounding factor, and spurious relationships resulting from
intervening variables.
Spurious relationship: Relationship among variables in which it
appears that two variables are related to one another, but instead a
third variable is the causal factor.
Confounding factor: Third variable at work in a spurious
relationship that is causing what appears to be a relationship
between two other variables.
Intervening variable: Variable that is situated between the two
other variables in time, creating a type of spurious relationship.
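A confounding factor can be demonstrated with a small simulation. In this sketch (all variables and effect sizes are invented), temperature drives both ice-cream sales and a second, unrelated outcome; the two outcomes end up strongly correlated even though neither causes the other:

```python
import random

rng = random.Random(0)

# Confounding factor: daily temperature.
temperature = [rng.gauss(70, 15) for _ in range(1000)]

# Each outcome depends on temperature plus its own independent noise;
# neither outcome causes the other.
ice_cream = [t * 2.0 + rng.gauss(0, 5) for t in temperature]
burglaries = [t * 0.5 + rng.gauss(0, 5) for t in temperature]

def pearson_r(x, y):
    """Pearson's correlation coefficient, from its definition."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# The outcomes correlate strongly, but the relationship is spurious:
# the confounder (temperature) drives both.
print(round(pearson_r(ice_cream, burglaries), 2))
```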
Association Is Not Causation
Association, or correlation, is not the same as causation. As described earlier,
association means that variables vary together. For example, you likely notice
that birdsong is associated with flowers blooming. You might also
recognize that drinking a lot of coffee is associated with earning higher
grades. Although each of these pairs of variables is associated, additional
work is needed to suggest that these pairs of variables are causally related. To
establish causality, a researcher needs to offer evidence of appropriate
temporal ordering of the variables, a lack of spurious relationships, and
association between the variables. Only with this evidence can you state with
confidence that the variables are causally related.
Figure 8.1 Examples of Associated or Correlated Variables
Source: Used with permission. Copyright © 2017 Esri, ArcGIS Online,
and the GIS User Community. All rights reserved.
Note from the authors: This figure is found here:
http://resources.esri.com/help/9.3/arcgisengine/java/GP_ToolRef/Spatial_Statistics_tool
Figure 8.2 Examples of Spurious Relationships
© Adam Sandiford/asandiford.com
What Is Experimental Research?
Most students enroll in a research methods class with some notion of what
experiments are, given unforgettable exposure to them in the media. Consider
the classic 1931 movie Frankenstein, in which scientist Henry Frankenstein
experiments with the reanimation of lifeless bodies. After success
reanimating animals, Henry decides to reanimate a human body. To do this,
he stitches together various body parts gathered from numerous corpses to
conduct his biggest experiment ever. He is successful in reanimating the body
—Frankenstein’s monster—but almost immediately, things go terribly
wrong. The monster accidentally kills a little girl (among other things), and
eventually, the monster is seemingly killed in a burning windmill.
Alternatively, consider the experiments highlighted in the classic movie The
Fly (released in 1958 and remade in 1986). In the 1986 version, scientist
Seth Brundle develops telepods that can transport a person (or other beings)
through space from one telepod to another. When Seth feels he has finally
worked out all the kinks of teleportation, he hops in to transport himself across the
room. Unfortunately, unbeknownst to Seth, a fly is buzzing around in the
telepod with him. Bad news! The transportation is successful in that Seth
arrives in the other telepod, yet because he was transported with a fly, their
molecular makeup is jumbled and Seth begins to transform into a (nasty
looking) fly!
More recently, the movie Super Size Me highlights an experiment. In the
documentary, Morgan Spurlock experimented with his own body to
determine the influence that eating three McDonald’s meals every day for a
month would have on his health. A part of his experiment required him to
supersize the meal if the McDonald’s clerk asked him whether he wanted to do so. In
addition, Spurlock foregoes exercise during that month. Prior to beginning
the 30-day experiment, multiple doctors (a general practitioner, a
cardiologist, and a gastroenterologist) run tests demonstrating that Morgan is
outstandingly healthy. A dietician/nutritionist and an exercise physiologist
also assess Morgan and conclude he is exceedingly fit. After the 30-day
experiment, Morgan is again tested by the physicians, and the tests show that
his health declined dramatically during the period.
Although the experiments depicted in the movies generally end badly with
the creation of monsters, death, carnage, and high blood pressure, luckily,
experiments conducted in criminology and criminal justice tend to have tamer
outcomes. What accounts for these radical differences? A part of the
difference may be that what we see in movies are not true experiments.
True experiments have particular characteristics
that allow a researcher to establish causality among variables. You might be
asking yourself, “What about the Stanford Prison Experiment? That didn’t
turn out so well.” You are correct—it did not turn out so well! How did all
these normal people very quickly turn on the prisoners? How did this so-
called experiment end so badly? How can it be that this experiment resulted
in monsters wearing Ray-Ban® sunglasses and khaki shirts? Consider a question posed
by Cuevas as you read this chapter—was the Stanford Prison Experiment an
experiment at all? An argument can be made that although this work is called
an experiment, it has none of the important characteristics of experiments.
Something to keep in mind, as emphasized by Zaykowski, is that
experimental research is not always conducted in a laboratory setting. The
numerous types of experimental approaches show that they can occur
anywhere. We next turn to defining true experiments and describing those
very important characteristics.
True Experiments
A true experiment is an approach to research that has three defining
characteristics: (1) at least one experimental and one control group, (2)
random assignment to the experimental and control group, and (3)
manipulation of an independent or a predictor variable (i.e., treatment) by a
researcher. Research that includes these three criteria is a true experiment.
Santos and Santos’s (2016) research is an excellent example of a true
experiment. In this research, Santos and her collaborator used 24 treatment
(or experimental) and 24 control geographic areas, a type of randomization
(recommended for low sample sizes), and researcher control of the treatment
administration. The purpose of this research was to identify whether the
treatment—intensive policing—led to a reduction of crime in hot spots in the
experimental group. The next section discusses each element that makes up a
true experiment.
Taking great care in defining and measuring hot spots, and in randomizing
the hot spots into experimental and control groups, was imperative to
maximizing the rigor of this work by Santos and her colleague (2016). A theme
of this text is that one’s research is only as strong as its weakest part. By
taking the time to define clearly and randomize carefully, the researchers
were able to maintain the integrity of their work. This is an important
consideration because, as Cuevas notes, too often experimental researchers
fail to account for, or think about, the specific nuts and bolts of conducting
research. Santos and her colleague used a carefully thought-out design, and
they followed this plan. This means they can have confidence in the strength
of their design.
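The logic of random assignment in a place-based design like this one can be sketched in a few lines of code. This is an illustrative sketch only; the hot-spot labels and the fixed seed are invented and do not reflect the actual procedure or data of Santos and Santos (2016):

```python
import random

# Hypothetical sketch: shuffle 48 hot-spot labels and split them evenly
# into experimental and control groups, 24 in each.
random.seed(42)  # fixed seed so the split is reproducible

hot_spots = [f"hotspot_{i:02d}" for i in range(1, 49)]
random.shuffle(hot_spots)

experimental_group = hot_spots[:24]  # receives the treatment
control_group = hot_spots[24:]       # policed in the traditional way

print(len(experimental_group), len(control_group))  # 24 24
```

Because every hot spot has the same chance of landing in either group, known and unknown differences among the areas should, on average, balance out between the two groups.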
Researcher Manipulation of Treatment
A third characteristic of true experimental research is that it includes the
researcher’s control or manipulation of an independent variable. The
manipulation of the independent variable is also referred to as the treatment.
By controlling the treatment, a researcher can better isolate any effect of the
treatment on the outcome. For instance, in the case of Santos and Santos
(2016), the researchers implemented a treatment—intensive policing—in
the experimental hot spots. The control group of hot spots did not receive the
treatment but continued to be policed in the traditional ways. The
experimental group received the treatment, which was offender-focused
intervention carried out by law enforcement officers. Santos and her
colleague controlled the application of the treatment by supervising two
officers engaged in high-intensity policing. What did the intensive policing
entail? First, each
of the two detectives repeatedly contacted each sampled offender or family
member in the treatment groups in a friendly way. The detectives would ask
whether the offender had any information about crimes that had recently been
committed in the area. In addition, the detectives conducted curfew checks
and reminded the offenders of the importance of following their probation
rules. The detectives visited the offenders and their families on random
days, and they also made contact by phone. The purpose of these intensive
interactions was to influence the offenders’ perceptions of their risk of
being caught should they commit additional
crimes. Activity in the control group hot spots remained unchanged from
normal policing protocols. Given the use of this protocol, Santos and Santos
could later measure to see whether intensive policing influenced crime rates
(one of four dependent variables) in the geographies of interest.
The value of each of these measures was calculated prior to the treatment
(i.e., the pretest measures) using data available at the local law enforcement
agency. Then posttest measures were taken after the treatment period. The
findings from this research were mixed. First, turning to the first outcome,
the results of the experiment did not indicate that the intensive policing
program led to lower counts of residential burglary and theft from a vehicle.
Santos and Santos (2016) do note that the tests were suggestive of a
beneficial impact of the intensive policing, but the tests just did not result
in a large enough difference to conclude an effect. More research with a larger sample
of hot spots is recommended. The findings from the other three experiments
also failed to indicate causality. Each experiment had limitations (all research
has limitations), however, and Santos and her colleague discuss them in their
journal article.
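The pretest–posttest logic described above can be illustrated with a small sketch. The crime counts below are invented for illustration and are not Santos and Santos’s (2016) data; the sketch simply shows how pre- and posttreatment changes in the two groups are compared:

```python
# Invented pretest and posttest crime counts for the two groups.
experimental = {"pretest": 120, "posttest": 95}
control = {"pretest": 118, "posttest": 110}

# Change within each group over the treatment period.
change_experimental = experimental["posttest"] - experimental["pretest"]  # -25
change_control = control["posttest"] - control["pretest"]                 # -8

# The estimated treatment effect is the difference between the two
# changes: how much more the experimental group changed than the
# control group did over the same period.
estimated_effect = change_experimental - change_control
print(estimated_effect)  # -17
```

Comparing changes, rather than posttests alone, helps separate the treatment’s effect from whatever was happening to crime in both groups at the same time.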
It is important to note that although this design is rigorous, it is not without
potential issues. A potential issue with the two-group pretest–treatment–posttest
design stems precisely from the presence of the pretest. In particular, the fact
that subjects are given the pretest may affect their response to the treatment,
which could affect the findings and jeopardize their generalizability. Even
though this was not an issue in Santos and Santos’s (2016) research (given that
the subjects were not aware of either the pre- or posttests), it is of concern
in other types of research. We discuss the
generalizability of experimental findings later in this chapter.
Solomon Four Group Design
One answer to whether treatment is affected by the presence of a pretest, and
to minimize the effect of any confounding variables, is to use the Solomon
Four Group design. In using this approach, a researcher randomly assigns
members of the sample to one of four groups:
1. A pretest, treatment, posttest group
2. A pretest, no-treatment, posttest group
3. A treatment, posttest group
4. A no-treatment, posttest group
Solomon Four Group: True experimental design that includes
four randomly generated groups of subjects. It has all the benefits
of a two-group pretest–treatment–posttest design; plus it offers
information about whether the results were affected by the
presence of a pretest.
The first two groups (1 and 2) are identical to those found in the two-group
pretest–treatment–posttest. The next two groups (3 and 4) are included to
better identify whether the presence of a pretest influenced the outcome
measures.
In using these four groups, several assessments or comparisons are made. The
four comparisons made in the two-group pretest–treatment–posttest design are
made for the same reasons as described earlier. The Solomon Four Group design
then adds further comparisons. First, by measuring for any differences between
the posttests in groups 1 and 2, the effectiveness of the treatment in the
presence of a pretest can be identified. Next, by measuring for any differences
between the posttests in groups 3 and 4, the effectiveness of the treatment
without a pretest can be identified. If these two treatment effects differ, the
researcher can conclude that the presence of the pretest had an effect. The
Solomon Four Group design is illustrated in Figure 8.8.
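As a sketch, the added comparisons in the Solomon Four Group design can be expressed directly. The posttest means below are invented for illustration:

```python
# Invented posttest means for the four groups, numbered as in the text:
# 1 = pretest + treatment,  2 = pretest + no treatment,
# 3 = treatment only,       4 = no treatment only.
posttest_means = {1: 78.0, 2: 65.0, 3: 76.0, 4: 64.0}

# Treatment effect when a pretest is present (groups 1 vs. 2).
effect_with_pretest = posttest_means[1] - posttest_means[2]     # 13.0

# Treatment effect when no pretest is given (groups 3 vs. 4).
effect_without_pretest = posttest_means[3] - posttest_means[4]  # 12.0

# If these two effects differ substantially, the presence of the
# pretest itself likely influenced subjects' response to the treatment.
pretest_influence = effect_with_pretest - effect_without_pretest
print(pretest_influence)  # 1.0
```

In this invented example, the two treatment effects are nearly identical, which would suggest the pretest did little to alter subjects’ response to the treatment.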
Given this approach is characterized by the strengths of a true experiment,
that it includes a pretest and a posttest, and that it can identify any influence
of the presence of a pretest on the treatment, it is considered one of the most
rigorous designs available for a true experiment. This approach guards
against threats to both internal and external validity. It guards against threats
to internal validity in that it offers all the necessary components to identify
causality. It also guards against threats to external validity in that it allows an
assessment of the treatment as it would occur in the real world where pretests
are not the norm. Although it is the gold standard, true experiments must still
be assessed for threats to both the internal and the external validity. In
addition, a researcher must focus on reliability. The next sections introduce
important issues related to validity and reliability.
Figure 8.8 Solomon Four Group Design
© Arctic Circle used with the permission of Alex Hallatt, King Features
Syndicate and the Cartoonist Group
Validity
Regardless of the design or approach taken when conducting research,
researchers must be sensitive to threats to their work. As noted in earlier
chapters, your research is only as strong as its weakest component. For this
reason, when conducting any experimental research, one matter of concern is
the validity of the research. In Chapter 4, we introduced the concept of
validity in terms of a variable accurately capturing the essence of the abstract
concept it is designed to represent. In this section, we introduce you to some
other important types of validity. There are two primary types of validity of
concern: internal validity and external validity. Internal validity refers to the
degree to which a researcher can conclude a treatment effect. In other words,
can the researcher be reasonably sure that any change in the experimental
group is a result of the treatment? In general, true experiments are
characterized by high internal validity given the researcher’s control. The
presence of randomness in constructing the control and experimental groups,
and the manipulation of the independent variable (i.e., treatment) by the
researcher, makes for a highly controlled experiment. This in turn leads to
high internal validity for true experiments.
Internal validity: Related to the degree to which a researcher can
conclude a causal relationship among variables in an experiment.
Internal validity is often at odds with external validity.
A researcher must also be attentive to external validity. External validity is
related to the generalizability of the experimental findings to other people in
other situations and other environments. A researcher must ask, “Do the
findings apply to the larger population of interest in the ‘real world,’ outside
of a highly controlled experimental world?” Internal and external validities
are frequently at odds with one another. Brunson notes it is one challenge
associated with experimental research. The researcher must keep an eye on
both. Another issue that Brunson emphasizes is that given the controlled
nature of experiments, it is questionable how the findings translate into the
real world. The real world is not characterized by the rigorous controls found
in experiments; it is far messier.
External validity: Related to the generalizability of experimental
findings to other people in other situations. External validity is
often at odds with internal validity.
Internal Validity Threats
Internal validity allows a researcher to claim a causal relationship. In their
iconic book, Campbell and Stanley (1963) identified several types of threats
to internal validity of experimental research. This section introduces some of
these threats, although it does not offer a comprehensive list of them. It is
important to recognize that one or many internal validity threats may be
present. Understanding what threatens internal validity of experiments is the
first important step in guarding against them.
Experimental Mortality
Experimental mortality is a potential threat to the internal validity of
experimental research and occurs when subjects from either the control or the
experimental group drop out of the research at differential rates for reasons
related to features of the study. If experimental attrition is related to key
elements of the study, it becomes unclear how much of the experimental
outcome is a result of the treatment or of the differential attrition. A
researcher must spend time contemplating whether there were differences in
those who dropped out of the experiment and those who remained in the
experiment that are relevant to the study. For example, assume a researcher is
conducting a study on a new treatment designed to reduce recidivism. If the
treatment group experiences attrition as a result of the challenges of engaging
in the treatment, then those remaining in the experimental group are those
who find the treatment easier to accomplish. Given that, any difference in
posttests may be partly a result of the treatment itself and partly a result of
the experimental group consisting only of those who found the treatment easy to
engage in. This experimental mortality threatens
the internal validity of the experiment, making it difficult to assign a causal
effect between the treatment and the outcome.
Experimental mortality: Threat to internal validity of
experimental research that occurs when subjects from either the
control or the experimental group drop out of the research at
differential rates for reasons related to features of the study.
Experimental mortality is a threat to the internal validity of an experiment.
When experimental mortality is present, it is difficult to know if the outcome
of the experiment is due to the treatment or due to the loss of subjects.
© Konstantin Zavrazhin/Getty Images
History
History is a threat to the internal validity of experimental research and refers
to the occurrence of a specific event during the course of an experiment that
affects the posttest comparisons between the experimental and control
groups. A researcher must ask whether some event occurring during the
course of an experiment may cause a change in subjects that would affect the
experimental outcome. If so, history is a problem. For example, imagine an
experiment with the purpose of testing if increased police presence leads to
higher perceptions of safety among residents. The researcher randomly
assigns police districts in a jurisdiction to experimental and control groups.
The experimental group districts receive a treatment of double the normal
police presence, and the control group districts continue receiving the same
level of police presence. Now imagine that in the middle of the experiment,
there is a protest in the jurisdiction. During the course of the protest, the
police clash with the public, and several members of the public are injured
and arrested. In addition, substantial property damage results. How might this
event affect the posttest measures of perceived safety among subjects? It
stands to reason that this police–public clash occurring in the midst of the
experiment will threaten the internal validity of the experiment, because it
will be unclear how much of any posttest differences found will be a result of
the increased police presence (i.e., the treatment) versus the effects of the
clash.
History: Threat to the internal validity of an experiment that
refers to the occurrence of an external event during the course of
an experiment that affects subjects’ response to the treatment and
as a result the findings of the research.
Melde is currently involved in research focused on schools in Michigan. This
includes areas in Flint, Michigan, which have been terribly affected by
drinking water with dangerously high levels of lead (recall our earlier
discussion about the very real dangers of lead exposure). During the course of
this experiment, some schools started getting safe drinking water through the
tap (versus only bottled water). How might Melde and his colleagues account
for this in their study? Might the introduction of safe water at some of the
schools affect test scores? Will it influence the attitudes of children in the
study? Will this historical event present a threat to the validity of his research?
You can bet that Melde and colleagues will be keeping data about the water
issue to control for its effects in this experiment as best as possible.
Instrumentation
Instrumentation is a threat to internal validity that refers to the possibility
that instruments used in an experiment may affect measurement and the
ultimate posttest comparison between control and experimental groups.
Instruments take on many forms. An instrument may be a mechanical device such
as a weight scale, a survey instrument (and the questions on it), or a human
observer who gathers data or scores subjects in an
experiment. The researcher must ask whether the precision of the
instrumentation used has changed in the course of the experiment. If so,
instrumentation may be at play. For example, imagine an experiment uses a
breathalyzer to measure blood-alcohol content. What if during the course of
the experiment, the breathalyzer’s calibration changes? This would affect
measurements and the conclusions of the experiment. You may erroneously
conclude a causal effect between an independent and a dependent variable
when, in reality, the only thing that may have changed was the instrument’s
calibration. A researcher must strive to conclude that any differences seen in
posttest comparisons are a result of the treatment, not of the instrumentation.
Instrumentation: Threat to the internal validity of an experiment
that refers to how instruments used in an experiment may be
unreliable and affect the findings of an experiment.
Maturation
Maturation is a potential threat to the internal validity of experimental
research that refers to the natural changes that happen to subjects
participating in experimental research. A researcher must ask whether during
the course of the experiment, something occurred to the subjects with the
passage of time that would affect the outcome. Subjects in an experiment
may get tired, hungry, or impatient. In addition, for longer term experiments,
a researcher must be aware of changes in subjects such as aging. For
example, imagine an experiment designed to understand whether a new
approach to teaching improves math ability. The experiment is designed to
last eight hours. Let us assume the researcher failed to consider the subjects
would get hungry during the course of the day. As a result, subjects rushed
through posttests because they were hungry (and crabby). This suggests that
posttest measurements reflect not only any change resulting from the
treatment but also changes resulting from hunger. Researchers must guard
against the effects of maturation.
Maturation: Threat to the internal validity of an experiment that
refers to the natural changes that happen to subjects participating
in experimental research, including hunger, boredom, or aging.
Selection Bias
Yet another potential threat to the internal validity of experimental research is
selection bias. Selection bias refers to important differences between the
control and the experimental group that exist even after subjects are randomly
assigned to the two groups. Even though the random
assignment of subjects to experimental and control groups minimizes the
threat of selection bias, the threat must be considered especially when
working with small samples. If either the control or the experimental group
disproportionately includes subjects characterized by a confounding factor,
then any treatment effect found may be in whole or part a result of this
confounding factor instead of the treatment. A researcher must spend time
contemplating whether there are differences between the control and the
experimental group such as a willingness to participate, attitudes, or
personality characteristics that may affect the outcome of the research.
Selection bias: Potential threat to the internal validity of
experimental research that refers to differences that may exist
between the control and experimental group unbeknownst to the
researcher.
Statistical Regression
Statistical regression is a threat to the internal validity of experimental
research using subjects selected for participation based on extreme scores.
Statistical regression refers to the tendency for extreme scores to move
toward the mean over repeated measures. This threat requires a researcher to
consider whether a change in a score during the course of an experiment is a
result of the treatment or whether it is a result of the natural tendency of
extreme scores to move toward the mean. Imagine an experiment in which
subjects with SAT scores in the bottom 2 percent are selected for a
study. The purpose of this experimental study is to ascertain whether a new
technique designed to improve study habits can improve students’ SAT
scores. Now imagine the experimental findings demonstrate that the new
studying technique improved students’ SAT scores dramatically. The posttest
comparisons show dramatic increases. Nevertheless, the researcher must
consider that an alternative explanation for the improved SAT score is
regression to the mean. It may be that the new studying approach did improve
performance on the SAT. A plausible alternative explanation—a threat to the
internal validity of the research—is that of statistical regression. (Note that
another plausible explanation may be a testing effect, which is described next.)
Researchers must recognize that these SAT scores may have improved in part
or wholly as a result of the tendency for extreme scores to move, or regress,
toward the mean.
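Regression to the mean can be demonstrated with a simple simulation. The sketch below uses invented numbers (an SAT-like scale with noisy measurements); no treatment is applied at all, yet the extreme-scoring group’s retest average still moves toward the overall mean:

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

# Each subject has a stable "true ability"; each test adds random
# measurement noise, so observed scores are imperfect.
true_ability = [random.gauss(500, 80) for _ in range(10_000)]
test1 = [a + random.gauss(0, 60) for a in true_ability]  # selection test
test2 = [a + random.gauss(0, 60) for a in true_ability]  # retest, NO treatment

# Select the subjects with the lowest 2 percent of scores on test 1.
cutoff = sorted(test1)[len(test1) // 50]
selected = [i for i, score in enumerate(test1) if score <= cutoff]

mean_first = sum(test1[i] for i in selected) / len(selected)
mean_retest = sum(test2[i] for i in selected) / len(selected)

# The retest mean of the extreme-scoring group is noticeably closer to
# the overall mean (about 500), even though nothing was done to them.
print(round(mean_first), round(mean_retest))
```

Any real gain a treatment produces would sit on top of this purely statistical drift, which is why a randomly assigned control group selected the same way is needed to separate the two.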
Statistical regression: Threat to internal validity of some
experimental research where groups have been selected for
participation in an experiment based on extreme scores that refers
to tendency for extreme scores to move toward the mean over
repeated measures.
When an experiment involves repeated testing, statistical regression is a
threat to the internal validity of the experiment.
© skynesher/iStockphoto.com
Testing
Testing is a threat to internal validity that refers to the effect of taking one
test on the results of taking the same test later. In terms of experimental
research, this means the researcher must be aware that the act of taking the
pretest may affect a subject’s performance on the posttest. Some students are
aware of the testing effect because they have experienced it. Consider the
SAT or GRE. Have you been told to take the SAT or GRE multiple times
because you did poorly on your first attempt? Why? Because it is widely
recognized you are likely to earn an improved score on the second (or third)
exam by simply retaking it. The change in your score is not solely a result of
additional studying; it is also partly an effect of testing. If there is a change
during the experiment, the researcher must ask whether that change is a result
of the treatment or of testing (or some combination of both).
External validity refers to the ability to generalize findings from experimental
research to the real world versus the controlled environment of a lab or
other artificial setting.
©Andrew Toos/Cartoonstock.com
Testing: Threat to the internal validity of an experiment that refers
to the effect that taking one test can have on the results of a later
test. In terms of experimental research, this means the
researcher must be aware that the act of taking the pretest may
affect one’s performance on the posttest.
External Validity Threats
External validity is the ability to generalize experimental research findings to
the population of interest in the “real world.” Campbell and Stanley (1963)
identified multiple external validity threats that researchers must consider.
The generalizability of findings from experimental research is threatened
when the treatment or the independent or experimental variable is contingent
or dependent on other factors. For this reason, the types of external validity
threats are generally described as interactions between the experimental or
the treatment variable and some other feature. Like the section on internal
validity, this section is not comprehensive in terms of the many potential
threats to external validity. Readers seeking additional information on threats
to external (and internal) validity are encouraged to access the resources
listed at the back of this chapter.
Interaction of Selection Biases and Experimental
Variables
The interaction of selection biases and experimental variables is a
threat to external validity that results from the interaction between some
feature of the sample and the treatment, producing nongeneralizable
findings. A researcher considering the possibility of this threat must consider
whether there is something about those in the experiment such as willingness
to volunteer for an experiment that makes them unlike those in the general
population. For example, consider research focused on a treatment designed
to reduce the negative psychological effects of witnessing violence. The
researcher gathers a sample of volunteers to engage in this research. All
volunteers are given a pretest to measure the negative psychological trauma
resulting from witnessed violence. Half of the volunteers are randomly placed
in an experimental group, and the other half are placed in a control group.
The treatment designed to reduce psychological trauma is administered to the
experimental group. A posttest is then administered to both groups to
ascertain whether there is a difference in trauma experienced between the two
groups. Imagine the findings show those in the treatment group now register
significantly lower levels of trauma. The researcher may conclude that the
treatment was effective in reducing trauma. This may be true for those
selected for the experiment, but do those findings apply to the larger
population of interest? Might it be that those who are likely to volunteer for
experimental research on trauma differ in important ways from those who do
not volunteer? If there are selection biases, then the external validity of the
experimental findings is jeopardized.
Interaction of selection biases and experimental variables:
Threats to the external validity of experimental research resulting
from the sample used in experimental research.
Testing threat: Threats to the external validity of experimental
research due to the use of pre- or posttests that may lead subjects
to respond differently or to be sensitized in unnatural ways to the
treatment during the course of the experiment.
Interaction of Experimental Arrangements and
Experimental Variables
The interaction of experimental arrangements and experimental
variables refers to how the artificially controlled situations in which
experiments are conducted may jeopardize the ability to generalize findings
to a nonexperimental setting. As noted, a feature of experimental research is
the control of the experimental situation that assists in a researcher’s ability to
isolate the effect of the treatment on the outcome. Although important for
isolating the effect of the treatment, these same controls on the situation
affect the external validity of the findings. A researcher must question
whether the artificial environment of an experiment is affecting the findings
such that the same outcome would not be found in the “real world.” For
example, in an experiment, the researcher takes great care to ensure the
control and experimental groups are exposed to identical situations. The
degree of lighting, temperatures, locations, noise, and other parts of the
situation are identical during the experiment. A limitation of this is that in the
real world, people are exposed to a variety of situational characteristics. The
researcher must ask whether variation in lighting, noise, temperature, or
myriad other situational characteristics would affect the experimental
outcome. If so, the external validity of the research is questionable.
Interaction of experimental arrangements and experimental
variables: Threats to the external validity of experimental
research that refers to how the artificially controlled situations in
which experiments are conducted may jeopardize one’s ability to
generalize findings to a nonexperimental setting.
Research in Action: Acupuncture and Drug Treatment: An
Experiment of Effectiveness
The first drug court in the nation was established in Miami,
Florida, where innovative approaches to treating those in criminal
court include a more informal, supportive, nonadversarial
approach. One approach used included acupuncture as a
part of the drug treatment regimen (White, Goldkamp, and
Robinson, 2006). Over time, this “Miami Model,” including
acupuncture, has been implemented elsewhere in the United
States. Although there has been research focused on the
effectiveness of drug courts more broadly, no research had been
conducted on the effectiveness of acupuncture in these drug
courts. The purpose of this research is to examine the
effectiveness of acupuncture in a drug court setting.
To conduct this research, White et al. focused on a drug court in
Clark County, Nevada, more commonly referred to as Las Vegas,
Nevada. The Clark County drug court was an early adopter of
the Miami Model, and it normally requires its participants to
receive auricular acupuncture at a specific clinic five days a week.
Drug court participants typically remain in this initial stage of
treatment for 30 days or until they produce five clean urine
specimens. After that, acupuncture is recommended but not
required. Struggling participants can be ordered to resume
acupuncture by the judge later in the program.
The research by White et al. included an experimental and a
quasi-experimental element. The experimental approach included
336 newly admitted drug court participants who were randomly
assigned to one of two groups: a nonacupuncture control group
and a treatment (or experimental) group that received
acupuncture. Those assigned to the control group received
relaxation therapy as the treatment alternative. The experiment
lasted six months. In addition, a quasi-experimental element
included the researchers using existing records to examine the
influence of acupuncture on outcomes. This study found that the
more acupuncture participants received, the more likely they were
to be rearrested or terminated from the program. This finding is
not surprising given that additional acupuncture was ordered for
participants who were struggling. It did, however, demonstrate the
need for an experimental design in which participants would be
randomly assigned to acupuncture and nonacupuncture groups.
The findings from the experiment showed few differences in the
outcomes between the acupuncture and the control groups. As is
the nature of research, however, especially true experiments, a
few issues were encountered. The first major issue concerned the
ethical problem of denying a presumably beneficial treatment to
half of the drug court participant population. Given this concern,
the drug court officials required that participants who requested
acupuncture receive the treatment, regardless of random
assignment. In addition, when couples entered drug court
together, court officials required they be in the same group. Third,
court officials also required those in the control group to receive
relaxation therapy (versus nothing). These changes affected the
true experimental nature of the design. First, random assignment
was not used. Second, the experiment compared the effect of two
interventions with no control group. As noted by the authors,
“unless an experimental design is implemented perfectly, it is
difficult to disentangle the effects of acupuncture from all of the
other influences on treatment outcomes” (p. 59). Furthermore,
they noted that “the problems with treatment integrity in this
study are by no means unique, and in fact they are fairly common
in criminal justice field experiments” (p. 59).
The policy implications of this work include that acupuncture
participants did as well as, and certainly no worse than,
those receiving relaxation therapy. This opens up other
possibilities for treatment among drug court participants. In
addition, this research points to the need to conduct additional
research in which a true experimental design is used.
White, M., Goldkamp, J., & Robinson, J. (2006). Acupuncture in
drug treatment: Exploring its role and impact on participant
behavior in the drug court setting. Journal of Experimental
Criminology, 2(10), 45–65.
Interaction of Testing and Experimental Variables
The interaction of testing and experimental variables, also known as the
reactive effects of testing or the interaction effect of testing, is another
way in which the external validity of experimental research can be
threatened. In experimental research, the use of pre- or posttests is common.
In the real world, however, people are not normally tested during the course
of the day. It is plausible, then, that the use of tests during an experiment may
lead to subjects responding differently to the treatment or being sensitized
to the treatment in unnatural ways that affect the findings. It may be that the
causal relationship found in an experiment exists only when pretests are used.
Alternatively, it may be that the causal effect found in an experiment is found
only when posttests are used. On the other hand, it may be true that the causal
effect is measured only when both are used. The result is the same—if the
presence of tests during the experiment affects the outcome of the
experiment, the claim of causality may not be generalizable to the larger
population of individuals of interest.
Interaction of testing and experimental variables: Also known
as the reactive effects of testing, it is a threat to the external
validity of an experiment that results from subjects responding
differentially to the treatment or being sensitized to the treatment
in ways that affect the findings.
Reactive effect of testing: Also known as the “interaction effect
of testing.” It is a threat to the external validity of experiments that
results from subjects responding differentially to the treatment, or
being sensitized to the treatment, in unnatural ways affecting the
findings.
Reactivity Threats
Another threat to the external validity of experimental research is known as
reactivity threats, which refer to instances in which the novelty of
participating in research, and being aware of being observed during the
experiment, influences the subject’s behaviors and views. In normal life,
individuals are not reacting to being observed or being part of an experiment. The
Hawthorne Effect, introduced in Chapter 6, is one kind of reactivity that may
affect the external validity of an experiment. Should subjects alter their
behaviors or views because of the novelty of participation in an experiment,
then the generalizability of the findings can be called into question.
Reactivity threats: Threats to external validity that refer to how
the novelty of participating in research, and being aware that one
is being observed during the experiment, influences one’s
behaviors and views.
Reliability
In Chapter 4, we introduced the idea of reliability as consistency of
measurement. Reliability in the context of experimental research is the same:
the notion that repeated experiments by others under the same conditions
should result in consistent findings. Zaykowski notes that it is important to
recognize that, in an experimental setting, reliability is not an indicator of a
perfectly designed experiment; rather, it demonstrates that an experiment
produces consistent findings when repeated under the same conditions. In
addition, according to Zaykowski, it means getting the same findings with
different populations. Offering evidence of reliability bolsters a researcher’s
claim of validity of an experiment. In other words, with evidence of
reliability of experimental findings, a researcher has evidence of valid
findings.
Natural experiments are a useful approach when controlled experimentation
is difficult or impossible to implement. For instance, through the use of a
natural experiment, a researcher could assess the causal influence of lead
paint exposure on offending risk as an adult. Clearly, a researcher should not
randomly assign children to experimental groups where they would be
exposed to lead paint. Yet, a researcher could find subjects who had lived in
lead-painted households as a child and compare their adult offending risk
with others who had not been exposed to lead paint.
Common Pitfalls in Experimental Research
Like all research, experimental research must be undertaken with great care
and attention to detail. As Dodge has stated, experimental research is
challenging to execute. This section addresses some pitfalls that are
associated with experimental research. When conducting experimental
research, a researcher must select the most rigorous design available. Ideally,
you can engage in a true experiment using both pre- and posttests or, even
better, a Solomon Four Group design. Using a less rigorous design when
better options are available is something to be avoided.
Some overarching themes of this chapter are that true experimental research
is considered by many the gold-standard design in part because it can provide
evidence of causality. Also, true experiments are not impervious to problems.
Those problems come in the form of poor designs, threats to internal and
external validity, failure to execute a well-designed experiment, and a lack of
attention to detail. An experimental research expert, Chris Keating, PhD,
offered ways in which experimental research is used outside of academia.
Some of his work focuses on political campaigns and policy issues using
experimental research. Keating’s work demonstrates that research
methodology skills are useful beyond academia. In addition, Keating’s work
shows that research methodology can be used in a wide variety of substantive
areas beyond criminal justice and criminology. Keating’s experience also
shows that someone like you can use these skills to help others while
enjoying an enviable lifestyle.
Two of our case studies used experimental design in their featured research:
Santos (Santos & Santos, 2016) and Melde (Melde et al., 2009). Table 8.2
summarizes some experimental elements of their research. As this table
shows, experimental research comes in many variations, making it an
excellent choice for many research projects. The next chapter addresses the
topic of secondary data analysis, describing what secondary data analysis is
and the types of data available for this work.
Applied Assignments
The materials presented in this chapter can be used in applied
ways. This box presents several assignments that demonstrate the
value of this material by asking you to apply it.
1. Homework Applied Assignment: Movie
Experiments
Select one of the following movies that feature an experiment
(some of these are violent so choose carefully):
The Belko Experiment (2016)
The Fly (1986)
The Incredible Hulk (2008)
To Search for a Higher Being, the Island of Dr. Moreau
(1996)
My Little Eye (2002)
Re-Animator (1985)
After watching the movie you selected, write a paper with the
following information. First, offer a summary (in your own
words) of the movie. In that summary, describe the experiment
featured in that movie. Include the type of experiment the movie
portrays (e.g., true experiment, quasi-experiment). Next, write
a section that analyzes the experiment shown in the movie based
on the information in this chapter. For example, describe the
control groups used, when measurements are taken, etc. Next,
outline what would need to happen for the movie to portray a true
experiment. Be sure to include information regarding ethics (e.g.,
voluntary participation, etc.). Be prepared to discuss your findings
in class.
2. Group Work in Class Applied
Assignment: Designing a True Experiment
As a group, pretend that you have been hired by your university to
conduct a true experiment designed to answer the following
research question: Does mandatory training of all students on an
annual basis about the university’s antiviolence policy reduce
violence against students at the university? In your plan, you need
to identify your population, how you will select subjects for your
true experiment, and how many subjects you need. Include
additional information on sampling given what you have learned
in this class. Next, identify the type of experimental design you
will use. Include what exactly you will measure and how often
you will measure it. What are advantages of your research design?
What problems might your research design encounter? How will
you overcome those issues? Be prepared to discuss and share your
summaries in class.
3. Internet Applied Assignment: Describing
a Real Experiment
Access the webpage for the Journal of Experimental Criminology
(https://link.springer.com/journal/11292). Find an article that you
are interested in. Next, write a paper about this article. Begin the
paper with a statement of the purpose of the research and why it is
important research. Provide some information about the literature,
and how this research will add to the literature. Next, provide
details on this experiment. What type of experiment is it? What
are the advantages and disadvantages of that type of experiment?
Describe the control and experimental groups, the treatment and
how it was administered, and the findings. Did they answer their
research question? What issues of validity and reliability are
present? What would you do differently if you could work with
the authors on a replication of this work? Turn in a copy of the
journal article with your thought paper.
Key Words and Concepts
Association
Before-and-after design
Block matching
Block variables
Causal relationships
Causation
Confounders
Confounding factor
Control group
Experimental group
Experimental mortality
External validity
Gold standard
History
Instrumentation
Interaction of experimental arrangements and experimental variables
Interaction of selection biases and experimental variables
Interaction of testing and experimental variables
Internal validity
Intervening variable
Manipulation of an independent variable
Matching
Maturation
Natural experiments
Nonequivalent control groups design
One-group pretest–posttest design
One-shot case design
Paired-matching
Predictor variables
Pre-experiments
Quasi-experimental research
Random assignment
Reactive effect of testing
Reactivity threats
Reliability
Right to service
Selection bias
Solomon Four Group
Spurious relationship
Static-group comparison design
Statistical regression
Temporal ordering
Testing
Testing threat
Treatment
True experiment
Two-group posttest-only design
Two-group pretest–treatment–posttest design
Key Points
Experimental research is one way a researcher can identify causation, or
how an independent variable affects, influences, predicts, or causes the
variation in a dependent variable.
Causation exists when variation in one variable causes, or results in,
variation in another variable and is demonstrated with temporal
ordering, association, and a lack of spurious relationships.
A true experiment is an approach to research that has three defining
characteristics: at least one experimental and one control group, random
assignment to the experimental and control group, and manipulation of
an independent or a predictor variable (i.e., treatment) by a researcher.
Internal validity refers to the degree to which one can conclude causal
relationships based on experimental research. Threats to internal validity
include things like experimental mortality, history, instrumentation,
selection bias, testing, and statistical regression.
External validity is related to the generalizability of the experimental
findings to other people in other situations and environments. This
includes interactions of elements of the research with the independent
variable. Internal and external validity are frequently at odds with one
another.
Pre-experiments are not true experiments because they are not
characterized by the three defining characteristics of true experiments. In
general, they do not include control and experimental groups, they lack
randomization, and they lack researcher control over the administration
of the treatment. In general, pre-experiments should be avoided.
Quasi-experimental research designs, also called nonrandomized
designs, are used when random assignment to a control or experimental
group is not possible, impractical, or unethical. The lack of
randomization and the loss of associated properties is why Don
Campbell referred to quasi-experiments as queasy experiments.
Natural experiments occur in the natural world and include a control and
an experimental group. Nevertheless, natural experiments sort subjects
into the control and experimental groups naturally with no control by the
researcher.
Review Questions
1. What characteristics do true experiments have that make them
powerful?
2. What are the criteria required to establish causality?
3. How are causation and association related? How are they
different? Why does it matter?
4. What are pre-experiments, and what are their limitations? When
would one use a pre-experiment?
5. What characterizes quasi-experiments, and how do they differ
from true experiments?
6. What is internal validity? What are some threats to internal
validity, and why?
7. What is external validity? What are some threats to external
validity, and why?
8. What is a natural experiment, and how does it differ from other
experimental designs?
9. Under what circumstances would one use a quasi-experimental
design versus a natural experiment?
10. What types of pitfalls and ethical considerations are associated
with experimental research? How might a researcher avoid them?
So much secondary data are readily available right now that, if you
knew where to look, you could go online, download data collected by other
researchers, and run statistical analyses on these data in less time than it
would take to finish reading this chapter. Accessing existing data for
secondary data analysis is just that easy. Given the volume of data being
collected in contemporary society, how accessible these data are, and the
speed (nearly instantaneous) at which they can be disseminated, you
should be able to see why it makes sense for researchers to consider research
using secondary data as an option for answering research questions. In fact,
with these benefits, featured researcher Rachel Boba Santos encourages
students and junior faculty she knows to consider strongly focusing solely on
research using secondary data early in their careers. In addition, being adept
at using secondary data makes one marketable outside of academia. This is
exactly how Santos first used secondary data. She worked as a crime analyst
using existing data in a law enforcement agency before earning her PhD and
becoming a professor.
Aside from saving resources, researchers opt to engage in secondary data
analysis because available data often include very large samples and are
collected using sound (and often complex) methodological techniques.
Again, data collected for the NCVS provide an excellent example. Recall that
in Chapter 1 we explained that the NCVS is an ongoing survey of a
nationally representative sample of about 169,000 people, age 12 or older,
living in U.S. households. The NCVS is a federally funded data collection
program, which is designed to gather data on hundreds of variables related to
victims, offenders, and incidents of nonfatal crimes against persons and
property. No other source of victimization data is available with this level of
detail, so researchers interested in conducting victimization research are able
to study myriad victimization topics quickly, easily, and inexpensively by
using existing NCVS data. Great attention has been given to the design and
administration of the NCVS since its inception in the early 1970s. Time and
time again, it has been shown that the NCVS produces accurate and reliable
data about nonfatal violent victimization in America. Given the high quality
of NCVS data, it is unlikely that a researcher could conduct his or her own
study that produced “better” victimization data than what is currently
available. Therefore, researchers like Zaykowski (and the authors of this text)
interested in doing victimization research often conduct secondary data
analysis of NCVS data. This is done because the data set is large, of high
quality, nationally representative, and easily accessible online.
Finally, the breadth of data that can be found in existing secondary data
sources may be unmatched by what researchers can collect on their own. This
is often the case because the federal government has funded many of the
studies that have produced data that can be used in secondary data analysis;
and when it comes to funding research, few can match the funding power of
the federal government. This means that more data can be collected during
these primary data collection efforts than what could have been collected if a
researcher conducted a study on his or her own, without funding from the
government. In other words, the adage “You get what you pay for” holds true
when it comes to many of the available data sets that researchers use for
secondary data analysis—those data may be free to you, but a lot of money
went into their creation. Again, the NCVS provides an excellent example.
What Are Secondary Data?
Answering research questions does not always require collecting original data
directly from or about research subjects. Instead, answering research
questions can be accomplished using existing data collected in a previous
study. These existing data are known as secondary data. Use of secondary
data is a popular approach because it can produce new empirical knowledge
without the time and money associated with collecting original data. Two of
our case studies provide examples of the value of using secondary data to
answer new research questions.
Santos and her colleague’s research on hot spots and high-intensity policing
of known felons was conducted using secondary data collected by the Port St.
Lucie, Florida, Police Department (Santos & Santos, 2016). The pretests and
posttests of the experimental and evaluation research were based on police
data. Four impact measures were used:
1. The count of residential burglary and theft from vehicle reported crime
in each hot spot
2. The count of all arrests for each targeted offender
3. The count of burglary, theft, and drug offense arrests in each hot spot of
individuals who live in the hot spot
4. The ratio of burglary, theft, and drug offense arrests per individual
arrested who live in the hot spot
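Measures like these are straightforward to compute from arrest records. The sketch below uses hypothetical data and column names (not the actual Port St. Lucie data) to illustrate measures 3 and 4, assuming pandas is available:

```python
import pandas as pd

# Hypothetical arrest records; the column names and values are illustrative,
# not taken from the Santos & Santos (2016) data.
arrests = pd.DataFrame({
    "hot_spot":  [1, 1, 1, 2, 2],
    "person_id": ["a", "a", "b", "c", "c"],
    "offense":   ["burglary", "theft", "drug", "burglary", "drug"],
})

# Measure 3: count of burglary, theft, and drug offense arrests in each hot spot
counts = arrests.groupby("hot_spot").size()

# Measure 4: ratio of those arrests per individual arrested in each hot spot
ratios = counts / arrests.groupby("hot_spot")["person_id"].nunique()
```

Here hot spot 1 has three arrests spread across two individuals (a ratio of 1.5), while hot spot 2 has two arrests by a single individual (a ratio of 2.0).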
Zaykowski (2014) followed a series of steps to begin her study that were
consistent with the typical stages of research presented in Chapter 1. For
example, she began by developing her research questions and reviewing the
existing literature. Instead of designing a study that required her to collect
original data, however, she knew her study could be successfully completed
using existing NCVS data. In other words, her research design did not require
her to collect new data from crime victims across the nation. She knew that
data existed already in the form of the NCVS. Zaykowski also knew those
NCVS data were available at no cost to her through the Inter-university
Consortium for Political and Social Research (ICPSR), located at the
University of Michigan. To gain access to these data, all she needed to do
was create a free account and agree to ICPSR’s terms of use. You can go
there now to see the vast amount of secondary data they house
(https://www.icpsr.umich.edu/icpsrweb/). She was also aware that her
university’s institutional review board (IRB) would support her using NCVS
data for her research because the data are de-identified, and she knew she
would be able to apply for an expedited IRB review.1
1. At the time of her study, Zaykowski’s university’s policy required all
research of any kind to be reviewed, including secondary data. If the same
study was conducted today, according to Zaykowski, it would not be
reviewed at all because it would not fall within the regulatory definition of
research involving human subjects (see the section titled “Ethics Associated
with Secondary Data Analysis” for more details).
Zaykowski was convinced that using secondary data was the best way to go
about answering her research questions. Knowing certain characteristics of
the NCVS program and about the data collected during NCVS interviews
supported her decision. She knew, for example, that the NCVS program
produces high-quality data because, in part, trained interviewers are used
to conduct each survey;
draws a nationally representative sample of people 12 years of age or
older, living in U.S. households, which meant her findings could be
generalized to the broader population;
produces very high response rates, which meant problems associated
with survey nonresponse would not likely be an issue;
could provide her with more robust data about crime victims than other
existing data could, like official police data, because incidents that are
both reported and not reported to police are identified;
includes multiple variables associated with victims, offenders, and
nonfatal criminal incidents, thereby making it possible for her to answer
all of her research questions; and
has been used in previous studies to generate new empirical
knowledge; in other words, she knew that using NCVS data for
secondary data analysis was an established and acceptable approach to
conducting research.
After Zaykowski received confirmation from her university’s IRB committee
that her research was exempt from full review, she proceeded to visit the
ICPSR website, registered, agreed to the terms of use, found the NCVS data
she needed, downloaded it, and began answering her research question using
these secondary data. Her (2014) analysis of the existing NCVS data
involved examining nearly 5,000 records to predict the likelihood a person
would seek help from victim services after personally experiencing criminal
victimization.
Today, ICPSR has more than 750 member organizations, more than 100 staff
members, and operates on revenues of greater than $17 million. During the
past several decades, the number of special topic archives housed at ICPSR
has also increased. These archives now include the Substance Abuse and
Mental Health Data Archive (SAMHDA), Data Sharing for Demographic
Research (DSDR), Project on Human Development in Chicago
Neighborhoods (PHDCN), the National Addiction & HIV/AIDS Digital
Archive Program (NAHDAP), NCAA Student-Athlete Experiences Data
Archive, and the National Archive of Data on Arts & Culture (NADAC).
Not only does ICPSR offer secondary data, and information about those data;
it also helps to develop quantitative and qualitative analytic skills among
students and social scientists through various workshops and seminars. The
ICPSR Summer Program in Quantitative Methods of Social Research is one
of the more popular programs offering many workshops. For example, in
2015, 81 courses, taught by 101 instructors from across North America and
Europe, were offered through ICPSR’s Summer Program. Participants
included students, faculty, and researchers from more than 40 countries, 300
institutions, and 30 disciplines. Although ICPSR does not offer as many
qualitative workshops and seminars, it does offer some. For example, you can
take a class in Process Tracing in Qualitative and Mixed Methods Research
in the same summer program. To learn more about ICPSR and to become
registered so that you can access data housed in its archives, visit
https://www.icpsr.umich.edu/icpsrweb/ICPSR/.
The main role of ICPSR, however, is to archive political and social research
data. Accessing these secondary data is simple and straightforward. First,
your university, government organization, or other institution must be an
ICPSR member institution by paying a subscription fee. The fee is based on
the size of an organization, and it can cost more than $17,000 each year for
large universities and colleges. Once your university or organization is a
member, however, you will have free access to ICPSR data archives. You can
find out whether your university or college is a member institution by
checking to see whether it is listed at
www.icpsr.umich.edu/icpsrweb/membership/administration/institutions.
If you are affiliated with a member institution, three basic steps can be
followed to conduct secondary data analysis using ICPSR data: (1) accessing
the ICPSR archive, (2) finding the data you need to answer your research
question, and (3) downloading these data onto your local hard drive.
Accessing ICPSR data archives requires a new user to create a MyData
account. The MyData account will permit you to download data available
only to ICPSR members. Once an account is created, you can log in to the
archive. All users of the archive must also agree to terms of use outlined by
ICPSR. On subsequent visits, users may log in to ICPSR using their
Facebook® or Google® account that is affiliated with their university or
organization via third-party authentication.
Step 2 of the process for conducting secondary data analysis at ICPSR involves
finding the specific data you need to answer your research question. ICPSR
offers several different ways for finding data once a registered user has
logged on. You can browse for data by topic, data collection themes,
geography, or series name (e.g., NCVS). Data can also be queried based on
when they were released (i.e., searching for data released in the last week, month,
or year). Alternatively, data can be searched for by how often they are
downloaded or by which countries are using ICPSR data. In other words,
researchers can limit their search for data to the most frequently downloaded
data found in ICPSR archives. When searching for data on the archive, do not
forget some of the strategies presented in Chapter 3 that relate to using
Boolean operators and filters. These strategies can help you find just what
you are looking for more quickly and easily even at ICPSR.
The third and final step in obtaining data housed at ICPSR for use in research
using secondary data involves downloading data files from the archive to
your local hard drive. Figure 9.2 is a screenshot from the ICPSR archive. It
shows some of the options available to users who want to download the FBI’s
UCR data from 1980. Notice that different subsets of the data can be
downloaded in different formats for use on a variety of statistical software
programs (including SPSS™, SAS®, STATA™, R, Excel™, and ASCII).
Import files for SPSS, SAS, and STATA are also available. These software
programs are discussed in greater detail in Chapter 12. Finally, codebooks
that provide documentation on how the data files are structured, the variables
contained in the data set, and the coding of specific variables are available for
download for each of the data sets. These data and their supporting
documents and files can be downloaded individually or as a compressed file
onto a user’s local hard drive. Once they are, the user can then begin his or
her secondary data analysis—that is, the analysis of these secondary data.
Codebook: Documentation for a data set that explains how the
data files are structured, the variables contained in the data set,
and the coding of specific variables.
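Once downloaded, these files can be opened directly in statistical software. As a minimal sketch, the Python snippet below writes a tiny Stata-format file to an in-memory buffer and reads it back with pandas; with a real ICPSR download, you would instead point `pd.read_stata()` (or `read_spss()`, `read_sas()`) at the file on your hard drive. The file name and variables shown are hypothetical.

```python
import io

import pandas as pd

# Stand-in for a downloaded ICPSR extract; a real analysis would read the
# file from disk, e.g., pd.read_stata("icpsr_extract.dta").
buffer = io.BytesIO()
pd.DataFrame({"year": [1980, 1980], "offense_count": [12, 7]}).to_stata(
    buffer, write_index=False
)
buffer.seek(0)

# From here the secondary analysis proceeds as with any data set
data = pd.read_stata(buffer)
print(data["offense_count"].sum())  # 19
```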
Figure 9.2 Different Data File Types That Can Be Downloaded From ICPSR
That Contain Information From the FBI’s UCR Program
US Department of Commerce
Census Bureau
You have probably heard of the U.S. Bureau of the Census, which is more
commonly referred to as the Census Bureau. You have likely heard of the
Census Bureau because it is the federal agency responsible for producing data
about the American people and the nation’s economy. In particular, the U.S. Census
Bureau is the federal agency in the United States responsible for
administering the decennial census, which is used to allocate seats in the U.S.
House of Representatives based on states’ populations. In addition to the
decennial census, the Census Bureau conducts several surveys, including the
American Community Survey and the Current Population Survey.
Researchers often look to these comprehensive data sets about individuals
and the communities in which they live when conducting secondary data
analysis. For example, Cuevas says he often uses census data in writing grant
applications to help make arguments about where the people he wants to
study are located and that there is a sufficient number of potential research
participants in those locations to study.
The American Community Survey (ACS).
The American Community Survey (ACS) is the largest survey the Census
Bureau manages (only the decennial U.S. Census, which is not a survey, is
larger; the NCVS is the Census Bureau’s second-largest survey).
The survey is administered to more than 3 million people living in America
every year, collecting data on income and education levels as well as on
employment status and housing characteristics. After the data are collected,
the Census Bureau aggregates responses with data collected during the
previous one, three, or five years to produce several sets of estimates. First, it
produces one-year estimates for areas of the country with a population of at
least 65,000. Second, it produces three-year estimates for areas with at least
20,000 people. Finally, it produces five-year estimates for all census
administrative areas, down to the level of block groups.
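The population thresholds behind these tiers can be summarized in a few lines of Python (a sketch of the rule as described here; the function name is ours, not the Census Bureau's):

```python
def available_acs_estimates(population):
    """Return which ACS estimate sets cover an area of a given population."""
    # Five-year estimates cover every area, down to the block-group level
    estimates = ["5-year"]
    if population >= 20_000:   # threshold for three-year estimates
        estimates.append("3-year")
    if population >= 65_000:   # threshold for one-year estimates
        estimates.append("1-year")
    return estimates

print(available_acs_estimates(70_000))  # ['5-year', '3-year', '1-year']
```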
Block groups are a geographic area used by the Census Bureau for
aggregating population data; a population of 600 to 3,000 people typically
defines a single block group. ACS data are available online at American
Fact Finder (https://factfinder.census.gov/) and include data on both people
(e.g., age, sex, education, and marital status) and housing units (e.g., financial
and occupancy characteristics). This website can be queried to find
population figures about specific geographic areas, based on data collected by
the Census Bureau, including data collected from the ACS. In addition, data
can be downloaded from the site onto a researcher’s local drive so he or she
can use it for secondary data analysis.
Block groups: Geographic area used by the Census Bureau for
aggregating population data that is typically defined by a
population of 600 to 3,000 people.
The Current Population Survey (CPS).
The Current Population Survey (CPS) is another survey administered by the
Census Bureau. The CPS is smaller than the ACS. It is a monthly survey of
approximately 60,000 U.S. households sponsored by the Bureau of Labor
Statistics. Its primary aim is to gauge the monthly unemployment rate in the
United States, providing one of many indicators of the economic health of the
nation. Information related to individuals’ work, earnings, and education are
available in the CPS data. Survey supplements are also commonly used as
part of the CPS to measure other topics of interest to the nation, such as data
on child support, volunteerism, health insurance coverage, and school
enrollment. Researchers conducting secondary data analysis can access data
produced from the CPS at the Integrated Public Use Microdata Series
(IPUMS), the world’s largest individual-level population database. The
IPUMS website is located at https://www.ipums.org.
Geospatial Data
The data sources for secondary data analysis discussed thus far in this
chapter represent the tip of the iceberg with respect to what is available.
There are thousands of data sets that researchers can access and
download quickly and easily through the Internet. The challenge for those
wanting to use existing data to conduct secondary data analysis is finding
data files that contain the variables you need to answer your specific research
question(s). Increasingly, research in criminal justice and criminology is
focusing on the spatial and temporal characteristics of crime, and many of
those who are interested in this topic turn to existing geospatial data to help
support their research projects.
Geospatial data: Data containing spatially referenced information
(e.g., the latitude and longitude coordinates) included as part of all
the data contained in a data set.
Geospatial data have spatially referenced information (e.g., the latitude and
longitude coordinates of a particular point or an administrative area such as a
census block, block group, or tract) included as part of all the data contained
in a data set. Geospatial data are particularly useful in that they allow
researchers to consider the importance of time and place as they relate to their
research question. A more detailed look at the use of geospatial data is
considered in the next chapter.
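As a minimal illustration of working with spatially referenced data, the sketch below computes the great-circle (haversine) distance between two latitude/longitude points — say, two incident locations. The coordinates are hypothetical examples (approximate city centers of Denver and Atlanta), not drawn from any data set discussed here:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points."""
    r = 3958.8  # Earth's mean radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical point locations (roughly Denver and Atlanta)
d = haversine_miles(39.7392, -104.9903, 33.7490, -84.3880)
print(round(d))  # roughly 1,200 miles
```

Distance calculations like this are one of the simplest things geospatial coordinates make possible; GIS software (Chapter 10) extends the idea to mapping and spatial analysis.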
In addition to the decennial census, the ACS, and the CPS, the Census
Bureau produces geographic data files that can be used in secondary data
analysis. They have provided these data for more than 25 years through their
Topologically Integrated Geographic Encoding and Referencing (TIGER)
program. TIGER products are extracted from the Census Bureau’s TIGER
database, which contains geographic features like roads, railroads, rivers, and
legal and statistical geographic areas (e.g., census block groups). The Census
Bureau offers several file types and an online mapping application to support
analysis of TIGER data. For example, Figure 9.4 shows income data mapped
by county for 2015. Some of the different types of data that can be
downloaded from https://www.census.gov/geo/maps-data/data/tiger.html
include the following:
TIGER/Line Shapefiles
Can be used in most mapping projects and are the Census Bureau’s
most comprehensive data set. They are designed for use with
geographic information systems and can be useful in various crime-
mapping projects. Chapter 10 covers Geographic Information
Systems (GIS) in greater detail, but it can be defined as a branch of
information technology that involves collecting, storing,
manipulating, analyzing, managing, and presenting geospatially
referenced data.
TIGER Geodatabases
Useful for users needing national data sets on all major boundaries
by state.
TIGER/Line with Selected Demographic and Economic Data
Compiles data from selected attributes from the 2010 Census,
2006–2010 through 2010–2014 ACS five-year estimates and
County Business Patterns (CBP) for selected geographies. CBP
data contain annual economic information by industry and can be
used to support studies that consider economic activity of small
areas as part of their research question.
Cartographic Boundary Shapefiles
Provide small-scale (limited-detail) boundary files clipped to the
shoreline. Designed for thematic mapping using GIS.
TIGERweb
Can be used for viewing spatial data online or streaming to your
mapping application.
Geographic information systems (GIS): Branch of information
technology that involves collecting, storing, manipulating,
analyzing, managing, and presenting geospatially referenced data.
County Business Patterns (CBP): Data that contain annual
economic information by industry and that can be used to support
studies that consider economic activity of small areas as part of
their research question.
Although data available from the TIGER program are generally not primary
data that have been used to answer a particular research question, like NCVS
or UCR data, they do represent existing data that can be used by researchers
conducting studies that use secondary data analysis. This is especially true
for studies in criminal justice and criminology focused on identifying or
explaining spatio-temporal patterns in data.
Figure 9.4 Median Household Income of Total U.S. Population, by County
Source: U.S. Census Bureau, Small Area Income and Poverty Estimates
(SAIPE) Program, Dec. 2016.
Centers for Disease Control and Prevention
A useful, but often overlooked, source of secondary data is the Centers for
Disease Control and Prevention (CDC). Violence is a public health issue, and
for this reason, the CDC gathers data valuable to criminal justice researchers.
The CDC is a part of the Department of Health and Human Services and is
located just outside of Atlanta, Georgia. Of particular interest is the National
Center for Injury Prevention and Control found in its Office of
Noncommunicable Disease, Injury and Environmental Health. The CDC has
come a long way since its participation in the Tuskegee Syphilis Experiment
that ended in 1972 (see Chapter 1 for details). Today, the CDC gathers data
on violence, although the 1996 Dickey Amendment stated that “none of the
funds made available for injury prevention and control at the Centers for
Disease Control and Prevention may be used to advocate or promote gun
control.” Although gun control is therefore not a topic examined at the CDC,
the agency does provide useful information and data on other types of violence,
including elder abuse, sexual violence prevention, global violence, suicide
prevention, and child abuse and neglect (to name some). The CDC’s violence
prevention webpage can be accessed here:
https://www.cdc.gov/violenceprevention/index.html
A particularly useful CDC data source is the National Vital Statistics System
(NVSS) program, which provides data on vital events including deaths (but
also births, marriages, divorces, and fetal deaths). Mortality data, including
data on homicide, from the NVSS are a widely used source of cause-of-death
information. They offer the advantages of being available for small geographic
areas over decades and of allowing comparisons with other countries. For
example, Table 9.1 provides the numbers taken from death certificates of
assaults leading to death for all people, for males, and for females. These
secondary data show that in 2014, 15,872 people in the United States died
from an assault: 12,546 were male, and 3,326 were female. Homicide by
assault was the 17th leading cause of death in the country in 2014. Although
these represent just a few of the data available, all data can be accessed and
downloaded at https://www.cdc.gov/nchs/data_access/vitalstatsonline.htm.
Local Statistical Agencies
Finally, secondary data analysis can be based on data obtained from local
governments or local government agencies. The amount and type of data
available locally varies by place. Links to local governments, by every state
in the country, can be found on the USA.gov website
(https://www.usa.gov/local-governments). For example, following the link on
this site to Colorado’s local government can lead users to demographic
profiles of the state’s general population and housing characteristics, by the
state’s cities and towns (https://demography.dola.colorado.gov/census-
acs/2010-census-data/#census-data-for-colorado-2010). These data are based
on 2010 U.S. Census data and can be downloaded from the American Fact
Finder website, but if a researcher is interested in these data specifically, then
it may be faster to go directly to Colorado’s Division of Local Government
for data (https://www.colorado.gov/pacific/dola/division-local-government).
A particularly useful source of local data can be found at police agencies’
websites. For example, Kansas City, Missouri, homicide and monthly crime
data and statistics can be found on the police department’s website located at
http://kcmo.gov/police/homicide-3/crime-stats/#.WUR212jys2w. Similarly,
UCR data for San Bernardino, California, are available on their police
department’s website at https://www.ci.san-
bernardino.ca.us/cityhall/police_department/crime_statistics/default.asp. It’s
important to remember that local agencies sometimes offer more data online
than they submit to the FBI, so it is worthwhile looking at your local police
department’s webpage. Although the amount and quality of data available
differ by agency, this is a great place for students and researchers to find data
on crime-related topics in their community.
This book cannot possibly cover all local sources of crime and criminal
justice data available online. We encourage you to use an online search
engine to get a better sense of what’s out there related to your particular
areas of interest. For example, if you are interested
in researching courts, then search on “courts” and “data” and “<your state’s
name here>.” If you are interested in conducting research on corrections, do
the same. With minimal effort, you can find an amazing amount of data
available for you to use.
Disadvantages of Using Secondary Data
Secondary data offer an amazing opportunity to conduct research quickly,
efficiently, and inexpensively. Although they offer many benefits, like all
things in research, analysis involving secondary data has some disadvantages.
When you use secondary data, you must be aware of these issues to avoid
making a mistake that could jeopardize the quality of your research. One
major disadvantage to using secondary data is that the available existing data
may not be able to address your specific research question. For example, if
your research question focuses on how a person’s immigration status is
related to their victimization risk, you may be thinking the NCVS would be
an ideal data set to use. Although the NCVS is huge and has thousands of
variables, it does not gather data on people’s immigration status. Imagine
your research question focuses on whether students enrolled in a college or
university part time have victimization rates that differ compared with their
full-time counterparts. Where might you go for these data? Again, the NCVS
seems like a good source, but it does not differentiate between full- and part-
time students. These data are not collected during NCVS interviews. If these
were your research questions, you would have to keep searching for other
possible existing data, or devise a plan to collect your own data.
Imagine your research question focuses on whether students enrolled in a
college or university part-time have victimization rates that differ compared
to their full-time counterparts. Where might you go for these data?
© tzahiV/iStockphoto.com
A second issue to be aware of when conducting secondary data analysis is
that existing data may not code variables or have the response categories you
need for your research project. The example about full- and part-time
students in the NCVS is a good example of this problem. Although the
NCVS gathers data on whether a person is a college student, it does not code
the responses in more than a nominal (i.e., “Yes” or “No”) way. Another
example of this possible shortcoming in existing data concerns age. Perhaps
your research question focuses on how each year of age is related to
recidivism risk. Perhaps you have found a source of data on recidivism that
codes respondents’ age in an ordinal fashion (e.g., in categories of 0–10
years, 11–20 years, 21–30 years, etc.). Because these data are coded this way,
your research question cannot be answered with them. When a study uses
existing data to conduct secondary data analysis, the way in which the
variables were originally coded may limit (or even prohibit) their utility. In
conducting secondary data analysis, you must ensure that the variables you
need to answer your research question are coded in a way that is appropriate
for your project.
In conducting secondary data analysis, you must ensure that the variables you
need to answer your research question are coded in a way that is appropriate
for your project.
© Mark Anderson/Andertoons
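The coding limitation described above — age recorded only as an ordinal category — can be made concrete with a short sketch. The records below are hypothetical, not drawn from any real recidivism data:

```python
# Hypothetical records: one field keeps exact age, the other keeps
# only the ordinal category a secondary data set might provide
records = [
    {"age_years": 19, "age_group": "11-20"},
    {"age_years": 23, "age_group": "21-30"},
    {"age_years": 27, "age_group": "21-30"},
]

# With exact ages, year-by-year analysis is possible
mean_age = sum(r["age_years"] for r in records) / len(records)
print(mean_age)  # 23.0

# With only ordinal categories, distinct ages collapse into one code;
# ages 23 and 27 become indistinguishable, so the exact mean (or any
# year-by-year relationship) can no longer be recovered
groups = {r["age_group"] for r in records}
print(sorted(groups))  # ['11-20', '21-30']
```

Once data are released only in the binned form, no amount of recoding can restore the detail that was discarded at collection time.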
A third limitation to be aware of when considering using secondary data is
that a researcher must understand (as best as possible) how the original data
were gathered. Just downloading a codebook and using a variable are not
enough. Because a researcher conducting secondary data analysis did not
participate in the planning and execution of the primary data collection
process, they may not know exactly how or why the data were collected the
way they were, which is another disadvantage. If you are
using secondary data, you must look for literature that describes how the data
were collected and some of their history. That literature may indicate that you
can use the data, though often with caveats. For example, when we discussed
some of the FBI data collections, we noted that the purpose of these data
collection efforts was to inform law enforcement. It was not intended that
they be used in ways that many researchers use them. That is okay in some
circumstances but not in others.
For instance, some researchers use the SRS to estimate criminal victimization
in the United States. This is okay as long as it is acknowledged that those data
only represent what is reported to the police. We know that a lot of crime is
never reported to the police. This issue illustrates that a researcher conducting
secondary data analysis needs to try to understand all he or she can about the
quality of the existing data used, including how seriously the data might be
affected by problems such as low response rates or respondents’
misunderstanding of specific survey questions. In other words, if the
researcher conducting secondary data analysis was not present during the
data collection process, he or she has to determine independently the overall
quality of the data to be used and must decide, based on this information,
whether it is suitable for his or her investigation. Much of this
methodological information should be available, but it may not always be. If
you cannot find information in writing, contact the agency collecting it and
ask to talk to an expert on the data there. If you share what you are
considering doing with the data, the agency can offer advice as to the
feasibility of your project.
Finally, secondary data often do not provide nuance about a topic.
Dodge finds this to be true in much of her research. Dodge has worked with
others on projects that include secondary data. Nevertheless, she finds that
using secondary data to supplement what is learned through qualitative data
leads to the richest research. In addition, Dodge finds that looking at studies
using secondary data highlights gaps in the literature she can fill conducting
research using qualitative data. Brunson reiterated this point by noting to us,
“I may be able to use secondary data to learn about the frequency of police
stops, but generally I cannot find any information about the nature of the
stop.” In studies of procedural justice, like Brunson conducts, the key is what
happened during the stop, not that a stop happened necessarily.
This first paragraph contains several things worth noting that you
should emulate if you use secondary data. First, Zaykowski (2014)
identified that the primary data were collected as part of the NCVS program.
Second, she stated that these data were obtained directly from ICPSR. Third,
Zaykowski explained that she received IRB approval for using the NCVS for
this research. Fourth, she included specific information about characteristics
of the NCVS data that conveys to the reader some of the reasons it is
appropriate to use to answer her specific research question. Information about
the NCVS data presented in the article continued with the following passage.
Keep in mind that Zaykowski did not gather these data, yet she reports about
how the data were originally collected. This information is only available by
reading documentation associated with the NCVS, as well as information on
variables included in the codebook:
Trained researchers conducted in-person interviews with all members in
each sampled household using a screening questionnaire. This screening
questionnaire asked subjects about demographic information and about
events that may have happened to them in the past 6 months. For
example, subjects were asked “has anyone attacked or threatened you in
any of these ways: With any weapon, for instance, a gun or knife . . . ?
With anything like a baseball bat, frying pan, scissors, or stick . . . ?” If
subjects had experienced any of the situations, they were then asked to
complete an additional questionnaire for every incident. The incident
questionnaire asked subjects where the incident occurred, how well they
knew the perpetrator, how the perpetrator attacked the victim and if any
weapons were used, what injuries the victim received, how the crime
affected the victim’s mental and physical health, what type of help the
victim sought including if the incident was reported to the police, if
there was anyone else present at the scene of the incident, and the
victim’s relationship to the offender. The trained interviewer
determined, based on this information, into which crime category the
incident fit. (p. 366)
This next paragraph conveys additional information about the quality of the
NCVS data and how some of the variables used in the study were measured.
When findings from secondary data analysis are reported, this level of detail
about the data being used helps the reader contextualize results from the
secondary analysis. In other words, details about data quality help convince
the reader that their use is justified. Zaykowski (2014) also includes
information specific to the number of cases used in her analysis and the
criteria used for selecting them. This is valuable information about the NCVS
sample. Recall in our earlier chapter on sampling, we discussed the value of
representativeness; Zaykowski has noted both that her study uses a
representative sample of persons age 12 years and older in households and
that her research is based on a large sample. In part, this information is
presented so that someone else can replicate her study:
The dataset of 27,795 cases contained both property and violent crime
victims. Only victims of violent crime, however, were asked if they
sought assistance from a victim service agency. Therefore, the sample
was limited to victimization by violence (sexual assault/rape, physical
assault, and robbery). Included cases significantly differed from
excluded cases (i.e., property crime victims) in that a greater proportion
of violent crime victims were black/African American, male, not
married, had lower household incomes, and were younger in age (p <
.10). Subjects whose household income was unknown were retained in
the sample through a dummy variable (compare with Kaukinen, Meyer,
& Akers, 2013). Subjects with missing data were excluded from the
analysis using listwise deletion (n = 330). The final sample included N =
4,746 cases of whom few sought services (7.8%). (p. 366)
Although this paragraph has some more advanced technical information in it,
notice that it demonstrates the need for researchers—whether a student, a
professor, or an analyst working outside a university setting—to include
specific information about the data he or she used to conduct secondary data
analysis. When you are conducting research using secondary data, you should
do the following:
1. Identify the project or program undertaken to collect the primary data.
2. Explain how the primary data were obtained from those who collected
it.
3. Include a statement about whether they have received IRB approval to
use the primary data.
4. Provide a justification for why the primary data are being used instead
of other existing data.
5. Identify the unit of analysis.
6. Discuss which variables from the primary data are focal variables and
how they were measured.
7. If the entire primary data set is not used, explain why and how the subset
of data was extracted, including details of the final sample size and how
missing data were handled.
By providing these details, readers can more easily draw conclusions about
whether it is appropriate for a researcher to conduct secondary data analysis
on a particular existing data set.
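The sample-extraction details called for in item 7 — in Zaykowski's case, a dummy indicator for unknown income and listwise deletion on missing values — can be sketched in a few lines. The records below are hypothetical, not actual NCVS data:

```python
# Hypothetical victim records; None marks a nonresponse
cases = [
    {"id": 1, "income": 45000, "sought_services": 1},
    {"id": 2, "income": None,  "sought_services": 0},     # income unknown
    {"id": 3, "income": 60000, "sought_services": None},  # focal item missing
]

# Strategy 1: retain cases with unknown income by flagging them
# with a dummy indicator variable rather than dropping them
for c in cases:
    c["income_missing"] = 1 if c["income"] is None else 0

# Strategy 2: listwise deletion -- drop any case missing a value
# on a variable required for the analysis
analytic = [c for c in cases if c["sought_services"] is not None]

print(len(analytic))  # 2 of 3 cases retained
print([c["income_missing"] for c in cases])  # [0, 1, 0]
```

Reporting exactly these choices — which cases were flagged, which were dropped, and the resulting sample size — is what allows a reader to replicate the analysis.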
Common Pitfalls in Secondary Data Analysis
Not unlike the other research methodologies discussed in previous chapters,
researchers must think carefully before engaging in secondary data analysis.
Secondary data analysis is subject to many pitfalls. Some challenges
associated with using this particular research methodology can threaten the
overall quality of a research project. This section addresses the most common
pitfall associated with secondary data: forcing the data to be what they aren’t.
© Tom Fishburne/Marketoonist.com
Not surprisingly, Cuevas echoed this concern about secondary data. He
believes that too often researchers think that “just because they can read a
data codebook up and down, look at every variable, and look at the data
documentation, researchers have a good sense of the data; but they don’t.”
During his video interview, he noted, “It’s one thing to know how the
sausage was made, but it’s another thing to actually make the sausage. The
process of making the sausage is what matters.” Cuevas also thinks there is a
bit of a “round-hole-square-peg” problem with secondary data analysis. He
believes that researchers often try to make variables in existing data reflect
the concepts they are interested in studying, when often they do not, and
suggests that when this occurs, secondary data analysis becomes a potentially
misleading and poor methodology. Dodge dealt with these data issues most
recently on an internal project using police use-of-force data. She was
provided data from a law enforcement agency that she felt looked wrong. She
did not feel the data she had been provided really constituted police use-of-
force data regardless of how it was labeled. A second request was made for
additional data, and those data still looked wrong. Somehow, in the course of
Dodge’s research and trying to understand exactly what data the agency had,
the media got ahold of the same data and reported about them as police use-
of-force data. They were wrong. Unfortunately, the questions surrounding
these secondary data stemmed from issues about what the data actually
referred to. Cuevas offers a good strategy for researchers to avoid the
common pitfalls associated with secondary data analysis: Researchers must
“stay in their lane” and not use a variable for purposes far removed from its
original purpose. Santos and Dodge both have been called in on projects
where an organization needed its data analyzed, yet the organization did not
even have a codebook describing its data. Both researchers were faced with a
puzzle about what the variables were, what the data and variables were
supposed to represent or measure, what the data actually represented, and
even what the codes for the response categories were. To say these projects
are challenging is an understatement.
Dodge identifies a final pitfall associated with secondary data: that it can
often be rife with missing data. Missing data refer to when there are no data
recorded for a particular variable. Missing data come in two broad forms.
First, it can refer to when a person was supposed to include information and
did not do so. These are nonresponses. For example, you may be asked on a
survey to provide your annual household income but opt to leave that answer
blank. That is missing data. It can also refer to not applicable responses. For
example, a survey asks how many children live in your home. Assuming you
have none, the survey then skips you to a question focused on something
unrelated. For those who noted they had children living in the home, the
survey then asks how many male and female children live in the home.
When we look at the resulting data, the person with no children in the home
would not have a value recorded for the sex of children in the home. This
would appear as not applicable missing data. A problem with some secondary
data is that they include a large amount of missing data, raising challenges
for analysis. Understanding whether a missing value is a true nonresponse
or a not applicable response is important and often not clear with
some secondary data.
Missing data: When no data are recorded for a particular
variable.
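Because codebooks often assign distinct codes to the two forms of missing data, an analyst can separate them only if the documentation explains the coding. The codes below are hypothetical, not taken from any real codebook:

```python
# Hypothetical codebook values for "number of children in home"
NONRESPONSE = -9     # respondent was asked but left the item blank
NOT_APPLICABLE = -8  # question never asked (skip pattern: no children)

responses = [2, NONRESPONSE, NOT_APPLICABLE, 0]

# Only true nonresponses are problematic for the analysis; the
# not-applicable code follows logically from the skip pattern
valid = [r for r in responses if r >= 0]
nonresponse_count = sum(1 for r in responses if r == NONRESPONSE)

print(valid)              # [2, 0]
print(nonresponse_count)  # 1
```

If a data set collapses both situations into a single missing code, this distinction is lost, which is exactly the ambiguity Dodge warns about.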
Ethics Associated With Secondary Data Analysis
Many universities and institutions acknowledge that some research projects
involving secondary data analysis may not meet the definition of “human
subjects” research and, therefore, may not require IRB review.
Nevertheless, some secondary data analysis projects do require IRB approval.
This section of the chapter is designed to provide guidance on IRB policies
and procedures that could be applied to projects using existing data. Despite
this information, researchers conducting secondary data analysis on existing
data must always have their host university or institution make the final
assessment on whether IRB review is necessary. In other words, researchers
should not assume that just because they are using secondary data that they
are always exempt from the IRB review process.
The U.S. Department of Health and Human Services (HHS), Office for
Human Research Protection, offers a series of flowcharts to help researchers
determine whether their research falls under the umbrella of “human subject”
research and, as such, requires ethics approval to undertake. Figure 9.6 shows
the decision chart for researchers wanting to use existing data for secondary
data analysis. All of HHS’s Human Subject Regulations Decision Charts are
located at https://www.hhs.gov/ohrp/regulations-and-policy/decision-charts/.
Courtesy of Jennifer Truman
Secondary Data Expert—Jenna Truman, PhD
Jenna Truman, PhD, is a statistician at the Bureau of Justice Statistics (BJS).
She is in the Victimization Statistics Unit at BJS and works primarily on the
National Crime Victimization Survey (NCVS). Her current research and
work focuses on victimization patterns and trends, stalking victimization,
domestic violence, firearm violence, and the measurement of demographic
characteristics using the NCVS. As a statistician at BJS, she is tasked with a
number of responsibilities including survey design, data collection, data
analysis, writing reports, presenting analysis, overseeing the Office of
Management and Budget clearance process for data collections (think IRB
clearance), and project management. Most recently, she has been involved in
the redesign and data collection of the Supplemental Victimization Survey on
stalking victimization, implementing the addition of new demographic items
to the NCVS, and the redesign of the NCVS instruments.
Figure 9.6 Health and Human Services, Office for Human Research
Protection’s Decision Chart for Research Involving Existing Data
Truman never thought she would end up making a living working with
secondary data. Initially, she wanted to be an archeologist (like Indiana
Jones!). As she got a bit older, she decided she wanted to be a lawyer, but an
internship in high school demonstrated to her that it was not the career for
her. She then fell in love with sociology as an undergrad, which led her to
pursue her graduate degree. After receiving her MA in applied sociology, she
made the decision to pursue her PhD in sociology. As a PhD student, she had
the opportunity to be the project manager for the department’s research lab
(Institute for Social and Behavioral Sciences). There, she gained experience
in survey research. Between this experience, learning about all of the applied
research that was being done outside of academia, and learning that she did
not love teaching, she decided that she wanted to pursue an applied research
job after finishing graduate school rather than an academic position at a
university. She was aware of BJS’s work throughout graduate school as she
had read BJS reports and had even used the stalking supplement data for her
dissertation. Therefore, she knew BJS as a place that she would look at to
apply when she went on the job market. She was fortunate enough that BJS
had an opening when she was finishing school. She applied, interviewed, and
got the job! She feels fortunate that everything worked out as it did and
recognizes having research methods skills was a part of her success.
Truman knows that many times in research it is not possible for researchers
to collect their own data, and this is where secondary data analysis can be
helpful. Depending on the research question, there may be existing data that
could be used to examine it. There is a plethora of data out there for
researchers and students to use, including many national data collections
from the federal government. Secondary data analysis is also beneficial from
a cost perspective because the researcher avoids the expense of data
collection. In addition, many of the large-scale data collections that
exist have a variety of variables and were collected with varying
methodologies.
Truman notes that when students or researchers consider using secondary
data analysis, it is vital that they seek to understand how the data are
collected, what variables and response categories are included in the data, and
if and how the data should be weighted. In her capacity at BJS, she speaks to
many data users about the NCVS to answer questions and provide necessary
documentation. For federal data sources, like the NCVS, there are typically
data codebooks and user’s guides or other documentation publicly available.
Truman states that it is essential that users use those resources to be sure that
they are analyzing the data appropriately. It is also important to understand
the population of study so that you are making appropriate conclusions.
Although the available resources for secondary data analysis may be
numerous, one challenge is finding the right data to answer your research
question. It is possible that the data may not have the exact measure that you
need or that the variables are coded differently than you would like.
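Truman's point about weighting can be illustrated with a minimal sketch: when respondents represent different numbers of people in the population, the unweighted and weighted estimates can differ sharply. The values and weights below are hypothetical, not NCVS figures:

```python
# Each respondent reports an outcome (e.g., victimized = 1, not = 0)
# and carries a survey weight: the number of people they represent
values  = [1, 0, 0, 1]
weights = [500, 1500, 1500, 500]

# Naive estimate ignoring the design
unweighted = sum(values) / len(values)

# Design-based estimate using the weights
weighted = sum(v * w for v, w in zip(values, weights)) / sum(weights)

print(unweighted)  # 0.5
print(weighted)    # 0.25
```

Here the victims happen to come from the lightly weighted respondents, so ignoring the weights would double the estimated rate — which is why the documentation's weighting instructions must be followed.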
Truman has some words of advice for students. First, they should focus on
understanding both the benefits and challenges of secondary data analysis.
Students should be aware of how to find possible data sources for their
research, and once data are found, they should make the effort to understand
the data file, the collection process, variables, and appropriate methodologies.
Students must understand that each data set might require a different level of
statistical skill to analyze it. For example, working with a longitudinal
compared with a cross-sectional data file may require more advanced
statistical skills. Another important skill for students is to be able to be
critical of and evaluate the secondary data they are using. Overall, a broad
understanding of secondary data analysis is an important skill to have and
would be a valuable asset for various research organizations, academia, and
the federal statistical system.
Longitudinal data: Data collected from the same sample at
different points in time.
Cross-sectional data: Data collected at the same point of time or
without regard to differences in time.
Chapter Wrap-Up
This chapter introduces and describes research using secondary data. We
began the chapter by defining secondary data and explaining why a
researcher would use them. The chapter provided insight into how secondary data differ
from primary or original data. We introduced you to several sources of
criminal justice and related secondary data that can be used to answer your
research questions, including the Inter-university Consortium for Political and
Social Research (ICPSR). We also described several data sets available for
secondary data analysis from the FBI, BJS, and other organizations. In
addition, we described how to access those data. We concluded the chapter
by identifying key strategies for reporting findings from secondary data
analysis and the most common pitfall associated with using secondary data,
and by highlighting some ethical concerns related to secondary data analysis.
Research using secondary data is a popular and efficient way to conduct
research assuming the data available suit your research question. In our case
studies, Santos (Santos & Santos, 2016) and Zaykowski (2014) benefited
from existing data. Cuevas was on the original team that gathered the
DAVILA data, and it remains available for others to use. Being on the
original team collecting DAVILA offers some advantages to Cuevas who
understands the details of the data and how they were collected. Melde also
worked with the team that collected the data he used for his highlighted
research (Melde, Taylor, & Esbensen, 2009), which came from an evaluation
of the Teens, Crime, and the Community and Community Works Program. He
too has a richer understanding of these data, which benefits his research. Still,
when using these data, Cuevas and Melde, like everyone else, must be sure
not to stretch the data too far from their original intention. Today, each of these
data sets is available for researchers like you to use. Santos benefited from
using data available at a law enforcement agency. Many may not consider the
local police or sheriff’s office as a source of data, but they are. Furthermore,
much of their data are available online. And Zaykowski used the NCVS data,
which have been available for decades. Table 9.2 highlights some of the
features of each of our case studies in which secondary data were used to
conduct the research. In this table, we again see the diverse ways in which
one can conduct research, even when all are using secondary data.
What Is GIS?
Although we briefly defined GIS earlier in the text, it warrants further
discussion. GIS means different things to different people. For example,
Rhind (1989, p. 322) defines GIS as
a system of hardware, software, and procedures designed to support the
capture, management, manipulation, analysis, modeling, and display of
spatially referenced data for solving complex planning and management
problems.
A slightly different definition of GIS is offered by the U.S. Geological
Survey. The USGS (2007) defines GIS as

a computer system capable of assembling, storing, manipulating, and
displaying geographically referenced information (that is, data identified
according to their locations). (para. 1)

Figure 10.2 Section of Charles Booth's "Poverty Map" in 1889
Note: The map shows an area of East End, London, where streets are
colored to represent the economic class of residents.
In this text, we define geographic information systems (GIS) as a branch of
information technology that involves collecting, storing, manipulating,
analyzing, managing, and presenting geospatially referenced data. To better
understand what GIS is and how it is used to study criminal justice and
criminology topics, it is best thought of as an approach to processing
data that consists of four interrelated parts: (1) data, (2) technology, (3)
application, and (4) people. Each of these components is discussed next.
Data
Data comprise one of the four parts of GIS. GIS data differ from other types
of data gathered and used in criminal justice and criminology research in that
GIS data contain or can be linked to spatial or geographical data. Spatial
data include data that have geographic coordinates associated with a
corresponding physical location. The coordinates associated with spatial data
are important. For example, if we administered an online survey to a sample
of university students and we asked them to provide us with their home
address as part of the data we collected, we would have collected data about
where the students live. An address alone, however, lacks the spatial
reference, that is, the latitude and longitude coordinates that correspond
to it. If we assigned spatial information (i.e., latitude and longitude
coordinates) to the students' home addresses collected from our survey, then
we would have converted the original (nonspatial) survey data (i.e., home
addresses) into geospatially referenced, or geographic, data. This allows us
to illustrate on a map where our respondents live. Now imagine we gathered
geospatial data (with coordinates) about where a criminal victimization
occurred. With these geospatial data, we can show on a map where the
victimization occurred.
Spatial data: Data that have geographic coordinates associated
with a corresponding physical location. The coordinates
associated with spatial data are the key.
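To make this concrete, here is a minimal, hypothetical sketch of converting nonspatial survey records into spatial data. The addresses and coordinates are invented, and a real project would use geocoding software rather than a hand-built lookup table.

```python
# Minimal sketch of attaching spatial reference to survey data.
# The addresses and coordinates below are invented for illustration.

survey_responses = [
    {"respondent": "A", "home_address": "123 Main St"},
    {"respondent": "B", "home_address": "456 Oak Ave"},
]

# Hypothetical reference table mapping addresses to (latitude, longitude).
coordinate_lookup = {
    "123 Main St": (39.7392, -104.9903),
    "456 Oak Ave": (39.7420, -104.9880),
}

# Attaching coordinates converts nonspatial records into geospatial data.
geospatial_responses = [
    {**row, "lat_lon": coordinate_lookup[row["home_address"]]}
    for row in survey_responses
]

print(geospatial_responses[0]["lat_lon"])
```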
In general, two types of spatial data are used in GIS: vector and raster data.
Most people are familiar with vector data, even if they do not know it by
name.
Vector data are geospatial data that represent characteristics of the real
world on a map. Vector data do this in one of three ways: points, polygons, or
lines. Figure 10.3 is an example of vector data used to produce a map that
represents geographic information about the location of registered sex
offender homes in Pinellas County, Florida. The sex offenders’ homes are
shown on this map using georeferenced points (i.e., pins labeled “O”). In
addition, this map illustrates a georeferenced point representing the location
of a home near a school (i.e., the pin labeled “H”). Point vector data are
illustrated using a pin because these locations represent a single point or
discrete location on a map. These point data are overlaid on a map image
showing an area of Pinellas County, Florida, that also includes surrounding
waterways. These waterways are symbolized as the solid gray areas in the
bottom left corner and are georeferenced as polygons. Finally, major street
networks are also symbolized on the map using vector data; in this case, they
are georeferenced lines. Streets run across the area of the county that is
visible on the map and look exactly as you would expect—as lines on a map.
Combined, points, lines, and polygons are three types of vector data used in
GIS to depict characteristics of the real world.
Vector data: Type of geospatial data used to represent the
characteristics of the real world in one of three ways: as points,
lines, or polygons.
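The three vector data types can be sketched with simple coordinate structures. The coordinates below are invented and are not taken from Figure 10.3.

```python
# Hedged sketch: representing the three vector data types with simple
# coordinate structures (all coordinates are invented).

# A point: one coordinate pair, e.g., a single residence.
offender_home = (27.89, -82.71)

# A line: an ordered sequence of points, e.g., a street segment.
street_segment = [(27.89, -82.73), (27.89, -82.72), (27.89, -82.71)]

# A polygon: a closed ring of points (first == last), e.g., a waterway
# boundary.
waterway = [(27.85, -82.75), (27.85, -82.74), (27.84, -82.74),
            (27.84, -82.75), (27.85, -82.75)]

def is_closed(ring):
    # A polygon ring is "closed" when its first and last vertices match.
    return ring[0] == ring[-1]

print(is_closed(waterway))  # True
```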
Figure 10.3 Map of Sex Offenders’ Residential Locations, and the Location
of an Elementary School, in an Area of Pinellas County, Florida
Source: https://www.denvergov.org/content/denvergov/en/police-
department/crime-information/crime-map.html
Tactical Crime Analysis
In addition to administrative crime analysis, law enforcement personnel,
including crime analysts, conduct tactical crime analysis. Tactical crime
analysis emphasizes collecting data, identifying patterns, and developing
possible leads so that criminal cases can be cleared quickly. Tactical crime
analysis usually involves analysis of individual, incident-level data associated
with specific events (e.g., robberies, motor vehicle thefts, residential
burglaries) that are believed to be part of a crime series committed by a
single individual or group of individuals.
Tactical crime analysis: Type of crime analysis used in law
enforcement that emphasizes collecting data, identifying patterns,
and developing possible leads so that criminal cases can be
cleared quickly.
Crime analysts engaged in tactical crime analysis often produce reports
containing time series (i.e., analysis of data that are linked by time) or point-
pattern analysis (i.e., analysis of discrete points) depicted in charts, graphs,
maps, or a combination of each. This information can then be used tactically
to address known crime problems within an agency’s jurisdiction and is often
incorporated into a broader, agency-wide policing strategy such as CompStat
(COMPuter STATistics). CompStat is an organizational management
approach used by police departments to proactively address crime problems.
It does so by using data and statistics—with a heavy emphasis on crime
analysis and mapping—to identify surges in crime that are then targeted with
enforcement. CompStat began in the New York City Police Department, but
it has since been adopted by law enforcement agencies throughout the world.
CompStat: Organizational management approach used by police
departments to proactively address crime problems, which relies
heavily on crime analysis and mapping.
Most of the data used in tactical crime analysis are produced from the law
enforcement agency’s records management system, which includes crime
incident report data. That is, when someone calls into the police department
about a crime, or an officer makes a stop or witnesses a crime, data from that
incident are collected. By using data representing all incidents known to the
police, the following types of tactical crime analysis are common:
Repeat incident analysis
Crime pattern analysis
Linking known offenders to past crimes
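As a rough illustration of the first item, repeat incident analysis, the sketch below counts how often the same location appears in a set of invented incident records.

```python
# A minimal sketch of repeat incident analysis: counting how often the same
# location appears in incident data. All case numbers and addresses invented.
from collections import Counter

incidents = [
    {"case": 1, "offense": "burglary", "address": "10 Elm St"},
    {"case": 2, "offense": "burglary", "address": "22 Pine Rd"},
    {"case": 3, "offense": "burglary", "address": "10 Elm St"},
    {"case": 4, "offense": "robbery",  "address": "10 Elm St"},
]

# Locations generating more than one incident may warrant tactical attention.
counts = Counter(row["address"] for row in incidents)
repeat_locations = [addr for addr, n in counts.items() if n > 1]

print(repeat_locations)  # ['10 Elm St']
```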
Since tactical crime analysis almost always involves examining crimes that
are a part of a series of criminal events, criminal profiling or criminal
investigative analysis is often considered to be part of tactical crime analysis.
In summary, tactical crime analysis is a crime analysis technique that
describes and conveys information about crime patterns efficiently and
accurately so that criminal cases can be cleared quickly. This may involve the
review of available crime data gathered from arrest reports, field interview
reports, calls for service, trespass warnings, and other available sources of
data. This approach allows the tactical crime analyst to identify how, when,
and where criminal activity has occurred on a daily basis.
Strategic Crime Analysis
Unlike the other two approaches, strategic crime analysis is a crime analysis
technique that focuses on operational strategies of a law enforcement
organization in an attempt to develop solutions to chronic crime-related
problems. Chronic means that something—in this case, crime-related issues
—is ongoing and persistent. Furthermore, spatial analytic techniques used in
strategic crime analysis differ somewhat because they usually focus on
analysis of geographic units such as an agency’s jurisdiction, census tracts,
patrol districts, or beats, instead of on individuals or crime series. Often, the
aim of strategic crime analysis is to use cluster analysis to produce
information that can be used for resource allocation, beat configuration, and
the identification of nonrandom patterns in criminal activity and of unusual
community conditions. Cluster analysis is a technique that involves the
identification of similar types of crime that clump together in space or time.
Strategic crime analysis: Crime analysis technique used in law
enforcement that is focused on operational strategies of the
organization in an attempt to develop solutions to chronic crime-
related problems.
Cluster analysis: Crime analysis technique that involves the
identification of similar types of crime that “cluster” in space or
time.
The process of strategic crime analysis often begins with data pulled from an
agency’s records management system. In addition, this type of analysis also
incorporates other data sources derived from a variety of quantitative and
qualitative methods. Some specific types of strategic crime analysis
techniques can include the following:
Trend analysis
Hot spot analysis
Problem analysis
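As a crude, hypothetical stand-in for hot spot analysis, the sketch below snaps invented theft coordinates to a coarse grid and finds the cell with the most incidents. Real hot spot analysis uses dedicated spatial statistics rather than simple rounding.

```python
# Toy grid-based hot spot count (all coordinates invented).
from collections import Counter

thefts = [(36.03, -115.00), (36.03, -115.01), (36.031, -115.001),
          (36.10, -114.95), (36.032, -115.002)]

def grid_cell(lat, lon, decimals=2):
    # Snap each coordinate to a coarse grid cell by rounding.
    return (round(lat, decimals), round(lon, decimals))

counts = Counter(grid_cell(lat, lon) for lat, lon in thefts)

# The cell containing the most incidents is the crude "hot spot."
hot_spot, n = counts.most_common(1)[0]
print(hot_spot, n)
```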
In summary, strategic crime analysis provides law enforcement agencies with
the ability to offer more effective and efficient service to the community.
Figure 10.7 is an example of a crime hot spot map for Henderson, Nevada,
used in strategic crime analysis showing the concentration of propane gas
tank thefts. Hot spot analysis for this particular type of theft was conducted
because it was believed that propane tank bottles could be used in terrorist
activities, and the agency wanted to develop a better strategy for addressing
this particular problem. Crime hot spot mapping is one of the most common
forms of strategic crime analysis used in law enforcement today.
Figure 10.7 Hot Spot Map Depicts Concentrations of Propane Tank Bottle
Thefts in Henderson, Nevada
Crime Analysis in Academic Research
Academics, and in turn students, may think of crime analysis in ways that are
slightly different from how practitioners and crime analysts in agencies such
as police departments think of crime analysis. To academics, crime analysis
is often viewed as a research methodology that can be used to answer
research questions, as well as to test criminological theories, specifically,
those theories that attempt to explain when and why crime occurs (e.g., crime
pattern theory, routine activities theory, social disorganization theory, etc.).
Academics also use crime analysis to develop new techniques used to
identify crime patterns, as well as to improve existing methods, which allows
them to advance new ways of thinking about where and when crime occurs,
and the victims and offenders involved in criminal activity. Academics may
also apply crime analysis techniques as a research methodology to assess the
efficacy of various crime prevention or reduction policies and programs,
including the case study research discussed in this text conducted by Santos.
Recall that Santos and her colleague used this approach to test the efficacy of
an offender-focused intervention strategy, implemented using a randomized
control trial, based on crime hot spot analysis (Santos & Santos, 2016).
At the heart of geographic profiling is the idea that serial offenders are less
likely to commit offenses in the area immediately around their homes (a buffer
zone), but also will not travel too far to commit additional crimes. These
journey-to-crime
assumptions were used to create the original geographical profiling formula,
developed by Rossmo (1997): the distance decay function. Hot spots
identified by Santos and her colleague (Santos & Santos, 2016) in their
evaluation of offender-focused intervention strategies implemented in Port St.
Lucie, Florida, were constructed, in part, based on the distance decay
function (i.e., offenders were more likely to reoffend close, but not too close,
to their home residence).
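The sketch below is a toy version of the distance decay intuition only; it is not Rossmo's published formula, and the buffer radius and decay rate are invented parameters.

```python
# Toy illustration of the distance decay intuition (NOT Rossmo's actual
# formula): offending likelihood is low inside a buffer zone around home,
# peaks just beyond it, and falls off with distance. All numbers invented.
import math

BUFFER = 1.0   # hypothetical buffer-zone radius around the offender's home
DECAY = 0.5    # hypothetical rate at which likelihood falls with distance

def decay_weight(distance_from_home):
    # Inside the buffer zone, offending likelihood is treated as zero.
    if distance_from_home <= BUFFER:
        return 0.0
    # Beyond the buffer, likelihood declines exponentially with distance.
    return math.exp(-DECAY * (distance_from_home - BUFFER))

for d in (0.5, 1.5, 3.0, 6.0):
    print(d, round(decay_weight(d), 3))
```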
Geocoding
Addresses are the most common way that location is conceptualized in
geographic information systems. To use data containing address locations
(e.g., crime data), the address must be converted to a feature on a map (i.e., a
point location). That is, the addresses must be given spatial reference. The
process of assigning spatial reference to nonspatial data is called geocoding.
Geocoding involves assigning an XY coordinate pair to the description of a
place (i.e., an address) by comparing the descriptive location-specific
elements to those in reference data that have existing spatial information
associated with them.
Geocoding: In GIS and crime analysis, it is the process of
assigning an XY coordinate pair to the description of a place (i.e.,
an address) by comparing the descriptive location-specific
elements of nonspatial data to those in reference data that have
existing spatial information associated with them.
The geocoding process can be broken into three steps. Step 1 involves
translating an address entry for a nonspatially
referenced location. Step 2 involves searching for that address in a separate,
spatially referenced data set. The third and final step involves returning the
best candidate or candidates (i.e., match or matches) as a point feature on the
map that reflects the spatial location of the address.
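The three steps can be sketched as follows. The reference addresses and coordinates are invented, and real geocoders use far more sophisticated address normalization and candidate scoring.

```python
# Hedged sketch of the three geocoding steps, using a tiny invented
# reference data set in place of real street network data.

reference_data = {
    "100 MAIN ST": (40.0001, -105.0002),
    "200 MAIN ST": (40.0011, -105.0002),
    "100 OAK AVE": (40.0021, -105.0012),
}

def geocode(raw_address):
    # Step 1: translate (normalize) the nonspatial address entry.
    normalized = raw_address.strip().upper().replace("STREET", "ST")
    # Step 2: search for the address in the spatially referenced data set.
    match = reference_data.get(normalized)
    # Step 3: return the best candidate as a point (or None if unmatched).
    return match

print(geocode("100 Main Street"))  # (40.0001, -105.0002)
```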
One of the main challenges to accurate geocoding is the availability of good
reference data, and the most widely employed address data model used in the
geocoding process is street networks. Geocoding an address using street
network reference data is referred to as street geocoding. Street geocoding is
commonly used in criminal justice and criminology studies, but other types
of geocoding exist, including geocoding to parcel reference data and address
point reference data. Nearly all commercial GIS products (e.g., ArcGIS)
provide geocoding services capabilities. Figure 10.13 provides a conceptual
diagram of the geocoding process when addresses are street geocoded.
Certain quality expectations must be met for the results of a geocoding
process to be considered good. That is, the overall quality of any geocoding
result can be characterized in three distinct ways: completeness, positional
accuracy, and repeatability. Completeness refers to the percentage of records
that can reliably be geocoded and is commonly referred to as the match rate.
Positional accuracy refers to how close each geocoded point is to the actual
location of the address it is intended to represent (see, for example, Figure
10.14). Finally, repeatability indicates how sensitive geocoding results are to
variations in the input address, reference data, matching algorithms of the
geocoding software, and the skills and interpretation of the analyst.
Geocoding results are considered to be of high quality if they are complete,
spatially accurate (i.e., valid), and repeatable (i.e., reliable). Collectively,
geocoding quality research clearly demonstrates that errors in geocoding can
be substantial, and when they are, they can adversely impact findings and
conclusions based on spatial analytic results.
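Completeness, for example, is straightforward to compute; the record counts below are invented.

```python
# Sketch of the "completeness" (match rate) measure of geocoding quality.
# The record counts are invented.

def match_rate(total_records, geocoded_records):
    # Completeness: percentage of input records reliably geocoded.
    return 100.0 * geocoded_records / total_records

rate = match_rate(total_records=2500, geocoded_records=2350)
print(rate)  # 94.0
```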
Figure 10.13 Conceptual Diagram of the Algorithm Behind Street
Geocoding
Only one of our featured researchers used crime mapping in her case study:
Santos (Santos & Santos, 2016). In her research, she used many of the
concepts described here. Her research included an experimental element and
hot spots were the units that were assigned to either an experimental or a
control group. She was guided in part by the notion that offenders commit
crime near, but not too near, their own homes. Santos and colleague’s
research is a great example of how GIS and crime mapping techniques can
both inform our greater understanding of policing and provide findings that
inform the way police do their jobs. Table 10.3 offers some information
about her work as it relates to GIS and crime mapping.
Evaluation research is useful for many reasons or purposes. First, as the
orientation example indicates, program evaluation can be used to assess a
program's performance. Second, and relatedly, program evaluation is useful
when a researcher wants to identify whether the program is accomplishing its
intended goals. Third, evaluation research can be used to improve the
effectiveness of an existing program. Fourth, as shown in the "Five Reasons
to Use Evaluation Research" box, evaluation research can gather data used to
provide accountability to those funding the program (e.g., taxpayers, a board
of directors, university leadership), those managing the program, or other
related administrators. Finally, evaluation research can be used to gather
evidence that educates those in the community about beneficial programs.
Those eligible to
receive the benefits of a program need to know about it, and those in the
community who are indirectly benefiting from a program need to understand
its importance as well.
Five Reasons to Use Evaluation Research
1. Assess the performance of a program.
2. Investigate whether a program is achieving its goals.
3. Improve the effectiveness of an existing program.
4. Provide accountability to program funders, program
managers, and other administrators.
5. Gather information to educate the public about beneficial
programs in the community.
Given the reasons outlined, it should not be a surprise that many funded
social programs now have a required evaluation component. Those providing
money for research want to know whether the policies or programs
implemented are working as intended. Those providing the money for
programs want evidence as to whether the program and its associated costs
are worth it. This again points to an opportunity to use these skills in a career.
Evaluation research is important, and the demand for those with these skills
continues to increase.
What Is Evaluation Research?
We have identified why someone would use evaluation research. This section
addresses the equally important question: What exactly is evaluation
research? Evaluation research is the systematic assessment of the need for,
implementation of, or output of a program based on objective criteria. As
Cuevas describes it, evaluation research is one of the nicer, more practical
aspects of research—it is where the rubber meets the road. By using the data
gathered in evaluation research, a researcher like you can improve, enhance,
expand, or terminate a program. The assessment of a program can be
conducted with any and all of the research approaches described in this text,
including observation, interviews, content analysis, survey research, and
secondary data analysis.
Guiding Principles of Evaluation Research
In 2004, the American Evaluation Association published a set of five guiding
principles of evaluation research. Before learning more about evaluation
research, it is important to understand these foundational elements that guide
evaluators. The clear statement of these principles is useful to clients, or
stakeholders, who then know what to expect. The five principles are taken directly
from the American Evaluation Association’s statement on the matter (see
www.eval.org/p/cm/ld/fid=51):
1. Systematic Inquiry: Evaluators conduct systematic, data-based
inquiries about whatever is being evaluated.
2. Competence: Evaluators provide competent performance to
stakeholders.
3. Integrity/Honesty: Evaluators ensure the honesty and integrity of the
entire evaluation process.
4. Respect for People: Evaluators respect the security, dignity, and self-
worth of the respondents, program participants, clients, and other
stakeholders with whom they interact.
5. Responsibilities for General and Public Welfare: Evaluators
articulate and take into account the diversity of interests and values that
may be related to the general and public welfare.
You should recognize many of these principles already because they are the
same principles used in conducting ethical research. With this foundational
understanding of what evaluation research is, and what guides it, we can
address the major steps taken when conducting this type of research.
Seven Steps of Evaluation Research
Regardless of the specific question guiding evaluation research, several basic
steps are used in the evaluation. First, a researcher must identify and engage
stakeholders. Stakeholders are individuals or organizations who have a
direct interest in the program being evaluated and can include funders,
program administrators, the community in which the program is
administered, and clients of the program. Knowing who the stakeholders are,
and developing and maintaining a good relationship with stakeholders, is
critical during the planning stages, the gathering of data, and the presentation
and buy-in of findings. Without the support of stakeholders, evaluation
research cannot be successful.
Identify and Engage Stakeholders: First step of the evaluation
research process in which the evaluator scans the environment to
identify those who are affiliated with and affected by the program
of interest. Once identified, those stakeholders must be engaged in
the process to maximize buy-in and, ultimately, the success of the
project.
Stakeholders: Individuals or organizations who have a direct
interest in the program being evaluated.
The second basic step in evaluation research is to develop the research
question. Unlike basic research, in evaluation research, the stakeholder
generally provides the research question to the evaluator. For instance, a
board of directors may engage a researcher with the express purpose of
understanding whether a program is achieving its goals. In this example, the
research question is as follows: “Is the program achieving its goals?”
Establishing a research question is just the beginning, as earlier chapters in
this book demonstrated. It may also be the case that the evaluator works with
the client to develop one or more research questions.
Develop the research question: Step in the evaluation research
process in which the evaluator, often working with the
stakeholders, establishes the question the evaluation is designed to
answer.
Figure 11.1 Seven Basic Steps Used to Conduct Evaluation Research
Once the data are analyzed, the researcher develops findings and
conclusions that answer the guiding research question. Based on these
findings, the researcher must then offer recommendations such as
modifications to a program, expansion of the program to other places,
expansion of the program to additional populations, or termination of the
program. In some cases, the researcher works with the clients/stakeholders
to develop the recommendations. The evaluator must be able to justify these
recommendations based on the research methodology used and data
gathered. Finally, it is not enough to sit in an office, reach a conclusion, and
offer recommendations. The researcher must communicate the findings and
recommendations to the party that requested the evaluation research, as well
as to the other stakeholders. Ultimately, someone other than the researcher
makes the determination as to whether the recommendations are
implemented. With good stakeholder buy-in throughout the process,
however, the probability of the recommendations being implemented is
increased.
Develop findings and conclusions: Step of evaluation research in
which the evaluator uses the data and information gathered to
generate findings that address the research question. This includes
conclusions about the issue and recommendations for ways to deal
with the issue at hand.
Justify these recommendations: Step of the evaluation research
process in which the evaluator offers a basis for the
recommendations made.
Communicate the findings and recommendations: Final step of
evaluation research in which the evaluator shares the results and
suggestions arising from the research with stakeholders.
This section described the basic steps taken in evaluation research generally.
As discussed in previous chapters, a simple statement such as "gather data"
entails many steps and considerations. Having a full understanding of research
methods is required for conducting excellent evaluation research. Without
that, the evaluation will be poorly done, and the resulting findings will be
without value. The next section identifies several specific types of evaluation
research. For any one project, the evaluator may use multiple types of
evaluation research to address multiple questions.
The Policy Process and Evaluation Research
Understanding program evaluation or evaluation research is easier if you
understand basic policy process. Before addressing this, we must first discuss
the terms policy and program. Policies are the principles, rules, and laws
that guide a government, organization, or people. Policies are implemented to
ameliorate a problem. Programs are developed to respond to policies. A
difference between policies and programs is scale. Policies tend to be larger
and programs smaller. In some cases, policies involve multiple programs to
address an issue. Think of a university that has a policy of “retention
maximization.” This policy would include many programs, including one that
requires attendance at orientation like that discussed earlier. It may also
include other programs such as intramural sports, wellness centers, and
student clubs. Importantly, both policies and programs can be evaluated.
Given their similarity, and that both can be evaluated, we, like many others,
will use the terms policies and programs synonymously in this chapter.
Policy: Principles, rules, and laws that guide a government, an
organization, or people.
Program: Developed to respond to policies.
Now that you have an understanding of what a policy and a program are, we
move on to the policy process. The policy process, also known as the policy
cycle, is a simplified representation of the stages of policy making and
implementation (see Chapter 13 for an in-depth discussion about the policy
process and associated stages). This process includes the implementation and
evaluation of programs. In its simplest form, the policy process has five
major stages. The first stage in the policy process is problem
identification/agenda setting. Problem identification/agenda setting involves
the identification of the problem to be solved and the advocating that it be
placed on the policy makers’ agenda for further consideration. Should a
policy issue be taken up by a policy maker and placed on an agenda for
further consideration, the next stage in the policy process is policy formation.
Policy formation includes the design of multiple approaches, policies,
programs, or formal ways to address the problem of interest. After several
formal policy options are designed, the policy makers identify and select
from the group what they see as the best policy solution and associated
programs.
Once the best policy option has been identified, adoption by the appropriate
governing body is required. Policy adoption is the third stage in the policy
process and refers to the formal adoption, or passage of the policy, which
legitimizes the policy. Policy implementation is the fourth stage of the policy
process in which agencies operationalize the adopted policy. This includes
the drafting of specific programs, procedures, regulations, rules, and guidance
to be used by those tasked with carrying out the adopted policy. Finally, the
fifth stage in the policy process is policy evaluation. Policy evaluation is used
to evaluate the policy and associated programs in ways discussed earlier.
Although this illustration suggests that evaluation occurs only at stage five,
the truth is that evaluation can occur anywhere during the policy process.
Policy process: Simplified model of the stages of policy making
and implementation that includes five major stages: problem
identification/agenda setting, policy formation, policy adoption,
policy implementation, and policy evaluation. Also called the
“policy cycle.”
Figure 11.2 Basic Stages of the Policy Process
Process Evaluation
A process evaluation is conducted when a program is in operation, and it
focuses specifically on whether a program has been implemented and is being
delivered as it was designed. A process evaluation can also be used to
understand why an existing program has changed over time. Engaging in a
process evaluation requires immersion in the program, which gives the
researcher the full understanding needed to identify exactly how the program
is being delivered compared with how it is supposed to be delivered. For
these reasons, process evaluations rely heavily on methods of gathering
qualitative data (e.g., interviews, focus groups, and participant observation).
One way to view a process evaluation is as a troubleshooting evaluation: it
can identify whether the program is working well and doing what it is
supposed to be doing, whether it is operating in the best way to meet its full
potential, and where problems are occurring. The findings from a process
evaluation can thus identify inefficiencies in program delivery that can be
addressed.
Process evaluation: Type of formative evaluation that is
conducted when a program is in operation. Process evaluations
focus on whether a program has been implemented and is being
delivered as it was designed, or why an existing program has
changed over time.
Logic models play an important role in all evaluation research, but especially
so in a process evaluation. Logic models are used to illustrate the ideal of
how a program is intended to work, depicting how the program is supposed
to be implemented. Logic models provide an easy-to-understand means of
seeing the causal relationships among program components. Featured
researcher Heather Zaykowski has conducted many evaluations and finds that
logic models streamline the research. She noted in her video interview that
logic models help the client or partner articulate their goals and focus on the
important issues. Zaykowski finds working with the clients to develop or
clarify their logic model helps them identify their tangible goals and issues.
There are multiple benefits of logic models. First, they point to key
performance measurement points useful for conducting an evaluation.
Second, logic models assist with program design, and make improving
existing programs easier, by highlighting problematic program activities.
Third, logic models help identify the program expectations. Another featured
researcher, Chris Melde, who has conducted numerous evaluations, is a
particular fan of logic models and always uses them. If the client doesn’t
have one (and this is common), Melde spends the time in collaboration with
the client to create a logic model. This is often the first part of an evaluation
according to Melde.
Logic models: Graphic depictions that illustrate the ideal of how
a program is intended to work.
Logic models in their most basic form include inputs, outputs, short-term
outcomes, and long-term outcomes. Inputs include things like training or
education. The idea is that these inputs should lead to the outputs depicted in
the logic model. Outputs resulting from the inputs include things like
increased knowledge, changed attitudes or values, and new skills. The notion
is that these outputs will then lead to the identified short-term outcomes.
Short-term outcomes or goals stem from the outputs and include
improvements or changes expected to be seen in a short period. Short-term
outcomes or goals include things like improved stability, higher retention
rates, reduced recidivism, or fewer arrests. Long-term outcomes are the final
causal effect of the program. These outcomes are the intended positive
benefits resulting from a program that occur further out in time. Long-term
outcomes may include safer communities, stable housing, greater educational
attainment, lack of arrests, increased justice, and greater satisfaction with life.
You should be able to see how a logic model, by illustrating how a program
should be implemented, can guide the evaluator when assessing whether the
program is operating as intended.
Inputs: One of four steps in a basic logic model. Inputs include
things like training or education. The idea is that these inputs
should lead to the outputs depicted in the logic model.
Outputs: One of four steps in a basic logic model. Outputs are
those intended consequences resulting from inputs.
Short-term outcomes: One of four causal steps in a logic model,
referring to the intended positive benefits resulting from a
program that are expected to occur quickly. Short-term outcomes
stem from the outputs.
Long-term outcomes: One of four causal steps in a logic model,
referring to the intended positive benefits resulting from a
program that are expected to occur further in the future.
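The four causal steps just defined can be sketched as a simple data structure. The following Python sketch is purely illustrative; the program elements (a hypothetical job-skills training program) are invented for this example, not drawn from a real evaluation:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Basic four-step logic model: inputs -> outputs ->
    short-term outcomes -> long-term outcomes."""
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    short_term_outcomes: list = field(default_factory=list)
    long_term_outcomes: list = field(default_factory=list)

    def describe(self) -> str:
        # Render the causal chain in order, one step leading to the next.
        steps = [
            ("Inputs", self.inputs),
            ("Outputs", self.outputs),
            ("Short-term outcomes", self.short_term_outcomes),
            ("Long-term outcomes", self.long_term_outcomes),
        ]
        return " -> ".join(f"{name}: {', '.join(items)}" for name, items in steps)

# Hypothetical program used only to illustrate the structure.
model = LogicModel(
    inputs=["job-skills training"],
    outputs=["new skills", "increased knowledge"],
    short_term_outcomes=["higher retention rates", "fewer arrests"],
    long_term_outcomes=["stable housing", "safer communities"],
)
print(model.describe())
```

An evaluator could use such a structure to check each observed element of a program against the expected element at the same step of the chain, which is essentially what comparing "what is" with "what should be" entails.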
Figure 11.3 Proposed Logic Model for Relationship Education Programs for
Youth in Foster Care
Source: Exhibit 1, p. 21, from “Putting Youth Relationship Education on
the Child Welfare Agenda: Findings from a Research and Evaluation
Review.” Child Trends, December 2012. Reprinted with permission from
Child Trends.
Note: Logic model example found here:
http://www.dibbleinstitute.org/wp-content/uploads/2014/01/logic-
model.png
Summative Evaluation
The second major type of evaluation research considered is the summative
evaluation. Summative evaluations provide an overall or summary judgment
about program results. This type of evaluation can be used to determine
whether a program should be funded and continued or terminated. Unlike
formative evaluations that provide information on how to implement or
improve a program, the goal of a summative evaluation is to answer
questions about the feasibility of continuing, expanding, or discontinuing a
program. Summative evaluations occur at the end of the policy process and
are concerned with the output and impact of programs. There is no agreement
in the literature about the types of summative evaluations. For our purposes,
we discuss two types of summative evaluations: outcome evaluations and
impact evaluations. Some authors do not distinguish between the two, noting
they both focus on results. The difference to focus on here is that outcome
evaluations focus on whether the program was effective in terms of the target
population in the short and long term, and occur after program completion. In
contrast, impact evaluations focus on whether the program met its long-term
ultimate goals (which are broader than how they affected the target
population) and tend to be conducted years after the program was
implemented.
Summative evaluations are used to make a summary judgment about a
program: keep, terminate, or expand?
© PixelsAway/Canstockphoto
Outcome Evaluation
Outcome evaluations measure the effectiveness of a program on the target
population. This type of evaluation is focused on the impacts on the target
population specifically versus a summary look at the program. Outcome
evaluations focus on the short-term and long-term outcomes among the target
population depicted in the logic model. Engaging in an outcome evaluation
does not require the deep immersion in the program that a process evaluation
demands. Rather, an outcome evaluation is more often conducted using
quantitative data away from the actual program delivery to measure whether
the program is doing what it was intended to do. These data gathering
methods include analysis of existing data, surveys, and experiments. Do
participants show lower rates of recidivism after engaging in the program?
Does literacy increase after participants complete a program designed to
improve it? Does a mandatory student orientation lead to higher student
retention and greater loyalty? Is the program approaching the
desired outcomes? If the answer to these types of questions is “no,” then the
program may be reassessed or even terminated.
Outcome evaluations: Type of summative evaluation that
measures the effectiveness of a program on the target population.
This type of evaluation is focused on the impacts on the target
population specifically versus the program.
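To make the idea of a quantitative outcome measure concrete, here is a minimal sketch of the kind of comparison an outcome evaluator might run on recidivism data. All figures below are invented for illustration, not real program data:

```python
# Hypothetical sketch: comparing recidivism between program completers
# and a matched comparison group. The counts are made up for illustration.
program_group = {"n": 200, "reoffended": 36}      # program completers
comparison_group = {"n": 200, "reoffended": 58}   # matched non-participants

def recidivism_rate(group: dict) -> float:
    """Proportion of the group that reoffended during the follow-up period."""
    return group["reoffended"] / group["n"]

# A lower rate in the program group is consistent with (though does not by
# itself prove) the program having its intended short-term outcome.
difference = recidivism_rate(comparison_group) - recidivism_rate(program_group)
print(f"Program group rate:    {recidivism_rate(program_group):.1%}")
print(f"Comparison group rate: {recidivism_rate(comparison_group):.1%}")
print(f"Rate reduction:        {difference:.1%}")
```

In practice an evaluator would pair a comparison like this with an appropriate research design (e.g., experimental or quasi-experimental) before attributing the difference to the program itself.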
Evaluation research skills are used broadly for countless issues.
© KHALED DESOUKI/AFP/Getty Images
Impact Evaluation
The second type of summative evaluation considered is the impact
evaluation. Impact evaluations are global evaluations that assess whether a
program is achieving its designated goals. Unlike the outcome evaluation, the
focus is not on the target population but on the broad goals of the program.
An impact evaluation is focused on big-picture questions such as whether the
program is achieving its intended results and advancing the social policy it
stems from. It identifies the outcomes of the program in terms of intended
outcomes and unintended outcomes. Like the outcome evaluation, an impact
evaluation is conducted primarily using quantitative data (e.g., analysis of
existing data, surveys, and experiments) without the need for immersion in the
program. Rather, an evaluator uses quantitative measurements to answer the
question as to whether the program is achieving its goals or not. If not, then
the program may be terminated.
Impact evaluations: One of two summative evaluations
considered in this text. Impact evaluations are global examinations
that assess whether a program is achieving its designated goals.
As noted earlier, there are additional types of evaluation research that you can
conduct including evaluability assessments, implementation evaluations,
structured conceptualizations, and cost-effectiveness and cost–benefit
analyses. Nevertheless, the formative and summative approaches presented
offer a basic understanding of several widely used approaches. When
thinking about types of evaluation research, it is important to recognize that
no single type is better than another. Rather, each type of evaluation
research is different and serves a different, important purpose. Table 11.2
offers some big-picture elements of each of these types of program evaluation
approaches.
Distinctive Purposes of Evaluation and Basic
Research
Evaluation research is different from the other research purposes described
here in many ways. As Stufflebeam (1983) wrote, “[t]he purpose of
evaluation is to improve, not prove.” Or even more succinctly, Dodge and
Zaykowski both highlight the fact that evaluation research differs from basic
research because it is applied in nature. Indeed, evaluation research has the
purpose or objective of evaluating and improving, but evaluation research is
different from other types of research objectives described in this text.
Although evaluation research uses the same procedures for gathering,
analyzing, and interpreting evidence that other research approaches in this
book do, evaluation research is not the same as basic research like the
descriptive, exploratory, and explanatory research described in this book. In
contrast, evaluation research is applied research. This section identifies
some of the ways this applied versus basic research distinction makes
evaluation research stand apart from other research.
Basic research: Research that generates knowledge motivated by
intellectual curiosity to better understand the world.
Applied research: Conducted to develop knowledge for
immediate use for a specific decision-making purpose.
Knowledge for Decision Making
Basic research approaches generate knowledge to understand the world
better. A researcher engaged in basic research is not conducting research
because he or she is answering to a boss. Instead, the basic researcher
answers to no one except his or her own intellectual curiosity, which leads to
general knowledge. The knowledge created may or may not ever be used.
Alternatively, that knowledge may be used in ways that are wholly
disconnected from the researcher who created the knowledge. In contrast,
evaluation research is applied research that is used to develop knowledge for
immediate use for a specific decision-making purpose. In addition, evaluation
research is generally conducted because the evaluator is answering to a boss,
a client, or a funder of some sort. Evaluation research is not engaged in creating
knowledge generally but is engaged in answering a specific question about a
program that a stakeholder has posed. It would be incorrect to conclude that
basic or applied research is better than the other. They are simply different
approaches and serve different purposes. The next sections present many of
these differences.
Origination of Research Questions
When conducting basic research, the researcher develops the research
question and refines it during the literature review. Research questions can
come from myriad places, as described in Chapter 2 and elsewhere.
Regardless of the source of the inspiration, the researcher is the one who
crafts the research question. In contrast, when conducting evaluation
research, the research question is developed by, or in consultation with, the
client/stakeholders. The research question is focused on some element of a
particular program that the researcher answers on behalf of a stakeholder. In
this way, evaluation research and other types of research differ greatly.
Comparative and Judgmental Nature
Basic research is interested in describing the world as it currently exists. It
seeks to explore that world or to provide a description or explanation about
the world. These findings about “what is” are intended to be used to make
improvements in our world in a general sense. Nevertheless, basic researchers
often publish their findings to ultimately make the world a better place. In
contrast, evaluation research is more focused. It is focused on “what should
be” in comparison with “what is” when examining a specific program or
policy. For example, evaluation research may describe how a program is
being administered as well as how the program should be administered. Such
a comparative approach is inherently judgmental, and this aspect of
evaluation research sets it apart from basic research.
Working With Stakeholders
A fundamental way that evaluation research differs from basic research is the
need to work with a variety of stakeholders. As noted earlier, basic research
is undertaken by a researcher desiring to build knowledge and understanding
about our world. In general, there is little stakeholder interaction as the
researcher conducts his or her investigation. The major exception to this is in
the time and effort it can take to get access to data that agencies have. In
contrast, evaluation researchers work with stakeholders throughout the
process. In fact, conducting evaluation research means the researcher is
working collaboratively with, and is dependent on, several stakeholders,
some of whom may or may not be cooperative. Knowing who the
stakeholders are, and developing and maintaining a good relationship with
them, is critical when conducting evaluation research. The researcher needs
the stakeholders’ trust and support both to gather the needed data and to
ensure that stakeholders will be willing to accept and implement the findings
from the evaluation conducted.
Dodge has found that the failure to establish rapport with stakeholders in the
beginning of the evaluation process can kill any evaluation. This rapport must
be maintained throughout the process as well, which includes providing a
report at the end of the process that is written for the stakeholder audience.
An evaluator must always be aware of the relationship with all stakeholders
to ensure it remains positive.
Challenging Environment
Basic research takes place in a variety of settings including an office, a
laboratory, or the field. In general, those who are being studied or examined
in the process of basic research are voluntarily engaging in the research. In
general, the relationship between the basic researcher and the research subject
is harmonious and cooperative. Evaluation research can differ from this
harmonious and cooperative environment. Evaluation research is inevitably
about observing or assessing a process or program. The priority of those
involved in the program is the program itself, not the evaluation of the
program. If gathering data interferes (or is perceived to interfere) with the
program, challenges ensue. The evaluator must understand that those
involved in the evaluation are aware that what they do in the program is
going to be scrutinized, possibly criticized, and conceivably changed because
of the evaluation. They understand that the evaluator may identify
shortcomings of a program to which they are dedicated and loyal. In fact,
they may fail to see the need for the evaluation. As a result, the environment
that some evaluators work in can be hostile, uncooperative, and at times
incredibly challenging. Recall the judgmental nature of evaluation research,
and the fact that findings may not put the program in the best of lights. At
worst, findings may be used to terminate a program some stakeholders
believe is useful. This type of environment indicates that an evaluator must
not only navigate the challenges of conducting good evaluation research but
also must work with others who may not understand or even trust the
evaluation or the evaluators. Being mindful of the social implications of the
findings from the study is important.
Findings and Dissemination
In general, the way findings are disseminated differs between basic and
applied research. Once basic research is completed, researchers tend to draft
journal articles to be peer reviewed for consideration for publication in a
journal. The manuscripts describe the purpose of the research, relevant
literature, methodology, findings, and conclusions, and typically include
recommendations for future research on the topic of
interest. Once submitted to a journal, the manuscript is sent to three other
researchers who assess and comment on the research. Peer review is
generally conducted using a blind review, meaning those reviewing the
research manuscript generally do not know who conducted the research, and
those conducting the research generally do not know who is conducting the
review. Once the reviews are completed, the reviews are sent to the
researcher with a decision by the journal editor as to whether the research
will be published (rare without revisions), must be revised and resubmitted
to the journal (an R&R), or is rejected outright. Should the
research manuscript be accepted and published in the journal, the knowledge
in the article is available to anyone accessing the journal. Publication in
journals can take many months, but more often it takes one or more years to
accomplish.
Blind review: Method of reviewing journal manuscripts in which
neither the reviewers nor the manuscript writer knows the identity
of the other.
Evaluation research differs greatly from basic research in that the research
report describing the evaluation, findings, and recommendations is submitted
to the stakeholders for consideration. As a result, it is important that
evaluation reports be written for impact. They must convince the stakeholders
that the study just completed made a difference and that positive changes will
result from this research. Recommendations in evaluation reports are focused
on the program (and not on future research about the findings). Peer review is
not used, and broad dissemination is not the goal. This is not to say that
evaluation research is never published in journals; it is. In fact, several
journals focus specifically on program evaluation. It is just generally not the
primary goal of the evaluator to publish for a broader audience.
Characteristics of Effective Evaluations
Four Standards of an Effective Evaluation
1. Utility
2. Feasibility
3. Propriety
4. Accuracy
When conducting any research, the researcher strives for effectiveness.
Nevertheless, how do you know whether your research is useful or effective?
Peer reviewers can assist with this. Following accepted practices in research
methodology when conducting research can assist with this. Unlike basic
research, evaluation research is associated with specifically stated and
accepted standards for assessing the quality, effectiveness, accuracy, ethics,
and usefulness of an evaluation. In 1994, the Joint Committee on Standards for
Educational Evaluation published four standards of effectiveness: utility,
feasibility, propriety, and accuracy. Utility focuses on the need for
evaluations to address significant research questions, provide clearly
articulated findings, and make recommendations. Feasibility addresses the need
for evaluation research to be practical, reasonable, and affordable. Propriety
requires evaluations to be ethical and legal. Finally, accuracy indicates the
need for evaluations to be conducted using best research practices, free of
biases, and accurate. The Joint Committee on Standards for Educational
Evaluation also provided multiple criteria for each standard that reflect
several of the guiding principles put forth by the American Evaluation
Association discussed earlier in this chapter. The specific criteria of
effective evaluation, based on the Joint Committee on Standards for
Educational Evaluation (1994), are discussed next.
Utility
Utility is the first standard of an effective evaluation. Utility addresses
whether an evaluation responded to the needs of the client and was
satisfactory to those using it. Utility can be assessed by focusing on seven
specific standards. The first utility criterion is stakeholder identification,
which requires that an effective evaluation identify and engage all individuals
involved in or affected by the evaluation. Identification of all stakeholders is
vital to ensure their needs can be, and are, addressed during the course of and
after the evaluation. Evaluator credibility is a utility standard that indicates
that evaluators must be both competent and trustworthy. The presence of
competence and trust in the evaluator benefits the evaluation in that findings
can be fully credible and accepted. A third utility standard is information
scope and selection, which requires that the data collected for use in the
evaluation must be directly related to the specific purpose of the evaluation.
Values identification is a utility standard that requires the evaluator to
carefully describe the perspectives, procedures, and rationale used to interpret
the findings. The foundation for making the value judgments inherent in an
evaluation must be clearly articulated. An important utility standard is report
clarity, which indicates that evaluation reports must clearly articulate
multiple elements of the program being evaluated. First, the program itself
must be described. In addition, the context and purposes of the program must
be articulated. Third, the procedures of the program must be made clear.
Finally, the findings of the evaluation must be clearly presented in the report.
This detail and clarity ensures all audiences easily understand the evaluation.
Report timeliness and dissemination is a utility standard that requires that
interim and final reports stemming from an evaluation be shared with
stakeholders in a timely fashion to facilitate their use. And the final utility criterion is
evaluation impact, which is evident when the findings and recommendations
are accepted and implemented. The impact goal should guide all elements of
the evaluation including evaluation planning, implementation, and reporting.
With attention to evaluation impact throughout the process, stakeholders will
be more likely to value and use the findings, as well as to implement the
recommendations made in the report.
Utility: One of four standards of an effective evaluation. Refers to
whether the evaluation conducted met the needs of and was
satisfactory to the people using it.
Stakeholder identification: One of the utility criteria of an
effective evaluation. Requires that the evaluator identify all
individuals or organizations involved in or affected by the
evaluation.
Evaluator credibility: One of the utility criteria of an effective
evaluation. Requires that evaluators must be both competent and
trustworthy.
Information scope and selection: One of the utility
criteria of an effective evaluation. Requires that the data and
information collected for use in the evaluation must speak directly
to the specific research questions guiding the program evaluation.
Values identification: One of the utility criteria of an effective
evaluation. This criterion requires that an evaluator carefully
describe the perspectives, procedures, and rationale used to
interpret the findings.
Report clarity: One of the utility criteria of an effective
evaluation. This criterion requires that evaluation reports clearly
articulate multiple elements of the program being evaluated.
Report timeliness and dissemination: One of the utility criteria
of an effective evaluation. Calls for all interim and final reports
resulting from the evaluation to be shared with stakeholders in a
timely fashion to facilitate their use.
Evaluation impact: One of the utility criteria of an effective
evaluation. Present when the findings and recommendations of an
evaluation are accepted and implemented by key stakeholders.
Feasibility
The second major standard of effective evaluation research is feasibility. In
other words, is the evaluation research requested possible? The feasibility
standard focuses on whether an evaluation is both viable and pragmatic.
Three specific feasibility standards are considered when assessing an
evaluation; together they consider whether the evaluation was practical,
realistic, and diplomatic and involved the prudent use of resources. Practical
procedures is a feasibility standard that indicates that evaluation research
should be practical, and it should result in minimal disruption to the program
under investigation. For instance, is it practical to get the records of juvenile
delinquents given the increased confidentiality under which they are kept?
Similarly, political viability is a feasibility standard that calls attention to the
need to consider the affected stakeholders to gain and maintain their
cooperation during all stages of the evaluation. In addition, this standard
requires the evaluator to be alert to any stakeholder attempts to thwart, bias,
or disrupt the evaluation. The final feasibility standard is cost-effectiveness,
which requires evaluators to consider the efficient and effective use of
resources. In particular, this standard guards against engaging in unneeded
data gathering and analysis. There is no need to conduct a Ferrari® evaluation
(e.g., randomized experimental research) when a less resource-intensive one
(e.g., quasi-experimental research) is adequate. Any costs incurred during the
evaluation should be directly related to the needs of conducting a useful
evaluation.
Feasibility: One of four major standards of effective evaluations
that focuses on whether an evaluation is viable, pragmatic,
realistic, diplomatic, and involves the prudent use of resources.
Political viability: One of the feasibility criteria of an effective
evaluation. Requires the evaluators to consider the affected
stakeholders to gain and maintain their cooperation during every
stage of the evaluation.
Practical procedures: One of the feasibility criteria of an
effective evaluation. Requires that evaluation research be
practical, and it should result in minimal disruption to the program
under investigation.
Cost-effectiveness: Criterion of feasibility that indicates an
effective evaluation. Requires evaluators to consider the efficient
and effective use of resources.
Propriety
The third major standard of an effective evaluation is propriety. Propriety
standards offer evidence that the evaluator has conducted an ethical
evaluation. Eight specific standards are used to assess propriety, all of which
ensure evaluations are legal, ethical, and conducted with regard for the
welfare of those involved, as well as anyone else affected by the evaluation.
The first of eight propriety standards that provide evidence of an ethical
evaluation is service orientation, which emphasizes the service nature of
evaluation research. Evaluation research should be used to assist
organizations, as well as to address and serve the needs of the targeted
population affected by the program of interest. A second propriety standard
requires the use of formal agreements. This standard requires the use of a
formal, and written, agreement involving all major parties involved in an
evaluation. This written agreement should identify each individual’s
obligations, how associated tasks will be accomplished, who will conduct
required tasks, and when tasks should be completed. Formal agreements are
binding unless all parties choose to renegotiate their terms. The next two
standards speak to human decency when conducting evaluation research. The
rights of human subjects focuses on the requirement that evaluation
research be planned and conducted using ethical research practices.
Evaluation research must be conducted in a way that the rights and welfare of
human subjects are respected. The human interactions standard dictates the
evaluation research be conducted with respect to all individuals associated
with the program of interest. No individual should be threatened or harmed,
and all engaged must be respected. An important propriety standard calls for
complete and fair assessment. This standard requires the comprehensive
and objective assessment of the strengths and weaknesses of the program.
With a comprehensive and fair assessment, the evaluation is useful. Without
it, the value of the evaluation is poor. As noted throughout the text, research
is only as strong as its weakest part, and in the case of an evaluation, that
means that incomplete and biased evaluation would lead to limited, and
possibly damaging, outcomes. The disclosure of findings is a propriety
standard that speaks to the need to ensure all stakeholders—who have put
time and effort into the study—deserve to see the findings. In addition, the
findings from the evaluation must be made available to those with a legal
right to them. During the course of evaluation research, conflict may arise. A
propriety standard referred to as conflict of interest emphasizes that any
conflict of interest that arises must be dealt with transparently and honestly to
ensure the evaluation and results are not compromised. And finally, the fiscal
responsibility propriety standard indicates that funding used during a
program evaluation must be justifiable, ethical, and prudent.
Evaluation research, like all research, must be conducted legally, ethically,
and with regard for all involved in the process.
© PlusONE/Shutterstock
Propriety: One of four standards of an effective evaluation.
Based on eight criteria that demonstrate that an evaluation was
conducted ethically, legally, and with regard for the welfare of
those involved, as well as of anyone else affected by the
evaluation.
Service orientation: One of the propriety standards of an
effective evaluation. Evaluation research should be used to assist
organizations and to address and serve the needs of the targeted
population affected by the program of interest.
Formal agreements: One of the propriety standards of an
effective evaluation that requires the presence of a formal, written
agreement involving all principal parties involved in an
evaluation.
Rights of human subjects: One of the propriety standards of an
effective evaluation. Requires that evaluation research be planned
and conducted using ethical research practices.
Human interactions: One of the propriety standards of an
effective evaluation. Requires that evaluation research be
conducted with respect to all individuals associated with the
program of interest.
Complete and fair assessment: One of the propriety standards of
an effective evaluation. Indicates that an evaluation must offer a
complete and fair assessment of the strengths and weaknesses of
the program.
Disclosure of findings: One of the propriety standards of an
effective evaluation. Indicates that evaluation findings must be
made available to those with a legal right to the findings or to
those affected by the evaluation.
Conflict of interest: One of the propriety standards of an
effective evaluation. Requires that any conflict of interest that
occurs must be dealt with transparently and honestly to ensure the
evaluation and results are not compromised.
Fiscal responsibility: One of the propriety standards of an
effective evaluation. Requires that the use of resources during the
course of the evaluation be sound, ethical, and prudent.
Accuracy
The fourth major evaluation research standard is accuracy. Accuracy refers
to the idea that an effective evaluation should produce correct findings. Twelve
standards of accuracy are used, including presenting the program in context,
using systematic procedures, using the appropriate methods, and producing
impartial reports with justified conclusions. Each is described next. The
accuracy standard focuses largely on the quality of the data used. This should
seem logical to you since this book has focused on the need to gather quality
data. Without quality data, a piece of research is of limited value. The first
accuracy criterion is program documentation, which requires the program
evaluation to include clear, complete, and accurate documentation of the
program being evaluated as well as of how the evaluation was conducted.
In addition, the
accuracy standard requires the evaluation to include a context analysis. This
means that the program must be fully understood in the context in which it
exists. The context must be examined in detail by the evaluator so that likely
influences affecting the program are known. Third, accuracy standards
require the evaluation to describe purposes and procedures. An accurate
evaluation is one that clearly articulates and outlines the purposes and
research methods (aka procedures) of the evaluation. In addition, the stated
purposes and research methods of the evaluation should be described in a
comprehensive fashion allowing for continual monitoring and assessment.
The accuracy standard requires that the defensible information sources used
in the evaluation be identified and described in detail. Information shared
about sources should allow others to assess these data sources. Part of
being accurate is ensuring that the information used in the evaluation is
valid. This accuracy standard requires that the research methods used
to gather information and data should ensure a valid interpretation for use in
the evaluation. Similarly, reliable information is almost a must for effective
evaluations. Reliable information requires that the research methods used to
gather data and information be designed to provide reliable information for
the evaluation. A part of science is the systematic collection of data and
information. It is no surprise then that an accuracy standard is the inclusion of
systematic information and data collection, analysis, and reporting during
an evaluation. As with all research, any issues encountered during the
information and data gathering should be addressed. The appropriate use of
quantitative and qualitative data is required to demonstrate accuracy. As
noted in the standards, analysis of quantitative information requires the use
of appropriate scientific analysis of quantitative data used to address the
research question being investigated. Similarly, the analysis of qualitative
information requires the use of appropriate scientific analysis of qualitative
data used to address the research question being considered. An additional
accuracy criterion is the presentation of justified conclusions. This accuracy
standard points to the need for all conclusions and findings to be clearly and
comprehensively justified based on the data and information analyzed.
Justification must allow stakeholders to assess how the conclusions were
reached. Impartial reporting is an accuracy standard that requires care be
taken to avoid distortion or bias in findings. The evaluator must strive to
report on the evaluation regardless of personal feelings. And finally,
metaevaluation is an accuracy standard that indicates that evaluation
research should be judged on these standards as a means of examining its
strengths and weaknesses by stakeholders. Melde summed up these
characteristics of an effective evaluation during his video interview
conducted for this book by noting that effective evaluations stay true to the
principles of good science.
Accuracy: One of four evaluation research standards that refers to
the notion that an effective evaluation will offer correct findings.
Accuracy is evident in presenting the program in context, using
systematic procedures, using the appropriate methods, and
producing impartial reports with justified conclusions.
Program documentation: Accuracy standard of an effective
evaluation that requires that an evaluation be documented clearly,
completely, and accurately.
Describe purposes and procedures: Standard of accuracy that
requires the evaluator to clearly articulate and outline the purposes
and procedures of the evaluation.
Context analysis: Accuracy standard of an effective evaluation
that requires a program to be examined in context so as to allow
deeper understanding of the program.
Analysis of quantitative information: Accuracy standard
requiring the use of appropriate scientific analysis of quantitative
data used to address the research question being investigated.
Analysis of qualitative information: Accuracy standard that
requires the use of appropriate scientific analysis of qualitative
data used to address the research question being considered.
Justified conclusions: Accuracy standard that indicates the need
for all conclusions and findings to be clearly and comprehensively
justified, allowing stakeholder assessment.
Defensible information sources: Standard of accuracy requiring
that the sources of information used to conduct a program
evaluation be identified and described in detail.
Valid information: Accuracy standard that requires that the
procedures used to gather information and data used in the
evaluation must ensure a valid interpretation.
Reliable information: Accuracy standard that requires that
procedures used to gather data and information must be designed
and used to provide reliable information for the evaluation.
Systematic information: Accuracy standard that focuses on the
need for systematic data and information collection, analysis, and
reporting during an evaluation.
Impartial reporting: Accuracy standard that calls for care to be
taken to avoid distortion or bias in findings.
Metaevaluation: Accuracy standard that refers to the need for
the evaluation to be judged on these standards to allow an
examination of its strengths and weaknesses by stakeholders.
Research in Action: Evaluating Faculty Teaching
Most university professors are reviewed annually with regard to
teaching. Teaching evaluations can include things such as in-class
student evaluations, external reviewer observations, and reviews
of syllabi. Students may be more familiar with evaluations found
on RateMyProfessors.com (RMP). This website rates the easiness,
helpfulness, clarity, overall quality, and hotness of professors. A
valid question about all of these approaches is whether they
measure teaching effectiveness or something else. Johnson and
Crews (2013) conducted research on RMP to determine what
faculty characteristics determined ratings on the website. This
research was guided by several research questions:
1. What faculty characteristics are correlated with RMP scores?
2. What faculty characteristics influence RMP scores after
accounting for the easiness and hotness of the faculty?
3. Does the number of raters influence RMP scores given the
small sample of self-selected individuals who post
evaluations on RMP?
To address these research questions, Johnson and Crews
randomly selected 10 states from which to draw the faculty
sample. All four-year colleges and universities were included.
Next all full-time criminology and criminal justice faculty in these
institutions who had been rated on RMP were included in the
sample (N = 466). Personal characteristic information about
faculty was taken from university websites and the Internet.
Information on 59 faculty members could not be found, and they
were excluded, leaving 407 criminal justice and criminology
faculty members in the sample.
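The multistage design the authors describe (a random sample of states, followed by a census of four-year institutions and of rated full-time faculty within them) can be sketched in code. All names and counts below are invented placeholders for illustration, not the study's actual sampling frame.

```python
import random

random.seed(42)  # reproducible illustration

# Stage 1: randomly select 10 states (placeholder frame of 50 names).
states = [f"State_{i:02d}" for i in range(1, 51)]
sampled_states = random.sample(states, k=10)

# Stage 2: include ALL four-year institutions in the sampled states
# (here, a made-up list of five institutions per state).
institutions = {s: [f"{s}_Univ_{j}" for j in range(1, 6)]
                for s in sampled_states}

# Stage 3: include every full-time criminology/criminal justice faculty
# member at those institutions who has been rated on RMP
# (membership simulated with a random flag).
faculty = [
    f"{u}_faculty_{k}"
    for unis in institutions.values()
    for u in unis
    for k in range(1, 11)          # ten faculty per school (made up)
    if random.random() < 0.85      # has an RMP rating?
]

print(len(sampled_states), len(faculty))
```

Note that only the first stage involves random selection; the later stages are censuses of the units inside the sampled states, which is what distinguishes this design from a simple random sample of faculty.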
Analyses included univariate descriptives and, ultimately, a series of
regressions that could isolate the effects of the independent
variables (e.g., race, sex, terminal degree, number of publications,
practitioner experience, years of teaching, and employment at a
Research I (RI) university) on the dependent variables. Five regressions were
run, one for each dependent variable: easiness, helpfulness,
clarity, overall quality, and hotness.
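As a rough illustration of what one of these regressions involves, the sketch below fits an ordinary least squares model with NumPy on simulated data. The variable names, coefficients, and data are invented to mirror the easiness model; they are not the Johnson and Crews data or results.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: prepend an intercept column and solve for
    the coefficients that minimize the sum of squared residuals."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, b_practitioner, b_RI, b_years]

rng = np.random.default_rng(0)
n = 407  # sample size reported in the study

# Simulated predictors (hypothetical values, not the actual data):
X = np.column_stack([
    rng.integers(0, 2, n),   # practitioner experience (0/1)
    rng.integers(0, 2, n),   # employed at an RI university (0/1)
    rng.integers(1, 31, n),  # years of teaching
])

# Simulate an "easiness" score that rises with the first two predictors
# and falls with teaching experience, mirroring the reported pattern.
easiness = (3.0 + 0.4 * X[:, 0] + 0.3 * X[:, 1]
            - 0.02 * X[:, 2] + rng.normal(0, 0.5, n))

coefs = ols(X, easiness)
print(coefs)  # close to [3.0, 0.4, 0.3, -0.02], up to sampling noise
```

In practice, each of the five dependent variables (easiness, helpfulness, clarity, overall quality, and hotness) would get its own model of this kind, and a statistics package would also report the standard errors and p values needed to judge which predictors matter.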
The findings showed that a faculty RMP easiness score was
predicted by practitioner experience, employment at a Research I
(RI) university, and teaching experience. Specifically,
practitioners and those at RI universities were more likely to be
rated easy. In contrast, more teaching experience predicted lower
ratings in easiness. The regression findings indicated that
publication rates and practitioner experience predicted higher
helpfulness ratings. In contrast, a doctorate and years of teaching
predicted lower scores on helpfulness. A third regression model
examined predictors of clarity and found that being White, male, a
practitioner, publication count, and number of raters predicted
clarity. A doctorate and teaching experience were related to lower
clarity scores. Higher ratings for overall quality were associated
with Whites, publication count, and practitioner experience.
Lower overall quality scores were associated with a doctorate,
years teaching, and being at an RI university. Finally, what
influences hotness scores? According to this research, none of the
independent variables did.
Nevertheless, the researchers continued their examination and
found that when they controlled for easiness and hotness ratings,
“faculty ascribed characteristics (such as race and sex) explained
the greatest proportion of variance in clarity, helpfulness, and
overall quality scores. Professional characteristics, such as years
of experience, publication rate, and possession of a doctorate were
less influential on Ratemyprofessors.com scores.”
What sort of policy implications come from this work? First,
teaching evaluations are heavily influenced by the easiness of the
instructor. Second, faculty ascribed characteristics matter. Third,
professional characteristics have only a small influence on RMP
ratings. And finally, the number of raters has very little influence
on RMP scores. From a policy perspective, these findings point to
the need to have teaching evaluated in a variety of ways given that
RMP reflects things other than teaching competence and places
some groups of faculty at a disadvantage.
Johnson, R. R., & Crews, A. D. (2013). My professor is hot!
Correlates of RateMyProfessors.com ratings for criminal justice
and criminology faculty members. American Journal of Criminal
Justice, 38, 639–656.
Common Pitfalls in Evaluation Research
Because evaluation research is conducted using any of the research
methodologies discussed in this text, the pitfalls associated with each
methodology are also relevant here. For example, if interviewing others, you
the researcher must establish rapport and ask clear questions. If you are using
survey research, care must be taken to avoid double-barreled or leading
questions. If you are using a focus group, the moderator must ensure all
participants are sharing views and no single attendee is shutting the process
down.
Evaluations as an Afterthought
One pitfall associated with evaluation research must be discussed: the
afterthought nature of evaluation. That is, evaluation research is too often
considered as an afterthought. Although this is not the fault of the researcher,
it is something with which many researchers are confronted. Often well-
meaning individuals implement a program and only later wonder whether the
program is working as they want it to or whether the program is providing the
benefits hoped for. At times, no logic model was developed, making the
identification of inputs, outputs, and outcomes challenging to discern. In fact,
a program may have been launched with no explicit consideration about what
success would look like or how the plan was to be delivered.
Dodge finds that waiting until after data are collected to involve a researcher
is a common error made by organizations. Dodge shared with us that not
including a researcher in the collection of the data (and framing of the
evaluation before getting started) often means the quality of the data are
problematic. And this means the evaluation is much more challenging than it
should be. Alternatively, Dodge finds that many organizations articulate
unclear and often unmeasurable goals for the evaluation. Other times,
organizations have individuals with clashing ideas about what the goals are.
By working with a researcher from the beginning, Dodge noted, feasible,
measurable, and useful goals can be agreed to before gathering data.
Dodge also finds that it is too often the case that those implementing a
program have collected no data or other information about it. They may not
know how many individuals were served. They may not know whether those
being served are benefiting. All of these issues point to the value of bringing
in a researcher from the beginning.
Zaykowski was brought onto an evaluation at the last minute. The client
knew this evaluation was required but failed to act on the evaluation until
time was a problem. When she arrived on the project, she found that needed
data were not gathered. The grant was supposed to gather data on 80
respondents, but the client had only gathered information on 20. There was
no person clearly acting as the leader of the project. The principal investigator
(PI) of the grant and evaluation was running the program of interest, although
she only worked 10 hours a week. This created a perfect storm of evaluation
disaster. The problems continued once Zaykowski began working with them
as getting access to additional subjects proved difficult. In the end,
Zaykowski noted to us that the project was completed but that this episode
represents the all-too-common problem that not everyone knows how to
conduct an evaluation. Bring in an expert, and bring him or her in early.
Waiting until a program has been launched to consider an evaluation presents
several obstacles. For example, it may preclude the use of experimental
methods for testing program effects, forcing evaluators to use other less-
informative approaches. In addition, a lack of early planning tends to
make any evaluation that is ultimately conducted take far more time, and this
reduces the chances the findings will be used. When a program is being
planned is the time to consider evaluation of the program. Failure to do so
increases the odds that stakeholders do not buy in to the work done by the
evaluator. This in turn increases the risk that the report, findings, and
recommendations of the evaluator are disregarded.
Political Context
A second and related pitfall results from the fact that “evaluation is a rational
enterprise that takes place in a political context” (Weiss, 1993, p. 94). Failure
to navigate the intersection of the worlds of politics and evaluation will result
in major challenges for evaluators. Consider that programs and policies are
the result of political decisions. As you are likely aware from your everyday
life, few political decisions are arrived at without great and often heated
debate, arguments, and challenges. Thus, before a program or policy is
launched, it has built-in stakeholders who can be at odds. Recognizing this
built-in tension is imperative for successful evaluation. Recognizing the
tangled worlds of evaluation and politics, and that an evaluator must work
with all stakeholders, is imperative if you want to be successful in this role.
Trust
A third pitfall is to fail to develop a solid and trusting relationship with
stakeholders. Without the buy-in of stakeholders, the evaluation research will
be far more difficult to accomplish. Dodge, who has engaged in a great deal
of evaluation research, has seen firsthand the challenges that a lack of trust
brings, and why it is so important to develop trust early. Dodge finds this
especially the case in law enforcement agencies. She noted that “when
dealing with law enforcement many of the officers are skeptical that an
evaluation can be the impetus for change.” Similarly, Dodge finds that trust
can be an issue between parties beyond the researcher. She has dealt with
situations in which a “city manager or police chief requests an evaluation but
then decides not to release the report. Obviously, this happens most when the
evaluation shows weakness in the leadership. I always encourage the
leadership to make the report available either internally and/or externally.
This helps encourage transparency for the department and city.” As noted
earlier in the chapter, Melde views trust as an essential requirement
between the researcher and those in the organization. Those in the
organization must trust in the importance of the research design, or
things can go sideways. Similarly, Zaykowski finds that “the key challenge to
evaluation research is that it always involves people outside of academia who
don’t understand academics. As researchers we want to follow the textbook
method about how to execute our research and be objective, but from a
program’s standpoint, they want their program to look good.” Zaykowski
believes it is good practice to require an MOU (memorandum of
understanding) that outlines expectations and states up front that the results
may not look like what the client wants them to. Although an MOU doesn’t
solve all problems, Zaykowski finds it makes these interactions easier.
Confidentiality
Even though maintaining confidentiality when promised is an ethical
consideration for all types of research, it can be a bit trickier in evaluation
research. Imagine you are conducting interviews of key stakeholders. They
are providing excellent, although negative, information about the program
that will be useful in conducting a fair and comprehensive evaluation. There
are not many key stakeholders, however, so including this information in the
evaluation report without revealing the source of the information may be
difficult if not impossible. Alternatively, imagine that as an evaluator you are
unable to access certain information due to a challenging gatekeeper. As a
result, the evaluator may not be able to assess that information. If there is
only one gatekeeper, then maintaining the confidentiality of this individual
may be impossible. Dodge echoed this sentiment noting that at times,
maintaining confidentiality is nearly impossible. Regardless of the individual
involved, if promised confidentiality, the evaluator must strive to honor it.
Politics
Politics and evaluation research are members of an uneasy marriage. In the
middle of this is the evaluator who must act ethically. This can be
challenging when the evaluator must work with multiple stakeholders who
are at odds. To do so, the evaluator must be guided by the more general
ethical requirements associated with research but also by the standards of
effective evaluation. In particular, standards of propriety must be maintained.
First, the evaluator must be cognizant that his or her role is to serve
stakeholders and the targeted population affected by the program. In addition,
regardless of any political pressures the evaluator might face, he or she must
act with respect to all individuals associated with the program of interest. No
individual should be threatened or harmed. Third, regardless of the political
pressures faced, the evaluator must provide a comprehensive and fair
assessment of the strengths and weaknesses of the program that is
disseminated to individuals affected by the evaluation. As Zaykowski stated
to us during her video interview, a stakeholder may “want to push back on
certain types of findings. It is your role as a researcher to provide fair and
objective findings including strengths and weaknesses of the topic of
interest.” Finally, any conflict of interest that arises must be dealt with
transparently and honestly to ensure the evaluation and results are not
compromised.
Recognizing this built-in tension between politics and evaluation is
imperative to maximize success. The evaluator must remember that
evaluation research findings inform political decision makers. In addition,
findings about the program are assessed in a larger context of many programs
and policies competing for scarce resources and attention. What the
evaluation concludes matters. It matters to the program, to recipients of the
program, and to those administering the program. Nevertheless, the larger
context also matters. Ten years after Weiss’s (1993) work was published,
Rossi, Lipsey, and Freeman (2003, p. 15) summed up the current and future
state of the worlds of evaluation and politics: “First, restraints on resources will
continue to require funders to choose the social problem areas on which to
concentrate resources and the programs that should be given priority. Second,
intensive scrutiny of existing programs will continue because of the pressure
to curtail or dismantle those that do not demonstrate that they are effective
and efficient.”
Losing Objectivity and Failure to Pull the Plug
Researchers must be open to the notion that the program they are evaluating
is not working or, even worse, is causing unintentional harm. Cuevas offers
the example of colleagues who have conducted evaluation research on
violence prevention interventions with youth. He has seen situations in which
some evaluators did not keep an open mind and could not accept that the
program, the interventions, was failing and making things worse. People,
including researchers, can lose sight of the fact that the program they are
evaluating is not effective or beneficial. Still, people have a hard time
letting go, as they are
invested in particular programs. Researchers must be attentive to the ethics
around effectiveness of programs, and they must be willing to pull the plug
on programs that are doing harm. Recognizing all of these pitfalls in the
tangled worlds of evaluation and politics is critical if you want to be
successful.
Evaluation Research Expert—Michael Shively, PhD
Michael Shively, PhD, is a senior associate with Abt Associates where he
conducts criminal justice, criminology, and victimology program and policy
evaluations. In other words, Shively uses the skills you are reading about in
this chapter to study policies and programs dealing with human trafficking
and prostitution, hate crime, law enforcement, drug trafficking, intimate
partner violence, and contextual influences on crime. Shively never thought
he would be a researcher; instead, he believed he would work on a boat.
Growing up in the Seattle area, he spent a lot of time with his uncle, who was
a captain of a tugboat, and on the water in general. When Shively graduated
from high school, the economy was in shambles and he could not get a job on
any boat. Instead, he went to college. He didn’t do well until he found
something that really interested him: the social sciences, especially when they
focused on crime and justice issues. In those classes, he was drawn to
research methods because they allowed an exploration of what was true. He
acknowledges that he annoyed some professors when he asked for evidence
supporting what they were saying. This is the essence of what research
methods offer.
Shively ultimately earned a BA and an MA from Western Washington
University in psychology and sociology, respectively. His sociology PhD was
awarded from the University of Massachusetts in Amherst. After graduating,
he served as deputy director of research for the Massachusetts Department of
Correction and as an assistant professor at Northeastern University’s College
of Criminal Justice. In addition, he served on the Massachusetts Governor’s
Commission on Criminal Justice Innovation and on the Massachusetts
Criminal History Systems Board; facilitated the Cambridge Neighborhood
Safety Task Force; was a founding member of the Regional Human
Trafficking Workgroup at the Kennedy School of Government, Harvard
University; and was a member of the Massachusetts Attorney General’s
Human Trafficking Task Force, Subcommittee on Demand.
Courtesy of Michael Shively
At Abt, Shively has worked on many interesting projects including research
on human trafficking. This is an area that Shively notes where “the ratio of
interest, mass media ink, and celebrity advocacy to actual factual information
is out of whack.” Much remains to be learned about trafficking. One topic
about which very little is known is the actual traffickers. That was the focus
of Shively’s recent work, where he used the presentence reports of people
convicted and serving time in federal prison for trafficking. Presentence
reports (PSRs) are full of valuable data including (but not limited to) arrest
records, aggravated circumstances (e.g., use of weapons), mitigating
circumstances (e.g., unknowingly assisted in trafficking), police reports, trial
transcripts, interviews with offenders, and interviews with family. In total,
500 PSRs were coded and analyzed. Shively and his team provided insight
into the reasons people traffic, whether it is organized or haphazard,
what ties an organization together, and whether trafficking represents
organizations diversifying or introducing new activities. An example of one
enterprise represented the diversification of an existing organized drug
smuggling organization. This group recognized that it could add women to
cargo on ships headed to the United States to increase profitability. If for
some reason the traffickers have to ditch the drugs, they can still recoup costs
by forcing these women to work in brothels and strip clubs.
Of the 500 cases reviewed, Shively then randomly selected 50 incarcerated
traffickers for face-to-face interviews. Shively asked the convicts things
like, “How did you get started in this? Were you pressured? Did someone
suggest it to you? Did you develop this scheme yourself? Who were the
people who got you in? How did you perceive risk from law enforcement?
What did you worry about? What were the effective and ineffective things
police did?” The results of this research are incredibly useful. For law
enforcement, they now have an improved understanding about what they do
that is effective as well as ineffective. They better understand how criminals
evade justice.
Shively offers both general and specific advice to students. First, learn as
much as you can because you have no idea what is in your future. Do not turn
down any training. Never say that does not apply to you. You never know
when you will need that material or skill. Second, learn as much as you can
about the career you think you are interested in, and don’t believe what you
see about it on TV or in the media. Some careers that look exciting on TV
involve crushing boredom in reality. Keep your eyes and options open, and
investigate a bit
before you commit to a single career path. Third, if you are really passionate
about a career, do not let others dissuade you. Investigate it and what it takes
to be successful. Shively is a firm believer that “there is always room for
the best in any career.”
In terms of specific skills, Shively finds several characteristics and skills that
serve students well when they apply for a position like his. A successful
candidate is personable, organized, deadline oriented, flexible, and smart.
In addition, a researcher or evaluator must have a foundational
understanding of research methods. Shively
finds these skills cluster together in their best project managers. The skills he
finds critical are research methods skills and quantitative analysis skills.
Evaluators and researchers must understand sampling, validity,
reliability, and basic statistics to do well in several roles. Someone looking
for work must be able to communicate the basics of research to all types of
audiences. What does not work, Shively finds, are individuals who are only
content experts: they may know everything about trafficking, but without
underlying research methods skills, that content knowledge is not useful.
Chapter Wrap-Up
This chapter presents foundational information on evaluation research.
Although not all of our case study authors engage in evaluation research,
you should be able to see how their research expertise would be useful in an
evaluation. A great example is Brunson, who, to date, has not conducted
evaluation research. Yet, Brunson’s strong skill set in gathering qualitative
data would make him a valuable evaluation research team member. In this
chapter, we described why anyone would want to conduct evaluation
research. The book began by describing four major purposes for research:
exploration, description, explanation, and evaluation. This chapter focuses on
the last of those purposes. Evaluation research is useful for understanding
whether a program is needed, whether a program is working, whether the
program has intended or unintended consequences, whether the process is
efficient or cost effective, and even whether the program should be
terminated. We also described what evaluation
research is, several guiding principles of evaluation research, the seven steps
of evaluation research, and how this type of research is connected with the
policy process.
A key point in this chapter is that there is no single type of evaluation
research but many. Two primary types of evaluation covered in this chapter
are formative evaluations and summative evaluations. Formative evaluations,
including needs assessments and process evaluations, occur during the
formative period of programs and serve as a means to improve programs. In
contrast, summative evaluations, including impact evaluations and outcomes
evaluations, tend to occur after a program has been implemented. Summative
evaluations offer a broader assessment focused on the target population of the
program and the success at reaching the goals of the program. Furthermore,
no single type of evaluation research is better than another. Each type serves
a different, but equally important, purpose.
An overarching theme of this chapter is that evaluation research uses the
research approaches focused on in this text such as observations, content
analysis, secondary data analysis, and experimental research. Yet, it differs
from the material presented in earlier chapters in that it is applied versus
basic research. The difference in applied versus basic research is clear when
you consider the type of knowledge created, the origination of the research
questions, the comparative and judgmental nature of the work, the challenges
in working with stakeholders in what can be a noncooperative environment,
and how findings are disseminated.
The four standards of effective evaluations were described: utility, feasibility,
propriety, and accuracy. In addition, the multiple criteria associated with each
of these standards were offered. Evaluation research can be challenging for
reasons that other research purposes do not encounter. First, in many cases,
an evaluator is called in after the fact. In other words, a program has been
launched with little consideration of the purpose of the program and how to
assess whether the purpose is being met. In addition, evaluation research
must navigate the tricky political context in which it exists. You cannot
separate program evaluation from politics, which means you are often dealing
with multiple stakeholders who are at odds with each other. Understanding
the political nature of this work and that it can present challenges is
necessary. Because evaluation research uses all methodologies, it is subject to
the ethical concerns that all other research is. In two areas, however,
additional focus on ethics is required. First, confidentiality can be challenging in
evaluation research. It may be difficult or impossible to disguise the source of
information. In addition, the political pressures that can exert themselves on
evaluators must be constantly checked. The evaluator must focus on his or
her service-oriented goal and on the standards of propriety.
An evaluation research expert, Michael Shively, PhD, described how he uses
research methods skills to conduct real, meaningful evaluation research on
topics such as human trafficking. Shively feels that some of the most
important questions anyone can ask in any career or role are “Does it make
sense to do this?” “Is this designed in a defensible way?” “Is this working as
it was intended?” “Does this help?” and “Is it producing results?” These
questions fuel evaluation research. People hope that what they do matters. If
you are addressing these types of questions, you are making a difference. As
the human trafficking case indicates, this type of work is real and not esoteric
stuff. It matters.
Several of our case studies included evaluation research. Melde and
colleagues’ research used data that were collected as a result of a process and
outcome evaluation (Melde, Taylor, & Esbensen, 2009). Although their
specific research on gang members’ fear of crime and perceived risk of being
violently victimized did not involve an evaluation, the data came from an
evaluative project. Santos and Santos’s work was in part a process
evaluation designed to ascertain whether intensive policing focused on
targeted offenders would influence crime rates in hot spots (Santos & Santos,
2016). Their research involved a mixed-methods approach that is highly
desirable according to Zaykowski and the larger literature. Santos and
Santos’s work is extremely applied, and the outcome of this research directly
informs the way the Port St. Lucie, Florida, Police Department approaches
policing. Table 11.3 presents some characteristics of each case study with
information about their evaluation ties.
The next chapter in this text focuses on analysis. At this point, you have
learned about the four purposes of research (explore, describe, explain, and
evaluate), and the book has covered many methodological approaches
(qualitative research, secondary data analysis, survey research, experimental
research, etc.) used to gather data to accomplish those purposes. Next, you
need to understand how to analyze the data gathered using those
methodological approaches. How do you use the data and information
gathered to arrive at findings and conclusions? These are some of the questions
addressed in Chapter 12.
Applied Assignments
The materials presented in this chapter can be used in applied
ways. This box presents several assignments to help in
demonstrating the value of this material by engaging in
assignments related to it.
1. Homework Applied Assignment:
Designing an Evaluation of a Program You
Are Engaged In
Think of a program you are engaged in. It could be the academic
program you are majoring in, a social program (e.g., a fraternity,
sorority, extramural club, or academic club), or a somewhat formal
social club. Whichever group you focus on should have a mission
statement associated with it. Now, using that mission statement,
design a summative evaluation to see whether the group is
meeting its stated goals. Be sure to clearly articulate your research
question (given the mission statement and goals). Next identify
your research methodology. What data will you gather? How will
you gather it? How will you know based on those data the group
is achieving its goals and mission? You are welcome to use
multiple research approaches in your evaluation. Finally, offer a
conclusion that comes from those data about how well your group
is doing in meeting its goals. Are there negative consequences
from the group? Should the group continue or be terminated? Be
prepared to share your paper and findings with the class.
2. Group Work in Class Applied
Assignment: Using Secondary Data
The university has hired your group to assess your class. Using
the syllabus (where you should have an objective of the class
statement), work with your team to devise a plan to assess your
class. Keep in mind that you are only offering an evaluation
proposal because not all of the data will be available until the end of
the semester. For now, using the objectives outlined on your syllabus,
design a summative evaluation to see whether the class is
achieving its stated objectives. Be sure to clearly articulate your
research question (given the objectives listed). Next identify your
proposed research methodology. What data will you gather? How
will you gather it? How will you know based on those data the
class objectives are being met? You are welcome—and
encouraged—to use multiple research approaches in your
evaluation. What findings would you expect to see if the class is
being successful? What findings would you expect if the class
were not meeting its objectives? Be prepared to share your paper
and findings with the class.
3. Internet Applied Assignment: Reviewing
an Evaluation—No-Drop Policies
Using the Internet, access Smith and Davis’s (2004) An
Evaluation of Efforts to Implement No-Drop Policies: Two
Central Values in Conflict found at
https://www.ncjrs.gov/pdffiles1/nij/199719.pdf. In a paper,
describe the purpose of this evaluation and the background that
justifies doing this research. Next, identify the types of evaluation
the researchers conducted. Describe the methodology used to
conduct this evaluation, including where it happened, types of
data gathered, and types of research done. Note the advantages
and disadvantages of each approach. Next, describe the findings
presented by the researchers. Do you agree or disagree with their
conclusions? Should no-drop policies be implemented
nationwide, be dropped nationwide, or some combination? Why or
why not? What next steps for research do you see are needed? Be
prepared to share your findings with the class.
Key Words and Concepts
Accuracy 362
Analysis of qualitative information 364
Analysis of quantitative information 364
Analyzing the data 349
Applied research 356
Basic research 356
Blind review 360
Communicate the findings and recommendations 350
Complete and fair assessment 362
Conflict of interest 362
Context analysis 364
Cost-effectiveness 361
Defensible information sources 364
Describe purposes and procedures 364
Design of the methodology 349
Develop findings and conclusions 349
Develop the research question 348
Disclosure of findings 362
Evaluation impact 361
Evaluator credibility 361
Feasibility 361
Fiscal responsibility 362
Formal agreements 362
Formative evaluations 351
Gap analysis 353
Gather data/evidence 349
Human interactions 362
Identify and engage stakeholders 348
Impact evaluations 356
Impartial reporting 364
Information scope and selection 361
Inputs 353
Justified conclusions 364
Justify these recommendations 350
Logic models 353
Long-term outcomes 355
Metaevaluation 364
Needs assessments 352
Outcome evaluations 355
Outputs 353
Policy 350
Policy process 350
Political viability 361
Practical procedures 361
Process evaluation 353
Program 350
Program documentation 364
Propriety 362
Reliable information 364
Report clarity 361
Report timeliness and dissemination 361
Rights of human subjects 362
Service orientation 362
Short-term outcomes 355
Stakeholder identification 361
Stakeholders 348
Summative evaluation 351
Systematic information 364
Utility 361
Valid information 364
Values identification 361
Key Points
Evaluation research is an applied approach to research involving the
systematic assessment of the need for, implementation of, or output of a
program based on objective criteria. When using the data gathered in
evaluation research, you can recommend improvements, enhancements,
expansions, or the termination of a program. The assessment of a
program can be conducted by using any of the research approaches
described in this text, including observations, interviews, content
analysis, survey research, secondary data analysis, and so on.
Evaluation research involves eight steps: identifying and engaging
stakeholders, developing the research question, designing the
methodology, gathering data/evidence, analyzing the data, developing
findings and conclusions, justifying the recommendations, and
communicating findings and recommendations.
Evaluation research differs from basic research in many ways including
the purpose of the research, origination of the research questions,
comparative and judgmental nature of the work, working with
stakeholders, a challenging environment, and the ways findings are
disseminated.
Stakeholders play an important role in evaluation research and can be
the key to success or the key to failure. Stakeholders are any person or
organization who has a direct interest in the program being evaluated
and can include funders, program administrators, the community in
which the program is administered, and clients of the program. Knowing
who the stakeholders are, and developing and maintaining a good
relationship with stakeholders, is critical during the planning stages, the
gathering of data, and presentation and buy-in of findings. Without the
support of stakeholders, evaluation research cannot be successful.
There are two primary types of evaluations: formative and summative.
Formative evaluations are conducted in the earliest stages of the policy
process while a program is being developed with the purpose of
ensuring the program is feasible prior to full implementation. Formative
evaluations can be thought of as “trouble-shooting” evaluations in that
what is discovered from the evaluation is “fed back” into the program to
strengthen or enhance it. Two types of formative evaluations considered
in this text are needs assessments and process evaluations. Summative
evaluations are used to ascertain whether a program should be funded,
continued, or terminated. Two types of summative evaluations are
discussed in this text: outcome evaluations and impact evaluations.
Logic models are graphic depictions that illustrate how a program is
intended to work in the ideal. Logic models offer three benefits: (1) they
point to key performance measurement points useful for conducting an
evaluation, (2) they assist with program design and make improving
existing programs easier by highlighting problematic program activities,
and (3) they help identify program expectations.
An evaluation is effective when it meets four standards: utility,
feasibility, propriety, and accuracy.
When conducting evaluation research, you must be cognizant of the
real-life consequences of your work. Should a program be changed or
terminated, individuals lose access to programs, and others may lose
jobs.
Review Questions
1. What distinguishes evaluation research from other types of
research?
2. What types of evaluations are there? When are each best used?
3. Who are stakeholders, and why are they important during
evaluation research?
4. What value is a logic model? What can it tell someone?
5. What is a gap analysis? How does a gap analysis relate to
evaluation research?
6. What does a formative evaluation do? What types of formative
evaluations were covered here, and how do they differ from the
others?
7. What does a summative evaluation do? What types of summative
evaluations were covered here, and how do they differ from the
others?
8. What are the four standards of an effective evaluation? What are
the standards of each? Why are these important to have?
9. What ethical considerations are important to focus on when
conducting evaluation research? Why?
10. What common pitfalls are found when conducting evaluation
research?
How Should Data Be Analyzed?
In the previous section, we said that we conduct data analysis so that we can
identify patterns in our data, make claims about research hypotheses, and
answer our research questions. Although this explanation addresses the
“why” question about data analysis, it does not address the “how” question.
There are many ways to conduct data analysis, and this section of the chapter
delves deeper into these approaches. The first thing to keep in mind, and it is
worth repeating, is that the purpose of the data analysis is to answer the
research question. For example, if your research goal was to describe or
explore something, then you need to describe the data you gathered.
Describing data allows you to offer findings such as “The average age of my
sample is 33 years”; “A theme emerging in the data was abuse of power. This
theme was identified by 80% of the sample”; or “The median number of
years served in prison was eight.” The goal with descriptive analysis is to
make statements about what is typical in your data.
The next characteristic of your data that drives the type of analysis performed
is the nature of the data. Are your data numeric—quantitative? Are they non-
numeric—qualitative? Do your data include geospatial information? These
are important considerations when selecting the specific approach to data
analysis taken. In the next section, we address data analytic techniques
specific to quantitative data. After that, we turn to a very brief discussion of
approaches used for qualitative data given the treatment of this topic in
Chapter 6. Next we offer information on analysis suitable for other types of
data. Keep in mind that the purpose of this chapter is to provide your first
look at data analytic techniques. To fully answer any research question, you
will need to take a statistics course (for quantitative data) or a qualitative
research methods course (for qualitative data). Do not be intimidated by the
prospect of either of those classes. Once you understand the purpose of these
courses—to provide you with tools to answer research questions—you have
the framework to be successful.
Analysis of Quantitative Data
All quantitative analysis and findings should begin with a description of the
data, regardless of the purpose of the research. This allows you and readers of
your research to assess the nature of your sample. For instance, Cuevas and
colleagues’ research on teen dating violence among Latinos provided this
information about the parents of youth taking the survey (Sabina, Cuevas, &
Cotignola-Pickens, 2016; see Table 12.1).
Distributions
One of the simplest ways to summarize data for a single variable is using a
frequency distribution, which is a table that displays the number of times a
particular value or category is observed in the data. An example involves
asking 15 university students whether they had ever received a speeding
ticket while driving. Response categories provided were “Never,” “Once,”
and “More than once.” Findings from these data may be presented using a
frequency distribution like the one in Table 12.2.
Frequency distribution: Used in descriptive statistics, it is a
table that displays the number of times a particular value or
category is observed in the data for a particular variable.
As a reminder, “n” refers to sample size and % refers to percentage. In this
example, about half (47%) of students have received more than one speeding
ticket, and only one in five (20%) have never received a speeding ticket. This
summary makes understanding the data easy. In addition to frequency
distributions, researchers often describe other aspects of their data, including
what the typical case in a variable’s distribution looks like and what all the
other cases in a variable’s distributions look like, relative to that “typical”
case.
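For readers who want to see the mechanics outside a statistics package, the frequency distribution above can be sketched in Python. (Python is used here only for illustration, not as part of the text; the ticket counts are hypothetical values consistent with the example.)

```python
from collections import Counter

# Hypothetical ticket counts for the 15 surveyed students
tickets = [0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 6]

def frequency_distribution(data):
    """Return each response category with its count (n) and percentage (%)."""
    labels = ["Never" if x == 0 else "Once" if x == 1 else "More than once"
              for x in data]
    n = len(labels)
    return {category: (count, round(100 * count / n, 1))
            for category, count in Counter(labels).items()}

print(frequency_distribution(tickets))
# {'Never': (3, 20.0), 'Once': (5, 33.3), 'More than once': (7, 46.7)}
```

Note that the result matches the summary in the text: about half (47%) of the students received more than one ticket, and one in five (20%) never received one.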
Measures of Central Tendency
This is not a statistics textbook, so we do not delve deeply into a conversation
about statistics. Nevertheless, data analysis is a key part of the scientific
process, making it worthwhile to cover some basics. Measures of central
tendency are a great place to start. Measures of central tendency are a group
of descriptive statistics that numerically describe the “typical” case in data
and include the mean, the median, and the mode.
Measures of central tendency: Group of summary statistics used
to numerically describe the “typical” case in a group of
observations.
Mean
The arithmetic mean (or “mean”) is the most commonly used measure of
central tendency. The mean is a number that describes the typical case in your
data. The symbol for a mean is an x with a line or bar over it, and many refer
to it as an x-bar. To calculate a mean, you simply add all the observed scores
(scores are identified using the symbol x) in a data distribution (i.e., ∑x
indicates the need to add the scores) and divide this value by the total number
of observations (n). The formula used to calculate a mean is

x̄ = ∑x / n
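To make the formula concrete, here is the same calculation in Python (a convenience for illustration, not a tool the text uses), applied to the chapter's speeding-ticket data:

```python
# x-bar = (sum of all scores) / (number of observations)
tickets = [0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 6]
mean = sum(tickets) / len(tickets)  # 27 / 15
print(mean)  # 1.8
```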
The final step is to determine which value in the data distribution corresponds
to the median position. Returning to our hypothetical university student
speeding ticket example:
Number of tickets = 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 6
This distribution has already been rank-ordered from the fewest to the largest
number of speeding tickets. Next, find the median position—Mp = 8 (i.e., [15
+ 1] / 2 = 8). Counting 8 scores from the left, we see that the 8th score
equals 1. The value that corresponds to the median position for this
distribution is 1. When the distribution has an even number of
observations, the median position (Mp) will not be a whole number. For
example, if we had 16 instead of 15 students in our speeding ticket study, the
median position would have been 8.5 instead of 8. In these situations, the
median is determined by averaging the values above and below the median
position. If the median position was 8.5, we would average the values in the
8th and 9th position in the distribution and use this value to represent the
distribution’s median (Mdn). If we had one more observation in our
speeding example (a 16th student with 6 tickets), the new distribution would look like this:
Number of tickets = 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 6, 6
In this case, the median position (Mp) = 8.5 (i.e., [16 + 1] / 2 = 8.5), and the
values in the 8th and 9th position of the distribution are 1 and 2, respectively.
The median is the average of the two values = 1.5 (i.e., [1 + 2] / 2 = 1.5). An
advantage of a median is that it is easily computed and comprehended.
Medians are often used instead of means to describe the “typical” case in a
data distribution when the data have outliers because medians are not
impacted by them like means are. Medians can be used with ratio, interval, or
ordinal data, but they are not appropriate for nominal-level data. The biggest
shortcoming of medians, however, is that they are more difficult to use in
more advanced mathematical calculations.
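The median procedure described above (rank-order the scores, find Mp = [n + 1] / 2, and average the two middle values when n is even) can be sketched in Python:

```python
def median(data):
    """Median via the rank-order method: Mp = (n + 1) / 2."""
    ordered = sorted(data)
    n = len(ordered)
    mp = (n + 1) / 2                  # median position
    if mp == int(mp):                 # odd n: take the score at Mp
        return ordered[int(mp) - 1]
    lower = ordered[int(mp) - 1]      # score just below Mp
    upper = ordered[int(mp)]          # score just above Mp
    return (lower + upper) / 2        # average the two middle scores

tickets = [0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 6]
print(median(tickets))        # 1   (15 observations, Mp = 8)
print(median(tickets + [6]))  # 1.5 (16 observations, Mp = 8.5)
```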
Mode
The third measure of central tendency is the mode, which describes the value
that appears most often in the data. Some distributions may not have a mode
because each value occurs only once, whereas other distributions may have
multiple modes. Going back to the original speeding tickets example, the
mode of the data is 1 because 1 is observed five times and no other value is
observed that often:
Number of tickets = 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 6
Mode: Measure of central tendency used as a summary statistic. It
represents the numeric value observed more than any other in a
data distribution.
Many simply use the word mode when describing the modal category, yet
others symbolize it as Mo. In our speeding ticket example, Mo = 1. An
advantage of mode is that it can be used to describe nominal data. Another
advantage to using the mode is that it is the easiest measure of central tendency
to calculate. The primary disadvantage, however, is that the mode of a data
distribution does little more than offer a description of data. It cannot be used
in statistical analysis because it is not algebraically defined (i.e., it is difficult
to use basic arithmetic operations like addition, subtraction, division, etc. on
modal values). Not surprisingly, the mode is the least often used measure of
central tendency.
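In Python (again only as an illustration), the mode can be read directly off a frequency count:

```python
from collections import Counter

tickets = [0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 6]
counts = Counter(tickets)
mode, frequency = counts.most_common(1)[0]  # value seen most often
print(mode, frequency)  # 1 5 -> Mo = 1, observed five times
```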
Which measure of central tendency should you use when? Here are some
general rules for selecting the most appropriate measure of central tendency
to describe a data distribution:
Use the mode to describe data gathered at any level of measurement.
Use the median to describe ordinal-, interval-, or ratio-level data that
have a skewed data distribution (i.e., distributions with extreme
outliers).
Use the mean to describe interval- and ratio-level data that have a
relatively symmetrical distribution.
Measures of Dispersion
Measures of dispersion describe where the other cases in a data distribution
are relative to the “typical” case. Measures of dispersion represent how much
variation in observed scores is found in the data. Measures of dispersion are
important because two distributions of data can have identical means but
wildly different degrees of variation. Four common measures of dispersion
are the range, interquartile range, variance, and standard deviation. Each of
these is described as follows.
Measures of dispersion: Group of summary statistics used to
numerically describe the variability in observed scores among a
group of observations.
Figure 12.1 Boxplots Show the Spread of Observations for Three Groups
Range
The range is the most basic measure of dispersion for ordinal-, interval-, and
ratio-level variables, and it describes the difference between the largest and
the smallest values in a data distribution. The range is calculated by
subtracting the smallest value from the largest (Figure 12.1):
Range = xmax – xmin,
where xmax is the largest score in the distribution and xmin is the smallest.
Going back to our speeding tickets example, if the distribution of speeding
tickets for our 15 students was as follows:
Number of tickets = 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 6
then the range is 6 (i.e., 6 – 0 = 6). Although the primary advantage to using
the range to report the variability of a data distribution is that it is easy to
calculate, the biggest disadvantage to using it is that it is extremely sensitive
to outliers and it does not use all the observations in a distribution to derive
its value.
Range: Measure of dispersion used as a summary statistic. It
reflects the difference between the largest and smallest values in a
data distribution.
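The range calculation is a one-liner; a Python sketch of the formula above, using the speeding-ticket data:

```python
tickets = [0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 6]
data_range = max(tickets) - min(tickets)  # x_max - x_min = 6 - 0
print(data_range)  # 6
```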
Interquartile Range
A second measure of dispersion is the interquartile range (IQR), which
indicates the degree of variability based on the difference between the two
halves (i.e., the two middle quartiles) of the distribution. The IQR can be used
with interval- or ratio-level data and is calculated by first determining the
median of a data distribution. Next, the medians of both the lower half and
the upper half of scores are identified. By identifying these three values in a
distribution, the distribution has been split into four groups (i.e., quartiles).
The final step in determining the IQR is to subtract the lower quartile’s (i.e.,
1st quartile) median value from the upper quartile’s (i.e., 3rd quartile) median
value. The result indicates the range within which the middle 50% of scores
(i.e., those between the 25th and 75th percentiles) fall. The advantage of an interquartile
range is that it is not sensitive to outliers. The IQR’s primary disadvantage is
that it is difficult to use in mathematical equations, so its utility is limited.
Interquartile range (IQR): Measure of dispersion used as a
summary statistic. It indicates how much variability there is in a
data distribution based on the difference between the two halves
(i.e., two middle quartiles) of the distribution.
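Because several conventions exist for locating quartiles, a sketch is worth spelling out. The Python version below follows the description above (median of the lower half and median of the upper half) and excludes the overall median from each half when n is odd; other conventions give slightly different values.

```python
def median(values):
    """Median of a list of numbers."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    return ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2

def iqr(data):
    """IQR = Q3 - Q1, where Q1 and Q3 are the medians of the two halves."""
    ordered = sorted(data)
    n = len(ordered)
    lower_half = ordered[: n // 2]        # scores below the overall median
    upper_half = ordered[(n + 1) // 2 :]  # scores above the overall median
    return median(upper_half) - median(lower_half)

tickets = [0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 6]
print(iqr(tickets))  # Q3 - Q1 = 3 - 1 = 2
```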
Variance
A commonly used measure of dispersion is the variance, which indicates
how much each score differs, or deviates, from the average score in a data
distribution. The variance, which is symbolized as s2, is used when data are
measured at the interval or the ratio level. The variance is calculated by
1. Determining the mean (x̄) of the data distribution
2. Subtracting the distribution’s mean (x̄) from each individual score in the
distribution (xi), which yields a deviation score
3. Squaring each deviation score
4. Summing the squared deviation scores
5. Dividing that sum by the number of observations minus one (n − 1)
Revisiting our speeding ticket example where the variance was 2.74, the
standard deviation of the distribution (s) is √2.74, or 1.66. The main
disadvantage of the standard deviation is that it is not a good measure to
use with skewed data.
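Putting the steps together in Python reproduces the figures cited in the text (s² = 2.74 and s = 1.66; the division by n − 1 yields the sample variance, which is what those figures imply):

```python
import math

tickets = [0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 6]
n = len(tickets)
mean = sum(tickets) / n                            # step 1: x-bar = 1.8
squared_devs = [(x - mean) ** 2 for x in tickets]  # steps 2-3
variance = sum(squared_devs) / (n - 1)             # steps 4-5: s^2
std_dev = math.sqrt(variance)                      # s = sqrt(s^2)
print(round(variance, 2), round(std_dev, 2))       # 2.74 1.66
```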
Associations
It is common for researchers to want to know whether variables they have
measured are associated with one another. Variables are associated with one
another when the values of one variable change in a systematic (i.e.,
nonrandom) way with the values of another. Recall that association and
causation were discussed in Chapter 8, which focused on experimental research.
Associations or relationships can be identified in other ways; for example, if
you are working with categorical variables (i.e., nominal- or ordinal-level
measures), you can identify an association using cross tabulations and a chi-
square test of independence (see Figure 12.3). We could, for example, gather
data from 100 university students, record their sex and whether they have
ever received a speeding ticket (coded “Never,” “Once,” and “More than
once”), and then use a chi-square test of independence to identify whether a
college student’s sex is associated with receiving a speeding ticket.
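To show the arithmetic behind a chi-square test of independence, here is a Python sketch using an invented 2 × 3 table of counts for 100 hypothetical students (the counts are assumptions for illustration; interpreting the statistic against a critical value is left to a statistics course):

```python
# Hypothetical observed counts: rows = sex, columns = ticket category
observed = {
    "Female": {"Never": 25, "Once": 18, "More than once": 7},
    "Male":   {"Never": 15, "Once": 17, "More than once": 18},
}
columns = ["Never", "Once", "More than once"]

row_totals = {r: sum(cells.values()) for r, cells in observed.items()}
col_totals = {c: sum(observed[r][c] for r in observed) for c in columns}
grand_total = sum(row_totals.values())

# chi-square = sum over all cells of (O - E)^2 / E,
# where each expected count E = (row total * column total) / grand total
chi_square = 0.0
for r in observed:
    for c in columns:
        expected = row_totals[r] * col_totals[c] / grand_total
        chi_square += (observed[r][c] - expected) ** 2 / expected

df = (len(observed) - 1) * (len(columns) - 1)  # (rows - 1)(cols - 1)
print(round(chi_square, 2), df)  # 7.37 2
```

With 2 degrees of freedom, a statistic of about 7.37 exceeds the conventional .05 critical value of 5.99, so for these invented data we would conclude that sex and ticket history are associated.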
Differences
In addition to discovering answers to research question through the
identification of associations, researchers often find answers to their research
questions by determining whether group differences exist in their data. Group
differences can be tested when one variable is measured at the nominal or
ordinal level (i.e., using different categories to group observations) and
another variable is measured at the ratio or interval level (i.e., using discrete
values [e.g., number of tickets] or continuous values [e.g., temperature]).
When two groups are being compared, researchers often use t tests, whereas
analysis of variance (ANOVA) is used to compare groups when there are
more than two groups.
As with regression analysis, there are several different types of t tests used to
detect group differences, and a discussion of each is well beyond the scope of
this text. Suffice it to say that regardless of the type of t test used, they are all
fundamentally doing the same thing: testing whether one group is different
from a second group. Typically, group comparisons are based on a
comparison between the two groups’ means. For example, in going back to
our speeding ticket example, we could test whether the average number of
speeding tickets (measured as whole numbers) for female students was
significantly less than the average number of tickets for male students using a
t test. For the continuous variables used in her study, Zaykowski (2014)
presented group (mean) differences for those who reportedly used victim
services (Yes/No) as part of her descriptive statistics table. This was similar
to what she did for the chi-square analysis, but she used a t test for the
independent variables that were not categorical (i.e., Age, Household size,
Mental distress, and Physical distress).
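A pooled-variance independent-samples t statistic (one of the several t tests mentioned above) can be computed by hand; the group data below are invented for illustration, and the sketch is in Python rather than a statistics package:

```python
import math

# Hypothetical ticket counts for two groups of students
female_tickets = [0, 0, 0, 1, 1, 1, 2]
male_tickets = [1, 1, 2, 2, 3, 4, 6]

def pooled_t(a, b):
    """Independent-samples t statistic using the pooled-variance formula."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    pooled_var = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var * (1 / na + 1 / nb))

t = pooled_t(female_tickets, male_tickets)
print(round(t, 2))  # -2.71: the female mean is lower than the male mean
```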
When more than two groups are being tested for differences, t tests cannot be
used. Instead, researchers use a type of statistical test called analysis of
variance (ANOVA). ANOVA can be thought of as a test similar to the t test,
but it is used when making comparisons between three or more groups. For
example, if we want to know whether class status (i.e., Freshmen,
Sophomores, Juniors, or Seniors) is related to driving ability, we could
conduct an ANOVA test to see whether the mean number of speeding tickets
for each group of students differed significantly.
Analysis of variance (ANOVA): Statistical analysis technique
used to compare the group averages among three or more groups
or across three or more points in time.
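The F statistic behind a one-way ANOVA compares variation between group means with variation within groups; the class-status data below are invented to illustrate the calculation in Python:

```python
def one_way_anova_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical ticket counts by class status
freshmen = [0, 0, 1, 1]
sophomores = [1, 1, 2, 2]
juniors = [2, 2, 3, 3]
seniors = [3, 4, 4, 6]

f_stat = one_way_anova_f([freshmen, sophomores, juniors, seniors])
print(round(f_stat, 2))  # 15.84: a large F suggests the group means differ
```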
Qualitative Data Analysis
The methods used to organize and analyze qualitative data were described in
Chapter 6. Because qualitative data analysis does not rely on statistics and
related analyses, it requires little additional space in this chapter. There
are some things to note about qualitative analysis, however. First, providing a
separate chapter for collecting qualitative data, and analyzing qualitative data,
creates a false dichotomy. Why? Because qualitative data analysis cannot be
separated from qualitative data collection. As we noted in Chapter 6,
analyzing qualitative data begins as soon as the first data are collected
because even then the researcher is watching for patterns or relationships.
Once data collection has ended, the researcher continues with the analysis.
The first step is to organize the data. Next is a focused analysis that requires
the researcher to engage in the iterative process of reducing, condensing, and
coding the data into broader categories or themes.
This was the approach used by Dodge and colleagues (Dodge, Starr-Gimeno,
& Williams, 2005). They gathered data from their policewomen about
working as decoys. As they began gathering the data, they adjusted later
interviews to incorporate their findings. At some point, Dodge and her
colleagues reached saturation and were not gathering new data. For Dodge et
al.’s research, saturation occurred after 25 women were interviewed. From
the data gathered, they identified several themes that framed their findings:
“The decoys,” which considered how officers viewed this work
“Cops: too pretty to prostitute,” which described how decoys should
look and behave
“Negotiating the deal,” which described the need for decoys to “dirty
talk” and be familiar with jargon
“Johns of all types,” which described the huge variety of men arrested
for soliciting a prostitute and how the officers viewed them
“Safety concerns,” which dealt with the real danger this work entails
“Who’s the victim?” which described the widespread impact of
prostitution on people, neighborhoods, homeowners, and businesses
“Effectiveness from a decoy’s perspective,” which described the decoys’
views of the deterrent effect of this type of police activity
Brunson and his colleague’s work was conducted similarly (Brunson & Weitzer,
2009). After gathering data from the 45 young men in the three
neighborhoods, Brunson and his colleague organized, sorted, and identified
themes. In addition, they came up with two primary findings: first, that
“white youth had a less troubled relationship with and more positive views of
the police than Black youth” (p. 864) and, second, that “[p]olice treatment of
residents appeared to be less problematic in the White neighborhood
(Mayfield) and more problematic in the Black neighborhood (Barksdale),
with the mixed neighborhood (Hazelcrest) falling in between” (p. 864).
Major themes were identified and included unwarranted stops, verbal abuse,
physical abuse, police corruption, racially biased policing, and police
accountability. These themes are especially interesting when you consider
this research was done a few years before the August 2014 shooting of
Michael Brown in Ferguson, Missouri (which is situated less than 5 miles
from these neighborhoods).
Although neither Dodge nor Brunson and their respective colleagues
analyzed their data with the assistance of software programs (Brunson &
Weitzer, 2009; Dodge et al., 2005), others do. As noted in Chapter 6,
qualitative software programs cannot take the place of the researcher, yet they
may provide some time-saving approaches to data analysis. Some of those
programs are described in the sections that follow.
Data Analysis Software
Collectively, the different statistical methods described thus far in this
chapter are part of a broader group of analytic methods used in research.
Many of these statistical methods can be applied to data through the use of
statistical analysis software applications. Some software applications are
designed to be more useful for data analysis associated with quantitative
research, whereas other software applications are specifically designed for
analysis of data produced from qualitative studies. A description of these
applications, including some of their unique features and benefits, is
provided next, beginning with quantitative analytic software.
Software Applications Used in Quantitative
Research
Software applications give researchers the ability to analyze large numerical
data sets used in quantitative research. Although there are many quantitative
statistical software applications on the market, the particular program that
researchers choose to use to analyze their data often depends on the aim of
the research project, the research questions that must be answered, and the
researcher’s familiarity with the software. The following subsections provide
an overview of some of the applications more commonly used in
quantitative criminological research.
Excel
Most people have heard of Microsoft® Excel™ and know it as a useful
electronic spreadsheet for keeping track of the family budget, stock
investments, and small business income and expenses or for creating charts
and graphs used in class presentations. Technically, a spreadsheet is a type of
document used to arrange data in rows and columns within a grid. Electronic
spreadsheets, like those contained in Excel, can be manipulated and used to
make myriad calculations (e.g., averages, standard deviations, chi-square
tests, etc.) based on formulas created by the user. Data contained in Excel can
also be used to create visual displays such as bar charts, line graphs, and
pivot tables. More detailed information about visually displaying data is
provided later in this chapter (see the section titled “Reporting Findings From
Your Research”). The Excel interface resembles the image provided in Figure
12.5.
Spreadsheet: A type of document used to arrange data in rows
and columns within a grid.
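Spreadsheet formulas like these are easy to mirror in a general-purpose language. The following sketch (in Python, with made-up ticket counts rather than the book's actual data) shows the arithmetic behind the AVERAGE and STDEV formulas a user might enter in Excel.

```python
# Hypothetical speeding-ticket counts for 15 surveyed students
# (invented for illustration; not the book's actual data file).
tickets = [0, 1, 0, 2, 3, 1, 0, 0, 4, 1, 2, 0, 1, 0, 3]

# Equivalent of the Excel formula =AVERAGE(A1:A15)
mean = sum(tickets) / len(tickets)

# Equivalent of =STDEV(A1:A15): the sample standard deviation
variance = sum((t - mean) ** 2 for t in tickets) / (len(tickets) - 1)
std_dev = variance ** 0.5

print(f"mean = {mean:.2f}, standard deviation = {std_dev:.2f}")
```

The same column of numbers, entered in cells A1 through A15 of a spreadsheet, would produce the same results from the built-in formulas.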
Figure 12.5 Blank Spreadsheet in Microsoft Excel
Figure 12.6 Excel Spreadsheet Used to Calculate the Mean Number of
Students’ Speeding Tickets
Figure 12.7 Data View Window in SPSS Shows the Sample Speeding
Tickets Data
SPSS is designed around three primary interfaces: a Data Editor, a Syntax
Editor, and an Output Viewer. The Data Editor is the most commonly used
interface of the three, and it consists of two components: a data viewer and a
variable viewer. The data viewer looks very much like an Excel spreadsheet
and is where you can see your data. In the SPSS data viewer, columns
represent variables and rows represent each observation from the unit of
analysis. For example, Figure 12.7 is an SPSS data view window that shows
the data collected for one variable (Tickets) from 15 students (one
observation for each of the 15 students surveyed). The variable viewer allows
researchers to view the structure of each variable contained in their
data set. For example, information about the type of variable (i.e., numeric or
text), the variable’s name, and its level of measurement can all be accessed in
the variable viewer.
Researchers use the SPSS Syntax Editor to create SPSS syntax (i.e., code)
that can be run within the SPSS environment. SPSS syntax is a programming
language that allows researchers to manipulate and analyze data in SPSS, as
well as to document how their data are analyzed, without having to rely on
using the built-in drop-down menus. Researchers can save their SPSS syntax
files for future use or even share them with colleagues with whom they might
collaborate on research projects. For example, Figure 12.8 shows the SPSS
syntax that could be run on the Hypothetical Study data file to produce the
average number of speeding tickets (along with the standard deviation,
minimum value, and maximum value) for the 15 university students surveyed
in our study.
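The quantities this sort of syntax produces (mean, standard deviation, minimum, and maximum) can be reproduced in a few lines of Python using the standard library; the ticket counts below are invented for illustration, and the SPSS command shown in the comment is a rough sketch rather than a copy of the book's Figure 12.8.

```python
import statistics

# Hypothetical speeding-ticket counts for the 15 surveyed students
# (invented for illustration; the book's actual data file may differ).
tickets = [0, 1, 0, 2, 3, 1, 0, 0, 4, 1, 2, 0, 1, 0, 3]

# Rough equivalent of SPSS syntax along the lines of:
#   DESCRIPTIVES VARIABLES=Tickets /STATISTICS=MEAN STDDEV MIN MAX.
print("N    =", len(tickets))
print("Mean =", statistics.mean(tickets))
print("SD   =", round(statistics.stdev(tickets), 3))  # sample SD, as SPSS reports
print("Min  =", min(tickets))
print("Max  =", max(tickets))
```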
The third SPSS interface is the Output Viewer. As the name suggests, this is
where the results of procedures, run either through the SPSS menus or
through the Syntax Editor, are presented as output. When the syntax
shown in Figure 12.8 is run on the Hypothetical Study data on university
students’ driving, the output shown in Figure 12.9 is produced.
From a series of intuitively designed drop-down menus, researchers can
conduct hundreds of analytic procedures in SPSS, including univariate
analysis (i.e., frequency distributions, descriptive statistics, or cross
tabulations), bivariate analysis (i.e., correlations, t tests, and ANOVA), and
multivariate analysis (i.e., linear regression). The SPSS learning curve is
steeper compared with Excel, but the quantitative analytic tools available in
SPSS are more powerful.
Figure 12.8 SPSS Syntax That Could Be Run on the Hypothetical Study
Data File to Produce the Average Number of Speeding Tickets
SAS
SAS, which originally stood for Statistical Analysis System, is another
commercial statistical analysis software package, distributed by the SAS
Institute, that is commonly used in the field of criminology and criminal
justice to analyze quantitative data. SAS was first developed in 1966 at North Carolina State
University and since then has become one of the biggest advanced analytics
products on the market. In 2005, Alan Acock compared SAS with SPSS and
STATA to answer the following question: “Which software analysis package
is the best?” Based on his findings, Acock (2005, p. 1093) explained, “SAS
programs provide extraordinary range of data analysis and data management
tasks,” but they were difficult to use and learn. In contrast, SPSS and STATA
were both easier to learn in part because of their better documentation, but
they had fewer analytic abilities. Acock further noted that their capabilities
could be expanded with paid (in the case of SPSS) or free (in the case of
STATA) add-ons. So which product was considered the best? Acock
concluded that of the three, SAS was best for “power users,” while
occasional users would benefit most from SPSS and STATA.
Software Applications Used in Qualitative Research
The assumptions that the world we live in operates according to general laws
and that these laws have properties and relationships that can be observed and
measured are at the heart of quantitative research. Through rigorous
observation and measurement, and the application of reason and logic,
quantitative research aims to produce an empirical understanding of the world
in which we live. In contrast to this positivist, quantitative-oriented approach
to knowledge discovery, qualitative researchers believe that our focus should
be on understanding the totality of a phenomenon, based on an interpretive
philosophy. This understanding is derived from careful, detailed analysis of
structured and unstructured text founded in writings, news articles, books,
conversations, interview transcripts, audio or video recordings, social media
posts, and electronic news feeds.
Qualitative data analysis (QDA) often starts with one or more broad, open-ended
questions, in contrast to the explicit questions associated with quantitative
research. From this starting point, researchers move toward greater precision
or greater refinement of the original question, based on information that
emerges during QDA, in what typically is a four-step process. The first step
in QDA usually involves organizing the data that the researcher has
collected from texts, conversations, recordings, posts, and so on. This process
may involve transcription, translation, or cleaning data.
Qualitative data analysis (QDA): Approach to data analysis that
emphasizes open-ended research questions and moves from these
toward greater precision, based on information that emerges
during data analysis.
Next, the researcher will identify a QDA framework that he or she will use to
analyze the data. The framework may be an explanatory framework (i.e.,
guided by the research question), or it may be an exploratory framework (i.e.,
guided by the data). Part of this framework will involve the researcher
identifying a coding plan for his or her data. In other words, the researcher
will determine how he or she will structure, label, and define the data.
Step 3 of QDA involves coding the data and sorting it into the researcher’s
framework, which is typically done through the use of a QDA software
application. In this context, coding refers to the process of attaching labels to
lines of text so that the researcher can group and compare similar or related
pieces of information; coding sorts are compilations of similarly coded
blocks of text from different sources that are converted into a single file or
report.
Coding sorts: In qualitative data analysis, they represent
compilations of similarly coded blocks of text from different
sources that are converted into a single file or report.
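The mechanics of coding and coding sorts in Step 3 can be sketched in a few lines of Python. The sources, excerpts, and code labels below are entirely invented; dedicated QDA packages manage this process at far larger scale.

```python
# Coding attaches labels to segments of text; a coding sort then
# compiles similarly coded segments from different sources into a
# single report. All sources, excerpts, and codes here are invented.
coded_segments = [
    ("interview_01", "I never call the police anymore.", ["distrust"]),
    ("interview_01", "The officers were polite that night.", ["positive_contact"]),
    ("interview_02", "They stop us for no reason.", ["distrust", "stops"]),
    ("interview_03", "Nobody comes when you call.", ["distrust"]),
]

def coding_sort(segments, code):
    """Compile all segments tagged with a given code, across sources."""
    return [(source, text) for source, text, codes in segments if code in codes]

for source, text in coding_sort(coded_segments, "distrust"):
    print(f"[{source}] {text}")
```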
Step 4 involves a researcher using the framework developed in Step 2 for
descriptive analysis of his or her data, based on themes that are identified in
the analytic process. In qualitative data analysis, a theme refers to an idea
category that emerges from grouping lower level data points together as part
of the analysis, which is often developed around specific characteristics that
are identified in qualitative data. Characteristics represent a single item or
event in a text, which is similar to an individual response to variables used in
quantitative data analysis.
Theme: In qualitative data analysis, it refers to an idea category
that emerges from grouping lower level data points together as
part of the analytic process.
Characteristics: In qualitative data analysis, they represent a
single item or event in a text, similar to an individual response to
variables used in quantitative data analysis.
If the QDA is not exploratory in nature, Step 4 can also involve the researcher
conducting second-order analysis as part of this final step. Second-order
analysis involves identifying recurrent themes, patterns in the data, and
respondent clusters. It also can be used to build event sequences and to
develop new hypotheses.
Second-order analysis: In qualitative data analysis, it is the
process that involves identifying recurrent themes and respondent
clusters. It also can be used to build event sequences and to
develop new hypotheses.
Just as there are many different software applications commonly used in
quantitative data analysis, several qualitative data analysis software packages
are available to researchers. Because we discuss quantitative and qualitative
research in greater detail in Part 4, in the following subsections we provide
details of some of the most popular qualitative analysis applications used in
criminology and criminal justice research, including QDA Miner™, NVivo™,
ATLAS.ti™, and HyperRESEARCH™. In doing so, we highlight some of their
more noteworthy features and the benefits of each.
QDA Miner
QDA Miner is a qualitative data analysis software application developed by
Provalis Research®. This commercial software was first released in
2004 and runs on the Windows operating system. In 2012, a “Lite”
version of QDA Miner was released for free but with reduced functionality.
QDA Miner is used in the field of criminology and criminal justice by those
who conduct qualitative studies and who typically study data produced from
journal articles, radio or television scripts, social media or RSS feeds, images,
or interview transcripts derived from focus groups or in-depth interviews.
Geostatistical Approaches
Chapter 10 focuses specifically on GIS and crime mapping. Although we’ve
already introduced some of the common crime analysis techniques (e.g., hot
spot mapping, predictive policing, risk terrain modeling, and repeat/near
repeat victimization), our discussion of geostatistical analysis was limited.
We consider geostatistical analysis to be any mathematical technique that
uses the geographical properties of data as part of a statistical or analytic
method. Unlike many of the well-established statistical methods used with
nongeographic data, and discussed previously in this chapter, some of the
spatial statistical methods presented in the following subsections, and used by
researchers in the field of criminal justice and criminology, are still being
developed and improved. Some of the more common geostatistical
techniques, and the software available to apply these methods, are discussed.
Geostatistical analysis: Any mathematical technique that uses the
geographical properties of data as part of a statistical or an
analytic method.
Introduction to Spatial Statistics
Many of the spatial statistics used in geostatistical analysis are extensions of
traditional statistical methods discussed earlier in this chapter. For example,
we have already outlined how descriptive statistics can help you summarize
your data in a succinct and meaningful way. We also explained that you
could describe your data using univariate analyses to produce measures of
central tendency (i.e., mean, median, and mode) and measures of dispersion
(i.e., variance and standard deviation). These statistics are intended to
describe the “typical” case, and all the other cases relative to the “typical”
case in a data set, respectively. Similar descriptive statistics are used in
spatial statistics.
Figure 12.12 Illustration of How the Mean Center of a Spatial Distribution Is
Determined
Spatial Description
In geostatistical analysis, spatial descriptions refer to a group of spatial
statistics that describe the overall spatial distribution of geographic data (i.e.,
how data in a data set are distributed throughout a study area). Mean center
and standard distance, the standard deviational ellipse, and the convex hull
are three examples of spatial descriptive statistics.
Spatial descriptions: Group of spatial statistics that describes the
overall spatial distribution of geographic data.
Mean Center
Imagine that the locations of four crime incidents in a given area were
depicted by the four dots shown in the box to the left in Figure 12.12. You
could find the “average location” of these four crimes by calculating their
mean center. In the box on the right of Figure 12.12, the mean center
location of the four purple dots is represented by the black dot and is
determined by computing the average of the four X-coordinates and the
average of the four Y-coordinates for all four incidents’ X–Y-coordinate
pairs. Finding the average location for a set of geographic data can be very
useful for tracking changes of locations of things over time or between
different things in the same area. For example, crime analysts can determine
whether the mean center for burglaries in their jurisdiction is different for
those that occur during the daytime compared with where nighttime
burglaries occur.
Mean center: Average x- and y-coordinates of all the features in
the study area.
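The calculation described above is simple enough to sketch directly; the incident coordinates below are invented for illustration.

```python
# Mean center: average the X-coordinates and the Y-coordinates of all
# incident locations. Coordinates are invented for illustration.
incidents = [(2.0, 3.0), (5.0, 8.0), (7.0, 2.0), (6.0, 7.0)]

def mean_center(points):
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    return (mean_x, mean_y)

print(mean_center(incidents))  # the "average location" of the incidents
```

An analyst could run this twice, once on daytime burglaries and once on nighttime burglaries, and compare the two centers, as in the example in the text.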
Standard Distance
Standard distance is a spatial description method that is reported along with
the mean center. The standard distance is used to describe the compactness
of geographic data, and it is used in a similar way to how the standard
deviation is used with nongeographic data. Standard distance is often
represented as a single radius (measured in units of feet, miles, meters,
kilometers, etc.) that surrounds the mean center location of a distribution of
geographic data. Concentric radii can also be used to display 1, 2, and 3
standard distances from the mean center location. The standard distance is
calculated using the X- and Y-coordinates for all the geographic data in a
distribution, using the following equation:

$$SD = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n} + \frac{\sum_{i=1}^{n}(y_i - \bar{y})^2}{n}}$$

Standard distance: Summary measure of the distribution of
features around any given point.

where x_i and y_i are the X- and Y-coordinates for each individual data point
(i) and x̄ and ȳ represent the coordinates of the mean center. The symbol n
denotes the total number of data points in a file.
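The standard distance calculation translates directly into code; the coordinates below are invented for illustration.

```python
import math

# Standard distance: the square root of the average squared deviation
# of the X and Y coordinates from the mean center. Coordinates are
# invented for illustration.
incidents = [(2.0, 3.0), (5.0, 8.0), (7.0, 2.0), (6.0, 7.0)]

def standard_distance(points):
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    ss_x = sum((x - mean_x) ** 2 for x, _ in points)
    ss_y = sum((y - mean_y) ** 2 for _, y in points)
    return math.sqrt(ss_x / n + ss_y / n)

# A circle of this radius around the mean center summarizes how
# compact the incident pattern is.
print(round(standard_distance(incidents), 3))
```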
Convex Hull
A convex hull is a third common way to describe the distribution of geographic
data. A convex hull or convex envelope can be thought of as the smallest
polygon that can be drawn around the outer points of all the data in a data
distribution of geographic locations. The convex hull is commonly known as
the minimum convex polygon and can be used by researchers to define a
particular study area, one that encompasses all the phenomena that were
studied and represented in a geographic data set. Convex hull is the least
common of the three types of spatial description methods used in
geostatistics.
Convex hull: Smallest polygon that can be drawn around the
outer points of all the data in a data distribution of geographic
locations.
Spatial Dependency and Autocorrelation
In 1970, Waldo Tobler famously said, “Everything is related to everything
else, but near things are more related than distant things” (p. 236). Tobler’s
statement about how things are related to each other in space has since
become known as Tobler’s Law or the First Law of Geography; it is the
foundation for understanding patterns of spatial dependency and spatial
autocorrelation that may be present in spatial data.
First Law of Geography: Also known as “Tobler’s Law,” it
states that everything is related to everything else, but near things
are more related than distant things.
In spatial statistics, spatial dependency is a measure used to define the
spatial relationship between values of geographical data. Most statistical tests
used in spatial data analysis assume that the data being analyzed do not have
spatial dependence. In other words, they assume spatial independence. The
assumption of spatial independence is assessed through the various tests of
spatial autocorrelation, including the Moran’s “I” statistic, Geary’s “C”
statistic, and the Getis-Ord “G” statistic. All of these spatial statistics are used
on count data (e.g., number of burglaries) that have been aggregated to a
larger unit of analysis (e.g., census blocks) to determine whether the
assumption that observations are spatially independent of one another has
been violated. Assessing the underlying assumptions of statistical tests is an
important part of data analysis, and the concept of spatial autocorrelation is
one of the most important in spatial statistics.
Spatial dependency: Measure used in geostatistical analysis to
define the spatial relationship between values of geographical
data.
Spatial autocorrelation: Degree to which geographic features are
similarly arranged in space.
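To make the idea concrete, here is a minimal sketch of Moran's I for a handful of areal units. The counts and the binary contiguity weights matrix are invented for illustration; dedicated spatial statistics software also provides significance tests that this sketch omits.

```python
# Sketch of Moran's I for a small set of areal units (e.g., burglary
# counts aggregated to census blocks). Counts and weights are invented.
counts = [2, 3, 10, 12]            # crime counts in four adjacent units
w = [                              # w[i][j] = 1 if units i and j are neighbors
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
]

def morans_i(x, w):
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]    # deviations from the mean
    num = sum(w[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    s0 = sum(sum(row) for row in w)  # sum of all weights
    return (n / s0) * (num / den)

print(round(morans_i(counts, w), 3))  # values near +1 suggest clustering
```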
Spatial Interpolation and Regression
Two other types of spatial statistical analysis include methods involving
spatial interpolation and regression. Spatial interpolation techniques refer to
a group of geostatistical techniques that rely on the known recorded values of
phenomena at specific locations to estimate the unobserved values of the
same phenomenon at other locations.
Spatial interpolation: Group of geostatistical techniques that
relies on the known recorded values of phenomena at specific
locations to estimate the unobserved values of the same
phenomenon at other locations.
Kernel density estimation (KDE), used in crime hot spot analysis, is an
example of a spatial interpolation method. In crime hot spot mapping, for
example, KDE is used to estimate the density of crime across an entire study
area, based on the known locations of discrete events. KDE begins by
overlaying a grid over the entire study area and calculating a density estimate
based on the center points of each grid cell. Each distance between an
incident and the center of a grid cell is then weighted based on a specific
method of interpolation and bandwidth (i.e., search radius). Figure 12.13
illustrates the KDE process and shows several parameters that must be
considered before a density estimate can be produced. These parameters
include the grid cell size, the method of interpolation (i.e., the kernel
function), and the bandwidth.
When the kernel function is placed over a grid cell centerpoint, the number of
crime incidents within the function’s bandwidth is used to determine the
density estimate assigned to each cell. Although KDE is the most common
spatial interpolation method used in criminology and criminal justice
research, other methods of spatial interpolation used in geostatistical analysis
include inverse distance weighting and kriging.
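The KDE procedure described above can be sketched compactly. The incident coordinates, grid dimensions, cell size, and bandwidth below are all invented, and the quartic kernel constant and treatment of cell area vary across software packages; treat this as an illustration of the mechanics, not a replica of any particular tool.

```python
# Sketch of grid-based kernel density estimation with a quartic kernel,
# as used in crime hot spot mapping. All parameter values are invented.
incidents = [(1.0, 1.0), (1.5, 1.2), (4.0, 4.0)]
cell_size = 1.0
bandwidth = 2.0     # search radius around each grid-cell center

def quartic(d, h):
    """Quartic kernel weight for a point at distance d, bandwidth h."""
    if d >= h:
        return 0.0
    return (15 / 16) * (1 - (d / h) ** 2) ** 2

def density_grid(points, width, height, cell, h):
    """Overlay a grid and sum kernel weights at each cell's center."""
    grid = []
    for row in range(height):
        grid_row = []
        for col in range(width):
            cx = (col + 0.5) * cell      # grid-cell center point
            cy = (row + 0.5) * cell
            dens = sum(
                quartic(((px - cx) ** 2 + (py - cy) ** 2) ** 0.5, h)
                for px, py in points
            )
            grid_row.append(round(dens, 3))
        grid.append(grid_row)
    return grid

for row in density_grid(incidents, 5, 5, cell_size, bandwidth):
    print(row)
```

Cells whose centers lie within the bandwidth of several incidents receive high density estimates (the hot spots); cells far from every incident receive zero.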
Earlier in this chapter, we described regression analysis as a popular method
for predicting a particular outcome based on variables believed to be
associated with the outcome, using a linear equation. This approach to
quantitative data analysis can be extended and applied to spatial data in
geostatistical analysis. One of the most common approaches to applying
regression analysis to spatial data is through a technique called
geographically weighted regression (GWR).
Geographically weighted regression (GWR): Spatial analysis
technique similar to linear regression analysis involving the use of
predictor variables to estimate their relationship to an outcome
variable or dependent variable.
Figure 12.13 Visual Illustration of the Parameters That Define the Process of
Kernel Density Estimation (KDE)
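The core idea behind GWR can be sketched compactly: fit a regression at a chosen location, but weight each observation by its distance from that location. Everything below (coordinates, predictor and outcome values, the Gaussian kernel, the bandwidth) is invented for illustration; production GWR software estimates coefficients at many locations, supports multiple predictors, and handles bandwidth selection.

```python
import math

# Minimal sketch of geographically weighted regression: a distance-
# weighted least-squares slope at one target location. All values are
# invented for illustration. obs = (x_coord, y_coord, predictor, outcome)
obs = [
    (0.0, 0.0, 1.0, 2.1),
    (1.0, 0.0, 2.0, 4.2),
    (0.0, 1.0, 3.0, 5.9),
    (5.0, 5.0, 4.0, 1.0),   # distant observation: receives little weight
]

def local_slope(obs, loc, bandwidth):
    """Gaussian-weighted regression slope estimated at one location."""
    weights = [
        math.exp(-((ox - loc[0]) ** 2 + (oy - loc[1]) ** 2)
                 / (2 * bandwidth ** 2))
        for ox, oy, _, _ in obs
    ]
    sw = sum(weights)
    mx = sum(w * p for w, (_, _, p, _) in zip(weights, obs)) / sw
    my = sum(w * y for w, (_, _, _, y) in zip(weights, obs)) / sw
    num = sum(w * (p - mx) * (y - my) for w, (_, _, p, y) in zip(weights, obs))
    den = sum(w * (p - mx) ** 2 for w, (_, _, p, _) in zip(weights, obs))
    return num / den

print(round(local_slope(obs, (0.0, 0.0), 1.0), 3))
```

Because the weights decay with distance, the estimated relationship near the target location is driven almost entirely by the three nearby observations; repeating the fit at many locations yields the spatially varying coefficients that distinguish GWR from a single global regression.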
Many ethical guidelines exist for conducting data analysis, but few are as
clear and concise as those developed by Rachel Wasserman (2013). Those
most relevant to the field of criminology and criminal justice include the
following:
1. Researchers must be competent and sufficiently trained in the techniques
they use and/or delegate analysis responsibilities only to other members
of the research team who are competent. If an analyst doesn’t know how
to properly conduct data analysis, then he or she risks producing and
reporting inaccurate or misleading information.
2. Analysis should only be conducted with software designed to conduct
the specific analysis required to answer the researcher’s research
question(s).
3. Researchers should choose a minimally sufficient analysis based on the
research question and assumptions of the statistical technique.
4. Researchers must “know their data” (i.e., how the data were collected,
and how the data were prepared for analysis, including proper
exploratory analyses).
5. Changes made to raw data should always be documented, along with the
reasons that the changes were made.
6. Don’t cherry-pick findings. In other words, researchers should avoid
“mining the data” for answers that support their personal views.
7. Researchers should make it clear to the research team that high ethical
standards are valued and that it is everyone’s responsibility to apply this
standard to all aspects of the research project—including data analysis.
Cuevas provides an excellent summary of the importance of maintaining high
ethical standards when conducting data analysis. He put it this way during his
video interview conducted for this book: “We can do analysis quickly and
easily now because of computers, but that can give you a false sense of
security. Everything has to be done well and to a high ethical standard before,
during, and after the data analysis . . . if the results of the analysis are good.”
Why Conduct Policy-Relevant Research?
Policies directly influence all of our lives in many ways on a daily basis. For
example, policies reflected in speed limits affect how fast we each drive (at
least when we do not think a police officer is around). Policies determine at
what age we can drink alcohol, serve in the military, and marry. Policies
dictate not only when we can marry but who we can and cannot marry.
Policies affect student loan availability and repayment schedules.
The late 1960s saw an alarming increase in crime. In response, the Law
Enforcement Assistance Administration (LEAA; the precursor to the Office
of Justice Programs; see Chapter 9) was established in part to advance the
criminal justice discipline. A part of this included funding research to
influence criminal justice policy. Today, as a result of this work, you are
likely familiar with many criminal justice policies. Some controversial
policies include the three-strikes policies in effect in 28 states that require a
person who is found guilty of committing a violent felony after having been
convicted of two previous crimes to be imprisoned for life. Also widely
known are sex offender registry policies. Although the specific policy differs
by jurisdiction, sex offender policies require convicted sex offenders to
register with their local law enforcement agencies. The amount of
information they must provide differs, but the purpose of the registries is to
allow law enforcement to better monitor these individuals, as well as to allow
the public to be aware of potential risks posed by those who may live near them.
Another widely known criminal justice policy concerns mandatory arrest
resulting from a domestic violence incident. Mandatory arrest policies require
the arrest of a person when the law enforcement officer has probable cause
that an individual committed a violent act against a domestic partner. In these
instances, the officer does not need a warrant, nor does the officer need to
have witnessed the violence.
It seems reasonable to expect that policies we all live with, such as three-
strikes laws, sex offender registries, and mandatory arrest, were designed and
implemented based on findings from a body of well-conducted research.
Although that is reasonable, it does not always happen. Not many of us
would be comfortable learning that our lives are affected by policies crafted
based on a single piece of research (no research is perfect, so using a body of
research findings is important), a policy maker’s whims, political or other
ideology, or random chance. Most of us hope or assume that decisions about
what policies to implement, and the shape of those policies, were based on
our understanding about what is best for the public and those involved given
a body of research findings.
It almost seems silly to state clearly that we want our policy to be based on a
body of good research. Nevertheless, it has to be stated because in reality,
policy design and implementation is guided by more than good research. In
the past, it has been guided by a single imperfect piece of research, political
or religious ideology, and other seemingly random factors. This means that
policies that affect your life are not always influenced by the best research
available. This can lead to unnecessary suffering, expensive approaches to
social issues that do not work, and a failure to ameliorate a problem of
interest. In sum, we want research to be policy relevant because we want to
solve problems and make the world a better place. We want to live under
policies that improve the world and not worsen it for anyone.
Widely adopted mandatory arrest policies require officers to make an arrest
in domestic violence cases when there is probable cause, even without a
warrant and even if the violence was not witnessed. Are these policies based
on well-conducted research? What might explain the adoption of these
consequential policies?
What Is Policy-Relevant Research?
Policy-relevant research is research that directly influences policy makers
or agency personnel who are developing and implementing policy. Policy-
relevant research can be used to provide an understanding about what societal
problems exist and why those problems are important to solve, what policies
are needed, how policies should be shaped, how policies should be
implemented, how existing policies should be adjusted, and what policies are
not beneficial to the group they are designed to assist (to name a few
purposes). Policy-relevant research can be used by policy makers to inform
and address policy needs in two ways. First, policy-relevant research can be
used by policy makers to identify and develop needed policies focused on
important issues. Second, policy-relevant research can be used by policy
makers to improve and enhance existing policies. Policy-relevant research is
not research on a policy but research that directly affects or influences policy.
To be clear, no single piece of research can (or should) change the direction
of policy. Rather, a body of research should inform policy design and
implementation. Producing policy-relevant research means generating
research that adds to a body of literature that influences policy makers and
that influences small policy changes on the margin.
Policy-relevant research: Research that directly influences the
development of and implementation of the principles, rules, and
laws that guide a government, an organization, or people by
informing and influencing policy makers.
What Is Policy?
Before further discussing policy-relevant research, it is useful to clearly
identify what we mean by policy. As is the case with complex topics, there is
no one widely agreed upon definition of policy. Policy is multifaceted,
making it difficult to define. Here are several common definitions:
“A definite course or method of action selected from among alternatives
and in light of given conditions to guide and determine present and
future decisions.” (Merriam Webster Dictionary Online, n.d.)
“A definite course of action adopted for the sake of expediency, facility,
etc.” (Dictionary.com, n.d.)
An “action or procedure conforming to or considered with reference to
prudence or expediency.” (Dictionary.com, n.d.)
“Prudence or wisdom in the management of affairs.” (Merriam Webster
Dictionary Online, n.d.)
“Management or procedure based primarily on material interest.”
(Merriam Webster Dictionary Online, n.d.)
“A high-level overall plan embracing the general goals and acceptable
procedures especially of a governmental body.” (Merriam Webster
Dictionary Online, n.d.)
“The basic principles by which a government is guided.” (Business
Dictionary Online, n.d.)
“The declared objectives that a government or party seeks to achieve
and preserve in the interest of national community.” (Business
Dictionary Online, n.d.)
“A course or principle of action adopted or proposed by an organization
or individual.” (Oxford Dictionary Online, n.d.)
What do you want influencing policy? Would you be okay learning that
horoscopes influenced the design and implementation of policy? Would you
prefer policy be based on well-conducted research? What can you do to
ensure the latter happens more often than the former?
By blending elements of these commonly available definitions, we offer a
simple definition of policy as the principles, rules, and laws that guide a
government, an organization, or people. Examples of criminal justice
policies, as described earlier, include three-strikes policies, sex-offender
policies, and mandatory arrest policies. Policy is broad and includes actions
or the adoption of principles, rules, and laws in governments, nonprofits,
quasi-governmental agencies, and the private sector. A more specific type of
policy is public policy. Public policy refers to policy designed and
implemented by governmental agencies specifically. Policy expert Paul
Cairney (n.d.) defines public policy as “the sum total of government
actions, from signals of intent, to the final outcomes.” It too is broad, but it is
limited to policy actions in a government. Given this information about
policy, we can expand our earlier definition of policy-relevant research to be
research that influences the design and implementation of principles, rules,
and laws that guide a government, an organization, or people.
Policy: Principles, rules, and laws that guide a government, an
organization, or people.
Public policy: Policy designed and implemented by governmental
agencies specifically.
When thinking about policy, you may hear a variety of terms such as policies,
procedures, and guidelines. This section offers some insight into what each of
these terms means, although they bleed together. In some ways, they all refer
to policies but with different levels of specificity. As noted, policies are the
principles, rules, and laws that guide a government, an organization, or
people. In general, we think of policies as being broad statements containing
little detail that are formally adopted by the appropriate board or authorizing
group. At times, however, a policy is produced that is so detailed that it gives
almost no discretion to the regulatory agency in promulgating regulations. On
the other hand, policy makers have at other times written legislation and
policies so brief (e.g., a page long) that they leave nearly all of the
nuance and discretion to the agency responsible for the policy. In general,
procedures are more detailed protocols, standard operating procedures, or
the step-by-step processes that should be followed to accomplish the spirit of
the policy. Although policies are formally adopted by a body given the power
to do so, procedures are generally crafted by a different group of individuals.
Finally, a regulation, rule, or guideline offers recommendations about how
to accomplish the step-by-step procedures. Regulations, rules, and guidelines
outline the expected behavior and actions one should take in following the
procedures. Regulations, rules, and guidelines frequently provide examples of
how to deal with specific instances an individual may encounter. Unlike
policies and procedures, rules, regulations, and guidelines are not
compulsory; rather, they are suggestions or best practices.
Figure 13.1 Relationship Between Policy, Procedures, and Guidelines
Procedures: Step-by-step processes or standard operating procedures that should be followed to accomplish the policy.
Regulations: Recommendations about the expected behavior
during the course of following procedures, with examples of how
to deal with specific instances one may encounter.
Rules: Recommendations about the expected behavior during the
course of following procedures, with examples of how to deal
with specific instances one may encounter.
Guidelines: Recommendations about the expected behavior
during the course of following procedures, with examples of how
to deal with specific instances one may encounter.
Who Are Policy Makers?
For your research to influence policy makers, you must know who the policy makers are. Most broadly, policy makers are individuals in a position with
the authority to decide the principles, rules, and laws that guide a
government, organization, or people. For much of Santos’s (Santos & Santos,
2016), Brunson’s (Brunson & Weitzer, 2009), and Dodge’s (Dodge et al.,
2005) research, police chiefs are the policy makers. For much of Melde’s
research (Melde, Taylor, & Esbensen, 2009), policy makers are school
superintendents. And in Cuevas’s (Sabina, Cuevas, & Cotignola-Pickens, 2016) and Zaykowski’s (2014) research, policy makers are generally those at the state and federal levels who can change policies related to victimization. For
example, Cuevas and colleagues’ published research (Sabina et al., 2016)
focused on sexual violence against Latina women was used in
congressional briefing documents. Zaykowski’s continued relationship with
those in the Department of Justice who focus on victimization means her
work (Zaykowski, 2014) will be influential in policy going forward. Many of
our featured authors engage in evaluation research, which by definition is policy relevant. The findings from this work influence programs and policies. Policy is so complex that it cannot be managed by only a handful
of people. This means that policy makers can be found in a multitude of
places. Policy makers exist at the local, state, and federal levels. Policy
makers can be elected officials, bureaucrats, civil servants, or individuals
appointed to important roles in the community. Policy makers are found at
the International Association of Chiefs of Police (IACP), the U.S. Senate,
county commissioner offices, and university presidential suites. Policy
makers may also be individuals who work closely with those just named.
Policy makers can lead agencies in the executive, legislative, and court
branches of government, and they can be found in think tanks, lobbying
groups, professional organizations, or other organizations. Brunson argues
that we all have the potential to be policy makers. Are you a community
leader? Do you work in a place that has influence over others? Are you a
member of a social club or religious organization? A policy maker, Brunson
notes, is just a person who is positioned politically, or socially, to have his or
her directives and recommendations put into practice. That may be you.
Source: Adapted from “Most Americans Still See Crime Up Over Last
Year,” Justin McCarthy, Gallup News, November 21, 2014.
A well-known advocacy group affecting criminal justice policy is the
National Rifle Association (NRA). The NRA was founded in 1871 to
“promote and encourage rifle shooting on a scientific basis” (NRA, n.d., para.
1). Even though the NRA continues to be a force dedicated to firearm
education, it has expanded its influence to include other activities including
lobbying. The NRA began direct lobbying in 1975 with the formation of the
Institute for Legislative Action (ILA). The ILA lobbies policy makers to
implement assorted policies in response to what it perceives as ongoing
attacks on the Second Amendment. Today, the NRA views itself as the oldest
operational civil rights organization, and it is a major political force when it
comes to firearm policy in the United States. For example, the NRA
successfully lobbied policy makers in Congress who then required that “none
of the funds made available for injury prevention and control at the Centers
for Disease Control and Prevention may be used to advocate or promote gun
control” (Luo, 2011, para. 11; NRA-ILA, 2001). This example shows how
some advocacy and interest groups can influence policy by drowning out the
voices of some researchers, and can even prevent research from being conducted that might inform important criminal justice policies.
The NRA successfully lobbied policy makers in Congress who enacted
policies requiring “none of the funds made available for injury prevention
and control at the Centers for Disease Control and Prevention may be used to
advocate or promote gun control.” How might this affect policy on gun
violence in the United States?
© DmyTo/iStockphoto.com
Researchers compete with advocacy and interest groups to influence policy
makers and, ultimately, policy. Although there is competition with these
groups for the attention of policy makers, it is not necessarily the case that
researchers and advocacy and interest groups are at odds with one another. In
fact, many researchers and advocacy groups have interests that align, which
could suggest a collaborative opportunity.
Ideology
Ideology, whether religious, economic, or political, is another powerful
influence on policy makers that can keep policy-relevant research from
affecting policy. Ideology is a set of ideas that creates one’s economic,
political, or social view of the world. Ideology is powerful and can blind
someone to contrary evidence found in research. It can prompt members of the public and other groups to lobby policy makers for desired policies. Ideology can even cause a policy maker to doubt whether research is valuable at all. It is challenging to make your research policy relevant when policy makers themselves do not believe in research or its findings, or do not believe that research can offer valuable policy implications. Consider the role of political ideology
on incarceration policy. Most liberals believe that the criminal justice system
should focus on rehabilitation, which means policies promoting less
incarceration. Conservatives generally opt for a punitive approach requiring
longer, and tougher, prison terms be given to those convicted of crimes. What
is your viewpoint? Should we focus on rehabilitation, or should we focus on
harsher imprisonment? Why do you think that? Is it your ideology, or are
you aware of what research has to say about this topic? When ideology
guides your decision making, it may do so in a way that is contrary to the
findings of a large, rich body of research on the same topic. As a result,
ideology can influence policy in ways that may worsen rather than ameliorate an
important social issue.
Ideology: Set of ideas and ideals that form one’s economic,
political, or social views of the world.
Budget Constraints
Budget constraints can also keep policy-relevant research from
affecting policy. We live in a world with finite financial resources. This
means that even when the best body of research points to a particular policy
that would produce excellent outcomes, it may be too expensive to
implement. Budgets provide information about how much money and other
resources can be spent in any given period. Budgets are important
considerations when it comes to policy design and implementation. Consider
a policy that offers free housing, education, and job training to those
convicted of a crime after they are released from prison in order to greatly
reduce recidivism. Although research may show that this type of intensive
intervention leads to far better outcomes, funding such an approach may be
prohibitive. This is just one example of budgets and limited funds getting in
the way of research findings influencing policy.
Budgets: Provide information about how much money and other
resources can be spent on an item. Policies are subject to budget
constraints.
Research in Action: Type of Attorney and Bail Decisions
Although defendants are entitled to effective assistance of
counsel, research by Williams (2017) suggests appointed counsel
in particular often fail to provide effective assistance, and
negative case outcomes (e.g., conviction, longer sentences) result.
The purpose of this research is to investigate whether the type of
counsel—public defender versus retained—influences bail
decisions. The following three hypotheses were addressed in this
research:
1. There is no relationship between type of counsel and whether
or not defendants are denied bail.
2. Defendants with public defenders are less likely to be
released prior to case outcome than are defendants with
retained counsel.
3. Defendants with public defenders will be assigned higher
bail amounts than will defendants with retained counsel.
To conduct this research, Williams (2017) used the 1990 to 2004
State Court Processing Statistics data set collected by the U.S.
Department of Justice’s Bureau of Justice Statistics. These
secondary data were downloaded from ICPSR and include felony
defendant data from the nation’s 75 most populated counties. For
the purposes of this study, the researcher focused on counties in Florida because that state grants indigent defendants who may be facing incarceration the right to appointed counsel at bail hearings.
Analytic techniques include first describing the data (using
descriptives) followed by a series of regressions to address the
hypotheses. An examination of whether the type of attorney
influences whether bail was denied showed that the odds of bail being denied were 1.8 times higher for defendants with public
defenders compared with those with retained counsel. This
finding does not offer support for Hypothesis 1. The second
regression investigated whether attorney type influences whether
a defendant was released. The findings show that defendants with
public defenders were less likely to be released prior to case
outcome than were defendants with retained counsel. This finding
supports the second hypothesis. And finally, regression output
indicated that defendants with public defenders had lower bail
amounts than did defendants with retained counsel, which does
not support Hypothesis 3.
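The odds ratio reported in the first regression can be understood with simple arithmetic. The sketch below uses an invented 2×2 table (the counts are hypothetical, not Williams's actual data) to show how an odds ratio of 1.8 is computed and read:

```python
# A hypothetical 2x2 table of bail-denial outcomes by attorney type.
# These counts are invented for illustration; they are NOT the actual
# figures from Williams (2017).
public_denied, public_not_denied = 200, 1000        # public defender clients
retained_denied, retained_not_denied = 100, 900     # retained-counsel clients

# Odds of denial within each group: denied divided by not denied
odds_public = public_denied / public_not_denied        # 0.20
odds_retained = retained_denied / retained_not_denied  # about 0.11

# The odds ratio compares the two groups' odds of denial
odds_ratio = odds_public / odds_retained
print(round(odds_ratio, 2))  # 1.8: denial odds 1.8 times higher for public defenders
```

In the actual study the 1.8 figure comes from a regression model rather than a raw table, so it adjusts for other case characteristics, but the interpretation of the odds ratio is the same.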
This research has important policy implications. First, the
difference between appointed and retained counsel is vital in the
earlier stages of a case when decisions are made regarding a
defendant’s fate. Although most attention focuses on case outcomes,
this research highlights the need to be alert to disadvantages
throughout the process. Yet, the news reported here is not all bad.
Even though defendants with public defenders were more likely
to be denied bail and less likely to be released, they also benefited
from lower bail amounts and from nonfinancial release options.
All defendants deserve equal representation regardless of the
stage of the process and the type of attorney representing them.
Williams, M. R. (2017). The effect of attorney type on bail decisions. Criminal Justice Policy Review, 28(1), 3–17.
Maximizing Chances of Producing Policy-Relevant
Research
So far, we have identified all those things that make getting policy-relevant
research to policy makers challenging. This section takes a more positive view
and offers actions that you as a researcher can do to maximize the chance that
your research will gain the attention of the policy maker and ultimately
influence policy.
Plan to Be Policy Relevant From the Start
One way you can maximize the probability that your research will be policy
relevant is by thinking of policy relevance early when the research project is
being designed. A common error that researchers make in regard to
producing policy-relevant research is not considering policy relevance until
the research is complete. A researcher must think about the policy relevance
of his or her research at the earliest stages of planning the research. Waiting until
research is complete may be too late, or at best, it will minimize the chances
that the research will be policy relevant. You as a researcher must understand
existing policy and policy gaps that require research attention to produce
policy-relevant research. You as a researcher must formulate research
questions that are useful to policy makers. You as a researcher must have a
relationship with policy makers before research has begun. In some cases,
including a policy maker on the research team is beneficial for all parties. The
researchers gain a great deal of understanding of what is important to policy
makers. And policy makers as research partners feel some ownership of the
research. This relationship gives the researcher and the research the assistance needed to gain attention and influence the policy process. By
thinking about policy relevance in the planning stages of research, you can
maximize the chances that your work will be useful in the design and
implementation of policy.
Relationship
Another “must do” to maximize the chance of producing policy-relevant
research is to develop and maintain a relationship with policy makers relevant
to your research interests. First, you must learn who the policy makers
relevant to your area of research are. You must learn where they are. Then
you must reach out and make contact with those policy makers. This may
happen on a one-on-one basis or at a networking event. Should you get some
one-on-one time with the policy maker, be prepared to share, in plain
English, what your research is about, how it relates to the policy of interest,
and how your research can guide the policy maker. Offer the policy maker
policy briefs of your work. And make clear that you are available to the
policy maker for his or her future needs. Access to a policy maker can also be made
through his or her associates. Find out who they are as well. Reach out to
them and develop a relationship with them. Include them on research
projects. Be respectful of the policy maker and his or her staff’s time as they
are busy and have many competing issues and voices making demands of
them. Your goal is to let them know how you can help them and make their
lives easier.
Translating Your Research
And third, translate your research and findings to policy makers to maximize
the chances it will be policy relevant. Do not expect policy makers to find
you and your research. It is up to you as a researcher to find them and let
them know that you and your work are available and important to them. One option
is for researchers to submit their research that bridges the academic and
policy/general audience population at the following website:
https://theconversation.com. This website is also used by the media when it is
looking for an expert to speak with on a specific topic. A good example of
an accessible piece of research is found at https://theconversation.com/what-
do-special-educators-need-to-succeed-55559, regarding an education topic.
As a researcher, you must present your results in a way that allows policy
makers to use them to make their own arguments convincing to others.
Writing in an accessible way and providing an accurate, but compelling,
statistic are some ways to accomplish this.
Ideally, you will have an ongoing relationship with policy makers and their
associates. You need to provide them information in an easy-to-use format
that is jargon free to help policy makers and other audiences see how your
research can inform policy. As noted, using policy briefs is ideal. Additional
means to communicate policy-relevant research to policy makers exist. One
is to attend or host a policy forum or, ideally, a series of forums where policy
makers are invited to both attend and to participate on a panel dedicated to a
particular topic. Additionally, you can produce a regularly published
newsletter, or blog, focused on policy issues and related policy findings that
is disseminated to policy makers. For example, the Alaska Justice Forum
offers a large assortment of policy publications that make connecting with
policy makers (and the public) easy (see
https://www.uaa.alaska.edu/academics/college-of-health/departments/justice-
center/alaska-justice-forum/). Providing clear and concise information about
the issue and clear policy recommendations that they can take back and
implement maximizes the chances that your research is valuable to policy makers, who are busy and thrive on easy-to-access, easily digestible policy information.
Common Pitfalls in Producing Policy-Relevant
Research
Several common pitfalls associated with attempts at producing policy-
relevant research have been emphasized in this chapter but bear repeating
here. These include believing all research is policy relevant (it is not), failing
to address policy-relevant questions, and waiting too long to consider the
policy relevance of your research.
Producing Research That Is Not Policy Relevant
Researchers often mistakenly believe that all research they conduct is policy
relevant. It is not. Every piece of research conducted and published is not
useful to policy makers. Cuevas has witnessed some researchers who try to
shoehorn everything they do into policy work when it simply does not fit the
bill. If you as a researcher have that relationship with relevant policy makers,
then they can help in developing research questions that will allow
you to address policy questions.
To know whether your research is policy relevant, consider these questions.
Does your research address a policy need? Which policy? How does it
address the need? Have you as a researcher educated yourself about existing
policies related to your research topic? What policy gaps exist? How does
your research fill those gaps? What new information does your research
offer? If you as a researcher cannot answer these questions, and you cannot
identify the policies your work is related to, then you have work to do before
you can produce policy-relevant research. Although it may be true that your
research may be picked up and used to influence policy, your chances are
better if you design your work with an eye toward existing policies and
whether your research addresses policy gaps.
Failing to Recognize How Your Research Is
Relevant
Policy-relevant research focuses on specific research questions that produce
findings of interest to policy makers. Recall earlier chapters where we
discussed several types of research guided by different questions. We
described exploratory research as useful when little or nothing is known
about a topic. The purpose or goal of exploratory research is to answer
questions such as “What is it?” “How is it done?” or “Where is it?”
Descriptive research is similar to exploratory research, although it is much
more narrowly focused on a topic given knowledge gained from earlier
exploratory research. Descriptive research addresses questions such as “What
is it?” “What are the characteristics of it?” or “What does it look like?” In
contrast, explanatory research provides explanations about a topic to answer
questions such as “Why is it?” “How is it?” “What is the effect of it?” “What
causes it?” or “What predicts it?” Exploratory and descriptive research offers
some insight into what social problems exist. In this way, they can inform the
agenda setting part of the policy cycle. Nevertheless, all the rich descriptive
and exploratory research in the world cannot inform policy implementation
or implications. If your goal is to bring attention to a social problem,
descriptive and exploratory research questions are useful, yet this work
cannot offer insight into implementation and implications.
In earlier chapters, we described explanatory research as useful for
identifying what characteristics are related to a topic, as well as what impacts,
causes, or influences a particular outcome or topic of interest. In addition,
through explanatory research, you can gain understanding about how to
predict outcomes or topics of interest. Explanatory research is ideal for
policy-relevant research because it focuses on more or improved
understanding of complex causation associated with an issue. For example,
explanatory research can provide new ideas about what works and what
doesn’t work regarding a policy. Explanatory research can offer new
information about what works for different people in different circumstances
regarding a policy. Research that influences the design, implementation, and
implications is based on more complex research questions, making
explanatory approaches ideal. Explanatory research can provide an
understanding as to why something is the way it is (which requires an
understanding of what causes it) or what predicts something. This type of
information is useful in the creation of and implementation of a policy.
Failure to Know Relevant Policy Makers
Another common pitfall is to not have a relationship with policy makers. If
you don’t know who policy makers are, you can’t take your findings to them.
To think that policy makers will find your research is fantasy. You must take
the findings to them. This pitfall is related to the fourth pitfall, which is to fail
to produce information about the research for more general audiences.
Handing someone your journal article to read means it won’t be read (try this
at a party and see how it goes). Handing a policy maker a journal article ends
the same way. One must create policy briefs or develop other means of
communication of the research that is easy to access and easy to understand.
Going Beyond Your Data and Findings
And finally, a pitfall of producing policy-relevant research is to go beyond
your data and findings. This is true of all research as well. A researcher must
base his or her findings and policy recommendations on the data gathered.
And a researcher must base his or her policy recommendation on the findings
from those data. A good researcher does not go beyond the evidence and
information he or she has systematically gathered and analyzed to develop
the conclusions and policy recommendations presented. Other influences
such as ideology, intuition, or personal beliefs have no place in a policy
discussion. The policy maker relies on you, as a researcher, to provide
information based on evidence and data. Your expertise is valuable—your
unrelated opinions are not.
Another pitfall is that researchers too often assume that science and evidence
should and can trump politics. Policy decisions are inherently political
decisions, and therefore, the policy makers must consider trade-offs between
values, evidence, economics, and so on. Although this may seem frustrating
and disappointing, it is a part of the process. This should not keep you as a researcher from interfacing with policy makers, even if doing so does not always translate into the outcomes your research points to.
Ethics and Conducting Policy-Relevant Research
Being guided by ethics is a constant in all research we conduct. When one is
producing and sharing policy-relevant research, that does not change. This
section offers some ethical considerations to keep in mind when conducting
policy-relevant research. First, whether in writing or verbally, you as a
researcher must always be clear about the limitations of your research. No
research is perfect, including yours. Policy makers may hope that your
research offers some important information regarding a policy, and it is up to
you to ensure that the policy maker understands exactly what your research
can and cannot be used to support. Never go beyond your data and findings
regardless of the temptation to do so.
© Wavebreak/iStockphoto.com
Also, while you are still in college, attend functions on topics you care about.
Go to discussion panels on topics that you are interested in. Do not just attend
these events, but introduce yourself to the speakers and thank them for their
time and expertise. Introduce yourself as a student who hopes to get a job in
the field you care about when you graduate. You can even buy your own
business cards to hand out so these experts and leaders in the field can
contact you again. People attending these talks, or sitting on the panels of
these talks, are connected. It is true, and no exaggeration, that it is a small
world, so becoming known as an eager, energetic, and absolutely qualified
person can pay off hugely when it comes to starting your career.
Finally, recall the advice that Michael Shivley offered earlier in the book:
“Get all the training you can, and learn everything you can.” Attend
workshops on Microsoft® Excel™. Attend workshops on interviewing.
Attend workshops on résumé writing. Become more of an expert on
programs like IBM® SPSS™ by watching YouTube® videos. You cannot
attend too many workshops, and you cannot be overtrained or overprepared.
You never know when this information and these skills will come in handy.
Anything you learn is guaranteed to come in handy. If you take advantage of
the opportunities the university offers you, and you view your professors as
allies, you will be well situated and have given yourself many advantages
when it is time to look for the first job in your rewarding career.
Career Search Documents
Almost every position you apply for will require you to produce a variety of
documents as part of your job application. It is advisable that you have these
documents complete and polished prior to starting your job search in earnest.
Although they may be ready to go, each application will require tweaks to
those documents. Keep in mind it is easier to tweak a document than it is to
develop a new document when you might be up against a deadline for
submission of an application. Three of the most important job-related
documents you’ll need to provide at some point during the application
process are (1) a cover letter, (2) a professional résumé, and (3) letters of
recommendation.
Cover Letters
When it is time to start looking for a job, and a potential employment
opportunity is identified, one of the first things that an applicant should do is
prepare a job cover letter. A cover letter is formal correspondence, written to
the hiring official, explaining your interest in an employment opportunity
with his or her organization. There are several reasons why job applications
are accompanied by cover letters. Cover letters do the following:
Cover letter: Formal correspondence, written to a hiring official,
explaining a candidate’s interest in an employment opportunity
with his or her organization.
1. Introduce you to the organization and the hiring official
2. Present a clear, concise, and compelling case for why you are the best
person for the advertised job
3. Fill in gaps and elaborate on aspects of your résumé that would
strengthen your application
Although each cover letter is a unique chance to “sell” yourself to the
employer, they all share similar characteristics. Strong cover letters begin
by clearly conveying the job applicant’s contact information followed by the
current date. The hiring official’s contact information, including his or her
title, name of organization/company, and the organization’s address should
also be included. The letter itself begins with a salutation, which should be
personalized and respectful—avoid the all-too-common “To Whom It May
Concern” as most job advertisements show the name and title of the person
the application should be submitted to. You must take the time to ensure the
salutation is correct in terms of both the title and the preferred gender
pronoun: Dr. Hart, Dr. Rennison, Ms. Smith, Mr. Johnson, Ms. Schwartz, and
so on.
Because a cover letter “supports” or “clarifies” other information in your
application, it should be clear and concise. Keep in mind the person hiring (or
committee hiring) has many applications to go through. In some cases, there
are hundreds of applications to go through. To be one of the applications
considered, the body of your cover letter should consist of only two or three
paragraphs. The opening paragraph should be designed to grab the hiring
official’s attention. It should state why you are applying, the position to
which you are applying, and the advertised job announcement number (if
applicable). The second paragraph should promote you as the applicant,
emphasizing your knowledge, skills, and abilities. The final paragraph should
reinforce your “fit” for the job, restating your interest in the position and
referencing your résumé. Each paragraph must be able to stand on its own,
each supporting an application in a unique way.
A cover letter always ends with a complimentary close. In this context, a
complimentary close is a polite ending to the cover letter. In job cover
letters, common closes include Warmest Regards, Regards, Sincerely, and
Best Wishes (for some examples). A quick search of the Internet will produce
numerous high-quality examples of cover letters that you can use
when applying for your next job.
Complimentary close: Polite ending to a letter.
Figure 14.1 shows an example of an excellent cover letter.
What Makes a Great Résumé?
A great résumé
Answers the question, “What do I have to offer the
employer?”
Provides personal contact information
Contains a clear and concise goal statement
Lists current and past employment, demonstrating
employability
Contains details of formal educational achievements,
including the names and addresses of the institutions
awarding the degree(s) and time spent at that institution
Presents key skills that an employer desires
It is no exaggeration to state that cover letters can make or break a job
candidate’s application. The authors of this text have sat on countless job
search committees and have more stories about botched cover letters than you
can imagine. A surprisingly frequent error we see is addressing the letter to
the wrong place—to the wrong city even! It is never useful to say in your
cover letter how you have always dreamed of working in Boulder, Colorado,
when the job is located in Denver, Colorado. They are very different places!
Another common fatal error is addressing the hiring authority using the
wrong pronoun. Is the hiring authority a doctor? If so, address him or her
accordingly. Do not assume all hiring authorities are male either. Do not
assume all women use the title of Mrs. This error is common and a reason to
throw away an application when attention to detail is part of the job
requirements. If these basic things are botched, the hiring committee wonders what else the applicant will foul up on the job when he or she can’t get these simple things correct. Seemingly small errors like this
will get your application a one-way trip to the trash. That is no exaggeration.
Figure 14.1 Cover Letter Example
Résumés
In preparing to go on the job market, one must also craft a résumé. The
following sections explain what a résumé is, why you need one, and what it
should (and shouldn’t) contain. This section concludes with a sample résumé
that can be used as a guide when crafting your own.
What is a résumé? A résumé is a written summary of your educational
record, work history, certifications, memberships, and accomplishments that
often accompanies a job application. Potential employers can use applicants’
résumés to filter out those not meeting the criteria of the job so they can
develop a “short list” of those people who will be interviewed for the
opening. Therefore, it is vital to develop a very good résumé, one that
presents a strong and convincing argument that you are the right person for
that job you are applying for.
Résumé: Written summary of your educational record, work
history, certifications, memberships, and accomplishments that
often accompanies a job application.
Given the importance of a résumé, and how it can make the difference between
getting an interview and not, you should make sure your résumé is designed with a few
important objectives in mind. First, your résumé should provide clear,
straightforward, and easy-to-find answers to a very important question:
“What are the main things that I have to offer which will benefit this
employer and that are relevant for this job?” In short, the résumé should tell
prospective employers everything that might interest them but nothing that
will waste their time. Know your audience!
Second, your résumé should provide your personal details, including your
name and, most important, your contact information (e.g., your home
address, mobile phone number, and e-mail address). It is shocking the
number of résumés that are submitted with no contact information!1 If you
include an e-mail address, make sure that the e-mail address is
appropriate. You wouldn’t want to risk missing an interview opportunity
because your e-mail address suggested you might not be the right person for
the job (e.g., allnightpartymonster@gmail.com; qqqq@yahoo.com;
lazybarfly@hotmail.com; reefermania@ganga.com; xxxxlips@aol.com). In
terms of personal information, be aware that there are some things that you
are legally not required to include on your résumé (or in an interview) such as
your gender, date of birth, nationality, ethnicity, or marital status.
1. There is one caveat regarding contact information, however: Because most
résumés are now uploaded to public spaces, contact information such as a
mailing address and phone number is often left off in those cases, and only
an e-mail address and/or LinkedIn profile URL is provided.
Therefore, consider having a version that can be sent directly to employers
with your full contact information and another that can be placed online with
just the e-mail and LinkedIn URL.
Third, a résumé should contain a clear and concise objective statement. This
is usually a single sentence that describes the role and field in which you
wish to work. For instance, you may want to indicate that you wish to be a
crime analyst in the Lincoln, Nebraska, Police Department (assuming you are
applying to the Lincoln, Nebraska, Police Department). Be careful that this
objective statement isn’t too broad or too vague. The more focused it is on a
specific job, the stronger it will be. If you just tell prospective employers that
you are hardworking and want to work in their field, it doesn’t really add
much value to your résumé. Instead, use this space to give them a specific
understanding of what you are looking to achieve and why you are the best
person to do so.
Fourth, your résumé should include a section presenting your current and past
employment. Documenting all of your paid work, including part-time,
temporary, and internship work, demonstrates your experience and
employability. If you have completed volunteer work or an internship,
include that information as long as it is relevant to your employer’s interests.
In addition, note that it was voluntary or an internship. To pass this off as a
paid position would be unethical. When presenting your employment history,
it is a good idea to present it in reverse chronological order. This means your
employment history should start at the top of the résumé with your present or
most recent employment, going back in time down the page.
Applicants should use all of this information to determine whether they are
interested in and qualified for the job.
Furthermore, all jobs within the U.S. federal government are classified by the
General Schedule (GS), which determines the pay scale for most federal
employees, especially those in professional, technical, administrative, or
clerical positions. The GS pay scale consists of 15 grades, from GS-1, the
lowest level, to GS-15, the highest level (see Table 14.1). Within each grade,
there are 10 steps. The table shows the 2017 GS pay scale for federal
employees (in $1,000 increments). For the Statistician’s position presented
earlier in Figure 14.3, the salary range would be between $43,300 (GS-9,
Step 1) and $68,000 (GS-11, Step 10).
General Schedule (GS): Determines the pay scale for most
federal employees especially those in professional, technical,
administrative, or clerical positions.
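The salary-range lookup described above can be sketched in a few lines of code. This is purely illustrative: the Step 1 (GS-9) and Step 10 (GS-11) figures come from the 2017 values cited in the text, while the other two entries are approximate placeholders; the dictionary stands in for the full 15-grade, 10-step schedule.

```python
# Hypothetical, partial stand-in for the 2017 GS pay table.
# Only grades 9 and 11 appear here; the real table has 15 grades x 10 steps.
gs_table = {
    9:  {1: 43_300, 10: 56_300},   # GS-9: Step 1 per text; Step 10 approximate
    11: {1: 52_300, 10: 68_000},   # GS-11: Step 10 per text; Step 1 approximate
}

def salary_range(low_grade, high_grade):
    """Return (minimum, maximum) salary for a job advertised across a
    span of GS grades: lowest grade at Step 1 to highest grade at Step 10."""
    return gs_table[low_grade][1], gs_table[high_grade][10]

# The Statistician's position advertised at GS-9 through GS-11:
low, high = salary_range(9, 11)
print(f"Salary range: ${low:,} to ${high:,}")
```

This reproduces the range given for the Statistician’s position: $43,300 (GS-9, Step 1) to $68,000 (GS-11, Step 10).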
Pay close attention to the job qualifications listed in the job advertisement.
Use those same words in your application. If they note that you need
proficiency with SPSS, then you need to put in your application that you are
proficient in SPSS (if this is true). Keywords are important and give hiring
authorities a quick and easy way to identify qualified applicants.
Applying to State and Local Positions
Many state and local governments hire employees who are competent in
conducting research or who possess strong research methods and data
analysis skills. State and local agency positions are also posted online.
Applicants can go directly to the state or municipal website to peruse
announcements, or they can use a website like Governmentjobs.com, which
searches government job websites and compiles government job listings in
one place.
Figure 14.4 is an example of a municipal job you can find on the
Governmentjobs.com website. This particular announcement is for a Crime
Analyst position with the City of Round Rock, Texas. Notice that some of
the skills, experience, and training the city is looking for relate specifically to
several of the topics covered in this text as they pertain to criminological
research. The ideal candidate will have the ability to do the following:
Evaluate data
Analyze findings
Write comprehensive reports
Think critically with respect to sources, research methods, and document
creation
Develop a variety of analytical products
Facilitate the transfer of information between local, state, and federal
agencies
Use a variety of software applications and resources—most at an
intermediate level
Working in the Private Sector
As described in the previous section, public-sector employees are individuals
who work for some sort of government agency, at either the federal, the state,
or the municipal level. Private-sector employees, in contrast, are individuals
who work primarily for private-sector businesses or companies. Although
jobs in both the private and public sectors may sometimes require similar
professional skills, there are important distinctions.
Private-sector employees: Individuals who work primarily for
businesses or nonprofit agencies.
Public-sector employers are government agencies; therefore, certain
constitutional rights are afforded to public-sector employees. Often private-
sector employees do not enjoy similar legal protections. On the other hand,
because public employees often hold positions of trust in the society (e.g.,
law enforcement), some constitutional rights (e.g., union activity and speech)
may be limited so that the government agencies may perform their day-to-day
functions. One of the biggest distinctions between private-sector and public-
sector employment, however, relates to job security. Most private-sector
workers are considered to be “at-will” employees. This means that they can
be fired from a business or organization for almost any reason. It is, however,
illegal to be fired because of your race or gender or for exercising
rights provided by statutes (e.g., workers’ compensation or truthfully
testifying in court). This is not the case in the public sector, however.
Government agencies are not allowed to discipline, demote, or fire an
employee unless there is “cause” (e.g., violating rules governing the
workplace, dishonesty, misconduct, or poor performance).
Despite the differences between public- and private-sector jobs, both provide
excellent career opportunities for job seekers. Jobs in both arenas offer job
seekers like you the chance to apply your knowledge, skills, and abilities
related to conducting research in a professional position. Figure 14.5 shows
excerpts from a job advertisement for a Fraud Detection Specialist at Square,
Inc., a mobile payment company based in San Francisco. This particular
position seeks an employee to help identify and protect clients against
financial loss.
Understanding the difference between public and private employment is
important. It is also good to understand what kinds of jobs might be available
in each sector that give employees the opportunity to apply their research
skills. Knowing the best way to find and apply for these positions is equally
important. The next section focuses specifically on where to begin looking
for that dream job, one that will start you down your chosen career path.
Figure 14.4 Crime Analyst Position Advertised on Governmentjobs.com
Figure 14.5 Fraud Detection Specialist Job Announcement
Where to Look for a Career
Before you start looking for a career, there are a few things worth
considering. For example, Featured researcher Chris Melde suggests students
think through why they want a job and be prepared to understand what their
limits are. Understanding what you are (and are not) qualified to do is
important to know before looking for a job. Rachel Boba Santos shares
similar thoughts. She suggests that students make sure that the job they’re
after is one they are qualified to have; students should read job
announcements carefully. But the bottom line is this: To find the perfect job
to start your career, you need to know who is hiring and what jobs they are
looking to fill. As some of the information presented in this chapter has
already implied, a great place to look for answers to these questions is the
Internet. There are myriad ways to search the Web to find jobs, and some
approaches are more effective than others. In this section, we provide some
suggestions for finding the right job for you, quickly and easily.
Online Job Searches
Long gone are the times when we looked for jobs in the classified section of
the newspaper. Most jobs these days are found online. Online job advertising
allows employers to reach a much broader applicant pool than classified
newspaper ads. Employers can use Web-based analytics to determine the
effectiveness of their online ads, and they can develop strategies based on this
information to target specific pools of applicants in more effective ways.
For those looking to be hired, online ads offer real-time information about
who’s hiring and for what kinds of jobs. In addition, online ads often are
linked to application sites that allow people to submit their résumés to the
hiring company’s human resources department immediately (e.g.,
USAJOBS.gov). In short, online job advertising has changed the way
potential employees look for work, and how employers recruit the best
applicants.
Three of the most common job search sites are (1) Indeed.com, (2)
Careerbuilder.com, and (3) Monster.com. The trick to finding your dream job
on these, or on any of the other online job listing sites, according to Alison
Doyle (2016), founder and CEO of CareerToolBelt.com, is to understand and
effectively use keyword searches.
Keyword: In the context of an online job hunt, it is a word or
phrase that is particular to the job sought after and that will
narrow a search to include only those that a candidate is interested
in applying for.
A keyword, in the context of an online job hunt, is a word or phrase that is
particular to the job you’re looking for and that will narrow your search to
include only those that you’re interested in applying for. For example, in the
federal government position provided in Figure 14.3, “1530” is a keyword.
Using keywords can enable you to search online job sites more efficiently
and effectively; here are six tips for using keywords while scouring job
search sites:
Field or industry: Begin by putting in the field or industry you would
like to work in, such as “criminology” or “research” or “GIS” or
“analysis.”
Location: It is up to you how precise you would like to be. You can put
in a state, city, town, or even a zip code.
Desired job title: You can try putting in your desired title (e.g., crime
analyst), but keep in mind that not all companies use the same title, so it
is good practice to search on related or alternative titles a company
might use.
Industry-specific skills, tools, and jargon: As well as searching by job
titles, you can search by the functionality required by a job (e.g., SPSS).
Company names: If you happen to have a dream company that you
would like to work for—or a giant multinational company that you
know has a lot of job openings at any one time—you can search directly
by the company name.
Job type: Narrow search results by putting in full time, part time,
contract, and so on.
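The logic behind these keyword tips can be made concrete with a small sketch of how a job site might match listings against a searcher’s keywords. Everything here is hypothetical: the listing data and the `matches` helper are illustrations, not any real job site’s API.

```python
# Hypothetical job listings, shaped like the fields the tips above cover:
# title, location, skills/jargon in the description, and job type.
listings = [
    {"title": "Crime Analyst", "location": "Lincoln, NE",
     "description": "GIS mapping, SPSS, report writing", "type": "full time"},
    {"title": "Research Assistant", "location": "Denver, CO",
     "description": "survey data entry", "type": "part time"},
]

def matches(listing, keywords):
    """True if every keyword appears somewhere in the listing (case-insensitive)."""
    text = " ".join(listing.values()).lower()
    return all(kw.lower() in text for kw in keywords)

# Combining several tips at once: an industry-specific skill plus a job type.
hits = [job["title"] for job in listings if matches(job, ["SPSS", "full time"])]
print(hits)  # ['Crime Analyst']
```

Note how adding each keyword narrows the results, which is exactly why combining a skill, a location, and a job type finds relevant openings faster than a broad single-word search.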
Online job advertising allows employers to reach a much broader applicant
pool than classified newspaper ads.
© Boogich/iStockphoto.com
In addition to websites specifically designed to disseminate information about
available jobs, social media represents a fantastic way to learn about who’s
hiring. Facebook®, LinkedIn®, and Twitter® are the biggest names in social
media platforms, and they can be used to help identify and land that perfect
job.
When using social media to find and apply for jobs, there are several general
guidelines that you should follow.
© Blackzheep/iStockphoto.com
Social Media
When using social media to find and apply for jobs, there are several general
guidelines that you should follow. One of the first things you will want to do
is to “clean” your Internet image. For example, if you “Google” your name
right now, what sort of information would be returned? If you’ve used
various social media sites to rate former professors or employers, posted less-
than-flattering images of you and your friends, or hosted a website that not
everyone would appreciate, you may want to consider removing anything that
a potential employer might find and use to disqualify you from employment
consideration.
A wide variety of applications are available for searching, finding, and
cleaning/removing unwanted content from the Internet. It is worthwhile
checking for content that may create problems for you as a job applicant and
removing unwanted content before applying for a job. Nevertheless, be aware
that things on the Web really are forever. For example, if you check out the
Wayback Machine (https://archive.org/web/), you can see that pages are
archived forever. Still, cleaning up what you can is better than leaving
everything you are not proud of today posted for your prospective employer
to see. And don’t doubt it, your prospective employer will “Google” you.
That is guaranteed.
It is also a good idea to limit your social media exposure to the platforms that
are important. Facebook, LinkedIn, and Twitter are the platforms most likely
to be checked by a potential employer. Having a social media presence on
other platforms will cost you precious time to maintain and is unlikely to
return a substantial payoff on that investment of your time.
Relatedly, all your social media accounts and e-mails associated with them
should reflect your real name, not a nickname that only your friends know
you by, especially if it is less than flattering or may be offensive to a potential
employer.
Finally, it is a good idea to use your social media accounts to steer potential
employers back to one place—a place where your full story can be told. A
great way to do this is to develop your own professional webpage that
includes content that will help potential employers learn more about what you
can offer their company or organization as an employee. Linking all your
social media accounts to a single website—your professional webpage—
could potentially help get your job application moved to the top of the stack.
In addition to these general ideas about leveraging social media to improve
your chances of getting an interview and landing the job you’re after, there
are other things you can do, specific to particular platforms such as
Facebook, LinkedIn, and Twitter, that can benefit you in your employment
search.
Facebook
According to a recent survey by Jobvite, an IT software and services
company, 83% of people looking for a job and 65% of employment recruiters
said that they use Facebook in their job searches and announcements. Susan
Adams (2012, 2014), a writer for Forbes magazine, suggests four ways to use
Facebook to maximize your chances of finding a great job. The following are
excerpts of her recommendations:
1. Fill out your profile with your professional history. Facebook has an
easy way to add your work credentials to your profile. First, go to your
own page (not the newsfeed page). Then, click on the “about” page and
select “edit profile.” Here you can include information about your work
history and education history. If you want to make yourself known to the
65% of recruiters who troll for job candidates on Facebook, fill out this
information accurately and without typographic errors.
2. Classify your friends. If you go to your list of friends, you will see a
“friends” box next to each friend. Click on that box, and you should see
a roster of lists—the default of which has “close friends,”
“acquaintances,” and “start a new list.” Click on “start a new list,” and
title it “Professional” or “Work.” After that list is created, you should
find all of your friends who you would consider professional contacts
and add them to that list. This way, you can target your work-related
status updates.
3. Post content and respond to other people’s professional postings.
There is value in online contact, so pay attention to your professional
friends’ postings. “Like” their posts and make insightful comments
about the things they share. Users can now access five additional
animated emoji with which to express themselves. Each emotive icon is
named for the reaction it’s meant to convey. “Like” you already know—
say hello to “love,” “haha,” “wow,” “sad,” and “angry.”
4. Find networking connections. Type the company name you are
interested in into the Facebook search bar. Then choose the “People” tab
located under the search bar. Next, from the menu on the left of the
screen, click “+ Choose a Company. . .” from the Filter Results by Work
option. Enter the name of the company in the search window that
appears and click Enter/Return. Results that are produced are a list of
people who work at the company whose name you entered. This is a
networking goldmine.
There is a good chance that you are already using Facebook in some manner
as a job search tool. If you incorporate these tips into your job-hunting
routines, you will better leverage Facebook’s potential.
LinkedIn
LinkedIn was launched in 2003 as a business- and employment-oriented
social networking service. Its parent company is Microsoft, and it currently
has over 100 million active users. Because of its popularity, specifically
among those looking for jobs and among those looking to hire, LinkedIn
should definitely be used when it comes time to look for the perfect place to
work. The popular women’s magazine Marie Claire (2017) recently
provided readers with advice on how to best use LinkedIn to land their dream
jobs. Here is a list of our five favorite recommendations:
© hocus-focus/iStockphoto.com
1. Treat LinkedIn like more than a paper CV. Be sure to include
previous work, videos, or presentations that demonstrate the versatility
of your knowledge, skills, and abilities on your LinkedIn profile.
2. Make it easy for them to find you. Put yourself in an employer’s shoes
and think about how someone is likely to come across your profile.
Consider what recruiters are likely to search, and incorporate this
information into your profile headline.
3. Hand-pick the right skills. Make sure that your LinkedIn profile
reflects the specific skills you possess. If you have “connections” on
LinkedIn, they can endorse these skills. Previous employers can also
reference them in recommendations that they give on your profile.
4. Follow your dream companies. There are over 3 million Company
Pages on LinkedIn, so follow the ones you may want to work for some
day. You will get updates when people leave or join the company, and
you will get notified when that company posts jobs. LinkedIn Company
Pages also show you if any of your contacts know people who work at
those companies.
5. Build your network. It is an unwritten rule that 50 is the minimum
number of contacts needed for a successful LinkedIn profile, but the
more connections you are able to build, the more you will start to show
up in sidebars and searches. Family, friends, colleagues, and peers are all
valuable connections. When requesting to connect, keep it personal
instead of the standard message LinkedIn can send—the personal touch
helps forge a relationship.
If you are just starting to look for jobs that will get you started on your career,
you may not have set up a LinkedIn profile, but like Facebook, LinkedIn’s
reach makes it a powerful tool for finding that perfect job! Get your LinkedIn
page up today.
Twitter
Twitter is another popular social media platform that you’re probably familiar
with and likely use. But have you ever thought about using Twitter to find
that dream job, networking with industry insiders, or building your personal
brand so that potential employers will want to hire you? If not, here are some
tips, offered by Pamela Skillings (2017), co-founder of Big Interview, on
how you might use Twitter to accomplish these goals. Skillings suggests four
easy-to-follow steps for finding “hidden jobs” on Twitter: (1) Start following
people in your field and organizations you would like to work for and their
employees, (2) join the conversation when it is relevant to your industry, (3)
take advantage of hashtags (#), and (4) get help. When it comes to looking
for job leads on Twitter, you do not have to go it alone. There are plenty of
apps available that will help you identify potential employment opportunities.
And there are plenty of websites that will help assist you in finding jobs on
Twitter.
© franckreporter/iStockphoto.com
Skillings (2017) also suggests that one of the keys to using Twitter
effectively to find a job is to become someone worth following. This may
seem easier said than done, but she suggests that your number of followers
will likely start to grow if you routinely share tips and industry-related
questions quickly but thoughtfully. You should also be actively sharing
valuable links and ideas. Whenever possible, engage Twitter peers at industry
events, conventions, or during other networking opportunities in person.
Finally, Skillings says that “spammers” who fill up your feed with
unnecessary links for their services or goods should be avoided as this can
have adverse effects on your followers.
Twitter should also be used to build your personal brand if you’re using it as
a social media platform to find that dream job. Skillings (2017) suggests
using Twitter to get more proactive in your networking efforts. For some, it
could mean creating a blog on a niche area of interest and tweeting about it
directly to users of that blog. This might help to build your credibility within
the industry and solidify your brand with potential employers. Alternatively,
you may want to consider using hashtags to announce that you are available
for work. Some consider it a little self-righteous but not if you have presented
yourself as a professional, made the appropriate contacts, and established
your credibility. If that is the case, it could be the perfect time to use a little
self-promoting hash-tagging to let potential employers know you’re available
(e.g., #HireMe #MBA #Candidate #JobSearching #Hire [insert college
nickname or mascot]). Regardless of whether you feel comfortable
promoting yourself on Twitter to build your personal brand, it is a powerful
social media platform that should not be ignored. It can enhance your efforts
to find that dream job and should be used to position yourself as a promising
candidate to potential employers.
Internships
We introduced internships earlier in the chapter, but they are worth revisiting.
Internships offer one of the best ways to land a full-time job; they benefit
companies and organizations too. One of our featured researchers, Mary
Dodge, recommends students pursue as many internship opportunities as
possible, whenever the chance arises. Whether paid or unpaid, an internship
provides a company with an opportunity to “test drive” potential employees
at a low cost. If the organization also creates an enjoyable environment and
experience for interns, it may also benefit from a positive word-of-mouth
effect. When this occurs, it may result in a larger pool of highly qualified
and motivated applicants for jobs the organization advertises in the future.
Top Twitter Job Search Hashtags
#Hiring or #NowHiring
#Jobs
#Careers
#TweetMyJobs
#JobOpening
#JobListing
#JobPosting
#HR
#GraduateJobs
Tips on Turning an Internship Into a Full-time Job
Choose an internship that allows you to make a meaningful
contribution to the organization.
Act professionally at all times during your internship.
Establish networks while at the organization.
Ask questions that demonstrate your interest in your job and
the organization.
Set goals that let you demonstrate your skills during your
internship.
Volunteer without overextending yourself.
Follow up after your internship is completed.
Source: Smith (2012).
Interviewing Well
Why do employers conduct interviews? The answer is simple: They need to
make sure that the applicant has the skills they’re looking for, and they need
to assess whether the applicant is a good “fit” for their company or
organization. Interviewing is a two-way street. Applicants should
see interviews as an opportunity to determine whether the company or
organization is a good “fit” for them. During the interviewing process, the
applicant must confirm to the employer that they have the knowledge, skill
set, and ability to do the job. This can only be accomplished if the applicant
quickly establishes a good rapport with the interviewer(s). Therefore, going
into a job interview with a clear profile of what the employer is looking for in
a successful candidate is essential for landing the job.
Before the Interview
The job market is changing quickly, and the way in which interviews are
conducted has also changed considerably in recent years. When applying for
a job, applicants should consider what type of interview will be conducted.
The traditional one-on-one, face-to-face interview is popular, but other types
of interviews are also common. Group interviews, interviews conducted over
the telephone or via Skype®, or even interviews that involve large assessment
centers are not uncommon. One of the keys to a successful job interview is
being prepared for any type of interview the employer uses.
Succeeding in a job interview requires considerable preplanning and
preparation. If you got on an elevator with someone, could you quickly and
accurately describe yourself to him or her before reaching your floor? If not,
then you need to develop your “elevator pitch” before going on an interview.
Knowing what skills you possess, what makes that skill set unique, and how a
potential employer views that skill set is a necessary part of preparing for an
interview. But how are you supposed to know whether an employer will find
your skills attractive? The answer is simple: by researching the company.
Researching the company that you’ve applied to is an integral part of any
preinterview preparation process. Go online and visit the company’s
website. Find its Mission Statement, Strategic Statement, Forward-thinking
Objectives, and similar “About Us” information. Figuring out a way to link
your skill sets with a company’s strategic vision during an interview can pay
big dividends! It demonstrates that you are not just looking for a job but that
you are looking to make a contribution to the company or organization.
Succeeding in a job interview requires considerable preplanning and
preparation.
© Antonio_Diaz/iStockphoto.com
Preparing Questions by (and for) the Interviewer
Preparing for a job interview should also involve (a) identifying questions
you will likely be asked and (b) developing a memorable question to ask the
interviewer. There are questions that an interviewer will almost always ask an
interviewee. They might be open questions (e.g., tell me a bit about yourself),
behavioral questions (e.g., tell me about a time when you . . . ), or
hypothetical-based questions (e.g., what would you do in the following
situation . . . ). Some of the other most common ones are
Where do you see yourself in five years?
What is your greatest strength?
What is your greatest weakness?
Why should we hire you?
What are your salary expectations?
Why are you leaving or have left your job?
Why do you want this job?
How do you handle stress and pressure?
If you cannot answer these questions quickly and confidently, then you need
more practice.
During the Interview
Most job interviews begin as soon as the applicant and the employer meet,
before either side says a single word to one another. Therefore, it’s vital to be
on time for a job interview and to be dressed appropriately. A confident
introduction and handshake is also important for a successful interview.
Finally, ensuring you are engaged with the interviewer by maintaining
constant eye contact and confident body language is also a must.
These are some of the little things that an interviewee can do to set
themselves apart (in a good way) from other applicants during the interview
process. If you really want to set yourself apart, however, think of a good
question to ask the interviewer when he or she turns the tables on you and
says, “Is there anything you would like to ask us about this job opportunity?”
Asking the Right Questions
A memorable question could be something like “If I asked the person I’d be
working most closely with what three things he or she likes most about
working for your company and what three things he or she likes least, what
do you think he or she would say?”
This is a good way to turn the typical “What’s your greatest
strength/weakness” question back on the interviewer, and his or her answer to
this sort of question could be eye opening. The bottom line is this: When
given a chance to ask an interviewer questions about the company or the job
for which you’re applying, don’t pass up the opportunity. In fact, ask a
memorable question—one that shows a genuine interest in the job!
On the flip side, certain questions are best not asked during a job interview.
For example, during the job interview, it’s not a good idea to ask what your
salary would be, how many days off you will get, or whether you could
renegotiate your offer if you signed a contract and then received a better job
offer. It’s also not a good idea to ask a question that can be answered by
spending a little time doing some basic research on the Internet. Doing so
will make you appear lazy and high maintenance.
Your Professional Portfolio
Putting together a professional portfolio to present during a job interview is
something all job candidates should consider doing. A professional portfolio
is a carefully crafted set of documents and material that showcases the work
of a professional and is used, for example, during job interviews to provide
evidence of employability and fit with an organization or company.
Obviously, certain documents should always be part of a professional
portfolio, including a résumé and copies of your reference letters. The
following items are also worth considering incorporating into a professional
portfolio:
Introductory Statement: A one-page summary statement providing an
overview of the contents in your portfolio
Statement of Professional Preparation: A brief description of your
preparedness to work in the field
Professional Commitment Essay: A two- to three-page essay focused
on the relevance of your education to the job that you’re applying for
Reflections on Documents: A summary of your major
accomplishments and outcomes that is relevant to the job that you’re
applying for
Appendix: Contains full documents referenced or discussed in other
sections of the portfolio
Professional portfolio: Carefully crafted set of documents and
material that showcases the work of a professional and is used, for
example, during job interviews to provide evidence of
employability and fit with an organization or company.
Whatever you decide to include in your professional portfolio, make sure it is
well organized and presented cleanly and concisely. Both physical portfolios
(i.e., documents contained in a three-ring binder) and electronic portfolios are
acceptable formats.
Remember, it’s also important to get feedback on your portfolio once it’s
created. This means you may want to have an academic advisor or mentor
review your portfolio before heading out to an interview. Also, be prepared to
give your portfolio to a potential employer during an interview, so it’s a good
idea to make multiple copies.
After the Interview
A successful job application may hinge as much on what an applicant does
after a job interview as on what he or she does before or during it. Each
postinterview situation is different, but there are certain things an interviewee
should always consider doing after the interview is over—even if he or she
doesn’t follow through on each one. For example, as the interview ends, the
applicant should ask for the interviewer’s business card before leaving. This
will ensure the applicant has the necessary information to follow up directly
with the hiring agent.
Once the applicant is back at home, it is a good idea to reflect on his or her
performance. Reflecting on an interview can involve asking questions like
“What did I say or do, specifically, that impressed the hiring official?” “Is
there anything that I could have said or done better in response to any of the
questions I was asked?” and “How will I respond to similar questions during
my next interview?” Writing down the answers to these questions can help
improve future job interviews.
In most situations, it is appropriate to follow up the job interview by writing a
thank-you e-mail to the hiring official who conducted the interview. It is
important to make sure the e-mail conveys more than the generic “thank you
for your time” message; include information that reminds the hiring official
of who you are. In other words, make it personal. (If you really want to go a
step further, consider sending a handwritten note in a thank-you card. Few
candidates do this, and it adds to the personal touch.)
Finally, if it’s appropriate, try leveraging social media to reinforce your
interview. For example, you can use LinkedIn to send a connection request to
the hiring manager. (Note, however, that some hiring managers will not
accept connection requests as a matter of policy, but there is no harm in trying!)
Keep in mind, however, that you should not be too quick to follow up after
your interview. If you do, you could come across as too impatient. Also keep
in mind that following up with a hiring official after a job interview may not
always be appropriate. If you have any doubt, check in with a mentor to get
his or her advice.
Professional Interviewing Tips
Each of our featured researchers has his or her own thoughts about what
makes a good interview, and they were willing to share their top three
interviewing tips with us. Here is what they had to say:
Chris Melde: (1) Be prepared and understand the organization you want
to work for (e.g., their history). (2) Have questions ready to ask them (do
your homework). (3) Clean up your social media accounts.
Rachel Boba Santos: (1) Dress appropriately. (2) Look people in the
eye. (3) Bring a portfolio. One other thing: Read and study the job
description. Understand what the job is that you are applying for. Read
news articles. Read an organization’s website. Do research on that
organization and that job!
Carlos Cuevas: (1) Be genuine, and do not try to pass yourself off as
someone or something you’re not. This means being honest about what
you can and cannot do. (2) Be aware of how well you “fit” the
organization; it goes both ways. (3) Learn how to shake hands.
Mary Dodge: (1) Dress appropriately. (2) Remove any unusual jewelry.
(3) Avoid using the word like.
Rod Brunson: If you are looking for a job in academia: (1) Be
informed about the institution and department. (2) Know about what
people do (i.e., the topics they specialize in). (3) Understand where
faculty place their work (i.e., where they are published).
Heather Zaykowski: (1) Do not lie on a résumé. (2) Dress nicely. (3)
Behave appropriately (e.g., have good posture and pretend you want a
job).
Pitfalls and Career Searches
As with any task, there are specific pitfalls to avoid. When it comes to job
searches, avoiding pitfalls can mean the difference between landing a dream
position and being rejected for it. A common pitfall is believing you will
leave the university with a bachelor’s degree—or even a master’s degree—
and land a six-figure management position in an organization. This does not
happen. All people, regardless of their educational attainment, must work
their way up through the ranks. An education can speed that process, but it is
unrealistic to believe that a BA in criminal justice will make you the manager
of all crime analysts at an agency.
In any role, you have to pay your dues.
A second and related pitfall is entering a job believing you have nothing left
to learn. A university education provides a solid foundation of information
that gives you an advantage. Nevertheless, you will find in any role you take
that there is much to be learned. Be open to learning it. By being open and
eager to learn even more, you will position yourself for future opportunities.
A pitfall that happens too often has to do with honesty. When filling out
application materials, writing your résumé, or in an interview, you must be
truthful. Do not claim your internship was full-time employment. Do not
claim to have experience that you do not. Do not claim you have a master’s
degree if you do not. Especially in the field of criminal justice and
criminology, dishonesty is absolutely unacceptable. If you are viewed as
dishonest, you will not be considered a viable candidate.
This was discussed earlier in the chapter but deserves repeating: Clean up
your social media. Those considering hiring you will look at your social
media. What will they see? A foul-mouthed party animal? Someone with
bigoted views? A misogynist? Make sure that when people look at all of your
social media, they see a professional, respectful, and mature person. Finally,
a pitfall we wish we did not have to highlight relates to being a jerk. Do not
be a jerk. No one wants to hire a jerk. No one wants to work with a jerk. Be a
kind and respectful person—the sort of person you want to work with.
Ethics and Career Searches
Applied Assignments
1. Homework Applied Assignment:
Creating a Job Résumé
Create a job résumé, or update your existing résumé, based on the
guidance we’ve provided you within this text. Pay particular
attention to the information we presented about “What Makes a
Great Résumé?” in the Résumés section. Be sure your new
résumé answers the question, “What do I have to offer the
employer?”; provides personal contact information; contains a
clear and concise goal statement; lists current and past
employment, demonstrating employability; contains details of
formal educational achievements, including the names and
addresses of the institutions awarding the degree(s) and time spent
at that institution; and presents key skills that an employer desires.
Be prepared to discuss your résumé in class.
2. Group Work in Class Applied
Assignment: Interviewing for a Job
Form a group with other students in the class so that your group
has an even number of students. Then divide the group in half.
Half of the group will act as a job-hiring official, and the other
half of the group will act as job applicants. Both the applicants
and the officials should prepare three questions for a mock job.
The questions can be open-ended questions (e.g., tell me a bit about
yourself), behavioral questions (e.g., tell me about a time when
you . . . ), or hypothetical questions (e.g., what would you do in the
following situation . . . ).
Have the hiring official ask the job hunters their questions and
then vice versa. Then, switch roles. Each of you develop three
NEW questions, and repeat the Q&A process. After all students
have had an opportunity to ask (and answer) questions as both a
hiring official and a job candidate, discuss with your partner what
you liked/didn’t like about your responses and how this exercise
could help you prepare for your next job interview. Prepare to
share your experiences with the class.
3. Internet Applied Assignment: Locating a
Job Using an Online Search Tool
Visit any online job search website and search for a type of job
that you may be interested in pursuing after you graduate. Go to
two other job search websites and find two other (different) jobs.
In a reflective narrative, compare and contrast the knowledge,
skills, and abilities that the three jobs have in common and those
that are unique to each. Next, identify the professional skills that
these jobs desire and that you currently possess or are developing
as part of your degree program. Also identify those professional
skills that these jobs desire but that you lack. Finally, explain
ways that you could overcome these deficiencies so that they are
skills you have developed when it comes time to enter the job
market. Be prepared to discuss your paper in class.
Key Words and Concepts
Complimentary close 453
Cover letter 452
General Schedule (GS) 463
Keyword 467
Letter of recommendation 458
National Association of Colleges and Employers (NACE) 475
Private-sector employees 464
Professional portfolio 473
Professional skill set 449
Public-sector employees 461
Referee 456
Résumé 454
Skill 449
Key Points
The professional set of skills associated with the ability to conduct
research in criminology and criminal justice is attractive to employers.
Jobs in both the public and private sectors need employees who can
think logically about social problems, apply scientifically rigorous
approaches to knowledge discovery, and communicate empirically
based evidence to a broad array of stakeholders.
Online job search tools are a quick and easy way to find out who’s
hiring and what jobs they’re looking to fill. Three of the most common
job search sites are (1) Indeed.com, (2) CareerBuilder.com, and (3)
Monster.com.
Social media platforms, such as Facebook, LinkedIn, and Twitter, can
be leveraged to help you find your dream job.
Internships often lead to full-time employment; they offer both the
intern and the organization the opportunity to “test drive” each other
before making a more permanent commitment.
Three of the most important job-related documents you’ll need to
provide at some point during the application process are (1) a cover
letter, (2) a professional résumé, and (3) letters of recommendation.
A job interview is a crucial part of landing a good job. Interviews give
employers an opportunity to make sure that applicants have the skills
they’re looking for and to assess whether applicants are a good “fit” for
their organizations. Applicants should see interviews as an opportunity to
determine whether the company or organization is a good “fit” for them
as well.
Review Questions
1. Why are research methods skills beneficial to you?
2. What are the similarities and differences between public- and
private-sector jobs?
3. What are some types of public-sector jobs that would require an
employee to use some of the professional skills associated with
conducting research?
4. What are some types of private-sector jobs that would require an
employee to use some of the professional skills associated with
conducting research?
5. What are the names of some of the most common job search sites?
6. How can some of the most common social media platforms (e.g.,
Facebook, LinkedIn, and Twitter) be used to identify and apply
for jobs?
7. Why do internships offer a great opportunity to gain full-time
employment with the company an intern works for?
8. What is a job résumé, and what are some of the things it should
(and shouldn’t) include?
9. What is a letter of recommendation, and who are some of the
people you should ask to be your referee?
10. What are some of the “things to remember” before, during, and
after a job interview?
11. What are some of the common pitfalls that should be avoided
when searching for your dream job?