Collaborative Problem Solving:
Considerations for the National Assessment of Educational Progress
Defining Collaborative Problem Solving
The term “collaboration” has different meanings in different environments. In K-12 education, collaboration usually refers to a task that any individual in the group could solve alone, with collaboration serving as an instructional strategy to make learning more efficient or effective. In the world of work (industry, military), “collaborative” usually means a group task that no single member of the group can solve alone.
Collaborative problem solving involves two different constructs—collaboration and problem solving. The
assumption is that collaboration on a group task is essential because some problem-solving tasks are too complex
for an individual to work through alone, or because the solution can be improved by the joint capacities of a team. People
vary in the information, expertise, and experiences that they can bring to bear in order to jointly solve a particular
problem. More specifically, collaborative problem solving requires that people share their resources and their
strategies in order to reach a common goal through some form of communication. The collaborating group may work either face-to-face or virtually; in both cases, technology is often used to facilitate collaborative problem solving.
Generally, collaborative problem solving has two main areas: the collaborative (e.g., communication or social
aspects) and the knowledge or cognitive aspects (e.g., domain-specific problem-solving strategies). These two
areas are often referred to as “teamwork” and “taskwork.” The primary distinction between individual problem
solving and collaborative problem solving is the social component in the context of a group task. This is composed
of processes such as the need for communication, the exchange of ideas, and shared identification of the problem
and its elements.
Competency is assessed by how well the individual interacts with agents during the course of problem solving. This
includes achieving a shared understanding of the goals and activities, as well as efforts to pool resources and solve
the problem.
Within the PISA framework, three competencies form the core of the collaboration dimension: establishing and
maintaining shared understanding, taking appropriate action to solve the problem, and establishing and
maintaining group organization. The framework also identifies four problem-solving processes: exploring and
understanding, representing and formulating, planning and executing, and monitoring and reflecting.
ATC21S has also developed a framework for assessing collaborative problem solving. This framework, like that of
PISA, identifies dimensions of collaboration: participation, perspective-taking, and social regulation. Problem-
solving skills include task regulation skills and knowledge-building and learning skills. The main distinction between
the two frameworks is that ATC21S has an integrated approach, in which the distinctions between collaboration
and problem solving are melded. For the purposes of defining collaborative problem solving, research must
determine which of these approaches produces the appropriate level of granularity and accuracy in assessment.
Other research on cognition has identified some additional competencies that could be measured in an
assessment of collaborative problem solving. These include group communication, “macrocognition” in teams,
collaborative learning, and cognitive readiness.
Assessment Development
The experience of ATC21S and PISA suggests that assessment task development for collaborative problem solving
differs little from standard test development. The major departure is in conceptualizing what constitutes a test
item and how students will respond to it. The ATC21S assessment involves human-to-human interaction, while the
PISA assessment involves human-to-agent interaction. The choice of approach has strong implications for many
other design issues. For instance, what will the task environment look like? What will be the group composition?
What collaboration medium should be used? How can the scoring be implemented?
The human-to-human assessment approach is embedded in a less standardized assessment environment and
offers a high level of face validity. A student collaborates with other students, whose behavior is difficult to
control. Moreover, the success of one student depends on the behavior of the others, as well as the stimuli and
reactions that they offer. This has implications for scoring. How can the open conversation and large variety of
stimuli be identified and utilized for scoring?
In the human-to-agent approach, the assessment environment is more standardized. The behavior of the computer agents must be preprogrammed, so the response alternatives available to the student need to be limited to some extent, and every possible student response must be mapped to a specific reaction by the computer agents or to an event in the problem scenario. This approach ensures that the situation is fairly standardized and, furthermore, enables comprehensive scoring techniques, since every possible turn in the collaboration is predefined. Nevertheless, the human-to-agent approach has shortcomings, including its artificial appearance and conceptual questions such as whether the CPS measured in this way can represent real-life collaborative problem solving.
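The idea of preprogrammed agent behavior can be sketched as a simple lookup table: every anticipated student move in a given scenario state is mapped in advance to an agent reaction, a next state, and a score credit. The scenario, states, and responses below are purely hypothetical illustrations, not taken from any actual PISA or ATC21S task.

```python
# Minimal sketch (hypothetical scenario) of a preprogrammed computer agent.
# Each (state, student response) pair is mapped in advance to an agent
# reaction, the next scenario state, and a score credit, so every possible
# turn in the collaboration is predefined and directly scorable.

AGENT_SCRIPT = {
    # (scenario state, student response): (agent reply, next state, score)
    ("start", "ask_goal"): ("Our goal is to balance the load.", "goal_known", 1),
    ("start", "act_alone"): ("Wait - should we agree on a plan first?", "start", 0),
    ("goal_known", "propose_plan"): ("Good idea, let's split the tasks.", "planning", 1),
    ("goal_known", "act_alone"): ("We agreed to coordinate, remember?", "goal_known", 0),
}

def run_turn(state, student_response):
    """Return the agent's scripted reaction; unscripted input falls back safely."""
    reply, next_state, score = AGENT_SCRIPT.get(
        (state, student_response),
        ("Can you rephrase that?", state, 0),  # fallback keeps the scenario standardized
    )
    return reply, next_state, score

# Because every turn is predefined, scoring is simply the sum of credited moves.
state, total = "start", 0
for response in ["ask_goal", "propose_plan"]:
    _, state, score = run_turn(state, response)
    total += score
```

The fallback branch illustrates why the student's response alternatives must be limited: any input outside the script collapses to a neutral agent reaction, keeping the interaction, and therefore the scoring, fully controlled.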
The Need for Collaborative Problem-Solving Assessments
There is an increased interest in collaboration and teamwork in the workforce, higher education, and K-12
education. The Assessment and Teaching of 21st Century Skills (ATC21S) project included collaboration among the
most important skills necessary for a successful career. While there is widespread agreement in the field of
education that collaboration is an important skill, there is less agreement on how to build an assessment to
measure it.
In 2015, the Organisation for Economic Co-operation and Development (OECD) published its draft framework, which
included a strong rationale for the inclusion of CPS, calling collaborative problem solving “a critical and necessary
skill across educational settings and in the workforce.” This plan by the OECD to add CPS to its 2015 PISA test has
important implications for U.S. assessments, including the National Assessment of Educational Progress (NAEP).
NAEP, which is the largest nationally representative assessment that is administered regularly over time,
measures what America’s students know and can do in various subject areas. A broad range of audiences use the
assessment results, including policymakers, educators, and the general public. Subject areas range from
traditional curricular subjects, such as mathematics, to non-curricular topics, such as technology and engineering
literacy. Each NAEP assessment is built around an organizing conceptual framework. Assessments must remain
flexible to mirror changes in educational objectives and curricula; hence, the frameworks must be forward-
looking and responsive, balancing current teaching practices with research findings. The most recent NAEP
framework created was the Technology and Engineering Literacy (TEL) framework, which was introduced in 2014.
Because of the growing importance of collaborative problem-solving skills in the educational landscape, the
National Center for Education Statistics (NCES) decided that NAEP should investigate state-of-the-art CPS research
and assessment before deciding whether an assessment of CPS should be added to NAEP. Therefore, NCES
assembled a broad array of individuals to develop this white paper, with the goal of fully conceptualizing the
assessment of CPS skills as it currently exists, to inform not only NCES, but also the broader field of researchers and
policymakers interested in CPS. This paper represents the culmination of a process that began with the NAEP
Symposium on Assessing CPS Skills in September 2014.
In this white paper, the CPS assessment is assumed to be part of the NAEP context in terms of the data collection
design, instruments, and sampling. That is, it is assumed that a NAEP CPS test would be a group-score assessment
as opposed to an individual-score test (Mazzeo, Lazer, & Zieky, 2006); not tied to a particular curriculum;
permitting the use of item pools, rather than fixed test forms; and administered in a matrix-sampling design (i.e.,
a student only takes a subset of the item pool). It is also assumed that summary scores (e.g., average scores,
percentage achieving a certain level of proficiency) would be reported at the national, state, and/or jurisdiction
level but not at the individual student level, and that scores would be disaggregated by major reporting
subgroups, including gender (male-female), race/ethnicity, socioeconomic status levels (e.g., full, partial, or not
eligible for the National School Lunch Program), English language learner (ELL) status, and students with
disabilities (SD) status. It is also assumed that the assessment would be administered at the fourth-, eighth-, and
twelfth-grade levels. As with all NAEP assessments, the assumption is that (a) the resulting CPS scores should be
accompanied by statements about the precision of the measurements, and (b) scores are valid and fair for all
major test-taker groups (see AERA, APA, & NCME Standards, 2014; Thissen & Wainer, 2001).
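The matrix-sampling design mentioned above can be illustrated with a small sketch. The numbers and item names below are hypothetical, not NAEP's actual design: the point is only that each student takes a small random subset of the item pool, yet the pool as a whole is covered across students, which supports group-level score estimates without any individual taking the full test.

```python
# Illustrative sketch (hypothetical numbers, not NAEP's actual design) of
# matrix sampling: each student is assigned a small subset of the item pool,
# but across the sample every item is administered to some students.

import random

def matrix_sample(item_pool, n_students, items_per_student, seed=0):
    """Assign each student a random subset of the item pool."""
    rng = random.Random(seed)
    return [rng.sample(item_pool, items_per_student) for _ in range(n_students)]

items = [f"item_{i:02d}" for i in range(30)]      # hypothetical 30-item pool
booklets = matrix_sample(items, n_students=200, items_per_student=6)

# Each student sees only 6 items; collectively the sample covers the pool,
# so summary statistics can be estimated at the group level.
covered = set().union(*map(set, booklets))
```

In operational designs, items are typically grouped into fixed blocks that are rotated across booklets according to a balanced scheme rather than sampled independently per student, but the coverage principle is the same.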
The rest of this introductory chapter is organized into two sections. The first section discusses the value of CPS as a
21st century skill, its importance to education and workforce readiness, and the need for CPS assessments. The
second section briefly reviews the history of the research around collaboration and collaborative problem solving.
Conclusion
NAEP is a unique resource that has played a prominent and influential role as a respected and high-profile monitor
of student achievement and as a pioneer in state-of-the-art assessment techniques.
This dual role has led NAEP to exert a strong influence on policy and practice.
Because of NAEP’s prominence, stakeholders need to exert caution in developing and implementing a new type of
assessment. An assessment of collaborative problem solving would introduce a number of new elements to NAEP,
which can pose challenges to interpretation. These include the use of multiple content areas (and possibly content-
neutral problems), the use of student groups in test administrations, and the use of process variables. NCES and
the Governing Board should be very clear about the information the assessment results provide, as well as its
limitations.
Despite these challenges, an assessment of collaborative problem solving would represent a bold step that could
strengthen NAEP’s role as an assessment leader. As a so-called “21st century skill,” collaborative problem solving is
considered vital to students’ future in the workplace. Policymakers and practitioners need to know how well
students can demonstrate that skill, and NAEP is in an ideal position to provide that information.