The results section is structured by the main deductive categories used to analyse the 107 included records. First, we describe the dataset, including the types of records, publication venues, publication years, geographical distribution, and the emerging technologies addressed (Section
4.1). Next, we discuss the target groups mentioned in the records, as well as the role of teachers and other actors involved in the preparation and/or facilitation of learning activities (Section
4.2). Then, we move on to discussing the learning objectives – both high- and low-level – and the ways in which learning objectives are integrated into existing or new curricula, if at all (Section
4.3). Following this, we dive into the theoretical learning frameworks and practices reported in the records. Here we also give an overview of preferred pedagogical strategies (Section
4.4). From there, we move on to discussing technology and other tools developed or used by researchers to support educational practices (Section
4.5). Finally, we look at how these practices and tools were evaluated, and if and how students’ learning was assessed (Section
4.6). Note that we grouped AI with ML, and AR with VR, in presenting the findings. In addition, we distinguished between the three groups of emerging technologies only where the analysis of the data revealed clear differences in the ways in which these technologies are taught in K–12 education.
4.1 Description of the Dataset
This section describes the dataset, starting with the distribution of records across publication year and type of emerging technology. As Figure
4 clearly shows, interest in teaching emerging technologies such as AI, ML, IoT, AR, and VR in K–12 education has increased. Of the 107 records included in the review, 57 primarily focus on AI/ML, 28 on IoT, and 27 on AR/VR. Only five articles focus on more than one of these technologies: three combine IoT with AR/VR and two combine IoT with AI/ML. The vast majority of articles (82 out of 107) were published in the last three years: 27 records in 2020, 36 in 2019, and 19 in 2018. For records focusing on teaching AI and ML (57 in total) in particular, there has been a sharp increase in the past two years, with 18 records published in 2020 and 20 in 2019.
The 107 records were published in 78 different venues, including 58 conferences, 17 journals, and three books. Within this broad range of venues, the following groups can be distinguished: HCI and IxD venues (13 venues for 27 records), learning sciences (four venues for 26 records), computing and/or engineering education (19 venues for 26 records), learning technologies or EdTech (16 venues for 20 records), computing and/or engineering (14 venues for 14 records), intelligent systems (10 venues for 12 records), and miscellaneous (two venues for two records). Put differently, 24 venues have a predominantly technical focus, 23 venues focus on (technology) education, 16 on educational technology, and 13 on HCI and IxD.
Among individual venues, Interaction Design & Children Conference (IDC) is the most represented (9 records), followed by CHI (Conference on Human Factors in Computing Systems) (6 records), and Innovation and Technology in Computer Science Education Conference (ITiCSE) (4 records). Runners-up are Frontiers in Education Conference (FIE), Global Engineering Education Conference (EDUCON), International Conference on Teaching, Assessment, and Learning for Engineering (TALE), European Conference on Technology Enhanced Learning (ECTEL), and AAAI (Conference on Artificial Intelligence) (three records each), and the International Journal of Child–Computer Interaction, Conference on Informatics in Schools: Situation, Evolution, and Perspectives (ISSEP), Symposium on Educational Advances in Artificial Intelligence (EAAI), and T4E (International Conference on Technology for Education) (two records each). All other venues are represented once only. With regard to the publication type, the selection includes 56 (full) papers, 16 journal articles, three book chapters, and 32 non-archival publications (e.g., magazine articles, works in progress, demos).
With regard to geographical distribution (i.e., the country in which the first author's home institution is located), 30 countries are represented in the selection. The United States is represented by approximately one-fifth of the records (21 in total), followed by Spain (11 records) and Greece (9 records). China, Germany, and India each have six records in the selection, Finland and Israel five, Portugal and the UK four, and Brazil and Italy three. The remaining countries are represented by either one or two records. As for distribution across continents, Europe is represented by half of the records (53 in total), Asia by 25, North America by 23, South America by 4, Australia by 2, and Africa by none (see Figure
5). Note that we included only English-language records in this review (see Section
3 for more details about inclusion and exclusion criteria).
In sum, the vast majority of records included for review were published in the last three years, and the number of records focusing on AI and ML in particular has increased sharply. Looking at the venues in which these records were published, we observe a large diversity of technical, educational, and HCI/IxD-oriented conferences and journals. Less diversity was found in the geographical distribution of the first authors' institutions, which shows a clear overrepresentation of Western countries.
4.2 Target Group and Roles for Teachers and Other Actors
In this section, we look at the primary target groups for the learning activities, and at any other actors playing an active role in the development (i.e., backstage work) and facilitation of these activities, most notably researchers and teachers, as presented in the 107 reviewed records.
The most common target group for learning activities teaching about emerging technologies is students in secondary education, in particular, grades 8 to 10 (ages 13–16) (e.g., [
57,
81,
163]). Lower primary education, and especially preschool, are less represented (e.g., [
70,
75,
158]) (see Figure
6). Of the 107 included records, eight use generic terms such as “K12” or “secondary education,” and 17 do not mention a specific grade or age range. All other records provide detailed information about the target group, which often covers several grades. Very few records focus on specific groups of students, such as girls or other underrepresented groups in computing education [
79]. On the other hand, gender balance seems to be an explicit aim in several studies when recruiting participants (e.g., [
28,
54,
120,
163]).
Of the 107 included records, 65 report original empirical work, with a mean of 34 students and a median of 25 students per study. Thus, most studies seem to aim for “depth” with fewer students rather than “breadth” with many respondents. Notable exceptions include one study with 150 students [
129] and another that impacted over 3,000 students by rolling out a STEM and IoT programme in 18 schools [
79]. In contrast, two studies involved as few as three students [
44,
58]. Note that “a study” here may also refer to a combination of multiple studies (e.g., a presurvey, a pilot study, and a main study) reported in a single article.
Teachers are frequently referred to as important actors in learning activities, yet their roles, and the ways in which they collaborate with researchers, are specified in fewer than one-third of all records (33 in total). Although researchers take the lead in preparing learning activities (i.e., the backstage work), in 12 records teachers are actively involved in this stage, either by co-creating learning content and activities with researchers or simply by providing feedback on initial work prepared by researchers (e.g., [
78,
97,
115,
134]). The co-creation option gives teachers the greatest impact on, and ownership of, the outcomes of this preparation stage. Teachers are also involved in the next stage, that is, facilitating learning activities in formal and non-formal learning settings. In 14 records, teachers either co-facilitate the activities with the leading researchers (e.g., [
123,
126,
167]) or facilitate the activities alone while researchers attend the sessions passively, for instance, by taking notes and collecting research data (e.g., [
14,
46,
58]).
Only in a few cases (five records) are teachers actively involved both in backstage work to prepare an intervention and in facilitating the resulting learning activities. Charlton and Avramides [
23], for instance, actively involved teachers in constructing knowledge and experimenting with ideas on how an IoT system could be used for collaborative, problem-based, and multidisciplinary STEM education. Heinze et al. [
65], in turn, report on a three-year-long collaboration between an AI researcher and two local teachers to develop a K–6 AI curriculum as part of the Scientists-in-Schools program in Australia. The learning objectives, content, and activities were developed collaboratively and tried out by the teachers across subjects and in multiple iterations.
In just two records, teachers do not partake in backstage work or facilitation of interventions, but instead participate in in-service and professional development programmes set up by researchers [
79,
148]. The aim of these programmes is to train teachers to integrate emerging technologies in K–12 education, something that is deemed important for scaling and sustaining research-led initiatives on technology education. Along these lines, Vazhayil et al. [
148] developed a course for teachers on how to introduce AI to middle and high school students. Thirty-four teachers with different educational backgrounds and levels of experience participated in the course. They learned theoretical concepts of AI and the various stages of an AI project cycle, and afterwards applied this knowledge in CS subjects in their respective schools.
Besides K–12 students (i.e., the target group) and teachers, only a few records (10 in total) mention active involvement by other actors such as parents [
146], additional researchers and/or industry partners [
149], education experts [
151] and policymakers [
162]. The role of these actors is diverse, ranging from the co-facilitation of learning activities to providing input for the development of these activities.
In sum, the primary target group for learning activities on emerging technologies is students in secondary education, in particular grades 8 to 10. Very few records specifically address very young or underrepresented target groups. On average, 34 students participate in a single study, suggesting that researchers prefer “depth” with fewer respondents over “breadth” with many. Including a broader range of students, from the early years through secondary education, and complementing small cohorts with larger ones would offer great potential in terms of outreach and diversity. As for teachers, we found their role in learning activities for emerging technologies to be fairly limited: fewer than a third of all records involved teachers in the development, facilitation, or evaluation of learning activities. A further factor limiting long-term impact was the scarcity of in-service and professional development programmes, which are important for scaling and sustaining research-led initiatives. These findings are consistent across all three types of emerging technologies.
4.3 Learning Outcomes and (Curricular) Implementation
This section zooms in on the learning objectives associated with emerging technologies, the implementation of those objectives, and additional overlapping and interconnected aspects. In the analysis, we first looked for information about the higher-order objective for teaching emerging technologies in K–12 education. Next, we looked at the different types of concrete learning objectives: these formed a disparate list, ranging from technical understanding and skills and design skills to societal and ethical implications, attitudes, and other objectives, including the development of STEM and transversal skills. We then analysed whether learning objectives were integrated with existing or new curricula, their level of detail or abstraction, whether they account for a range of progression levels, and, lastly, the extent to which prior knowledge and skills are required to participate in the learning activities targeting these objectives.
A higher-order objective underpinning the authors’ belief that children should be taught about emerging technologies – AI, ML, AR, VR, and IoT in particular – is identified in fewer than half of the records (48 out of 107). In the remaining records, such an objective is either missing or not explicitly communicated. For those that do mention a higher-order objective, a distinction can be made between records foregrounding a career perspective (20 records), often in STEM disciplines (e.g., [
3,
60,
63]), and those that advocate a broad literacy perspective (28 records), most often with regard to AI and ML (e.g., [
37,
72,
80,
88]).
In the literacy perspective, education about emerging technologies is considered relevant for all children, regardless of future career trajectories, just like the core subjects of maths, reading, and writing. In this perspective, developing a critical understanding of emerging technologies and the operational skills to work with them will in the near future be a precondition for full participation in society. A good example is provided by Druga et al. [
42], whose aim is to develop children's AI literacy through physical tinkering and learning activities using smart toys and agents. The authors argue that it is important for children in contemporary society to understand how machines perceive and model the world as they grow up with these technologies. Tissenbaum et al. [
133], in turn, argue that providing low-barrier means for all types of students to design and implement IoT solutions to problems that have personal relevance to them can help them develop computational identities or “the ability to create meaningful change using computing and recognising one's place in the large computing community,” as well as their sense of digital empowerment or “opportunities to put that identity into action.” These notions of identity and empowerment clearly point to a broad literacy perspective, even if the term itself is not used [
133].
In addition to higher-order goals, we assessed whether learning activities were integrated in or across existing curricula, or presented as new curricula altogether. We found that almost half of the records (48 out of 107) lack such integration and present standalone learning activities (see Figure
7). This means that researchers are conducting activities in formal or non-formal learning contexts, during or after regular school hours, but without integrating the objectives they are introducing with established ones (e.g., [
17,
73,
82,
86]).
Where learning activities are not standalone, curricular integration is most common in STEM (19 records, of which ten focus on IoT: e.g., [
96,
119]) and
computer science (
CS) education (9 records: e.g., [
75,
149]) (see Figure
7). Along these lines, Ota et al. [
105] combined general STEM objectives, especially in relation to mathematics, with IoT-specific objectives. They developed a 15-hour STEM course with IoT learning materials and modules in which high school students create prototypes that solve real-world problems of their own choosing. Through this process, students learn how to collect and analyse sensor data using probability and statistics (i.e., mathematics).
An alternative approach to integrating learning activities in a single school subject is a cross-curricular approach, pushing against traditional subject boundaries (10 records, of which seven focus on AI/ML: e.g., [
23,
142]) (see Figure
7). A good but rare example of a cross-curricular approach is provided by Chow [
26], who engaged high school students in a seven-month-long project that ran across different subjects. Students collaborated in small groups to create a VR model of their school campus. This required them to learn a range of skills, many of which were not covered in their regular school subjects. Among other things, students learned advanced programming, 3D modelling, project management, literature synthesis, heritage science as applied to the history of their school campus, and graphic design.
A further approach, taken by six records, is to propose a dedicated curriculum, covering one or multiple grades, for introducing emerging technologies in K–12 education (e.g., [
115,
162]) (see Figure
7). All six of these records focus on AI/ML. Gong et al. [
59], for instance, present an “AI education system” for primary and high school students including different cognitive and practice-based objectives, hardware and software tools, and concrete cases that can be used in a modular way depending on students’ level and grade.
With regard to the degree of detail of the learning objectives, just over half of the records (57 in total) provide only high-level descriptions with little operational detail. In contrast, 12 records provide very detailed learning objectives (e.g., [
48,
85]). Only a few records (9 in total) suggest progressive learning objectives, such as specifying objectives per grade for a dedicated technology curriculum (e.g., [
33,
65]). Often cited in this regard are Touretzky et al. [
137], who present five big ideas about AI and detail what students in different grades should be able to do and know in relation to each of them.
Although providing low barriers to entry seems to be the norm for learning activities about emerging technologies, in one-fifth of the reviewed records (22 in total) a certain degree of prior skills and knowledge is required to participate in the activities. Knowledge of fundamental concepts in computing, robotics or AI is most often required (e.g., [
15,
155]), followed by experience with Scratch or other (block-based) programming languages (e.g., [
17,
28]), followed in turn by having met the objectives of prior grades or modules (e.g., [
75,
137]), and knowledge of mathematics in order to better understand ML and other algorithms (e.g., [
104]).
Regarding the types of learning objectives, a distinction can be made between (1) technology understanding and skills, (2) ethical and societal technology implications, (3) design skills, (4) attitudes and mindsets, and (5) other forms of knowledge and skills (see Figure
8). These types of objectives appear in varying combinations in the reviewed records and will be discussed in more detail in the remainder of this section.
Technology understanding and/or skills are mentioned as learning objectives in all but three records (i.e., [
95,
97,
120]) (see Table
3). The term “understanding” (or knowledge) refers to familiarity with factual information and theoretical concepts, whereas the term “skills” refers to the ability to perform a certain task or role and requires the application of knowledge to specific situations. Technology understanding is formulated both in terms of fundamental CS concepts (e.g., [
70,
82]) and in terms of the way emerging technologies such as AI/ML (e.g., [
111,
165]), AR/VR (e.g., [
26]) and IoT (e.g., [
93,
123]) work and what their application domains are. Technology skills, in turn, are diverse. They include block-based and more advanced programming (e.g., [
25,
27,
46]), building, training and testing ML models (e.g., [
88,
166]), prototyping IoT applications, including electronics (e.g., [
56,
99]) and creating AR and VR environments (e.g., [
26,
78]). Developing these skills often goes hand in hand with deepening the understanding and knowledge of emerging technologies – and vice versa. Moreover, many researchers do not make a clear distinction between technology “understanding” or “knowledge” on the one hand and the development of practical technical “skills” on the other, approaching these two objectives as one and the same. An example can be found in Williams et al. [
158], who introduced the AI concepts of knowledge-based systems, supervised machine learning, and generative AI to young children (ages four to six) by letting them explore and tinker with real examples using the block-based programming platform PopBots.
Remarkably, half of the records that focus on AR and VR (14 out of 27) use the technology as an instructional aid in teaching programming skills rather than as a learning objective in itself. That is, the objective in these records is not to develop students’ understanding of how AR and VR work and what their functional strengths and limitations are (e.g., [
27,
34,
87]). These records were nevertheless included in the review because their overall objective is technology education, which aligns with the inclusion criteria.
About one-quarter of the records (28 in total) include the ethical and societal implications of emerging technologies as learning objectives. Most of these (24 out of 28) focus on AI/ML. However, this focus on ethical and societal implications is often merely an afterthought, with most attention given to developing students’ technology understanding and skills. A few records refer very broadly to ethical and societal implications without further specification [
159]. Others focus on concrete issues such as bias [
35], fairness [
18], data privacy [
129], security [
33], accuracy [
134], accountability [
4], transparency (e.g., [
48]), and how the technology should or should not be used [
85]. Complex issues related to power and democracy are rarely touched upon. One of the few records that foreground ethics is that of Bilstrup et al. [
9], who engaged high school students in the design of a supervised ML application that addresses a real-world need in their lives, while simultaneously exploring the ethical implications of the proposed design. During the process, students became aware of how difficult it is to avoid ethical issues, even with the best intentions.
Almost one-quarter of the articles (25 in total) combine technology understanding and/or skills with one or more design skills as learning objectives. “Ideation” is the design skill that is most often mentioned (e.g., [
54,
120]), followed by “designing IoT applications” (e.g., [
5,
133]), “presenting and providing arguments for design concepts” (e.g., [
9,
93]), and “design and creative thinking” (e.g., [
17,
123]). “Iterative testing of design concepts” (e.g., [
105]) and “game design” (e.g., [
82]) are mentioned a few times. Not all records that refer to design skills as learning objectives attribute the same importance to them. Notably, design skills always co-occur with other learning objectives, especially technology understanding and/or skills. In many cases, practising design skills and technical skills goes hand in hand, with both contributing to students’ technology understanding. A good example is provided by Toivonen et al. [
134], who developed a two-week program to introduce students to the core theoretical concepts of ML (i.e., training set, prediction accuracy, class label) as well as the practical skills involved in training an ML model to solve a predefined problem. The program includes ideation, user-interface design, iterative testing, and pitching design concepts [
134].
Attitudes and mindsets are referred to in almost one-quarter of the records (24 in total) as learning objectives in the context of emerging technologies. A recurrent objective in this regard is for students to develop a positive attitude towards, and interest in, STEM (10 records) (e.g., [
57,
141]), AI (e.g., [
159]), or digital technology more broadly (e.g., [
117]) (8 records). Other, less frequently mentioned attitudes include “active and engaged citizenship” (e.g., [
133]), “resourcefulness” (e.g., [
148]), and “confidence in one's own abilities” (e.g., [
73]). Just as with the presentation of ethical and societal implications and design skills, these attitudes are presented in conjunction with technology-related learning objectives.
This is also the case for other learning objectives – for instance, in relation to STEM subjects such as physics and maths (e.g., [
51,
119]), or transversal “twenty-first century” skills applicable in a wide variety of contexts including collaboration (e.g., [
23,
104]), critical thinking and reflection (e.g., [
22,
142]) and problem-solving (e.g., [
5,
151]). Roughly half of the records (52 in total) refer to learning objectives like these in addition to technology understanding and/or skills. A good example is an educational scenario developed by Spyropoulou et al. [
129]. The scenario aims to familiarise students aged 14–16 with IoT technologies through a cross-curricular STEM approach, with learning objectives related to science (i.e., ultrasonic physics), technology (i.e., Arduino programming and the use of sensors), engineering (i.e., developing and testing IoT applications), and maths (i.e., volume and distance calculation).
In sum, initial steps have been taken towards defining learning objectives with regard to emerging technologies, sometimes in the form of dedicated curricula or cross-curricular approaches. Although learning objectives tend to lack operational detail and multiple progression levels, it is encouraging that technology-specific objectives are often combined with a range of other objectives. This is the case for all three emerging technologies, although clear differences could be discerned in terms of emphasis (see Figure
8 and Table
3). Objectives include the ethical and societal implications of emerging technologies, design thinking, positive changes in student attitudes, a broad range of twenty-first-century or transversal skills such as critical thinking and collaboration, and STEM objectives beyond technology, such as maths and physics. Despite this combination of different types of objectives, however, a humanities perspective foregrounding the implications and design aspects of emerging technologies is generally either lacking or treated as an afterthought in the learning activities. More holistic approaches need to be developed that balance technology-specific objectives with a humanities perspective to promote more comprehensive understandings and skillsets. Relatedly, the question of why students need to attain these learning objectives must be addressed explicitly by researchers; in more than half of the records, this is currently not the case. A similar proportion of the records present standalone activities that are not, or only insufficiently, integrated into school environments and curricula. These findings are indicative of a rather narrow research focus on emerging technologies in K–12 education.
4.4 Educational Frameworks and Practices
This section looks into the theoretical learning frameworks and educational practices adopted or proposed for introducing emerging technologies to K–12 students. Educational practices include the format and duration of suggested activities (within a larger setting of formal, non-formal, or informal learning), as well as the pedagogical strategies used to attain the learning objectives explicitly or implicitly referred to in the reviewed records.
The majority of records do not present a solid theoretical learning framework for introducing emerging technologies. Records that integrate learning theory (24 in total, of which 15 relate to AI/ML and nine to IoT) rely on one or more of the following theories: constructionism (e.g., [
25,
158]), socio-constructivism [
97], actor-network theory [
51], participatory and collaborative learning theories [
5], situated and experiential learning [
93], universal design for learning [
111], and design-oriented pedagogy [
147].
Of these articles, the majority (13 in total, of which ten relate to AI/ML and three to IoT) rely on constructionism as the underlying theoretical framework, but typically without providing much explanation, using it as a synonym for hands-on activities or learning-by-making (e.g., [
73]). In short, constructionism holds that learning happens most effectively when children engage in making tangible objects in the real world, thereby creating mental models to understand the world around them and using what they already know to acquire more knowledge [
107,
108]. Constructionism advocates student-centred discovery learning, whereby students make connections between different ideas and areas of knowledge, facilitated by the teacher through coaching rather than lectures or step-by-step guidance [
108].
Among the reviewed records, Dhariwal and Dhariwal [
35] illustrate clearly how they implemented constructionism by scaffolding an open-ended creative learning process with a custom-developed extension for Scratch. Students aged 14 to 17 used the tool called “Let's Chance” to create their own data, models, and possibilities, allowing them to explore powerful ideas of probabilistic thinking and modelling, which in turn helped them to understand how AI technologies make predictions based on training data [
35]. Another example is provided by Kandlhofer et al. [
75], who state that they developed teaching modules to introduce AI and ML to high school students largely based on principles of constructionism, comprising a wide range of hands-on activities, tools, and platforms as well as different pedagogical strategies including project-based teamwork, storytelling, and peer tutoring [
75].
Regarding the format and duration of the suggested learning activities, about one-quarter of all records (24 in total) report on a single workshop or session of one or a few hours (e.g., [
43,
44,
167]). A similar number (28 in total) report on a limited number of short sessions, typically three to six sessions of a few hours each (e.g., [
78,
92,
118]). Notable exceptions include a small number of records (nine in total) reporting interventions that ran on a regular basis for several months (e.g., [
58,
63]) up to a whole year, for instance in the form of a dedicated AI course (e.g., [
65,
115,
126]). Three additional records propose AI and ML curricula covering multiple grades, but without having conducted any activities yet [
85,
137,
162]. For the remaining records, the format and duration were unclear or not relevant.
Not surprisingly, formal education in primary or high schools is the predominant context for learning activities about emerging technologies (66 records) (e.g., [
42,
116,
118]), followed by non-formal education (17 records) such as after-school robotics clubs [
29] or workshops [
67], science fairs [
3], makerspaces [
112], and summer schools [
155]. Only a few records (five in total) target both formal and non-formal education (e.g., [
18,
50]). Two records focus on informal education in a home context, facilitated by parents (e.g., [
122,
146]). As discussed in the section on learning objectives, however, the preference for the formal education context does not necessarily mean that the learning activities conceived by researchers are integrated in existing school curricula. Often, researchers move rapidly in and out of schools to iteratively evaluate learning activities and tools and to collect empirical data. A strength, on the other hand, is that most studies are conceived as interventions in the real world and cover a variety of learning contexts. These findings are consistent across all three emerging technologies under consideration.
Finally, we look into the pedagogical strategies used or suggested by researchers. We use the term pedagogical strategies to refer to the ways in which learning content and materials are created and presented to learners – in this case, K–12 students. Detailed information about pedagogical strategies was found in approximately one-fifth of the records (20 in total, of which 12 focus on AI/ML, seven on IoT, and one on AR/VR) (e.g., [
60,
79,
147]). One-third (34 in total) disclose no information about such strategies, or the code was not applicable; however, some information about pedagogical strategies could be derived indirectly, for instance, by looking at whether learning activities were hands-on or collaborative.
Based on this explicit and implicit information in the reviewed records, 15 distinct pedagogical strategies could be discerned (see Table
4), of which three are predominant: active and engaged teaching, as opposed to passive listening (e.g., [
52,
100,
125]), small group work and peer learning (e.g., [
1,
68,
88]), and technology-mediated teaching by letting students use, modify, and/or construct technology artefacts. This last strategy includes block-based and more advanced programming activities (e.g., [
34,
87]), developing IoT applications (e.g., [
56,
99]), building or modifying and testing ML models (e.g., [
88,
166]), and creating AR games [
78] and VR environments [
26].
Among other pedagogical strategies used in varying combinations are: low-entry barriers to student participation in activities (e.g., [
48,
117]), inquiry- and project-based approaches in which students engage with open or wicked problems (e.g., [
56,
60]), design-process driven activities (e.g., [
109,
133]), creative exploration and tinkering with learning materials and content (e.g., [
35,
158]), reflective practices (e.g., [
9,
120]), an emphasis on authenticity and closeness by structuring activities around real-world and personally meaningful challenges (e.g., [
79,
133]), self-guided or student-centred learning (e.g., [
26,
71]), knowledge-driven approaches introducing new concepts with (short) lectures, often complemented later with hands-on activities to apply this new knowledge (e.g., [
119,
149]), unplugged activities in which the use of digital technology is deliberately avoided (e.g., [
48,
81,
103]), embodied learning or using one's body via actions and gestures to create new knowledge (e.g., [
126,
167]), cross-disciplinary perspectives that are not confined to traditional subject boundaries (e.g., [
43,
142]), and modular and adaptive activities (e.g., [
59,
65]) (see Table
4 for an overview). Although these pedagogical strategies are used across all three types of emerging technologies, no distinct patterns could be discerned. The reviewed records also provide little information about the suitability of these strategies for particular age cohorts, although, generally speaking, less emphasis is placed on cognition (see strategies “knowledge driven” and “reflective practice”) and self-efficacy (see “self-guided” and “inquiry or project based” learning) when younger pupils in lower primary school are targeted.
An excellent example in which multiple pedagogical strategies are integrated is provided by Byrne et al. [
17]. The authors frame their research on teaching IoT to high school students within a social constructivist framework: more specifically, the Bridge 21 model for collaborative, student-centred, technology-mediated, hands-on, and project-based learning, which aims at supporting an innovative, twenty-first-century learning environment within schools. With these pedagogical strategies in mind, they developed a four-day hackathon event in which students expand their domain and technical knowledge, investigate a design challenge and IoT possibilities, come up with ideas, develop a working prototype, realise a digital media campaign to promote their concept, and, finally, pitch and critically reflect on their work. In this approach, the development of technical and twenty-first-century skills such as problem solving, communication, and teamwork is integrated [
17]. Rattadilok et al. [
111], in turn, proposed an active and engaged approach to introduce basic ML concepts to students with little interest in technology. To motivate students, they used the pedagogical strategy of “closeness”, taking a mobile game well known and popular among students, Clash of Clans, as an object of study and experimentation. Students collaborated in small teams to develop game strategies on a worksheet, and fed these to a game bot called “iGaME” (In Class Gamified ML Environment) that used the input a predefined number of times, after which it created an output file about its success rate. Students iteratively improved their strategies and competed with other teams to find out who had developed the most successful strategy. The session concluded with a class discussion on lessons learned [
111].
In sum, the format and duration of learning activities typically range from a single workshop to a few sessions of one or more hours. The preferred setting for these activities is formal education, followed by non-formal education, including after-school robotics clubs, science fairs, makerspaces, and summer schools. To scaffold students’ learning in these settings, a wide range of pedagogical strategies is suggested in the literature, among which three are predominant: active and engaged learning, collaboration, and technology-mediated teaching by letting students use, modify, or construct technology artefacts. These are complemented with a range of other pedagogical strategies in varying combinations, such as learner-centred and inquiry-based teaching, authenticity and closeness, tinkering, and reflective practice. These findings are more or less consistent across all three types of emerging technologies. Remarkably, only one-fifth of the records – none of which focus on AR/VR – connect these strategies to the existing corpus of learning theory, and often in an ad hoc manner. This lack of clear and well-developed theoretical trajectories hampers the advancement of this new research field.
4.5 Technology (and Other) Tools for Learning
This section looks into technology and other tools developed or used by researchers to support learning activities about emerging technologies. Here we distinguish between unplugged tools, tools that incorporate emerging technologies, and any other digital technology tools. The section first looks into tools used to teach or introduce AI and ML, followed by IoT, and finally AR and VR.
Only 16 records do not present or refer to new or existing technology tools – no surprise, as the topic of interest is technology education. Of the 57 records that focus on AI or ML education, 42 present technology tools, often in the form of ML-powered tools specifically designed to teach ML and referred to in this section as “ML tools.” What these ML tools have in common is that they allow students to develop, train, and test models, albeit with different types of input data, including images (e.g., [
88,
122,
155]), sounds (e.g., [
86]), and gestures (e.g., [
67]). A good example of a custom-developed ML tool is the iOS application AlpacaML, which facilitates the construction and use of ML gesture models based on data from wearable sensors [
166,
167]. With this tool, students create, improve, and test models of their own sports-related gestures and get real-time feedback (e.g., about a bad or good soccer pass). By using students’ sport expertise and identity as a point of departure, the authors hope to foster curiosity about ML [
166].
Another characteristic of ML tools is that they expose or glass-box some ML concepts or processes while black-boxing others, often to lower the barrier for novice learners. This is, for instance, the case with the gesture-based supervised ML tool developed by Hitron et al. [
66]. The ML tool aims at familiarising students with data-labelling aspects (i.e., sample size, sample versatility, and negative examples) and model evaluation, while deliberately black-boxing other aspects such as feature extraction, model selection, and validation to reduce students’ cognitive load [
66].
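The glass-box/black-box balance described above can be illustrated with a few lines of code. The following toy example is our own sketch, not taken from any reviewed record or tool: the model internals stay hidden behind `train()` and `predict()` (black-boxed), while the labelling decisions that Hitron et al. foreground – here, whether the learner supplies negative examples – remain fully visible.

```python
# Illustrative sketch: a trivial nearest-centroid classifier on a
# single 1-D feature (e.g., duration of a gesture in arbitrary units).

def train(samples):
    """Black-boxed 'model': stores the per-label mean of the feature."""
    centroids = {}
    for label, value in samples:
        centroids.setdefault(label, []).append(value)
    return {label: sum(vs) / len(vs) for label, vs in centroids.items()}

def predict(model, value):
    """Black-boxed inference: pick the label with the closest centroid."""
    return min(model, key=lambda label: abs(model[label] - value))

# Glass-boxed part: the learner curates the labelled samples.
positives = [("wave", 8.0), ("wave", 9.0)]
negatives = [("other", 2.0), ("other", 3.0)]

# Without negative examples, everything is classified as "wave",
# because no competing class exists.
model_no_neg = train(positives)
print(predict(model_no_neg, 2.5))   # "wave"

# Adding negative examples lets the model separate the two classes.
model = train(positives + negatives)
print(predict(model, 2.5))          # "other"
print(predict(model, 8.5))          # "wave"
```

The point of the sketch is that a learner can observe how labelling choices change predictions without ever seeing feature extraction or model selection, which mirrors the deliberate reduction of cognitive load described above.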
ML tools differ in their degree of complexity in operation and, related to this, the possibilities they offer to explore and learn about ML. A distinction can be made between ML tools using
graphical user interfaces (
GUIs), low-code or block-based coding environments, and more advanced programming environments. GUIs provide the lowest entry level, as they require no coding to train, test, and execute predefined ML models (e.g., [
111,
117,
147]). An example of an easy-to-use GUI is
Google Teachable Machine (
GTM). Vartiainen et al. [
146] used GTM to introduce young children (ages 3–6) with no programming experience to ML. GTM is a web-based ML system powered by classification algorithms such as convolutional neural networks. It allows people to quickly train their own ML models, without programming, using images, gestures, and sounds as input modalities. Children used GTM to create models for three different facial expressions, then explored the relationship between input (facial expressions) and output (sounds and images) [
146].
ML tools that adopt block-based programming environments are more challenging to operate, but offer more possibilities in return. These ML tools utilise a visual drag-and-drop learning environment whereby students use coding instruction “blocks” to develop simple ML applications. Most often used are extensions for Scratch (e.g., [
1,
35,
113,
157]), Snap! [
73], and App Inventor [
112]. García et al. [
52], for instance, developed an educational resource to teach ML in schools with ML4K (Machine Learning for Kids), a web platform for children to build ML models that can be exported to Scratch or App Inventor to develop ML-powered applications [
52].
In a few records, students use more advanced programming environments such as Python, C++, and/or Java to develop ML-powered applications (e.g., [
43,
88]) or autonomous robots and vehicles (e.g., [
58,
68]). Since these tools are technically complex, prior knowledge of computing is usually required. To extend the possibilities, block-based and more advanced programming environments are sometimes combined with general ML platforms and open-source libraries such as ExpliClas [
4], WatsonAI [
148], TensorFlow [
100], and AzureML Studio [
159].
In addition to ML tools, a range of complementary digital tools are used to support K–12 students’ learning about AI and ML, including Lego WeDo 2.0 [
158], Microsoft Excel [
46], and video communication and cloud computing tools [
68]. A few reports (six in total) deliberately use unplugged tools to provide alternative pathways to develop an interest in and understanding of AI and ML. These studies often target students with little motivation for or expertise in technology-related subjects. Examples include a card-based design game to introduce AI ethics [
9], a man-machine simulation game [
103], and a Turing roleplaying activity [
81].
Of the 28 records that focus on IoT education, 27 present or refer to the use of technology tools. A common characteristic of these tools is that they combine existing and custom-made components and modules, often in the form of open-source IoT toolkits. IoT toolkits usually include a range of tangible components, in contrast to ML tools, which are often primarily web-based. Typical components are microcontrollers, programmable sensors and actuators, software, and coding environments, including brands such as Arduino, Udoo, Raspberry Pi, ThingsBoard, Micro:bit, CloudBits, Bee-Bots, Lego Mindstorms NXT, and Cubelets (e.g., [
5,
25,
33,
56,
99]). Important to note is that some authors use the term IoT interchangeably with ‘physical computing,’ thereby disregarding the cloud component (e.g., [
54]).
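The distinction matters in practice: physical computing covers local sensing and actuation, whereas IoT adds a cloud leg. The following minimal sketch is our own illustration – the sensor and the cloud platform are simulated stand-ins, not real hardware drivers or platform APIs – and simply marks where that extra component sits.

```python
# Illustrative sketch: local sensing/actuation (physical computing)
# versus additionally publishing telemetry to a remote store (IoT).
import random

random.seed(0)  # deterministic demo values

def read_temperature():
    """Stand-in for a real sensor driver (e.g., on a Raspberry Pi)."""
    return round(random.uniform(18.0, 30.0), 1)

class FakeCloud:
    """Stand-in for an IoT platform such as ThingsBoard."""
    def __init__(self):
        self.telemetry = []

    def publish(self, reading):
        self.telemetry.append(reading)

cloud = FakeCloud()
for _ in range(3):
    value = read_temperature()
    if value > 25.0:            # local actuation: physical computing stops here
        print("fan on")
    cloud.publish(value)        # the cloud component is what makes this IoT

print(len(cloud.telemetry))     # 3 readings published
```

Dropping the `cloud.publish()` line would leave a perfectly valid physical-computing exercise, which is exactly the conflation some of the reviewed records make.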
Educational Lab Kit is an example of such an IoT toolkit [
98]. It is an open-source platform that combines widely available and custom-made, Arduino- or Raspberry Pi-compatible hardware electronics with web-based tools, gamification elements, and activity guides. With the toolkit's sensing devices and cloud infrastructure, students can collect real-time data from their school buildings and use it in maker-like STEM activities in a context of energy awareness and sustainability [
98]. Another example is the UMI-Sci-Ed toolkit that consists of an Udoo-Edu hardware kit with an accompanying online programming environment and different educational scenarios [
57]. The Udoo-Edu hardware kit is packed in a suitcase and includes an Udoo Neo Board with USB, Ethernet, and Wi-Fi, a processor with 1 GB of RAM, and multiple sensors and actuators. The educational scenarios provide teachers with all the necessary components to facilitate students in exploring and developing IoT applications, including design challenges (e.g., smart recycling), learning objectives, activities, and materials [
60].
Overall, IoT toolkits require prior knowledge of electronics and programming to operate, for both students and facilitators. Substantial efforts have nevertheless been made to lower the barriers to entry for novice learners, not least through the development of easy-to-use GUIs and software platforms to program IoT components (e.g., [
77,
150]). Setiawan et al. [
124], for instance, developed a visual mobile programming tool for IoT applications with Raspberry Pi, which students with no programming knowledge or skills should be able to use. Along the same lines, Vakaloudis et al. [
141] introduced a new software platform that enables teachers with no experience to integrate IoT in STEM education, and Rizzo et al. [
112] developed an extension for App Inventor (UAPPI) that transforms it into a GUI to program physical objects. Arora et al. [
5], in turn, developed a physical and digital toolkit called DIO that consists of custom-made dome-shaped modules with an embedded assortment of input and output functionalities. The modules can be easily programmed through an AR-based application leveraging 3D identification patterns present on each module. This way, DIO enables children to develop multiuser wearables and environmental interaction designs [
5].
Only one record relies exclusively on unplugged learning tools for IoT education. The Tiles IoT Inventor Toolkit, developed by Mavroudi et al. [
93], enables children to generate ideas for IoT ecologies within a specific domain and without the use of technology. In other records, IoT toolkits are sometimes combined with unplugged tools, such as educational scenarios and/or activity guides (e.g., [
56,
98,
129]), but these tools merely have a supporting role.
All 27 records focusing on AR and VR present technology tools: five of these are VR tools, 22 AR tools. AR and VR tools are typically custom-made with technologies such as Unity 3D, Vuforia Engine, TopCode, EasyAR, and Google ARCore. They can be grouped into tools that aim at developing students’ computational thinking and programming skills (18 records) and authoring tools to create AR and VR (game) environments (seven records).
Characteristic of the largest group, here referred to as AR and VR programming tools, is that these tools do not scaffold learning about AR and VR, but rather, use AR and VR as instructional aids to provide real-time visual feedback during programming and related tasks (e.g., [
34,
44]). A good example is the mobile visual programming environment for the Thymio II robot, which runs on Android and iOS [
87]. Students use the environment to solve increasingly challenging tasks, while learning robot programming and event handling. Another example is the AR game CodeCubes, which combines physical programming with simultaneous visualisation to promote an interactive and engaging learning experience [
28,
29]. Similar approaches are used for tools such as AR maze [
70], CodAR [
125], CodeBits [
61], HyperCubes [
50], and ARQuest [
53].
The second group, authoring tools, enables students to build their own AR games (e.g., [
109]) or VR environments (e.g., [
63]). Examples of AR authoring tools include TaleBlazer, an AR platform to make geolocation games [
109], and the interactive storytelling platform ARIS, which consists of a web-based editor and a client-based app to develop AR games with physical objects in a specific location [
82]. Examples of VR authoring tools include a low-cost VR driving simulator with a graphical programming interface [
63], among others (e.g., [
26,
91,
92,
95]).
In sum, a wealth of digital learning tools to introduce emerging technologies to K–12 students has been developed. ML tools are typically designed to glass-box some aspects of the technology while black-boxing others, and they differ in degree of complexity and possibilities. GUIs and block-based coding environments are most often used to enable students to develop, train, and test simple ML models, but in a few cases, students engage in more advanced programming to develop ML-powered applications. IoT toolkits are often open-source and combine existing and custom-made components and modules to enable students to design and develop IoT solutions. As with ML tools, IoT toolkits do not glass-box all processes, in order to manage complexity. Although IoT toolkits require some prior knowledge of programming and electronics, easy-to-use GUIs and software platforms have been developed over the years to make programming IoT components more accessible. AR and VR tools, finally, are predominantly programming tools that do not necessarily enable students to learn about AR and VR, complemented by a small group of authoring tools to create AR and VR (game) environments. The development of learning tools for AR and VR, their characteristics, and their implications provide opportunities for future research. Another interesting line of research, found in only a few records, is the use of unplugged tools to engage students with little interest and few prior skills in digital technology in learning activities.
4.6 Empirical Evaluation and Student Assessment
This last results section looks into the ways in which technology tools and teaching approaches are empirically evaluated, whether and how students’ learning is assessed formatively and/or summatively, and the degree of constructive alignment between learning objectives, activities, and student assessment. This section furthermore provides insight into the methods used to collect empirical evidence and gives examples of typical findings reported in the included records.
Of the 107 reviewed records, 65 present original empirical data: 30 focus on AI/ML, 18 on IoT, and 17 on AR/VR. The majority of these are studies evaluating learning activities (55 records) and tools (50 records) (see Figure
9). Activities and tools are often evaluated in tandem (38 records), and this evaluation constitutes the main focus of the record (e.g., [
52,
96,
100]). This is especially the case with records that introduce tools for learning about emerging technologies. Examples include Glaroudis et al. [
57], who evaluated a new open-source learning environment deployed in an inquiry-based and collaborative learning approach to introduce IoT, as well as Lindner et al. [
81], who evaluated whether unplugged tools and hands-on activities are suited for teaching AI in a comprehensive way to high school students.
From an in-depth inspection of the quality criteria used to evaluate technology tools, three main criteria surface: usability (e.g., [
25,
34,
124]), students’ perception of tools (e.g., [
29,
42]), and the ways in which they interact with said tools (e.g., [
50,
146]). However, less studied are the particular aspects and mechanisms of technology tools (e.g., balancing the glass- and black-boxing of features) and how they contribute to students’ learning.
For learning activities, the quality criteria are more diverse. They include students’ engagement (e.g., [
54,
74]) and collaboration with team members (e.g., [
23,
53]), students’ learning experiences, for instance in relation to factors such as “satisfaction” and “enjoyment” (e.g., [
56,
166]) or with regard to how they perceive the activities and content (e.g., [
15,
88]), and finally, possible shifts in students’ interests and attitudes that can be attributed to the activities (e.g., [
60,
126]). These quality criteria for evaluating learning activities are sometimes combined in a single study, as in Schneider et al. [
123], who used observation notes to study students’ engagement during the activities and a post-questionnaire to collect information about their experiences. This resulted in nuanced findings about students’ learning process, the perceived workload, and how the activities unfolded [
123].
This example furthermore shows how different qualitative data collection methods are combined to evaluate learning activities and tools. The most frequently used techniques are open and closed questionnaires (e.g., [
117,
118]), semi-structured exit interviews (e.g., [
23,
28]), and participant observation (e.g., [
53,
115]). Mixed-methods studies, in which qualitative and quantitative data are triangulated, are rare (e.g., [
129]).
From the 65 records presenting original empirical data, slightly more than half (36 records) include some form of assessment of students’ learning (e.g., [
15,
67,
105]) (see Figure
9). Assessment of students’ learning is never the sole focus, and always takes place in conjunction with an evaluation of learning activities and/or tools. Furthermore, assessment is typically summative in nature, which means that students’ learning is evaluated at the end of an instructional unit by comparing it against a standard or benchmark (e.g., with a pre- and post-test). Summative assessment serves the purpose of accountability, ranking, or certifying competence [
10]. This contrasts with formative assessment, which monitors students’ development during the learning process and primarily aims at promoting students’ learning [
10]. Surprisingly, though, no records could be identified that focus on formative assessment or, related to that, feedback and feed-forward practices to improve students’ learning. This does not mean that such practices did not occur during the learning activities, either spontaneously or deliberately, but formative assessment was not an explicit research objective in these studies.
In the reviewed records, summative assessment is directed towards students’ knowledge and understanding of technology concepts and processes (e.g., [
5,
67,
74]), and to a lesser extent, students’ skill development, including technical skills (e.g., [
34,
82]), design skills (e.g., [
54,
134]), and problem-solving skills (e.g., [
75,
93]). Pre- and post-tests are most often used to assess students’ knowledge acquisition (e.g., [
67,
105,
118]), whereas artefacts (i.e., worksheets, models, prototypes) and students’ complementary explanations of these artefacts are used to gain insight into students’ skill development and the ways in which they applied and further developed new knowledge by engaging with practical problems (e.g., [
5,
117]). Students’ self-perceived learning, in turn, is typically captured with open-ended questionnaires and semi-structured exit interviews (e.g., [
15,
98]).
A related finding is that approximately half of the 36 records that include student assessment showcase constructive alignment. Constructive alignment is a principle devised by Biggs [
7,
8], which refers to developing learning activities and assessment tasks that directly address the intended learning objectives. In the reviewed records, alignment often breaks down because assessment criteria do not, or only partly, align with pre-set learning objectives (e.g., [
46,
82,
105]). Even with constructive alignment established, learning objectives may be described in high-level or vague terms, making it hard to judge whether these objectives are indeed aligned with the proposed learning activities and assessment tasks (e.g., [
54,
91,
167]). There are, however, a few good examples of how constructive alignment can be established when introducing emerging technologies to K–12 students (e.g., [
117,
149]).
One such example is provided by Hitron et al. [
66], who showcase how they assessed students’ understanding of three data-labelling aspects (i.e., sample size, negative examples, and versatility) in relation to classification problems in supervised ML. These learning objectives were set in advance and taught in three different activities, one for each main objective. The assessment procedure consisted of a pre-test before the learning activities, a short interview with students in which they explained the process they went through, a post-test in which they were given two ML examples (one similar to the learning experience and one different) and had to explain the underlying data-labelling processes, and two open follow-up questions to gauge whether students could relate the learning content to their own lives. This straightforward yet effective procedure shows constructive alignment in practice.
Although providing a qualitative meta-analysis of empirical findings is not the focus of this review, it is apparent that most of the records present positive findings. Only seven records were identified as providing greater nuance by including unexpected or negative findings, for example in terms of students’ learning, unforeseen obstacles (e.g., costs, public support, teacher motivation), or activities and tools that did not lead to desired outcomes (e.g., [
42,
56,
123]). As for the majority of records, it is unclear whether there were indeed no unexpected or negative findings to report, or whether such findings were simply disregarded. Typical examples of positive findings include increased motivation among students to learn abstract concepts [
92], design guidelines to build tools for learning about emerging technologies [
82], positive attitudes towards (careers in) STEM [
129], students being capable of building, training, and evaluating simple supervised ML models [
167], good usability and acceptance of new technology tools [
119], high effectiveness of a proposed curriculum and teaching platform [
158], and so on.
In sum, of the 65 records that present original empirical data, the majority evaluate learning tools and activities, often together. An obvious strength of current research is the use of a range of qualitative methods to study different aspects of students’ learning experiences in real-world settings. However, the results mainly provide a good-news show, with only a few records reporting unexpected or negative outcomes – a feature indicative of the lack of maturity of this nascent field. Quality criteria to evaluate tools are directed towards usability and students’ perception of and interaction with these tools, but less towards the underlying mechanisms of these tools and how they contribute to students’ learning. For learning activities, quality criteria include students’ engagement and collaboration, their learning experiences, and possible shifts in attitudes. A remarkable finding is that only slightly more than half of these 65 records include summative assessment of students’ learning. Summative assessment is narrowly directed towards students’ technology skills and understanding, leaving out the ethical and societal implications of emerging technologies as well as other aspects. Other remarkable findings are the lack of constructive alignment between learning objectives, activities, and assessment tasks, and the fact that no research on formative assessment could be identified in the literature.
To recap, the results of this systematic mapping review were structured along the five central topics of interest, preceded by a description of the dataset. The findings show a sharp increase in interest in teaching emerging technologies in K–12 education, especially in recent years, but many challenges remain unaddressed. The next section discusses our main findings, leading to our presentation of a future HCI research agenda to advance and mature the field.