An Evaluation of Primary School Children Coding Using A Text Based Language Java
To cite this article: C. B. Price & R. M. Price-Mohr (2018): An Evaluation of Primary School
Children Coding Using a Text-Based Language (Java), Computers in the Schools, DOI:
10.1080/07380569.2018.1531613
ABSTRACT
All primary school children in England are required to write computer programs and learn about computational thinking. There are moves to this effect in other countries, such as the development of the U.S. K-12 Computer Science Framework (CSF). Debates on how to program and what constitutes computational thinking are ongoing. Here we report on a study of programing by children aged 7–11 using Java and the elements of computational thinking they experience. Our platform comprises a novel Story-Writing-Coding engine we have developed. We compare the novice (children's) process of coding an animated story with that of experts (college students) and evaluate the differences using four measures based on the progressive coding of a complete program. We also analyze the novices' (children's) use of computational thinking in this coding process. This research is set against a backdrop of approaches to teaching programing and concepts of computational thinking in recent educational literature.

KEYWORDS
Primary schools; computer programing; Java; computational thinking
in the digital economy. Later that year, the Association for Computing
Machinery, with partners, released the K-12 CSF intended to inform the
development of both standards and curriculum (K-12 Computer Science
Framework, 2016). Like the English primary programs of study
(Department for Education, 2013a), it makes core reference to both CT
and programing.
However, within computer science education research, there is a vigorous
discussion on how best to approach the teaching of programing and of the
nature of CT itself. In terms of programing, there is gathering momentum
to use non-textual, block-based tools such as Scratch (Morris, Uppal, &
Wells, 2017), as opposed to traditional text-based approaches taught at uni-
versity and used professionally. In terms of CT, despite the appearance of
this term in statutory requirements or guidance documents, there seems to
be an ongoing discussion on what this means and how to implement it in
the curriculum.
This paper reports on a study that forms part of a large research project
where we are investigating how to teach primary school children to pro-
gram in a text-based language (Java) typically taught to undergraduate/col-
lege students. The programing environment used is our novel Story-
Writing-Coding engine, where children code an animated story. Previous
work (Price & Price-Mohr, 2018) has focused on the creation of meaning
using code. This paper focuses on a comparison of the code written by the
novice children with that written by experts, where the expert comparison
group is taken from our undergraduate students. The rationale for
doing this is detailed in the Methodology section. We have two research questions: (a) How can we measure the difference between children's and experts' processes of coding, and what do these differences reveal? and (b) Is there any evidence for CT in children's programs?
Programming
The literature uses varied vocabulary to distinguish between text-based and
block-based approaches. The latter are often referred to as visual or graphical
programing environments, which is confusing since many text-based
approaches (such as ours) deliver visual/graphical output. To avoid confusion
we shall refer to these contrasting approaches as block syntax or text syntax
following Stead and Blackwell (2014). There are many languages available for
teaching primary school children to code; most use block syntax. Extensively
used outside the classroom, they are now being used within the English com-
puting curriculum. An excellent overview is found in Morris et al. (2017),
with suggestions of suitable languages and approaches for all stages of pri-
mary education and also at the transition into lower-secondary schools.
The K-12 CSF explicitly refers to programing: “Computers also require
people to express their thinking in a formal structure such as a programing
language” (K-12 Computer Science Framework, 2016, p. 69). By formal
structure we think of flow diagrams and pseudo-code. Yet programing is
more than designing algorithms a computer can execute. “Creating a pro-
gram allows people to externalize their thoughts in a form that can be
manipulated and scrutinized. Programming allows students to think about
their thinking” (K-12 Computer Science Framework, 2016, p. 69). Again
this strengthens the link between teaching CT and programing.
The English national curriculum for computing, in addition to aspects of
CT, aims that all pupils should “have repeated practical experience of writ-
ing computer programs” and specifies the subject content for Key Stage 1
(kindergarten and 1st grade) and Key Stage 2 (2nd to 5th grade;
Department for Education, 2013a, p. 1). Comparing these with the K-12
CSF programing concepts, we find the former are rather abstract, specifying
what while the latter are more concrete, often specifying how. For example,
the former specifies “create and debug simple programs” (Department for
Education, 2013a, p. 2), while the latter, in reference to programs, states
“sprites can be moved and turned, … drawing a shape or moving a char-
acter across a screen” (K-12 Computer Science Framework, 2016, p. 96).
However, commenting on the national curriculum programing require-
ment, some UK computer science educationalists note that “it is deliber-
ately not mandated that this learning should take place on an actual
computer” (Brown, Sentance, Crick, & Humphreys, 2013, p. 9), and they
go on to suggest that Computer Science Unplugged "can cover the requirements of the curriculum without programing a physical computer."
More recently Yadav et al. (2017) seem to agree that while CT is “deeply
connected to the activity of programing, it is not essential to teach pro-
graming as part of a preservice computational thinking approach” (Yadav
et al., 2017, p. 58). However, other researchers are quite explicit about the
value of coding: “An introduction to programing is not necessarily a pre-
cursor to teaching algorithmic thinking, but rather provides the very means
to teach algorithmic thinking” (Hromkovic, Kohn, Komm, & Serafini, 2017,
p. 1); here, programing refers to Logo and Python.
Block syntax environments for young learners are widely endorsed by
the Computing At School movement (Computing at School, 2015). This is
justified by research into the detailed links between block syntax (Scratch)
and CT, focusing on both concepts and practices of CT (Brennan &
Resnick, 2012). Other educators suggest that starting with block syntax
then progressing to text syntax may be appropriate for primary children
(Morris et al., 2017) and cite Logo as an appropriate tool. It has been sug-
gested that Scratch is a good first language to learn and will help with tran-
sitioning to a text syntax language later on (Dorling & White, 2015). While
this seems a reasonable assertion, there is little evidence to support it. A
more rigorous study suggests the opposite; some 120 students aged 15–16
(10th grade) participated, a group of 44 had prior experience with Scratch
and a group of 76 did not. The results showed no significant differences
between the groups for the concept of variables and conditional execution,
yet significant differences in using repetition in favor of those with prior
experience with Scratch (Armoni, Meerbaum-Salant, & Ben-Ari, 2015).
There is also evidence that block syntax can induce bad habits
(Meerbaum-Salant, Armoni, & Ben-Ari, 2011). This research looked at
14–15 year olds (9th grade) learning Scratch without instructional materials.
It reported the bad habits acquired, habits that did not encourage designing
algorithms or using programing constructs (selection, iteration), and
showed how these were a consequence of the drag-and-drop nature of
Scratch. Children first collected all blocks they thought appropriate and
then combined them into several scripts without any planning or thought.
One sophistication of Scratch is that it can run many scripts concurrently.
Meerbaum-Salant et al. (2011) showed how this feature, together with the
block-collection behavior, led to incorrectly structured code (which never-
theless ran). Effectively, children were misusing the power of Scratch’s con-
currency. This approach to coding is called bricolage (tinkering), where the
coder assembles code by trial and error without planning. This is con-
firmed by more recent research (see Rose, 2016). Advice to teachers in the
publication “Quickstart Computing” (Computing at School, 2015) suggests
that Scratch and Kodu can make it seem unnecessary to go through the
planning stage of writing a program. It goes on to suggest that it is good
practice for pupils to write down the algorithms for a program, in the form
of rough jottings, storyboard, pseudocode, or flow charts.
There has been one previous attempt to link storytelling and coding;
Storytelling Alice uses block syntax to produce 3D animations. It was created for middle school girls, to address the under-representation of women
in computer science (Kelleher, 2006; Kelleher & Pausch, 2007). The
Greenfoot environment provides a Java-based text syntax approach aimed
at children aged 14 and above (Kolling, 2010), while Scratch aims at
8–16 year olds and Alice at 12–19. In Greenfoot, users drop objects into a
world using block syntax and then code the objects' methods (in templates
automatically provided) using the full Java language, including many of the
characteristics of object-oriented programing.
With the exception of Greenfoot, these approaches do not provide learn-
ers with experience of syntactic features of conventional languages which
they ultimately must learn; text syntax is replaced by colored jigsaw pieces.
The Drawbridge approach presents a sophisticated combination of block
and text syntax with an easier user interface (Stead, 2016; Stead &
Blackwell, 2014). Block syntax can be viewed side-by-side with text syntax
(JavaScript) with simultaneous updating. In this way children can move from
block to text syntax. Robust trials with children aged 11–12 (6th grade)
have shown that starting with blocks rather than text does improve
understanding of text syntax (Stead & Blackwell, 2014).
Computer games form a useful platform to learn programing; games can
balance challenge against expectations and achievement. Lightbot is such a
game proposed for primary schools (see Morris et al., 2017), where the
objective is to switch on all lights in a level using fixed commands. As the
game levels progress, functions and selection statements are introduced.
Research by Gouws, Bradshaw, and Wentworth (2013) suggested that
Lightbot is useful in developing CT; unlike the potential bricolage approach
using Scratch, Lightbot enforces a sequential mode of program design.
The principle of low floor, high ceiling to guide the development of programing environments has been well known since the days of Logo (Flannery
et al., 2013) and is capitalized on in computer game programing, up to a
professional level. Here, the Scalable Game Design Initiative (Ioannidou, 2011;
Repenning et al., 2010) suggests that all CT tools should have a low threshold
and a high ceiling, so that novices can rapidly create a working game while
still allowing the creation of games with sophisticated behavior. Many
programing tools suitable for K-12 education fit this requirement, including
Scratch, Alice, Kodu, and Greenfoot (Morris et al., 2017). Of the environ-
ments mentioned, Greenfoot has a relatively high floor while Drawbridge
and Scratch have a low floor. In our experience, Storytelling Alice, despite
its block syntax, has a relatively high floor.
Programming affordances
Our engine is based on the Java language: children code using Java
syntax. For example, the line of code grog.flyto(myrobin); which makes
the character "grog" fly to the goal "myrobin", uses object-based syntax,
where "grog" is the object, "flyto" is the method, and "myrobin" is a parameter. This object-based approach is a simplification of the full object-oriented approach to programing, which allows additional classes to be created
(e.g., a new sort of object such as "background" with methods different
from those of characters). The full approach would require users to work
with additional code entry boxes (and at a higher cognitive level), which
was judged too difficult for children of this age.
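The object-based syntax can be sketched in plain Java. This is an illustrative sketch only: the engine's internal classes are not published, so the StoryCharacter class, its position fields, and the behavior of flyto here are our assumptions modeling the object.method(parameter) pattern described above.

```java
// Illustrative sketch of the engine's object-based syntax. StoryCharacter
// and its fields are assumptions; only the call shape grog.flyto(myrobin);
// mirrors the child-facing code described in the text.
public class ObjectSyntaxSketch {
    static class StoryCharacter {
        final String name;
        double x, y; // position on the canvas (assumed representation)
        StoryCharacter(String name, double x, double y) {
            this.name = name; this.x = x; this.y = y;
        }
        // flyto(goal): move this character to another character's position
        void flyto(StoryCharacter goal) { this.x = goal.x; this.y = goal.y; }
    }

    public static void main(String[] args) {
        StoryCharacter grog = new StoryCharacter("grog", 0, 0);
        StoryCharacter myrobin = new StoryCharacter("myrobin", 40, 20);
        grog.flyto(myrobin); // the child-facing line: grog.flyto(myrobin);
        System.out.println(grog.name + " is now at " + grog.x + "," + grog.y);
    }
}
```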
Coders are able to add scenery, props and characters to a canvas.
Props and characters can move and characters can display emotions.
Character methods are based upon Halliday's Systemic Functional
Grammar, which aims to teach how to get meaning into a written text
(Halliday, 2004), in this case through code. Halliday (2004) proposes a
classification of linguistic processes (verbs) into classes that encompass
movement, speaking, thinking and feeling. The classes we have imple-
mented, together with the code methods and examples of story vocabu-
lary, are shown in Table 1.
The second designed affordance allows coding either synchronous or
asynchronous character behavior. This is not trivial since users write all
code as a sequence of statements in a single code-text input box. It is
achieved by programing using tuples of statements (e.g., the code for two
characters should be arranged in pairs, as shown in the code snippet in
Table 2).
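To illustrate how a single linear statement list can encode concurrency via tuples, the following sketch groups statements into simultaneous time-steps. The grouping rule used here, where consecutive statements naming different characters form one simultaneous step and a repeated character starts a new step, is our assumption based on the description above, not the engine's published algorithm.

```java
import java.util.*;

// Assumed tuple-grouping rule (illustration only): consecutive statements
// for *different* characters form one simultaneous step; a statement for a
// character already in the current step closes that step and starts a new one.
public class TupleGrouping {
    static List<List<String>> group(List<String> statements) {
        List<List<String>> steps = new ArrayList<>();
        List<String> current = new ArrayList<>();
        Set<String> seen = new HashSet<>();
        for (String s : statements) {
            String character = s.substring(0, s.indexOf('.')); // "pip.jump()" -> "pip"
            if (!seen.add(character)) {      // character already in this step:
                steps.add(current);          // close the tuple and start anew
                current = new ArrayList<>();
                seen.clear();
                seen.add(character);
            }
            current.add(s);
        }
        if (!current.isEmpty()) steps.add(current);
        return steps;
    }

    public static void main(String[] args) {
        // pip and grog jump together (one tuple), then pip jumps and grog departs
        List<String> code = List.of("pip.jump()", "grog.jump()",
                                    "pip.jump()", "grog.depart()");
        System.out.println(group(code));
    }
}
```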
On the left, pip and grog jump at the same time, while on the right pip
jumps first followed by grog departing. We also provide polymorphic forms
of most methods with decreasing abstraction. For example, jump(); is
called without parameters using a built-in jump height, while jump(50); is
less abstract, allowing the jump height to be specified. A further reduction
in abstraction, jump(50,4);, allows the time taken for the jump (4 seconds)
to be included.
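These polymorphic forms correspond to plain Java method overloading, which can be sketched as follows. The built-in default height (30) and duration (2 seconds) used here are assumptions; the engine's actual defaults are not given in the text.

```java
// Sketch of the polymorphic (overloaded) jump forms described above.
// DEFAULT_HEIGHT and DEFAULT_SECONDS are assumed values, not the engine's.
public class JumpOverloads {
    static final double DEFAULT_HEIGHT = 30.0;  // assumed built-in jump height
    static final double DEFAULT_SECONDS = 2.0;  // assumed built-in duration

    // Most abstract form: everything built in.
    static String jump() { return jump(DEFAULT_HEIGHT); }
    // Less abstract: caller specifies the height.
    static String jump(double height) { return jump(height, DEFAULT_SECONDS); }
    // Least abstract: caller specifies height and time taken.
    static String jump(double height, double seconds) {
        return "jump height=" + height + " over " + seconds + "s";
    }

    public static void main(String[] args) {
        System.out.println(jump());        // built-in height and time
        System.out.println(jump(50));      // mirrors jump(50);
        System.out.println(jump(50, 4));   // mirrors jump(50,4);
    }
}
```

Each overload delegates to the least abstract form, so reducing abstraction simply exposes more of the same underlying parameters.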
The user interface is shown in Figure 1. On the left is the code entry
box where the coder can add one, many, or no lines of code; pressing the
“run” button compiles the code and displays the associated animation on
the canvas to the right. This provides immediate feedback of the effect of
the code written; the user can continue to code, modify existing code, or
correct errors. This rapid feedback is important and motivates the learner
to maintain momentum. Pressing “run” also logs a rich data file contain-
ing the code and details of any compilation errors at that point. We
are therefore able to track changes in the code. This is crucial since
it provides more information about code development and CT than
can be gleaned from the final program, as noted by Brennan and
Resnick (2012).
Figure 1. The user interface showing the code entry box on the left and the canvas on the
right. The user sees a single canvas; this is repeated here to show the effect of the
code execution.
Research methodology
Participants and procedures
This study used an exploratory data analysis design where no initial
hypotheses were stated. Data were collected from two groups where we
sought to analyze the difference in coding between the groups. Group 1
comprised 18 primary school children in Years 3 to 6 (Grades 2–5) from a
rural and an inner-city school and Group 2, the comparison group, com-
prised 13 “expert programmers”, final year computing students at a UK
university. The use of undergraduate/college students as a comparison
group may appear unusual, yet we propose they provided an ideal compari-
son group since we are comparing novice and expert programmers. None
of the children had prior experience of text-based programing; six of them
indicated having used Scratch. All undergraduate students had experience
of coding using at least three text-based languages including Java, but they
had not met the engine before. We therefore expected them to have stable
and robust mental models of coding, including knowledge of programing
constructs, program composition, and error correction strategies. Both
groups were given the same task, to code an animated story using the same
instructional materials, allowing a focused and objective comparison of the
two groups' coding processes with this particular engine.
The study comprised two phases. During the first instructional phase,
participants were introduced to the engine affordances. They were shown
how to add an item of scenery, add a character, and make the character
move (see the code snippet in Table 3).
Then they were asked to experiment with the character methods shown
in Table 1. Following that, programing constructs and the use of tuples
were taught directly. This phase lasted two hours and was followed one
week later by the second phase, where participants were asked to independ-
ently code a story, either known or made-up, or let a story emerge from
coding (with help being available). The second phase lasted one hour.
Programming measurements
Primary data comprised extensive records of the participants’ coding activities,
automatically recorded by the engine, that contained time-shots of code (when
the “run” button was pressed) including errors reported by the compiler. All
records were subject to manual post-hoc analysis. For each participant we
extracted the number of lines of code written, the time taken, the number of
corrected errors and the number of tuples used. We introduced a new measure, the purposeful coding effort (PCE), that aims to capture the purpose
behind changing code. This number was obtained manually for each record
by incrementing its value for the following changes between each run: adding
or deleting a line of code, rearranging lines of code, changing the method for
a character, changing a character, and changing a parameter. Error correction
was not included. A value PCE = 1 is the baseline and corresponds to simply
adding lines of code. Values greater than 1 indicate the number of changes
made to existing code; this measure therefore indicates the amount of purpose
in interacting with the developing program.
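The PCE tally was made manually; purely for illustration, a coarse automated proxy can be sketched as a multiset difference between successive "run" snapshots. Note that this proxy counts a changed line (e.g., a changed parameter) as one addition plus one deletion, so it over-counts relative to the manual rule of one increment per change, and it does not distinguish rearrangements from additions and deletions.

```java
import java.util.*;

// Coarse, illustrative proxy for the manually tallied PCE measure: count
// lines added, deleted, or changed between two successive "run" snapshots
// via a multiset difference. A changed line appears as one addition plus
// one deletion, so this over-counts relative to the paper's manual rule.
public class PceSketch {
    static int pceIncrement(List<String> before, List<String> after) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : before) counts.merge(line.trim(), 1, Integer::sum);
        int added = 0; // lines in "after" with no match in "before"
        for (String line : after) {
            Integer c = counts.get(line.trim());
            if (c == null || c == 0) added++;
            else counts.put(line.trim(), c - 1);
        }
        int deleted = 0; // lines of "before" never matched in "after"
        for (int c : counts.values()) deleted += c;
        return added + deleted;
    }

    public static void main(String[] args) {
        List<String> run1 = List.of("pip.jump();", "grog.jump();");
        // second run: pip's jump gains a parameter, and one new line is added
        List<String> run2 = List.of("pip.jump(50);", "grog.jump();", "grog.depart();");
        System.out.println(pceIncrement(run1, run2));
    }
}
```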
All measures were normalized relative to the total lines in the finished program, yielding PCE/line, tuples/line, errors-corrected/line and lines-written/minute; these were then subjected to non-parametric statistical tests. We first
ran an exploratory analysis including the Shapiro-Wilk test, which indicated that
all test data were not normally distributed (e.g., W = 0.88, p = .03). We then ran
the two-sample Mann-Whitney Wilcoxon test for independent variables on
each measure, and calculated the significance of any differences in the median
as well as the effect size between the two groups on each measure. Since
this involved running multiple tests on the same data set, the familywise error
would increase beyond the 0.05 significance level. To mitigate this we
applied Holm's variant of the Bonferroni correction (Holm, 1979), in which the
p-values for each test are first ranked (rank 1 for the largest p-value through
rank 4 for the smallest) and an adjusted alpha value for each test is calculated
as 0.05/rank. To report the effect size we calculated Pearson's r according to
the expression r = z/√N, where z is the z-score and N the total number of
observations (Rosenthal, 1991, p. 19). The resulting effect sizes lie between
0 and 1, where, as a rule of thumb, r = 0.10 is a small effect, r = 0.30 is a
medium effect and r = 0.50 is a large effect (Cohen, 1992). All calculations
were done using packages in the R-language.
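The multiple-comparison step can be sketched in code. Holm's step-down procedure compares the k-th smallest p-value against 0.05/(m − k + 1), which is equivalent to the 0.05/rank rule with rank 1 assigned to the largest p-value. The original calculations used R packages; this Java translation is ours, with the example p-values taken from Table 4; the effect-size helper implements r = z/√N.

```java
import java.util.*;

// Sketch of Holm's variant of the Bonferroni correction (step-down), plus
// the effect size r = z/sqrt(N). The paper's calculations used R; this
// Java translation is ours. Example p-values are the four from Table 4.
public class HolmCorrection {
    // Returns true for each test that remains significant after Holm's
    // step-down procedure at family-wise alpha.
    static boolean[] holmSignificant(double[] p, double alpha) {
        int m = p.length;
        Integer[] order = new Integer[m];
        for (int i = 0; i < m; i++) order[i] = i;
        Arrays.sort(order, Comparator.comparingDouble((Integer i) -> p[i])); // increasing p
        boolean[] sig = new boolean[m];
        for (int k = 0; k < m; k++) {
            int idx = order[k];
            if (p[idx] <= alpha / (m - k)) sig[idx] = true; // k-th smallest vs alpha/(m-k)
            else break; // step-down: once one fails, all larger p-values fail too
        }
        return sig;
    }

    // Effect size r = z / sqrt(N) (Rosenthal, 1991)
    static double effectSizeR(double z, int n) { return z / Math.sqrt(n); }

    public static void main(String[] args) {
        // p-values from Table 4: PCE/line, lines/min, tuples/line, error correction/line
        double[] p = {0.0009, 0.003, 0.023, 0.29};
        System.out.println(Arrays.toString(holmSignificant(p, 0.05)));
    }
}
```

Applied to the four p-values of Table 4, the first three measures remain significant (0.023 just clears its adjusted alpha of 0.025) while error correction/line does not, matching the reported results.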
Table 4. Results of statistical analysis for children (child) and students (stud).

Measure                Median (child)  Mean (child)  Median (stud)  Mean (stud)  W      p       rank  α (0.05/rank)  r (effect size)
PCE/line               1.07            1.14          1.57           1.54         31     0.0009  4     0.0125         0.61
Lines/min              0.53            0.49          1.13           1.28         38.5   0.003   3     0.017          0.55
Tuples/line            0.07            0.11          0.0            0.02         171.5  0.023   2     0.025          0.44
Error correction/line  0.11            0.12          0.14           0.16         90.5   0.29    1     0.05           0.19
was reviewed, and we looked for evidence of patterns in the animation and
traced any pattern back to the program, where we looked for evidence of
patterns having been made from tuples (abstraction, patterns). We sought
evidence of abstraction shown by polymorphic method calls, and for records
with PCE > 1 we looked for evidence of logical reasoning. Finally, we
looked for examples of algorithmic thinking through the use of programing
constructs.
Results of analysis
Programming
Results of the Mann–Whitney Wilcoxon tests are presented in Table 4.
First, the lines/min measure for the children (Mdn = 0.53) differed significantly from students (Mdn = 1.13), W = 38.5, p = .003, r = 0.55; children
were writing code at a lower rate than students, with a large effect size. This
is expected and reveals the differences in mechanical (keyboard) skills
between the groups as well as general cognitive development. Second, the
PCE/line measure for the children (Mdn = 1.07) differed significantly from
students (Mdn = 1.57), W = 31, p = .0009, r = 0.61, with a very large effect
size. We found that 8/18 children had PCE/line = 1; they were coding a story
Computational thinking
We found most children clearly organized their code into blocks: to set up
the scene, to have character interaction, and to bring the story to its end
(decomposition). Three children went beyond this and created their own
methods to separate out parts of their stories. We found evidence for pat-
terns (abstraction) where children combined tuples to great effect. Most
children used patterns as expected: characters would meet up and have
conversations. Some children coded more complex patterns. One child
created a dance-routine pattern, where two characters executed synchronous code to obtain a movement pattern with mirror symmetry in the
horizontal direction through judicious selection of parameters; this is clear
evidence of logical thinking (see Table 6).
Children made correct use of polymorphic forms (e.g., using either
grog.flyto(40,20); or grog.flyto(pip); depending on the current layout of
the scene), and they chose suitable method parameters, though boys would
occasionally exaggerate their values to obtain strange effects such as high
rates of spinning. Here is evidence of abstraction. There is also evidence of
what we suggest is higher-order abstraction. Consider this line of story-
text: “Grog flies to see Pip because they are friends”. The second clause (of
reason) cannot be coded. The child who wrote this had abstracted out that
story-text which cannot be coded, but which he/she felt important for his/
her story. Concerning algorithmic thinking, even those children who did
not use tuples were able to correctly sequence lines of code to produce
their desired animation. Only two children transferred their learning of
programing constructs: one child used iteration to assemble a forest of
trees, and a second child used iteration to generate a sequence of actions to
create an excited character. While this is a little disappointing, it may be
that it exceeded their cognitive capacity given the range of engine affordan-
ces available, or they simply did not need to use these constructs. We
found no evidence of bricolage. Looking at children with PCE > 1, we
found they were continually making purposeful choices, adding and changing code. They were clearly drawing on their experience of story writing
from literacy classes and from their reading experiences. In summary, there
is evidence of various types of thinking going on, and much of this can be
described as computational thinking.
Discussion
In answer to our first research question, “How can we measure the differ-
ence between children’s and experts’ processes of coding, and what do
these differences reveal?” the four measures we have devised report differ-
ent aspects of children’s coding process and how this differs from experts.
While experts code faster and are more adept at modifying code for a pur-
pose, the children make better use of tuples to make the abstract concept
of synchronization concrete and straightforward. Unexpectedly we found
there was no difference in the rates of error correction. The PCE measure
revealed useful information about how children compose their programs. A
future study will investigate the use of this measure over groups of children
of different ages in K-12 to explore age-related differences. In answer to
our second research question, “Is there any evidence for CT in children’s
programs?” we have found clear evidence of children using the concepts of
abstraction, decomposition, logical thinking and patterns. We are tempted
to speculate that the use of patterns is related to the Story-Writing-
Coding context.
There are of course limitations to this study. The sample size was small;
however, this is mitigated by the use of appropriate statistical tests. The
lack of use of selection and iteration (algorithmic thinking) is a concern and
may point to a limitation of Story-Writing-Coding, or at least of the engine
in its current form. While we are confident about our analysis of CT as pre-
sented here, we have some reservations about the CT factors identified in
the literature. Future research will focus on abstraction, logical thinking and
patterns, which pick up various elements discussed in this paper.
This research leads us to formulate suggestions for the practitioner. First,
we encourage primary teachers to consider using a text-based language in
their teaching, especially platforms that produce graphical output. Second,
we encourage them to monitor the development of children’s programs
over time, using a measure such as our PCE. Third, we encourage them to
critically evaluate concepts within CT and to teach these linked closely to
programing. Fourth, we encourage teachers to adopt our Story-Writing-
Coding approach since it taps into the inherent desire and ability of chil-
dren to tell stories. Finally, we invite researchers to consider using students
as a comparison group to evaluate children’s progress. The authors are will-
ing to share the engine and a range of teaching resources suitable for K-12
educators, and we will also be pleased to receive requests for collaboration.
Please contact the corresponding author.
References
Armoni, M., Meerbaum-Salant, O., & Ben-Ari, M. (2015). From Scratch to “real” program-
ming. ACM Transactions on Computing Education, 14(4), 25:1–25:15.
Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What is
involved and what is the role of the computer science education community? ACM
Inroads, 2(1), 44–54. doi:10.1145/1929887.1929905
Brennan, K., & Resnick, M. (2012, July). New frameworks for studying and assessing the
development of computational thinking. Paper presented at the annual meeting of the
American Educational Research Association, Vancouver, Canada.
Brown, N. C. C., Sentance, S., Crick, T., & Humphreys, S. (2013). Restart: The resurgence
of computer science in UK schools. ACM Transactions on Computing Education, 14(2),
1:1–1:22.
K-12 Computer Science Framework. (2016). K-12 Computer Science Framework. Retrieved
from http://www.k12cs.org
Meerbaum-Salant, O., Armoni, M., & Ben-Ari, M. (2011, June). Habits of programming in
Scratch. Proceedings of the 16th annual joint conference on innovation and technology
in computer science education (ITiCSE ’11), USA, 168–172. doi:10.1145/
1999747.1999796
Morris, D., Uppal, G., & Wells, D. (2017). Teaching computational thinking and coding in
primary schools. Thousand Oaks, CA: Sage.
Price, C. B., & Price-Mohr, R. M. (2018). Stories children write while coding: A cross-disciplinary approach for the primary classroom. Cambridge Journal of Education. Advance
online publication. doi:10.1080/0305764X.2017.1418834
Repenning, A., Webb, D., & Ioannidou, A. (2010). Scalable game design and the develop-
ment of a checklist for getting computational thinking into public schools. Proceedings of
the 41st ACM technical symposium of computer science education (SIGCSE ’10), USA,
265–269. doi:10.1145/1734263.1734357
Rose, S. (2016). Bricolage programming and problem solving ability in young children: An
exploratory study. Proceedings of 10th European conference on games based learning,
University of the West of Scotland, Paisley, Scotland, 915–921. Retrieved from
http://shura.shu.ac.uk/12649/3/Rose%20Bricolage%20programming%20problem%20solving%20ability.pdf
Rosenthal, R. (1991). Meta-analytic procedures for social research (2nd ed.) Newbury Park,
CA: Sage.
Royal Society (2012, January 13). Shut down or restart: The way forward for computing in
UK schools. Retrieved from http://royalsociety.org/education/policy/computing-in-
schools/report/
Selby, C., Dorling, M., & Woollard, J. (2014). Evidence of assessing computational thinking.
Retrieved from University of Southampton Institutional Repository website: https://
eprints.soton.ac.uk/372409/1/372409EvidAssessCT.pdf
Stead, A. G. (2016, June). Using multiple representations to develop notational expertise in
programming (Technical Report No. 890). Retrieved from University of Cambridge web-
site: http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-table.html
Stead, A. G., & Blackwell, A. (2014). Learning syntax as notational expertise when using
Drawbridge. In B. du Boulay & J. Good (Eds.), Psychology of programming interest
group annual conference 2014 (pp. 41–52). Retrieved from http://users.sussex.ac.uk/
bend/ppig2014/PPIGproceedings.pdf
Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
Wing, J. M. (2008). Computational thinking and thinking about computing. Philosophical
Transactions of the Royal Society A, 366(1881), 3717–3725. doi:10.1098/rsta.2008.0118
Wing, J. M. (2011). Research notebook: Computational thinking—What and why? Retrieved
from Carnegie Mellon University website:
https://www.cs.cmu.edu/link/research-notebook-computational-thinking-what-and-why
Yadav, A., Stephenson, C., & Hong, H. (2017). Computational thinking for teacher educa-
tion. Communications of the ACM, 60(4), 55–62. doi:10.1145/2994591