Assessment Practices in Africa today
Janet Condy
Abstract
The 1990 Jomtien World Conference ‘Education for All’ and the 2000 World
Education Forum in Dakar encouraged governments in developing countries
to shift their emphasis from measuring outcomes to establishing the extent
to which their education systems could provide quality in education (Howie,
2012, p. 81). Within the global economy, governments are held responsible for
providing adequate education. This increased political pressure has led to direct
links between national economies and education. The purpose of this paper is to
identify tensions found in the various types of literacy and numeracy assessments
within Africa; from large international high-stakes testing to local classroom
formative and summative assessments. Four high-stakes tests commonly
found in Africa are discussed. They are: school-leaving exams which assess all
subjects at Grade 12 level; Trends in International Mathematics and Science
Study (TIMSS); The Southern and Eastern Africa Consortium for Monitoring
Educational Quality (SACMEQ); and, finally, the Progress in International Reading
Literacy Study (PIRLS). I discuss four principles of summative and formative
assessments based on the work of Johnston and Costello (2005, pp. 256-265).
They include: assessment as a social practice; minds in society; representation
and interpretation; and, lastly, practices of teacher assessments.
Keywords
quality education, high-stakes testing, formative and summative assessments
Introduction
The purpose of this presentation is to discuss assessment practices in Africa. Literacy
can be said to have begun on our continent, so it can be claimed that writing and literacy
are African. We, as Africans, therefore have a direct responsibility
to ensure that literacy is promoted and adequately assessed throughout our continent.
So my question is: ‘Are we, in fact, assessing our literacy and numeracy in a responsible
and accurate manner?’ I begin by looking briefly at education and assessment practices in
ancient civilizations before proceeding to the present. The purposes of assessment and the
various audiences are considered. The paper
then reviews the four most common high-stakes testing systems currently in operation in
some African countries followed by a discussion on principles of formative and summative
assessments. The presentation concludes with a few comments.
Since the beginning of human existence, every generation somehow passed on its stock
of values, traditions, methods and skills to the next generation (Gersick, Davis, McCollom
Hampton, & Lansberg, 1997). The history of the curricula of such education reflects history
itself, the history of knowledge, beliefs, skills and cultures of humanity. Ingold (2002) states
that as the customs and knowledge of ancient civilizations became more complex, many
skills were passed down from person to person: whoever was skilled at a job generally
taught someone of the next generation, be it in animal husbandry, farming, fishing, food
preparation, construction or military skills.
Oral traditions were central in societies without written texts (Shils, 1981). Literacy in
ancient societies was associated with civil administration, law, long distance trade or
commerce, and religion. The development of writing started in about 3500 BC. In Africa,
Egypt fully developed hieroglyphics as early as 3400 BC. Our continent is the cradle of
writing and reading. We have a proud tradition. We need to take the responsibility for
raising the standards of literacy as high as we possibly can. Africa is where it all started:
so we have a duty to honour our origins. Formal schooling in ancient Egypt was provided
to an elite group either at religious institutions or at the palaces of the rich and powerful.
Schools for the young were historically for priests, bureaucrats and businessmen (Shils,
1981). But recent archeological discoveries show that ordinary labourers of both sexes
also wrote to each other about daily life on clay shards. Africa needs to regain this broad
level of basic reading and writing skills.
Over the centuries, through Greek and Roman times, schooling was generally of a higher
standard for the offspring of aristocratic citizens. Only in the industrial age did there emerge
a demand for egalitarian education. However, over the last few centuries, schooling has
experienced radical changes. Allington (2000, p. 162) states that our schools now educate
larger numbers of children to higher levels of literacy and numeracy proficiency than
ever before. The historical goals that our schools set have today been met or exceeded
(Allington, 2000, p. 162). Society has put a great deal of pressure on schools to educate all
students to levels of proficiency expected historically of a few. Governments around the
world have been challenged to develop more advanced academic skills in their schools
(Allington, 2000, p. 162).
The demand for universal literacy keeps growing louder and rightly so: it is a right not a
privilege any more (Johnston & Costello, 2005, p. 257). Many social boundaries of the old
order have been broken down by technology of mass media. Literacy has been spread
exponentially by cell phone messaging alone. Widespread democratization has been
furthered by social media such as Facebook, Mxit, Twitter and many more. The continuing
debate about the quality of education at our schools has led to a rise and a new focus on
standards and assessment (Farr, 2000, p. 505). Robinson (2013, p. 27) explained that the
core argument of education seems to have deserted the visionaries and today is mainly in
the hands of bureaucrats and data-chasers.
Perhaps we need to ask ourselves: ‘What is the purpose of today’s schooling?’ Christie
(2008, p. 21) suggests that the primary purpose of schooling is: to provide an environment
where teaching and learning take place, to prepare people for the world of work beyond
school, to provide nation-building and citizenship, to teach the values of a society to
children and young adults and to develop the individual. Robinson (2013, p. 27) expands
this argument by saying that by clarifying the purpose of education we can make judgments
about what to bring to the curriculum and how best it is measured.
Since the 1990 Jomtien World Conference ‘Education for All’ and the 2000 World Education
Forum in Dakar, governments have shifted their focus from measuring input to an
increased emphasis on educational quality outcomes. The purpose of this is to ascertain
the extent to which their education systems are meeting the needs to deliver quality in
education (Howie, 2012, p. 81). This has led to a proliferation of new assessments – both
formal (including high-stakes and systemic assessments) and informal classroom-based
assessments.
This increased amount of testing is the result of greater accountability demands from
government and society. In order to think more clearly about assessments, we need to
consider to whom and why these results are important. Different groups use assessments
for different purposes. They are conducted more or less frequently: different groups
or individuals are assessed and different information is needed (Farr 2000, p. 506).
Assessments are used by a variety of people for a variety of purposes. These are discussed
in more detail both below and in Table 1.
Governments and policy makers are ultimately responsible for improving the quality of
education (Howie, 2012, p. 82). They conduct national assessments or testing on a large
scale using standardized assessments that have been used for many years. The purpose
of national assessments is to compare student performance against a clearly defined
curriculum. They measure student learning outcomes and monitor school systems (Greaney
& Kellaghan, 2008, p. xiii). The designs and programmes implemented to improve teaching
and learning in school are assessed. Student achievement scores can then be used as
evidence to inform education policies and decisions. In addition, these tests identify slow
learners so that they can get the necessary support, appropriate technical assistance and
training. These tests are normally conducted on either a sample or a whole population of
students (Howie, 2012, p. 82). Howie states that the type of assessment, which is externally
conducted, is likely to impact on the quality and act as a lever for reform.
School administrators are primarily interested in performance assessments that are
typically criterion-referenced. These performance measures usually compare student
performance with a clearly defined curriculum (Farr, 2000, p. 506).
Greaney and Kellaghan (2008) explain that parents have a particular vested interest
in their own individual children. In order to monitor their children’s progress and to be
active in their education, parents want to know how their children perform on criterion-referenced and norm-referenced tests.
A teacher’s primary concern is helping students learn (Farr, 2000). They are accountable
to the education system and to parents. They are essentially interested in the kind of
information that will support the daily instructional decisions they need to make. This kind
of information is generated from criterion-referenced tests and other types of assessments
that can be more effectively utilized in the classroom as part of instruction (Howie, 2012).
Howie (2012) claims that learners themselves should be the assessors of their own
academic development. They should be asking how to improve their literacy and
numeracy skills. If they understand their own needs they will improve. Table 1 below
briefly discusses who the audiences of assessments are, their purposes, the frequency of
testing, who is tested and what type of information is required.
Table 1: Audiences, purposes, frequency, who is tested and types of information required
of assessments

Government. Purpose: judge the effectiveness of an education system. Frequency: generally every fifth year or annually, depending on the country. Who is tested: all learners, or a sample of learners at a particular grade or age level. Type of information required: statistics for policy makers, used as advocacy for reform.

School administrators. Purpose: judge the effectiveness of curriculum, materials and teachers. Frequency: depends on the country, but annual and/or semester tests are conducted. Who is tested: groups of teachers and learners. Type of information required: related to broad goals; criterion- and norm-referenced.

Teachers. Purpose: plan instruction, strategies and activities. Frequency: daily, or as often as possible (continuous). Who is tested: individuals, small groups and the whole class. Type of information required: primarily diagnostic and focused on teacher decision making.

Learners. Purpose: describe learners’ learning and diagnose learning problems. Frequency: daily, or as often as possible (continuous). Who is tested: the individual. Type of information required: related to specific goals.

Adapted from Farr (2000, p. 507) and Howie (2012, p. 82)
High-stakes Testing
In Africa, high-stakes testing in most countries owes its origins to international influences,
and its introduction has had mixed motives (Lewin & Dunne, 2000, p. 379). New developments
and enthusiasm for types of assessment spread
from developed to developing countries. The discourse that surrounds assessment is
infused with examples to replicate innovations in assessment from the North to countries
in sub-Saharan Africa (Lewin & Dunne, 2000, p. 391). The motives for introducing
high-stakes testing include the need to select the highest achieving
students for the next level of education as in many of the African countries, since only a
minority of students can proceed to secondary school. Governments need to have some
control over the curriculum and what is learned. Lewin and Dunne (2000, p. 380) clarify
that in many African countries, the results of high-stakes testing are critical to career and
life opportunities and access to employment.
Johnston and Costello (2005, p. 258) suggest that high-stakes testing has been demonstrated
to undermine teaching and learning, particularly for underachieving students. They
explain that high-stakes testing restricts the curriculum, thus defeating the original
intention to improve literacy learning. Howie (2012, p. 84) agrees that high-stakes testing
narrows the curriculum to that which is tested and in so doing lowers rather than raises
educational standards. A further implication, according to Johnston and Costello (2005, p.
258), is that there is growing outrage and diminishing morale. As a result, many
teachers exit the teaching profession due to the immense pressure placed on them.
In Africa, the four most well-known high-stakes tests currently in practice are:
1. Secondary school leaving examinations, which assess all subjects at Grade 12 level;
2. Trends in International Mathematics and Science Study (TIMSS), which assesses mathematics and science among Grade 4 and Grade 8 learners;
3. The Southern and Eastern Africa Consortium for Monitoring Educational Quality (SACMEQ), which addresses literacy and numeracy from Grades 1 to 7; and
4. Progress in International Reading Literacy Study (PIRLS), which assesses literacy skills in Grade 4 and 5 learners.
Each of these four tests will be briefly discussed through an African lens.
1. Secondary school leaving examinations
Secondary school leaving examinations (Grade 12) give access to qualifications that have
international currency (Lewin & Dunne, 2000, p. 395). In South Africa about half a million
students write this exam annually (Howie, 2012, p. 89) and their future careers depend
on the results of these exams. Lubisi and Murphy (2010, p. 262) state that these final
examinations can be seen as gatekeepers to employment and higher education. They are
highly regarded by the public because they are seen to signify achievement.
External examination boards remain very active in much of Africa in supporting and
influencing the development of assessment policy and practice at levels above primary
school (Lubisi & Murphy, 2010, p. 262). Since the introduction of these assessments,
better quality learning materials have been developed, although their availability
may have suffered from poor distribution and budgetary constraints. Lewin and Dunne
(2000, p. 394) caution that statements of policy on curriculum and assessment have not
necessarily led to changed practice.
Lewin and Dunne (2000, p. 394) offer four provocative speculations as to why examination
reform may not be occurring as much as some would wish. Firstly, it may be that reform
is undesirable simply because there is too much change. If African examination boards
are not structured to be developmental, have divergent relationships with curriculum
development, and administration is given more prominence than professional functions,
advancements may be obstructed. Second, it may be that the technical and administrative
capacity to design, develop and undertake broader-based assessment strategies is
lacking. Third, some reforms in assessment may not be for the better: some innovations
may prove to be unworkable though technically attractive. Finally, in some African
countries, significant amounts of external services, such as printing, are done in developed
countries. These publishers have an interest in manipulating the curriculum reform and
assessment strategies to maximize their print runs (Lewin and Dunne, 2000, p. 394).
2. Trends in International Mathematics and Science Study (TIMSS)
TIMSS is a large-scale comparative study and is conducted internationally at the end
of Grade 4 and Grade 8 (Reddy, 2006). The first international assessments in forty-one
educational systems around the world were conducted in 1995. To better understand what
this test assesses, I briefly describe the content. TIMSS assesses Mathematics and Science
and in each subject there are two domains: the content and the cognitive domains. In
Mathematics the content domain includes: number, algebra, measurement, geometry
and data handling. The cognitive domain involves: knowing facts and procedures, using
concepts, solving problems and reasoning. In Science, the content domain includes life
sciences, chemistry, physics, earth sciences and environmental science. The cognitive
domains include: factual knowledge, conceptual knowledge, reasoning and analysis. Both
subjects assess knowledge levels across the full spectrum of Bloom’s taxonomy.
Reddy (2006) explains that in 2003 the highest performing countries were Singapore,
Republic of Korea, Hong Kong, Chinese Taipei and Japan. The lowest performing countries
were Lebanon, the Philippines, Botswana, Saudi Arabia, Ghana and South Africa. According
to Reddy (2006) Ghana and South Africa had the highest percentage of learners achieving
a score of less than 400 points, which is below the Low International Benchmark.
In 2011 the TIMSS was again conducted, including countries such as Ghana, Morocco
and Botswana in Africa (HSRC, 2011). Three countries, South Africa, Botswana and
Honduras administered the assessments at the Grade 9 level (HSRC, 2011). Although there
was a slight overall improvement in achievement scores in these countries, their scores
were still below the Low International Benchmark.
3. The Southern and Eastern Africa Consortium for Monitoring Educational Quality (SACMEQ)
SACMEQ is a large international non-profit developmental organization of fifteen Ministries
of Education in Southern and Eastern Africa that work together to share experiences and
expertise in developing the capacities of education planners to apply scientific methods to
monitor and evaluate the conditions of schooling and the quality of education, with technical
assistance from the UNESCO International Institute for Educational Planning (IIEP). SACMEQ’s
mission, inspired by the goals adopted in 1990 in Jomtien and subsequently in 2000 in Dakar,
Senegal (Murimba, 2005, p. 77), was to undertake integrated research and training activities
that would expand opportunities for educational planners and researchers to:
a) receive training in technical skills required to monitor, evaluate and compare the general conditions of schooling and the quality of basic education; and
b) generate information that can be used by decision makers to plan the quality of education.
In 2012 the fifteen Ministries of Education that constitute the SACMEQ network included:
Botswana, Kenya, Lesotho, Malawi, Mauritius, Mozambique, Namibia, Seychelles, South
Africa, Swaziland, Tanzania (Mainland), Tanzania (Zanzibar), Uganda, Zambia and Zimbabwe.
Murimba (2005, p. 79) describes that most of the countries were former colonies and that
different colonial powers influenced the development of education in different ways. It
was only after 1960 that many of these 15 countries achieved their political independence,
often after long emancipation struggles that interrupted the development of education.
School structures, teaching and learning practices as well as curricula and language policies
have all been influenced by these complex historical realities (Murimba, 2005, p. 79).
SACMEQ has completed three major education policy and achievement research projects
(SACMEQ I, SACMEQ II and SACMEQ III) between 1995 and 2007. These research projects
include assessments of language and mathematics. Grade 6, rather than an age group, was
chosen because in most of the fifteen SACMEQ countries it is the final primary examination
class (in Kenya and Malawi it was Standard 8).
When looking closely at the results of four countries (Kenya, Namibia, South Africa and
Mauritius) that participated in the 2009 SACMEQ research, Figures 1 and 2 below reveal
interesting results. Figure 1 shows that the Numeracy results of all four countries are
concentrated in the lower levels of numeracy (emergent, basic and beginning numeracy),
with very low results at the “abstract problem solving” level. Namibia has the highest
results in the “emergent” stage of Numeracy.
Figure 2’s Reading results generally follow a bell curve with Mauritius having the highest
results at the “critical reading” level. On the other hand, Namibia has the highest results
in the “pre-reading, emergent reading and basic reading” levels. Both graphs show that
the higher order thinking skills for both Numeracy and Reading achieve the lowest results.
Figure 1: Numeracy levels 1-8, 2009 of Kenya, Namibia, South Africa and Mauritius
Figure 2: Reading levels 1-8, 2009 of Kenya, Namibia, South Africa and Mauritius
Murimba (2005) elucidates that there are a few key factors that make SACMEQ results
so successful. First, educational policy reports published from SACMEQ I and II have
featured prominently in presidential and national commissions on education (for example
Zimbabwe and Namibia). Second, prime ministerial and cabinet reviews on educational
policy (for example Zanzibar), national education sector studies (for example Kenya and
Zambia) and reviews of national education master plans (for example Mauritius) have been
published. Third, the high quality of the SACMEQ reports and research materials has also
been recognized by major universities, for example Harvard and the University of Melbourne,
research organizations, for example the International Association for the Evaluation of
Educational Achievement and international organizations such as UNESCO’s EFA Global
Monitoring Report Division (Murimba, 2005).
4. Progress in International Reading Literacy Study (PIRLS)
PIRLS is one of the largest international studies of reading achievement. It has targeted
fourth graders (Howie, van Staden, Tshele, Dowse, & Zimmerman, 2012) and was the
fourth set of tests conducted by the International Association for the Evaluation of
Educational Achievement (IEA). The purpose of PIRLS is to measure children’s reading
literacy achievement, to provide a baseline for future studies of trends in achievement,
and to gather information about children’s home and school experiences in learning to
read. These tests are conducted every five years. PIRLS 2006 tested 215,000 students
internationally from 46 educational systems; in Africa, only South Africa and Morocco
participated in this study. PIRLS 2011, however, tested 325,000 students from 49 countries,
with Botswana, South Africa and Morocco participating in the sample. In 2011, the International
Association for the Evaluation of Educational Achievement (IEA) (Howie, 2012, p. xv)
initiated a new test – prePIRLS – for those countries whose learners had performed below
the international benchmark in previous studies. In Africa, only South Africa, Botswana
and Morocco participated in the prePIRLS (Grade 4). The international PIRLS scale
ranges from 0 to 1000 metric points, with a centre point of 500 that stays constant
between assessments. Most learner performance from Africa ranges between 300 and 700
(Howie, 2012, p. 27).
van Staden (2013) reflects on a few significant results from the South African prePIRLS and
PIRLS 2011 studies. The South African Grade 4 pupils who wrote the prePIRLS test were
still performing at a low level. There was a substantial gender gap in achievement; Grade
4 girls performed better than the Grade 4 boys. Learners who wrote the tests in either
Afrikaans or English performed above the international centre point of 500.
Those tested in all nine African languages achieved very low outcomes. Only six percent of
South African learners were able to read at the advanced level. The Grade 5 PIRLS results
were similar to those from 2006. The Grade 5 learners tested in Afrikaans or English
were still performing below the international centre point. Their levels of achievement
were similar to learners in Saudi Arabia, Indonesia, Qatar, and Botswana and well above
the learners in Oman and Morocco. Again, Grade 5 girls performed better than boys.
Forty-three percent of English- and Afrikaans-speaking South African learners
were unable to reach the low international benchmark (300 points) and only four percent
reached the high international benchmark. Suburban schools performed better than the
urban schools. Township and rural schools performed the worst. Teachers with a four-year
diploma in education had the best results. Teachers with the highest achievement rates
were aged 60 or older and aged less than 25. The lowest achievement rates of teachers
were found in the age range of 50-57. The schools that had more than 5000 books had the
best results while those schools with no libraries performed the worst.
Principles of summative and formative assessments
To foreground this discussion on summative and formative assessments, I offer a definition
of each of these terms. A general discussion about these assessments follows and finally
four general principles of summative and formative assessments from Johnston and
Costello (2005, p. 259) are debated. Johnston and Costello suggest the following definitions:
Summative assessments are:
… the backward-looking assessments of learning, the tests we most commonly
think of that summarize or judge performance as in educational monitoring, and
teacher and student accountability testing.
Johnston and Costello (2005, p. 259) continue to elaborate on formative assessment:
…assessment for learning is the forward-looking assessment that occurs in the
process of learning, the feedback that the teacher provides to the student and the
nature of the feedback.
‘What gets assessed is what gets taught,’ state Johnston and Costello (2005, p. 256),
and learning must form the basis of our assessment practices. Commeyras and Inyenga
(2007, p. 271) concur with this statement saying that assessment is an inevitable and
necessary area of concern. Performance on examinations is a deciding factor for an
individual’s future in education and eventual participation in nation building. They confirm
that assessments inform the general teaching of literacy and numeracy. It is incumbent
upon class teachers to find out what each child has achieved, to identify where students
are struggling so that further instruction focuses on those strategies that can improve the
students’ learning. Class teachers’ judgments must be valued (Lubisi & Murphy, 2010, p.
265). Carter, Lees, Murira, Gona, Neville, and Newton (2011) expand this idea by stating
that assessment techniques need to be culturally and linguistically sensitive in order to
determine the academic achievement and potential of diverse students. While it is ethical
and professional to provide an equitable and quality service to all, the importance of using
culturally fair assessment tools cannot be overemphasized.
Since 2011, in South Africa, the ‘Curriculum and Assessment Policy Statement’ (CAPS)
(Naicker, 2011, p. 60) has replaced the old Outcomes Based Education system. The
Department of Basic Education (DoBE) (2011, p. 89) clarifies that assessments are an integral
part of teaching and learning because they provide feedback for teaching and learning.
Assessing different skills in numeracy and literacy should be integrated and not seen
as separate activities. It is important to assess what learners understand and not what
they can memorize. The CAPS document offers the various forms of assessment teachers
can use in their classrooms, together with assessment methods and suggested types
of assessment tools. The forms of assessment suggested include:
observations, orals, practical demonstrations, written recordings, and research projects.
Finally, the document offers a variety of assessment tools, such as: observation book,
observation sheet, checklist, rubrics and learners’ class workbooks. Burke (2005, p. 13)
agrees that teachers need to develop their own classroom assessments. He offers further
methods such as: journals, debates, projects, products, performances, experiments,
portfolio, writing activities and skill tests.
By using the above assessment strategies, one can see that both formative and summative
assessments affect learning (Johnston & Costello, 2005, p. 259). Lubisi and Murphy (2010,
p. 265) further this argument by suggesting that formative and summative assessments
emphasize providing quality feedback rather than assigning marks.
I proceed to discuss four principles of summative and formative assessments based on the
work of Johnston and Costello (2005, p. 256-265).
Assessment is a social practice:
Noticing, representing, and responding to learners’ work is what makes assessment a social
practice. Meaning needs to be made for particular purposes and audiences (Johnston
& Costello, 2005, p. 258). Comments from teachers can improve performance and keep
learners thinking they are doing well. Trust, sensitivity and motivation are characteristics
that support the social practices of assessments. Johnston and Costello suggest that when
engaging in assessments, the emphasis should be on fairness. Fairness is about ensuring
that learners are developing adequately. As teachers, we should aim to create an ethos in
our classroom in which assessment practices socialize children into developing themselves
as learners at the same time as developing their literacy and numeracy achievement rates.
Minds in society:
Johnston and Costello (2005, p. 258) describe how children’s thinking patterns evolve from
the people that they are surrounded by. Together they construct knowledge. An example
of this is found in Kenya. Commeyras and Inyenga’s (2007, p. 265) results revealed that
children in rural monolingual and multilingual schools reported being most comfortable
learning in their mother tongue or Kiswahili but not English. They reported that rural
teaching is often characterized by a disproportionate use of mother tongue by the teachers
and poor teaching strategies.
They found that many pre-service teachers performed badly in English on their final Kenya
Certificate of Primary Education (KCPE) Grade 8 exams. Therefore teaching would be their
only option for a professional career. Commeyras and Inyenga suggested that teachers
needed better teacher education in reading to develop their own language skills so they
could be better prepared to teach and conduct formative and summative assessments.
Representation and interpretation:
Teachers and parents can interpret a child’s assessment differently because they have
different histories and goals. A mark, a number or a percentage will mean different things
to different people (Johnston & Costello, 2005, p. 261). Language used in any assessment
task influences the interpretations made.
We tend to categorize our learners, for example as ‘learning disabled’. Once ‘identified’,
such categories tend to be used to describe learners within a medical deficit model. Sayed,
Subrahmanian, Soudien, Carrim, Balgopalan, Nekhwevha and Samuel (2007, p. 118) agree
that ‘marginalized people should not be seen as backward, suffering from an exclusionary
deficit, but as bearers of rights whose dignity needs to be reaffirmed and whose needs
become the drivers of policies’. For learners who need extra support, Bouwer (2011, p. 53)
suggests that teachers need to know the level and the ways in which learners learn so that
they can devise appropriate strategies for the most effective learning support.
Practices of teacher assessments:
Johnston and Costello (2005, p. 262) suggest that teachers need to design their literacy and
numeracy learning spaces in such a way that learning is visible and audible. Children should
be given opportunities to talk about the process and experiences of their practices and
engage in manageable challenges. Teachers need to notice what children can do, become
‘kidwatchers’ and know how they are engaging in the development of the new knowledge
and competence (Goodman, 2005, p. 25). Clay (1991, p. 232) suggests that teachers can
become astute observers of literacy behaviours and skilled at producing responses which
advance the child’s learning. Observing behaviours of children informs a teacher’s intuitive
understanding of cognitive processes and her teaching improves. Consistency in formative
and summative assessments across the curriculum is essential.
Improving performance on summative assessment requires improving formative
assessment. Johnston and Costello (2005, p. 262) suggest that change may be slow
because of the way teachers understand students, themselves, and what they are trying
to accomplish. Successful assessment systems develop constructive teacher-student
relationships and are able to influence change and improvement.
Conclusion
To draw this debate to a close, this paper has attempted to offer a critical review of
assessment practices relevant to Africa. While there are some similarities in the histories
and politics of many of the developing countries in Africa, there are also many differences.
African countries are relatively new to high-stakes testing, and it would appear that the
results could have a considerable impact on decision makers, researchers, communities and
the clients of education. A challenge for the future development of assessment policies in
Africa is to decide on what is valued and what price is worth paying to acquire it.
References
Allington, R.L. (2000). The schools we have. The schools we need. In N. Padak, T. Raskinski,
J. Peck, B. Church, G. Fawcett, J. Hendershot, J. Henry, B. Moss, E. Pryor, K. Roskos, J.
Baumann, D. Dillon, C. Hopkins, J. Humphrey, & D. O’Brien (Eds.), Distinguished educators
on reading: Contributions that have shaped effective literacy instruction (pp. 164-181).
Delaware: International Reading Association.
Bouwer, C. (2011). Identification and assessment of barriers to learning. In E. Landsberg,
D. Kruger, & E. Swart (Eds.), Addressing barriers to learning: A South African perspective.
Pretoria: Van Schaik Publishers.
Burke, K. (2005). How to assess authentic learning. California: Sage Publications.
Carter, J.A., Lees, J.A., Murira, G.M., Gona, J., Neville, B.G.R. & Newton, C.R.J.C. (2011).
Contextually relevant resources in speech-language therapy and audiology in South Africa
– are there any? The South African Journal of Communication Disorders, 58(1).
Chisholm, L. & Chilisa, B. (2012). Contexts of educational policy change in Botswana and
South Africa. Prospects, 42, 371-388.
Christie, P. (2008). Changing schools in South Africa: Opening the doors of learning.
Johannesburg: Heinemann Publishers.
Clay, M.M. (1991). Becoming literate: The construction of inner control. Portsmouth:
Heinemann Education.
Commeyras, M. & Inyenga, H. (2007). An integrative review of teaching reading in Kenyan
primary schools. Reading Research Quarterly, 42(2), 258-281.
Farr, R. (2000). Putting it all together: Solving the reading assessment puzzle. In N. Padak,
T. Raskinski, J. Peck, B. Church, G. Fawcett, J. Hendershot, J. Henry, B. Moss, E. Pryor, K.
Roskos, J. Baumann, D. Dillon, C. Hopkins, J. Humphrey, & D. O’Brien (Eds.), Distinguished
educators on reading: Contributions that have shaped effective literacy instruction (pp.
500-516). Delaware: International Reading Association.
Gersick, K., Davis, J., McCollom Hampton, M. & Lansberg, I. (1997). Generation to generation:
Life cycles of the family business. Boston: Harvard Business School Press.
Goodman, K. (2005). What’s whole in whole language? Georgetown: Starbucks Distribution.
Greaney, V. & Kellaghan, T. (2008). Assessing national achievement levels in education.
National Assessments of Educational Achievement. Washington, DC: World Bank.
Hansen, J. (2000). Evaluation: The centre for writing instruction. In N. Padak, T. Raskinski,
J. Peck, B. Church, G. Fawcett, J. Hendershot, J. Henry, B. Moss, E. Pryor, K. Roskos, J.
Baumann, D. Dillon, C. Hopkins, J. Humphrey, & D. O’Brien (Eds.), Distinguished educators
on reading: Contributions that have shaped effective literacy instruction (pp. 518-542).
Delaware: International Reading Association.
Howie, S. (2012). High-stakes testing in South Africa: Friend or foe? Assessment in
Education: Principles, Policy and Practice, 19(1), 81-88.
Howie, S., van Staden, S., Tshele, M., Dowse, C., & Zimmerman, L. (2012). PIRLS 2011:
South African children’s reading literacy achievement. Summary Report. Pretoria: Centre
for Evaluation and Assessment.
Human Science Research Council. (2011). Towards equity and excellence: Highlights from
TIMSS 2011: The South African perspective. Pretoria: HSRC.
Ingold, T. (2002). The perception of the environment: Essays on livelihood, dwelling and
skill. London: Routledge Taylor and Francis Group.
Johnston, P. & Costello, P. (2005). Principles for literacy assessment. Reading Research
Quarterly, 40(2), 256-267.
Kontovourki, S. (2012). Reading leveled books in assessment saturated classrooms: A close
examination of unmarked processes of assessment. Reading Research Quarterly, 47(2),
153-171.
Lewin, K. & Dunne, M. (2010). Policy and practice in assessment in Anglophone Africa:
Does globalization explain convergence? Assessment in Education: Principles, Policy and
Practice, 7(3), 379-399.
Lubisi, R.C. & Murphy, R.J.L. (2010). Assessment in South African schools. Assessment in
Education: Principles, Policy and Practice, 9(2), 255-268.
Morgan, W. & Wyatt-Smith, C. (2010). Im/proper accountability: Towards a theory of
critical literacy and assessment. Assessment in Education: Principles, Policy and Practice,
7(1), 123-142.
Murimba, S. (2005). Evaluating students’ achievements: The Southern and Eastern
African Consortium for Monitoring Educational Quality (SACMEQ): Mission, approach and
projects. Prospects, 35(1), 75-89.
Nzomo, J., Kariuki, M. & Guantal, L. (2001). The quality of primary education in Kenya.
Some suggestions based on a survey of schools. Paris: International Institute for Educational
Planning/United Nations Educational, Scientific and Cultural Organization.
Reddy, V. (2006). Mathematics and Science Achievement at South African Schools in TIMSS
2003. Cape Town: Blue Weaver.
Robinson, M. (2013). Trivium 21c: Preparing young people for the future with lessons from
the past. Wales: Crown House Publishing Ltd.
Sayed, Y., Subrahmanian, R., Soudien, C., Carrim N., Balgopalan, S., Nekhwevha, F. &
Samuel, M. (2007). Education exclusion and inclusion: Policy and implementation in South
Africa and India. Pretoria: Human Sciences Research Council.
Shils, E. (1981). Tradition. Chicago: University of Chicago Press.
Southern and Eastern Africa Consortium for Monitoring Educational Quality. (n.d.). Retrieved
August 17, 2013 from www.sacmeq.org
South Africa. Department of Basic Education. (2011). National Curriculum Statements:
English Home Language. Pretoria: Government Printers.
Southern and Eastern Africa Consortium for Monitoring Educational Quality. (2013).
Retrieved August 25, 2013 from http://www.ibe.unesco.org/comenius/comenius.htm
Stobart, G. & Eggen, T. (2012). High-stakes testing – value, fairness and consequences.
Assessment in Education: Principles, Policy and Practice, 19(1), 1-6.
Tierney, R. (2000). Literacy assessment reform: Shifting beliefs, principled possibilities, and
emerging practices. In N. Padak, T. Raskinski, J. Peck, B. Church, G. Fawcett, J. Hendershot,
J. Henry, B. Moss, E. Pryor, K. Roskos, J. Baumann, D. Dillon, C. Hopkins, J. Humphrey, &
D. O’Brien (Eds.), Distinguished educators on reading: Contributions that have shaped
effective literacy instruction (pp. 517-541). Delaware: International Reading Association.