Certificate in IT
April 2023
EXAMINERS’ REPORT
Information Systems
General comments
The overall results were disappointing, with most candidates who passed achieving a mark just above the pass mark. This was due to the lack of attempts at question one and the inability to gain marks on question four, despite it being the second most popular question.
Questions on the cloud are still being answered poorly, with answers focused on a news story from a number of years ago. Candidates need to focus on business solutions and not on what is reported in newspapers.
Question Reports:
A1
Only a few candidates attempted this question, and most of those included a dataflow diagram. Centres need to be encouraged to include OO techniques as an alternative to structured methods and tools. A Use Case diagram can be quite simple, in a similar manner to a dataflow diagram.
The inclusion of ‘include’ and ‘extend’ within the diagram, together with an explanation, would increase the marks. An example was implied in the scenario. Diagrams help developer and user to communicate the requirements of a system.
Some attempted to draw ERDs or flow charts. Where a candidate knew about OO it was well answered; however, most candidates demonstrated a lack of OO/Use Case knowledge, even though Section 2.3b of the syllabus states ‘OO Modelling’.
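By way of illustration only (a sketch with hypothetical use case names, since the scenario is not reproduced here), the ‘include’ and ‘extend’ relationships shown on a Use Case diagram can be recorded as simple data:

```python
# A minimal sketch, with hypothetical use case names, of how <<include>>
# and <<extend>> relationships between use cases can be recorded.
use_cases = {
    "Place Order":    {"includes": ["Check Stock"], "extends": []},
    "Apply Discount": {"includes": [],              "extends": ["Place Order"]},
}

for name, links in use_cases.items():
    for inc in links["includes"]:
        print(f"{name} <<include>> {inc}")   # behaviour always performed
    for ext in links["extends"]:
        print(f"{name} <<extend>> {ext}")    # optional behaviour added
```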
A2
This was the most popular question, with a pass rate of 75%, and was well answered. Although five steps were required, it was acceptable to combine steps. The expectation of the question was to ensure that candidates understand the whole life cycle from planning to implementation and maintenance. It is important to stress that there must be detailed analysis, and that design does not consist of just the user interface. It should also be noted that testing occurs during all stages of the life cycle.
Section C, about factors influencing changes to project development, was again well answered, but some answers were short; more context and meaning were required in these instances.
A3
A reasonably popular question based on project management techniques, with a pass rate of 58%. Most candidates could identify the responsibilities of a project manager, but only a small number understood the Eternal Triangle. More emphasis needs to be placed on these techniques.
Risk analysis is part of a feasibility study at the beginning of the planning stages and includes, for example, the typical TELOS and SWOT analyses. Examples were expected.
There is a subtle difference between the measurement of quality assurance, quality planning and quality control, based on the standards, procedures and requirements of both the system and the project itself.
Candidates should answer with the responsibilities and not the traits of a project
manager. Risk analysis needed further elaboration and detailed examples.
Section D answers needed to be of a higher level; for example, the ideas of standards, adhering to standards and demonstrating that standards have been met should be clearly explained.
A4
The second most popular question, but with a pass rate of 22%. Candidates appeared to find it difficult to relate to the misuse of data and information, and to the advances of technology, within a health system. Many misunderstood the question and concentrated on the effect of using screens on a person’s health, although some aspects of this do have relevance. It is not only data that can be misused; the question also covered the current trend of using AI to assist in solving health issues. Descriptions and examples were expected.
The second part of the question was poorly answered, with candidates not addressing
the rules that would validate whether both diagrams are balanced and consistent with
each other. Many answers focussed on providing general definitions of context and
data flow diagrams.
Most candidates scored some marks on this question. The good answers accurately
listed all the elements as well as providing more detailed descriptions of each. The
weaker answers provided a bullet point list of some of the elements but answers
lacked an accurate description to score all the marks. Some candidates provided
incorrect or partial names for the elements. A few candidates sketched a diagram to
support their illustration of the elements with a small number of these providing
accompanying descriptions.
The second part of the question was poorly answered: most candidates seemed to know the elements of a data flow diagram but were unable to identify the validation rules. Most who attempted this part of the question provided a general response that the diagrams needed to be balanced and consistent, without providing
any further detail. They were unable to state the differences between a data flow
diagram and context diagram. Some answers stated that the data flow diagram is more
detailed but stopped short of identifying any specific differences. A few candidates
answered correctly that the number of external entities and processes are the same on
both diagrams.
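By way of illustration, the balancing rules the question was looking for can be made concrete with a simple check: the external entities on the context diagram, and the data flows crossing the system boundary, must match those on the level 1 data flow diagram. The Python sketch below uses hypothetical entity and flow names.

```python
# A minimal sketch (hypothetical system) of two balancing rules: external
# entities and boundary-crossing data flows must match on both diagrams.
context_diagram = {
    "external_entities": {"Patient", "Doctor"},
    "boundary_flows": {"appointment request", "appointment confirmation"},
}
level1_dfd = {
    "external_entities": {"Patient", "Doctor"},
    "boundary_flows": {"appointment request", "appointment confirmation"},
}

def is_balanced(context, dfd):
    """True if the two diagrams are balanced and consistent with each other."""
    return (context["external_entities"] == dfd["external_entities"]
            and context["boundary_flows"] == dfd["boundary_flows"])

print(is_balanced(context_diagram, level1_dfd))  # True for this example
```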
B7
Candidates could have achieved better marks by mentioning specific multi-media
types, providing examples, and then explaining the applicable guidelines in the context
of a website. Many answers were repetitive and identical, focussing solely on
accessibility or general navigation of a website without reference to multi-media
categories.
A substantial number of candidates did not answer this question in full or make
reference to types of media in their response, with many candidates providing instead
general advice for building a website. The more able candidates included different types of media in their responses and justified their choices, providing advantages and disadvantages. Those candidates who did not mention many forms of media still
gained some marks by explaining that the website needed to adhere to standards
around visual impairment and identified ways in which this could be achieved through
various reasonable adjustments.
Most candidates did not provide answers that were directly related to the multimedia guidance. Instead, their responses gave general guidance on developing a website, or they only mentioned images, videos and other multimedia elements without providing a precise discussion or explanation.
B8
This question received responses ranging from poor to average. Where testing
techniques were correctly identified, candidates could have scored more marks by
providing specific details on the purpose of each technique and how it is implemented
in practice. Many candidates identified testing techniques using incorrect names and therefore could not score marks. Some candidates repeated techniques more than once, or used names that were invalid.
Many candidates listed at least three different techniques for testing the functionality
of a computer application. The good answers identified three or more forms of testing
and provided accurate descriptions of each. Candidates who simply listed types of
testing without providing a description achieved lower marks. Some candidates
provided brief descriptions of types of testing and missed the opportunity to gain the
extra marks. A few candidates provided incorrect descriptions for the type of testing
they had listed. One candidate misinterpreted the question, relating their
responses to surveys, interviews and observation with no mention of testing.
Few candidates correctly answered the question by providing the right explanation and
discussion of the testing techniques, such as black box, white box, and stress testing.
However, most candidates listed different types of testing that were not directly
related to the expected answer.
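To illustrate one of the techniques named above, the sketch below shows black box testing using Python’s unittest module: a hypothetical discount function is exercised purely through its inputs and expected outputs, including boundary values, with no reference to its internal structure. White box testing would instead derive cases from the code’s internal paths.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class BlackBoxDiscountTests(unittest.TestCase):
    # Black box: cases are chosen from the specification, not the code.
    def test_typical_value(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_boundary_values(self):
        # Boundary-value analysis at the edges of the valid range.
        self.assertEqual(apply_discount(100.0, 0), 100.0)
        self.assertEqual(apply_discount(100.0, 100), 0.0)

    def test_invalid_input_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```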
B9
A substantial number of candidates provided short answers with a narrow emphasis on
planning or prototyping aspects, making it difficult for them to pick up many marks.
Candidates need to show a more comprehensive understanding of RAD, for example
the structure, tooling, and prototyping aspects, as well as some of the advantages and
disadvantages.
Many candidates provided answers which were too brief and described only
some of the elements of a Rapid Application Development approach. The more able
candidates provided lengthier answers explaining the involvement of users,
prototyping, joint development, planning, sprint and review cycles, and rapid
development. Very few candidates mentioned CASE and parallel mini-projects. Those
who did not mention these gained marks on the description of prototyping, iteration
and user feedback.
The answers given needed to be more comprehensive.
B10
Most candidates were able to identify methods for a secure log-in process; however, there was a lot of repetition, which meant that additional marks could not be awarded.
For example, many candidates separated fingerprint and iris recognition into two
categories, but both fall under biometrics.
Almost all candidates outlined the steps for only one password reset process, in
particular, registered email validation. Candidates could have scored more marks by
listing other password reset processes.
With regard to why a user’s account should not simply be deleted after they leave an organisation, many candidates identified that an audit trail was required, and that access may be
required to documentation stored in user directories. The ability to achieve higher
marks on this question was limited by the fact that candidates did not outline a wider
range of reasons.
Part a) was answered well by most candidates with the stronger answers identifying
more methods alongside a description and justification of each. Some candidates
described one or two methods with more than one example of each and lost the
opportunity for more marks. Few responses went beyond mentioning biometrics and two-factor authentication to also identify IP filtering or time filtering.
Some candidates misinterpreted the question and described the processes for a
password reset.
Many answers were limited to the use of validation via registered email address
without discussing checks against previous passwords and IP filtering.
Part c) answers could have been improved: many candidates provided only a single reason, the deletion of documentation, therefore missing out on further marks from a more rounded response.
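As an illustration of the points above, the Python sketch below (not production code; the function names are hypothetical) shows two of the expected ideas: storing passwords as salted hashes rather than plain text, and issuing a time-limited token for a reset link sent to the registered email address.

```python
import hashlib
import hmac
import os
import secrets
import time

def hash_password(password, salt=None):
    """Return (salt, digest); a fresh random salt is generated if none given."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Constant-time comparison against the stored digest."""
    return hmac.compare_digest(hash_password(password, salt)[1], digest)

def issue_reset_token(ttl_seconds=900):
    """Token to email to the registered address; expires after ttl_seconds."""
    return secrets.token_urlsafe(32), time.time() + ttl_seconds

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```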
B11
Answers to this question varied: some candidates were able to identify three different charts with a brief rationale for each choice, while others repeated the same choice of graph, resulting in limited marks being achieved. Candidates could have scored higher marks by providing a more detailed justification for their choices.
Some responses failed to justify their selection of the type of chart. The stronger
answers provided more detail on the advantages and disadvantages of the type of
chart used.
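As a sketch of the kind of justification expected (using Python’s matplotlib with invented data), each chart type suits a different purpose: a bar chart compares discrete categories, a line chart shows a trend over time, and a pie chart shows the parts of a whole.

```python
import matplotlib.pyplot as plt

# Hypothetical monthly sales data, plotted three ways to match chart to purpose.
months = ["Jan", "Feb", "Mar", "Apr"]
sales = [120, 135, 150, 160]

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3))
ax1.bar(months, sales)         # bar: compare discrete categories
ax1.set_title("Bar: comparison")
ax2.plot(months, sales)        # line: show a trend over time
ax2.set_title("Line: trend")
ax3.pie(sales, labels=months)  # pie: proportions of a whole
ax3.set_title("Pie: proportions")
plt.tight_layout()
plt.show()
```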
B12
Candidates struggled to explain how each component in an Entity Relationship Model
is converted into a physical version in a relational database. The majority of candidates
did not address the question, instead focussing on describing the components of an
ERM, such as entities and attributes, rather than identifying how these are represented
in a relational database. A few candidates showed understanding of the conversion
process but could not achieve high marks due to a lack of sufficient detail in their
answers.
Some candidates attempted to answer the question by sketching an entity relationship
diagram and scored some marks for their explanations alongside the illustrations.
Other candidates provided a diagram but lost marks for not accompanying this with a
description. Very few answers mentioned relationships or fully discussed attributes.
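For illustration, the conversion the question asked about might be sketched as follows, using Python’s built-in sqlite3 module and hypothetical Customer and Order entities: each entity becomes a table, each attribute a column, the identifying attribute a primary key, and a one-to-many relationship a foreign key on the ‘many’ side.

```python
import sqlite3

# A minimal sketch (hypothetical entities) of ER-to-relational conversion.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (              -- entity -> table
    customer_id INTEGER PRIMARY KEY, -- identifying attribute -> primary key
    name        TEXT NOT NULL        -- attribute -> column
);
CREATE TABLE "order" (               -- "order" quoted: reserved word in SQL
    order_id    INTEGER PRIMARY KEY,
    order_date  TEXT NOT NULL,
    customer_id INTEGER NOT NULL
        REFERENCES customer(customer_id)  -- 1:N relationship -> foreign key
);
""")
print("Tables created:", [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")])
```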