
WordMelodies: Supporting the Acquisition of Literacy Skills by Children with Visual Impairment through a Mobile App

Published: 29 March 2023

Abstract

WordMelodies is a mobile app that aims to support inclusive teaching of literacy skills for primary school students. It was therefore designed to be accessible both visually and through a screen reader, and it includes over 80 different types of exercises for practicing literacy skills, each with adjustable difficulty levels, in Italian and in English. WordMelodies is freely available for iOS and Android devices, but it had not been previously evaluated with children with visual impairments. Thus, in this article, we evaluate the app's usability, its perceived ease of use, the appreciation and autonomy of children while using it, and the characteristics of its end users. To this end, we conducted a user study with 11 primary school students with visual impairments, and we analyzed app usage logs collected from 408 users over more than 1 year from the app's publication. We show that app usability is high and that most exercises can be completed autonomously. The exercises are also perceived as easy to perform and are appreciated by the participants. Finally, we provide insights on how to address the identified app limitations and propose future research directions.

1 Introduction

Primary school teaching materials, both print-based (e.g., textbooks) and digital (e.g., educational apps), often use graphical content as a means to engage very young students. Additionally, illustrations are also used to convey complex concepts, for example in geometry or mathematics [6]. The illustrations also provide the bridge between listening and early reading behavior [39], and therefore they are also invaluable for the early formation of literacy and phonemic skills.
However, such materials may not be accessible for children with severe visual impairments or blindness (VIB) [27]. For these children, the lack of adequate teaching materials is a critical limitation for the development of literacy skills [12, 22]. While accessible alternatives, such as braille books [18] or tactile drawings [39], exist, these have a number of limitations. They are difficult to design and produce; they are commonly available only in special education settings [17], but rarely present among the resources of inclusive classes [47]; and they are not inclusive to sighted students, which may contribute to social exclusion of students with VIB [57].
To address these issues, we present WordMelodies [2, 43], a mobile application to support children in the acquisition of basic literacy skills. The app was designed to be inclusive for children with and without VIB, thus promoting interaction between them. WordMelodies also supports children in exercising basic touchscreen interactions on mobile devices, thus promoting digital literacy as well.
The app design involved teachers of children with visual impairments and accessibility specialists, through multiple, user-driven iterations, with a universal design approach [60]. Currently, it features over 80 different, language-specific exercises, in English and Italian, with five unique exercise families, each with a different interaction modality. WordMelodies was developed as cross-platform software, and it was published for free on the mobile app stores for both the Android and iOS platforms. The app was publicised through associations of people with VIB, during classes with children with VIB, and through blogs on accessible learning for people with VIB [21].
We previously presented a preliminary evaluation conducted with adults with visual impairments to assess app usability and accessibility [2]. Study results unveiled that the app is indeed accessible and usable by people with VIB. After the publication we also initiated anonymous remote collection of app usage data, aiming to investigate how end users interact with the system.

1.1 New Contributions

This article extends our prior work [2], presented at the 18th International Web For All Conference, with two additional key contributions as follows:
Our conference paper presented a limited user study with four adults with visual impairments, aimed only at assessing the app accessibility and usability. In this extension we present the results of a user study conducted with 11 children with VIB that used WordMelodies under the supervision of an educator. The focus of the study is to evaluate the usability of the system perceived by children and to assess whether the five proposed exercise families are perceived by the educators to be easy to use, appreciated, and suitable for autonomous usage by the children.
During our prior work we initiated the remote collection of usage data from end users, reporting only basic statistics on the first month of collected data. After over a year of data collection, we analyze them to improve our understanding of (a) the adoption of the system, considering user language, the device used, and screen reader usage, and (b) exercise usage, in particular usage frequency, error frequency, and their distribution by exercise type, user language, and screen reader usage.
Results of our user study show that WordMelodies is perceived to be highly usable by the children, with an average System Usability Scale (SUS) [36, 53] score of 79. Most participants were also able to complete the exercises without assistance, including exercises based on the drag&drop interaction. This is notable, because this form of interaction is known to be particularly challenging for users with VIB [34]. Since the drag&drop interaction was particularly successful, we provide some details on how we designed it.
Most exercises were considered easy to use and were appreciated by the participants. However, exercises involving the selection of multiple elements in a table were perceived as more difficult to perform autonomously. Indeed, six participants had to be assisted, and one did not manage to complete these exercises at all. The difficulties with table exercises might be associated with the high number of elements that need to be explored and the need to keep track of which elements have already been selected.
Through remote logging, we collected usage data from 408 users. The participants had an explorative behavior, accessing many exercises before choosing the one to solve. This indicates that the exercise selection interaction should be modified to make it easy for the users to find the intended exercise. Another finding is that Italian users also tried exercises in English, showing that the app could be useful for learning foreign language literacy. Language-specific exercises were also among the most commonly used ones, indicating that correct localization and internationalization of exercises is a top priority. Errors in exercises were uneven, indicating that some exercises might require more training to develop the associated literacy and interaction skills.

2 Related Work

2.1 Development of Literacy Skills in Sighted Students and Students with VIB

Primary school teaching materials for developing literacy skills often rely on illustrations and visual cues [65], because visual reminders support the early formation of literacy and phonemic skills [39]. Thus, similar learning activities are invaluable to engage students and to support the understanding of more complex concepts [11, 66]. Moreover, over the past few decades, there has been extraordinary growth in the use of animations and multimedia [15, 32, 49] to teach literacy skills.
In recent years, an increasing number of mobile device applications have been proposed for teaching basic literacy skills through gamification [10, 26]. Such solutions have been adopted in primary school programs as well as in homework activities to foster emergent literacy skills [5, 50]. Indeed, research findings indicate that preschoolers and students in early classes of primary school show significantly higher basic literacy skills (e.g., letter name and sound knowledge, print concepts, name writing skills) after hours of exposure to similar edutainment games [26].
Prior literature suggests that similar sensory and interactive strategies are needed also for children with VIB to support the development of oral language and literacy [12, 22]. Indeed, while family support can to some degree improve the literacy skills in children with VIB [14], prior works suggest that first-hand experience with literacy exercises is needed for promoting literacy among children with VIB [61]. However, visual-based educational strategies and technologies, adopted to teach literacy to sighted young students, are not accessible without sight. Thus, they are not adequate to engage and teach the same concepts to students with severe VIB [16, 27].

2.2 Education Materials for Students with VIB

Printed teaching materials, and in particular those that use illustrations, are not accessible to students with severe VIB [19]. For these students, accessible solutions, such as Braille books or tactile drawings, are often used [18]. These solutions are an effective support for teaching activities aimed at children with severe VIB [17]. However, such instruments are rarely available outside special education classes [47]. Typically, tactile materials are also difficult to produce, since illustrations are complex to render non-visually [19]. Therefore they are seldom available for children with severe VIB [17, 27]. Furthermore, tactile materials cannot be modified once created and their interactivity is limited. Additionally, tactile materials are commonly designed solely for the support of people with VIB, without considering the inclusiveness and collaboration with other children [63], which can potentially lead to social exclusion [57].
Similarly to printed materials, digital teaching materials also use visual interactions, animations and drawings to stimulate learning, which often makes them inaccessible to students with severe VIB [54, 58]. To address this issue, approaches using haptic devices to access graph data have been investigated [9, 67]. However, these devices are uncommon and haptic representations need to be specifically designed. Approaches based on touchscreen interactions on mobile devices have also been proposed. These solutions can convey spatial information through the proprioceptive exploration of the touchscreen surface [41]. They have been used to convey mathematical concepts [24], learn simple shapes [3, 42], or visualize graph data [51]. Such approaches are convenient, because commodity devices can be used, they are highly interactive, and they are also inclusive, since visual representations can be associated to non-visual touchscreen exploration.
Tactile materials, augmented with digital interactions to increase their interactivity, have also been investigated. One possible solution are three-dimensional (3D) printed models embedding touch sensors [55]. Other approaches detect interaction using computer vision detection through external devices [56]. These solutions, however, are complex to design and produce. Approaches using tactile overlays for touchscreen devices [33, 46] mitigate the production costs required for 3D-printed models. They are more inclusive as visual touchscreen access is still possible, and overlays can be easily swapped. However, the overlays still need to be carefully designed and printed.

2.3 Education Materials for Developing Literacy Skills in Students with VIB

Prior literature stresses the importance of using adequate materials to support emergent literacy in children with VIB [28, 61]. However, only few works propose education materials for this goal [16]. An exploratory study reports positive effects of tactile illustrations on storybook comprehension by children with VIB [8]. One prior work [61] describes a specific program developed by the American Printing House for the Blind that includes a handbook and 27 tactile-visual read-aloud story books. Others propose design and usage guidelines for tactile materials, indicating advantages in supporting emergent literacy skills for children with VIB [31, 39, 44].
To date, very few digital solutions support emergent literacy in children with VIB. The lack of accessible digital teaching materials is particularly relevant in situations that require remote teaching, without access to the materials available in class, as during the school lockdowns due to the COVID-19 pandemic [38]. Considering mobile applications, most of those available for children with VIB are designed to teach basic Braille literacy: Exploring Braille with Madilyn and Ruff [62], BrailleBuzz [4], and Braille Sheets [52]. Even though these applications support Braille literacy, they are limited in scope. Exploring Braille with Madilyn and Ruff and BrailleBuzz are specifically designed for early-stage Braille learning, and in particular they teach how to identify and produce single Braille letters. Braille Sheets can be used with pre-printed Braille sheets superimposed on an iPad touchscreen. This solution is limited by the type of interaction with the pre-printed Braille sheets, which does not allow the student to interact with the iPad in the same way that their sighted peers interact with literacy games (e.g., exercises based on drag&drop, completing a word with missing letters, unscrambling words, deleting the wrong letter, and so on).

3 System Development

The idea behind WordMelodies was suggested by a teacher of children with VIB from the USA, who reported a lack of accessible apps for practicing literacy skills for primary school children. Guided by this observation, the research team started the app development with an iterative design process. The design involved two iterations with educators of children with VIB, accessibility experts, and researchers. In the following, we report the main design principles identified after the two iterations, the strategies adopted to implement them, and the key characteristics of the final app. Additional details on each iteration can be found in our previous paper [2].

3.1 Problem Analysis

We first aimed to identify which exercises could be most effective in supporting the learning of children with VIB. For this, we created a list of 56 possible exercises, defined based on current teaching standards [13, 29, 45], existing educational apps (most of which were inaccessible), and informal interviews with three domain experts: the same teacher who reported the problem, a congenitally blind expert in assistive technologies (a co-author of this article), and a primary school teacher from Italy.
After creating the list of the potential exercises, we presented each exercise to two of the domain experts, also providing an example of how children with VIB could interact with the app while performing the exercise. Then, we asked the experts to specify, for each presented exercise, a subjective evaluation of the exercise usefulness, and optionally to provide additional comments. The results of this formative activity were used, in the following development phases, to select which exercises to implement within the app. Informal interviews with the two domain experts were also conducted to elicit key design principles that were then used to guide the app engineering and implementation.
The design principles that were identified are as follows:
Inclusiveness. The app should be usable by both sighted children and children with VIB.
Entertainment. To engage and keep the children interested the app should be entertaining.
Independence. Children should be able to use the app also without supervision by adults.
Consistency. Interactions and interfaces should be consistent among different exercises. For example, interactive elements having the same functionality should have the same colors and positions in all the screens. The interactive elements should possibly be close to the screen corners.
Beyond tap. Interviews with the domain experts highlighted that the app should help children to exercise common yet complex interaction gestures, like drag&drop. This is the opposite of the “simple gesture” principle adopted in similar applications [1, 24].
Scalability. New exercises should be easy to create.
Multi-platform. To reach all potential users, the application should be available for all mobile systems (iOS and Android) and device form factors (smartphones and tablets).

3.2 Design

To implement the outlined design principles, WordMelodies adopts a number of strategies, the most important being the following:
(1)
To enable inclusive and entertaining interactions, WordMelodies adopts specific design strategies to support visual access as well as access by people with low vision and by screen-reader users. Specifically, to support access by people with low vision, WordMelodies uses large text, high-contrast visual elements, and consistent color coding. Furthermore, system accessibility services, such as zoom and negative color filters, can also be used. To support inclusive access by blind people, people with low vision, and sighted people, WordMelodies uses audio-icons [23], interface elements that combine visual and auditory aspects and interactions. Indeed, all interactive elements are accessible by hearing and sight. Consider the example in Figure 1(a): Upon touching the picture of the lion, the audio of its roar is played. Then, when the child slides a finger among the letters on the touchscreen, if the screen reader is active, it reads each letter and a corresponding word to support comprehension of the letter (e.g., “I for ice”).
(2)
To support children in learning and exercising common touchscreen interactions, WordMelodies defines five exercise families, each requiring a different interaction. The way to interact with each exercise is explained through a tutorial that can be accessed from the exercise itself. Furthermore, carefully designed verbal messages are provided during the interaction, explaining each of its steps. The five exercise families are presented in detail in the following section.
(3)
To ease scalability and consistency, the five exercise families can be used as templates to create many different exercises that share a common interaction mode. For example, “select the antonym” (see Figure 1(d)) and “select the rhyme” are different exercise types in the same family. With this approach, different educational goals can be pursued with the same interaction, hence increasing consistency. This approach also eases scalability: exercise families are implemented so that the exercises of each family are defined in a static file. Hence, adding new exercises within a family does not require writing additional code, only extending the static file.
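The static-file approach described above could look like the following. The paper does not publish the actual file format, so this is a minimal sketch assuming a JSON layout; the field names and example items are hypothetical:

```python
import json

# Hypothetical static definition of two exercise types in the "selection"
# family. The real WordMelodies schema is not published; this only
# illustrates how new exercises can be added without writing code.
SELECTION_EXERCISES = json.loads("""
{
  "family": "selection",
  "types": [
    {
      "type": "select_the_antonym",
      "items": [
        {"prompt": "difficult",
         "answers": ["easy", "hard", "joy"],
         "correct": "easy"}
      ]
    },
    {
      "type": "select_the_rhyme",
      "items": [
        {"prompt": "cat",
         "answers": ["hat", "dog", "sun"],
         "correct": "hat"}
      ]
    }
  ]
}
""")


def load_exercise_types(definition):
    """Return the exercise types declared in a family definition."""
    return [t["type"] for t in definition["types"]]
```

Under this scheme, adding a new exercise type to an existing family means appending one more entry to the `types` array of the static file; the family's interaction code is reused unchanged.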
Fig. 1.
Fig. 1. Examples of exercises in WordMelodies from the five different exercise families.

3.3 Exercise Families

Currently WordMelodies defines five exercise families (see Figure 1), with a total of 82 exercise types (46 in English and 36 in Italian). The five exercise families are as follows:
drag (drag the correct element). Exercises in this family require the child to drag one or more elements to one or more destination areas. Hints can be provided in the form of an audio-icon or a word that is read aloud. In the example in Figure 1(a) there is an audio-icon representing a lion and the letters “L,” blank space, “O,” and “N.” In the line below, there are three draggable letters, “I,” “E,” and “O.” Another exercise in the same family is “reorder the letters in the alphabet” (see Figure 2) in which there are three empty spaces and three letters (i.e., draggable elements) to reorder.
keyboard (write with software keyboard). In these exercises the child has to insert one or more letters selecting them from a simplified soft keyboard. Hints can be provided in the form of an audio-icon or a text that is read aloud. In the example of Figure 1(b) upon touching the speaker icon the text “I for ice” is read aloud. Then, the child has to insert the right letter in the box. Another exercise in the same family requires the child to complete a word by inserting the right letters into a list of empty boxes (e.g., to write the “dog” word).
baskets (drag an item to the correct basket). Exercises in this family present one element in the screen center that should be dragged to the correct target, on the left or the right of the screen. The example of Figure 1(c) shows the element “duchess,” which should be dragged over the “male” or “female” target. Another exercise in the same family requires the child to distinguish between nouns and verbs.
selection (select the correct answer). In these exercises the child has to select the correct answer from a set of possible answers by tapping it. In the example of Figure 1(d), the child has to select which is the antonym of “difficult” selecting among “easy,” “hard,” and “joy.” Another exercise in the same family requires the child to select the word (in a set of three) that rhymes with a given word.
table (select multiple answers in table). In this exercise, the child has to select, by tap, one or more answers from a set of possible ones and then press “done.” The exercise is correct if all the related elements (which can vary in number) were selected. For example, in Figure 1(e) the child has to select all the words related to the word “beach.” There are nine choices, five of which are related (e.g., “wave” and “sand”), and four are not (e.g., “giraffe” and “television”).
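The correctness rule of the table family (all related elements selected, and only those) amounts to a set comparison. A minimal sketch; the paper names only “wave” and “sand” among the five related words, so the remaining words below are illustrative:

```python
def table_exercise_correct(selected, related):
    """A table exercise is solved only when the child has selected all
    related words and none of the unrelated ones (order is irrelevant)."""
    return set(selected) == set(related)


# Hypothetical "beach" exercise from Figure 1(e): the paper states there
# are five related words but names only "wave" and "sand"; the other
# three are made up for illustration.
BEACH_RELATED = {"wave", "sand", "sun", "towel", "shell"}
```

This rule also explains why the family is demanding: the child must both find every related element and remember which ones are already selected before pressing “done.”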
Fig. 2.
Fig. 2. Drag&drop example (screenshot taken while the letter “O” was being dragged).

3.4 Drag&Drop Design

Learning and performing the drag&drop interaction on touchscreen via screen reader is known to be difficult for people with VIB [34]. In this interaction, the users first select the target object to move by exploring the touchscreen surface with the finger or by swiping left/right to sequentially traverse the available objects. Then they perform a double tap and hold the finger on the screen to start dragging the selected element. This latter gesture is independent from the position of the selected element on the screen. Then, the users can drag the finger on the screen, thus moving the selected element and receiving indications of the areas traversed. However, in the default implementation of this interaction it is not clear whether an element can be dragged, whether a traversed area is an eligible target area, and whether the target area is already occupied.
We describe the design principles that we adopted to improve the accessibility of this type of interaction in the drag family exercises. Note that the following design principles do not apply to the basket family exercises, as they have a much simpler interaction (e.g., there is only one draggable element and two eligible areas that cannot be occupied). Specifically, we implemented personalized solutions for people with low vision and for screen-reader users.
For users with low vision, we made sure that all interactive elements are clearly distinguishable. Specifically, the draggable objects are all highlighted in ochre yellow, the target areas are painted in light grey, the starting area of a dragged element is in dark grey, and the background is in light blue. While the ability to adapt the actual colors to a specific user’s needs might be useful, we argue that the design principle of clearly distinguishing these elements is what eases the interaction. For screen-reader users, we carefully designed a series of voice messages to be read during the interaction (the following examples all refer to Figure 2):
draggable objects are indicated with a hint (e.g., “O for Orange [pause] draggable element”).
upon starting the drag gesture, the object being dragged is repeated (e.g., “You are moving O for orange”).
the user is informed when the dragged object enters a target area and if the target area is already occupied (e.g., “Empty Box 1”).
the user is informed when the dragged object leaves a target area (e.g., “Box 1 left, release to cancel”).
the user is informed when the dragged object is placed in a target area (e.g., “O for Orange placed in Box 1”).
We also limited the number of elements to drag to a maximum of 3. Since there are few elements, it is possible to represent them in large dimensions also on smartphone devices. In turn, this eases readability for people with low vision and simplifies interaction for screen-reader users. Finally, we highlight that the corresponding tutorial exercise was also carefully designed to explain how to perform the drag&drop gesture step-by-step.
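The message scheme listed above can be modeled as a single event-to-announcement mapping. This is an illustrative sketch only: the real app drives the platform screen reader (VoiceOver/TalkBack) through accessibility APIs, and the exact message templates may differ:

```python
def announce(event, element, box=None, occupied=False):
    """Return the screen-reader message for a drag&drop event.

    Simplified model of the voice messages described in Section 3.4;
    message wording paraphrases the paper's examples.
    """
    if event == "focus":
        # Hint read when the user first touches a draggable object.
        return f"{element} [pause] draggable element"
    if event == "drag_start":
        return f"You are moving {element}"
    if event == "enter_target":
        # Tell the user whether the traversed target area is free.
        state = "Occupied" if occupied else "Empty"
        return f"{state} {box}"
    if event == "leave_target":
        return f"{box} left, release to cancel"
    if event == "drop":
        return f"{element} placed in {box}"
    raise ValueError(f"unknown event: {event}")
```

Centralizing the messages in one place keeps the announcements consistent across all exercises in the drag family, in line with the consistency principle of Section 3.1.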

4 User Study

In our previous work, we described the accessibility evaluation of the app, conducted with adults with VIB [2]. Here, we describe a new study, conducted with a group of children with VIB, aimed at evaluating the app usability and assessing the participants’ perceived autonomy, ease of use, and appreciation of the touchscreen interaction modalities implemented in the five exercise families. The evaluation of the educational efficacy of WordMelodies is beyond the scope of this article.

4.1 Experimental Design

4.1.1 Participants.

A group of children with VIB was recruited by the Institute for Research, Training and Rehabilitation (I.Ri.Fo.R.), a foundation managed by the Italian Union for Blind and Visually Impaired People to carry out research, training, and rehabilitation activities for people with VIB. All participants accessed the Italian version of the app. Participants’ demographic data are summarized in Table 1; “TS experience” is the previous experience in the use of touchscreens, and “AT used” is the assistive technology used during the experiment.
Table 1.
PID   Sex   Age   Visual condition   TS experience   AT used
P1    F     8     Low vision         Frequent        None
P2    F     10    Low vision         Sometimes       None
P3    F     10    Low vision         Frequent        None
P4    M     11    Low vision         Frequent        None
P5    M     11    Low vision         Frequent        VO
P6    M     11    Blind              Sometimes       VO
P7    M     5     Low vision         Never           None
P8    F     9     Low vision         Frequent        None
P9    M     7     Blind              Never           VO
P10   M     11    Low vision         Frequent        None
P11   F     5     Low vision         Never           None
Table 1. Participants’ Demographic Data
AT: Assistive technology, TS: Touchscreen, VO: VoiceOver.
Eleven participants with VIB (6 M, 5 F), aged between 5 and 11 ( \(M=8.91\) , \(SD=2.34\) ), participated in the study. Two children (P6 and P9) were blind, and the other nine had severe visual impairments (low vision). The two blind participants, and P5, who could not interact with the app visually, used the screen reader. All the other participants had sufficient residual vision to interact with the app visually and therefore did not use any accessibility services (zoom, text enlargement, inverted colors). We believe that this is because WordMelodies already uses a very large font and high-contrast colors, as highlighted in Section 3.2. Most participants had frequent prior experience with touchscreen interaction (P1, P3, P4, P5, P8, and P10), while two sometimes used touchscreen devices (P2 and P6) and three had never used them (P7, P9, and P11).

4.1.2 Apparatus.

The study was conducted at the I.Ri.Fo.R. institute in Pisa. During the tests, the participants were supported by a typhloinformatics educator. An iPad Air tablet running iOS 13 was used for the study. The three participants who needed it used the VoiceOver screen reader.
Data about the participants and the study were collected via questionnaires completed by the educator using digital documents. Participants’ demographic data were collected anonymously and included sex, age, visual condition, and self-reported touchscreen interaction experience (frequent, sometimes, never). Consent forms were collected for each participant. For each exercise family, we collected the educator’s assessment of the participant’s autonomy in performing the exercise (autonomous, supported, not completed), as well as the ease of use and appreciation of the exercise, on a scale from 1 to 5, where 1 and 5 indicated low and high ease of use or appreciation, respectively. We also assessed the usability of the app through a version of the SUS [36] adapted for use with children [53].
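For reference, the standard SUS scoring procedure, which maps the ten 1-to-5 item responses to a 0-100 score, can be computed as follows. This is a generic sketch of the standard formula, not code from the study:

```python
def sus_score(responses):
    """Compute the standard System Usability Scale score (0-100).

    `responses` holds the ten item answers, each on a 1-5 scale.
    Odd-numbered items are positively worded and contribute (r - 1);
    even-numbered items are negatively worded and contribute (5 - r).
    The summed contributions are scaled by 2.5 to reach 0-100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

For example, a fully neutral questionnaire (all answers 3) yields a score of 50, while the most favorable possible answers yield 100.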

4.1.3 Protocol.

Before the study, the educator described the study to the child’s parents and had them sign the consent form. Afterwards, demographic data were collected for the participant performing the test. The educator then introduced the app to the participant and initiated the study tasks. A set of five tasks was defined for the user test, one for each exercise family. Each task included a series of four exercises of the same type. Specifically, the tasks were as follows:
keyboard, alphabet: Write the correct letter of the alphabet in the box.
selection, rhymes: Select the word rhyming with a given word.
baskets, vowels and consonants: Drag the letter in the correct basket.
drag, articles: Drag the correct determinative article in the box.
table, proper nouns: Select proper nouns in the table.
The first exercise was used to show the participant how the interaction in the considered exercise family works, and it was not considered in the subsequent data analysis. The participant was then asked to perform the other three exercises autonomously. This procedure was replicated for each task. Thus, every participant performed a total of 20 exercises, of which 15 were observed and considered for data analysis. The study took about 30 minutes for each participant. The educator was asked to observe the participant performing the tasks and to take notes while the user was interacting with the exercises on the touchscreen. More specifically, the educator was asked to
(1)
take note of the start and end date/time of the test for each participant.
(2)
take note of any difficulties the children encountered in performing the exercises.
(3)
take note of whether the tasks were completed, and how (independently or with support).
For each task, the educator also assessed, on a Likert scale from 1 to 5, the ease of use and the perceived appreciation by the participant, for the considered exercise. After all the tasks, the participant was also asked an adapted version of the SUS questionnaire [36], designed to be suitable for children [53].

4.2 Results

4.2.1 Autonomy, Ease of Use, and Appreciation of Different Exercise Families.

For most exercise families, the participants were able to solve the exercises autonomously (see Table 2). Nine participants were able to solve baskets exercises autonomously, while two (P9 and P11) required assistance. Keyboard, selection, and drag exercises were solved autonomously by eight participants each. For keyboard, P2 and P9 required assistance, and P11 could not complete the exercises. This participant could not complete selection exercises either, while P7 and P9 required assistance with them. For the drag family, P7 and P11 required assistance, while P9 did not manage to complete the exercises. Table was the most problematic exercise family, with only four participants (P2, P3, P5, and P8) able to solve these exercises on their own. Among the others, one participant (P11) could not complete the exercises, and the remaining six required assistance.
Table 2.
PID      Keyboard   Selection   Baskets   Drag   Table
P1       A          A           A         A      S
P2       S          A           A         A      A
P3       A          A           A         A      A
P4       A          A           A         A      S
P5       A          A           A         A      A
P6       A          A           A         A      S
P7       A          S           A         S      S
P8       A          A           A         A      A
P9       S          S           S         NC     S
P10      A          A           A         A      S
P11      NC         NC          S         S      NC
Tot. A   8          8           9         8      4
Tot. S   2          2           2         2      6
Tot. NC  1          1           0         1      1
Table 2. Exercise Families Completed Autonomously (A), with Support (S), or Not Completed (NC)
As shown in Figure 3(a), table exercises were also perceived as the most difficult ones ( \(M=2.91\) , \(SD=1.04\) ). Indeed, keyboard ( \(M=4.45\) , \(SD=.93\) ), selection ( \(M=4.36\) , \(SD=1.21\) ), baskets ( \(M=4.55\) , \(SD=.82\) ), and drag ( \(M=4.27\) , \(SD=1.35\) ) exercises all had significantly higher perceived ease of use ( \(\chi ^2=23.05\) , \(p\lt .001\) ), based on a Friedman test and post hoc Dunn tests with Bonferroni correction (all pairwise \(p\lt .05\) ). No significant differences were found between the other exercise families. For all the exercise families, the perceived appreciation was high (see Figure 3(b)): the lowest scoring family was drag ( \(M=3.55\) , \(SD=1.13\) ) and the highest was selection ( \(M=4.45\) , \(SD=.52\) ). No significant differences were found between the exercise families on this metric.
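The Friedman statistic used for this comparison can be computed directly from the per-participant ratings. The sketch below is illustrative only: it uses synthetic data rather than the study's ratings, and it omits the tie correction that full statistics packages apply to Likert-style data:

```python
def friedman_statistic(scores):
    """Friedman chi-squared statistic for k related samples.

    `scores` is a list of rows, one per participant, each holding that
    participant's ratings of the same k conditions. Ties within a row
    receive average ranks; the tie-correction factor is omitted.
    """
    n, k = len(scores), len(scores[0])
    rank_sums = [0.0] * k
    for row in scores:
        sv = sorted(row)
        for j, v in enumerate(row):
            # Average rank of value v within this participant's row:
            # first 1-based position plus half the run of tied values.
            rank_sums[j] += sv.index(v) + (sv.count(v) + 1) / 2
    return (12.0 * sum(r * r for r in rank_sums)
            / (n * k * (k + 1)) - 3 * n * (k + 1))
```

With identical rankings across participants the statistic is maximal for the given n and k, while perfectly counterbalanced rankings drive it to zero, which is the intuition behind using it on the five ease-of-use ratings.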
Fig. 3.
Fig. 3. Ease of use and Appreciation for different exercise families.
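The omnibus test used above can be sketched in pure Python. The ratings below are illustrative (made-up) 1-5 ease scores, not the study data, and the tie correction and post hoc Dunn tests are omitted for brevity.

```python
# Sketch of the Friedman omnibus test on ILLUSTRATIVE 1-5 ease ratings
# (made-up values, not the study data). Ties receive average ranks; the
# tie correction and the post hoc Dunn tests are omitted for brevity.

families = {
    "keyboard":  [5, 4, 5, 3, 5, 4, 5, 5, 4, 5, 4],
    "selection": [5, 5, 4, 3, 5, 4, 5, 5, 3, 5, 4],
    "baskets":   [5, 4, 5, 4, 5, 5, 4, 5, 4, 5, 5],
    "drag":      [5, 5, 4, 2, 5, 4, 5, 5, 2, 5, 5],
    "table":     [3, 4, 2, 2, 4, 3, 3, 4, 2, 3, 2],
}

def avg_ranks(row):
    """Rank one participant's ratings across families, averaging ties."""
    order = sorted(range(len(row)), key=lambda i: row[i])
    ranks = [0.0] * len(row)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and row[order[j + 1]] == row[order[i]]:
            j += 1
        for t in range(i, j + 1):      # positions i..j share one average rank
            ranks[order[t]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

rows = list(zip(*families.values()))   # one 5-tuple of ratings per participant
n, k = len(rows), len(families)
rank_sums = [0.0] * k
for row in rows:
    for j, r in enumerate(avg_ranks(list(row))):
        rank_sums[j] += r

# Friedman statistic: chi2 = 12/(n*k*(k+1)) * sum(R_j^2) - 3*n*(k+1)
chi2 = 12 / (n * k * (k + 1)) * sum(R * R for R in rank_sums) - 3 * n * (k + 1)
print(f"Friedman chi2 = {chi2:.2f} (df = {k - 1}; .05 critical value is 9.49)")
```

With any data in which the table family is rated lowest by nearly every participant, the statistic comfortably exceeds the critical value, mirroring the significant omnibus result reported above.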

4.2.2 System Usability.

The average SUS score, measured on the participants’ responses, was 79.09 ( \(SD=11.20\) ). This score is considered excellent, based on benchmarks available in prior literature [7]. It is also higher than the average (72) for educational mobile apps [64]. Considering the separate SUS items (see Figure 4), all but one had better scores than the benchmarks reported in prior literature [37]. The exception was the last item, which corresponds to the following statement: “I needed to learn a lot of things before I could get going with the system.” This is not surprising, given that the app’s goal is to teach literacy skills and touchscreen interactions to its users. However, we also note that, despite this, participants did not feel that they needed technical support to use the app, and they found the app easy and quick to learn.
Fig. 4.
Fig. 4. System Usability Scale—separate scores for each item.
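The SUS scores above follow the standard scoring procedure, which can be made concrete as follows. This is the textbook SUS formula, not code from the app: odd (positively worded) items contribute their response minus 1, even (negatively worded) items contribute 5 minus their response, and the sum is scaled by 2.5 onto a 0-100 range.

```python
# Standard SUS scoring: ten 1-5 Likert responses -> one 0-100 score.
# Odd items are positively worded, even items negatively worded.

def sus_score(responses):
    """responses: list of ten 1-5 Likert answers, items 1..10 in order."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(r - 1 if i % 2 == 0 else 5 - r   # i=0 -> item 1 (odd)
                for i, r in enumerate(responses))
    return total * 2.5

# A neutral respondent (all 3s) lands exactly at the midpoint:
print(sus_score([3] * 10))   # 50.0
```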
Considering the separate SUS scores of each participant, we notice that the lowest scores were reported by the participants who most often required assistance or could not solve some exercises. Indeed, P9, who could not complete one exercise family (drag) and needed support with all the others, assigned a SUS score of 70 to the app. P11, who needed assistance with baskets and drag exercises and could not complete the other three families, assigned an even lower SUS score of 52.5.

5 Remote Usage Data Analysis

In addition to the user study, we also conducted remote usage data collection, which ran between April 2020 and December 2021, for a total of 21 months. During the data collection, we publicised the app to associations of people with VIB, to teachers of children with VIB, and through blogs on accessible learning for people with VIB [21]. Data from the user study described in the previous section were omitted from the remote usage data analysis.
The data were collected anonymously. They included information about the platform used (device model and OS), related system settings (whether the screen reader was active, language in use), and information about the app functionalities used, including screens visited and exercises performed by the user. For the exercises, we collected the exercise family and type, whether the user performed the exercise correctly, and after how many attempts.
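The fields just described can be pictured as one record per logged event. The field and class names below are our invention for illustration; the app's actual log schema is not published.

```python
# Illustrative shape of one anonymised log record, matching the fields
# described above. Names are hypothetical, not the app's actual schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LogRecord:
    user_id: str                        # anonymous identifier
    device_model: str                   # e.g., "iPad (5th gen)"
    os: str                             # "iOS" or "Android"
    screen_reader_on: bool              # VoiceOver / TalkBack active
    language: str                       # "en" or "it"
    event: str                          # e.g., "exercise_start", "exercise_complete"
    screen: Optional[str] = None        # screen visited, for navigation events
    exercise_family: Optional[str] = None
    exercise_type: Optional[str] = None
    correct: Optional[bool] = None      # outcome, for completed exercises
    attempts: Optional[int] = None      # tries before the correct answer

rec = LogRecord("u042", "iPad (5th gen)", "iOS", True, "en",
                "exercise_complete", exercise_family="table",
                exercise_type="related_words", correct=True, attempts=2)
print(rec.event, rec.attempts)
```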
In total, WordMelodies has been used by 408 unique users. Most (280, \(69\%\) ) started using the app within the first two months from publication, while the app was actively publicised. Of these, 79 ( \(28\%\) ) stopped using the app shortly afterwards, which is in agreement with typical app abandonment behavior [40], while the majority (201, \(72\%\) ) continued accessing the app in the following months. During the data collection, 29,489 log records were registered. Half of these (14,822) were collected within the first two months from the publication of the app.

5.1 App Usage by Device Type and Model

With 276 ( \(68\%\) ) users in total, iOS devices were more commonly used. This was expected, because iOS devices are generally more popular among people with VIB [48]. However, Android was in use on \(32\%\) of the devices (132). This is more than we expected based on prior literature, which suggests that about \(80\%\) of users with VIB use iOS [25].
The majority of the users accessed WordMelodies from a smartphone (223, \(55\%\) ), while 185 ( \(45\%\) ) used tablets. However, tablet usage was higher than expected considering that smartphones are much more common than tablets (about 10-to-1 proportion) [59]. In particular, iOS users favored tablets, with 162 ( \(59\%\) ) using iPads, and 111 ( \(41\%\) ) using iPhones. Instead, only 20 ( \(15\%\) ) Android users accessed WordMelodies from a tablet, while 112 ( \(85\%\) ) used a smartphone. Another interesting finding is that many of the iOS devices were quite dated: Thirty-two ( \(29\%\) ) iPhone users and 89 ( \(55\%\) ) iPad users had a device older than 2017.

5.2 User Characteristics

Three hundred sixty-six ( \(90\%\) ) users accessed WordMelodies in English and 42 ( \(10\%\) ) in Italian. This was expected, as the app was publicised more actively on English websites [21] and social networks. Of the 42 Italian-speaking users, 30 ( \(71\%\) ) also tried English exercises. This result highlights the need to also practice foreign-language literacy, which is reported as a challenge for children with VIB [20].
During the data collection, the app was publicised through associations, teachers, and web communities of people with VIB. While we cannot be certain, we therefore expect that most users were people with VIB. Of those who used the app, 329 ( \(81\%\) ) users never used the screen reader, 37 ( \(9\%\) ) used it sometimes, and 42 ( \(10\%\) ) always had it active. This confirms prior findings that only a small portion of mobile users with VIB actually use the screen reader [30]. Only 6 ( \(8\%\) ) screen reader users were on Android, and 2 of them used the screen reader only sporadically. This confirms that blind users (and, more generally, screen reader users) prefer iOS over Android [48].

5.3 Functionalities Used

Among the collected log records, the most frequent type marked exercise starts ( \(10,\!254\) , \(35\%\) ). However, only about one third of the started exercises ( \(3,\!144\) ) were completed; the others were started but never finished. We expected this behavior to be indicative of new users, who would try out the app rather than diligently doing exercises. However, further analyses revealed that this behavior was consistent for both new and regular users. One possible explanation is that users “browse” different exercises, starting them and exiting immediately, until they find the ones they are interested in.
Seven thousand seven hundred ninety-seven ( \(26\%\) ) log records were associated with menu navigation, such as exercise or topic selection, while there were 545 ( \(2\%\) ) accesses to tutorial exercises, of which 42 ( \(8\%\) ) referred to the drag&drop functionality. Tutorials were activated by 188 ( \(46\%\) ) different users, while the language selection functionality was accessed 365 ( \(1\%\) ) times by 137 ( \(34\%\) ) different users. However, only 30 users actually performed exercises in both languages. These were all users who first performed some exercises in Italian and then tried some in English.
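The start-versus-completion ratio discussed above is a simple aggregation over the event stream. The toy event list below is illustrative, not real log data.

```python
# Sketch of deriving start/completion counts from an event stream like
# the one described above; the events and counts are illustrative.
from collections import Counter

events = [  # (user_id, event_type)
    ("u1", "exercise_start"), ("u1", "exercise_complete"),
    ("u1", "exercise_start"),                      # browsed, never finished
    ("u2", "exercise_start"), ("u2", "exercise_start"),
    ("u2", "exercise_complete"),
]

counts = Counter(kind for _, kind in events)
started = counts["exercise_start"]
completed = counts["exercise_complete"]
print(f"completed {completed} of {started} started "
      f"({completed / started:.0%})")   # completed 2 of 4 started (50%)
```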

5.4 Exercises

Of the 3,144 exercises completed by the users, 2,678 ( \(85\%\) ) were in English and 466 ( \(15\%\) ) in Italian. Forty of the 46 ( \(87\%\) ) available English exercises were completed at least once. As shown in Figure 5, the most popular exercise was “Write the letter of the alphabet in the box,” solved 647 times ( \(24\%\) ). This was also the first exercise on the exercise list inside the app. Others were much less used: “Select the rhyming word” was completed 224 times ( \(8\%\) ), “Drag the word into the correct basket” 215 ( \(8\%\) ) times, “Drag the word to complete the sentence” 214 ( \(8\%\) ) times, and “Listen and complete the sentence” 147 ( \(7\%\) ) times. It is worth noting that “Complete the sight words,” an English language-specific exercise, was the sixth most popular exercise, with 104 ( \(4\%\) ) completions. The great variety of performed exercises, and the fact that the most popular exercise was the first one on the list, were probably due to new users exploring the app.
Fig. 5.
Fig. 5. Frequently completed exercises, at first try and repeated at least once (English).
For the Italian users, 24 of 36 ( \(67\%\) ) available exercises were completed at least once. Figure 6 shows the number of completions for the six most popular exercises. The most frequently completed exercise was “Drag the correct ending letter,” with 96 ( \(20\%\) ) runs, followed by “Drag the word into the correct basket,” completed 76 ( \(16\%\) ) times. “Write the letter of the alphabet in the box,” the first exercise on the list, had a much lower count of 39 ( \(8\%\) ) runs. “Drag the correct indeterminate article” was completed 35 ( \(7\%\) ) times, “Identify the correct statement” 28 ( \(6\%\) ) times, and “Choose the related words from the table” 25 ( \(5\%\) ) times.
Fig. 6.
Fig. 6. Frequently completed exercises, at first try and repeated at least once (Italian).
Two thousand four hundred sixty-seven ( \(78\%\) ) of the completed exercises were correct at first try. Similar results were obtained for both English (2,088, \(78\%\) ) and Italian (379, \(81\%\) ). Instead, 554 ( \(22\%\) ) exercises were answered incorrectly and thus had to be repeated at least once: 485 ( \(22\%\) ) for English and 69 ( \(19\%\) ) for Italian. Most commonly, the repeated exercises were also the most popular ones. However, the ratio of repetitions over total attempts varied across different exercises and between the two languages. We note that the exercises belonging to the table family, which were the most difficult ones during the user study, also had a higher-than-average number of repetitions: \(33\%\) in English and \(25\%\) in Italian.
Among the most popular exercises in English, “Select the rhyming word” was the most frequently repeated one (61 times, \(27\%\) ). “Drag the word to complete the sentence” was also repeated more than average (49, \(23\%\) ). Other exercises were repeated less frequently, with “Write the letter of the alphabet in the box” and “Complete the sight words” repeated \(19\%\) (125) and \(18\%\) (19) of the time, respectively. In Italian, “Drag the correct ending letter” was the exercise with the most repetitions among the six most popular ones (26, \(27\%\) ). All other popular exercises had a lower-than-average number of repetitions: “Choose the related words from the table” had 4 ( \(16\%\) ), “Write the letter of the alphabet in the box” had 6 ( \(15\%\) ), “Drag the word into the correct basket” had 11 ( \(14\%\) ), and “Identify the correct sentence” had 3 ( \(11\%\) ). “Drag the correct indeterminate article” was repeated on only 1 occasion out of 35 ( \(2\%\) ). The analysis of frequently repeated exercises could be indicative of which literacy skills require more training for students in general. More specifically, educators, parents, and teachers can benefit from knowing which specific exercises are more difficult for the children.
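The repetition rates above are each a repeated-count over total-completions ratio; the computation can be reproduced directly from the English counts reported in the text.

```python
# Repetition rate per exercise, computed from the (completions,
# completions repeated at least once) pairs reported for English above.
data = {
    "Select the rhyming word": (224, 61),
    "Drag the word to complete the sentence": (214, 49),
    "Write the letter of the alphabet in the box": (647, 125),
    "Complete the sight words": (104, 19),
}
for name, (total, repeated) in data.items():
    print(f"{name}: {repeated / total:.0%} repeated")
```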

6 Discussion and Limitations

6.1 Drag&Drop Usability

Drag&drop is known to be a problematic touchscreen interaction for people with VIB. This emerges in the scientific literature [34] and was also reported by two of the domain experts during the analysis. In contrast, our results suggest that the drag&drop gesture in WordMelodies is accessible to children with VIB and does not impact the system usability. Indeed, most children completed the exercises with the drag&drop interaction without needing any support from the educator. We believe that this positive result is due to our design of the interface elements and the interaction, which was refined through several iterations conducted with people with VIB.
In our design, the elements are made to be as big as possible, with large text, distinctive colors and consistent behavior. This makes the draggable elements and drop areas easy to distinguish for people with low vision and text content easy to read. For blind users, the interaction is coupled with explicative messages that clearly explain what the user is doing or can do with each interface element. Furthermore, WordMelodies implements a tutorial that provides step-by-step instructions on how to use the drag&drop gesture, which can be activated in each exercise that uses this interaction. Remote usage data analysis confirms that the tutorial functionality is indeed frequently used by app users and therefore could be a contributing factor to the accessibility of drag&drop exercises.
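The explicative messages described above can be sketched platform-agnostically as a function mapping each phase of the gesture to an announcement. The wording and function are our illustration, not the app's actual strings or code.

```python
# Hypothetical sketch of phase-by-phase screen-reader announcements for
# an accessible drag&drop interaction, in the spirit described above.
def drag_message(phase, item, target=None):
    if phase == "picked_up":
        return f"{item} selected. Drag to a drop area."
    if phase == "over_target":
        return f"{item} over {target}. Lift your finger to drop it here."
    if phase == "dropped":
        return f"{item} dropped into {target}."
    raise ValueError(phase)

print(drag_message("over_target", "cat", "animals basket"))
```

The key design point is that every phase of the gesture produces feedback, so a blind user always knows what is currently being dragged and where it would land.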

6.2 Difficulties in Performing Table Exercises Autonomously

Study results show that the exercises in the table family were the hardest, with only four participants who completed them autonomously. Remote data confirmed this finding, indicating a higher-than-average number of repetitions for table exercises. This is interesting, because this family of exercises requires a simple form of interaction (i.e., only element selection). So, one question emerges: What limits children’s autonomy in completing exercises in this family? Answering this question would require a new set of experiments, but we can formulate the following hypothesis: the problems with these exercises emerged due to the large number of elements shown on the screen. Indeed, exercises in the table family are the only ones showing up to 16 interaction elements (the table cells). This can negatively impact the user experience of all users: for those with low vision, a larger number of elements means that each element is smaller, while for screen-reader users there is an additional difficulty in navigating among many items.
We believe that screen reader users would be helped by additional messages during navigation that highlight the current position in the table and the selection status of the traversed elements, and that list the currently selected elements. For users with low vision, instead, we could increase the size of the table relative to the screen area, improve the color contrast of the table cells and the text, or provide a zoom on the currently traversed element. Additional iterations on the design of the tutorial for the table exercises could also help, as we previously experienced with the drag&drop exercises.
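The richer table-navigation messages proposed above could be generated as sketched below. The message wording and function are hypothetical, not taken from the app.

```python
# Hypothetical sketch of a table-cell announcement that conveys position,
# selection status, and the current selection, as proposed above.
def cell_message(row, col, word, selected, selection):
    status = "selected" if selected else "not selected"
    msg = f"{word}, row {row}, column {col}, {status}."
    if selection:
        msg += " Currently selected: " + ", ".join(selection) + "."
    return msg

print(cell_message(2, 3, "apple", False, ["pear", "plum"]))
```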

6.3 Finding the Desired Exercise

The analysis of the logs shows that users tend to start many exercises without completing them. We suspect that this happens because it is not straightforward to find a given exercise, so users start many before finding the intended one. Currently, the app presents a list of topics (e.g., “Learn to write”), each containing a list of exercise types, some of which have multiple difficulty levels. One problem is that the list of topics is long (12 items in English) and many exercises could be suitable for more than one topic, so it can be difficult for the user to find the correct one. For example, the “learn to write” topic contains two exercise types, but most of the other exercise types, such as those in the “alphabet” topic, are also designed to help the user learn to write.
So, an additional design requirement that we will address in the future is that users should easily find the exercise they intend to practice with. A number of solutions can be adopted to improve the system in this direction: re-organizing the exercises into more meaningful topics and providing a preview image and a meaningful description for each exercise; implementing a functionality to search for exercises by name or to filter them by the child’s age or grade; or defining preferred exercises and allowing a caregiver to prepare a list of exercises (possibly of different types) for the child to complete.
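The proposed search-and-filter functionality can be sketched as follows. The exercise metadata (topics, grades) is hypothetical; only the exercise names come from the app.

```python
# Sketch of the proposed exercise search/filter; metadata is hypothetical.
exercises = [
    {"name": "Write the letter of the alphabet in the box",
     "topics": ["alphabet", "learn to write"], "grades": [1, 2]},
    {"name": "Select the rhyming word",
     "topics": ["sounds"], "grades": [1, 2, 3]},
    {"name": "Choose the related words from the table",
     "topics": ["vocabulary"], "grades": [2, 3]},
]

def find(exercises, text=None, topic=None, grade=None):
    """Filter exercises by name substring, topic, and/or grade."""
    hits = exercises
    if text:
        hits = [e for e in hits if text.lower() in e["name"].lower()]
    if topic:
        hits = [e for e in hits if topic in e["topics"]]
    if grade:
        hits = [e for e in hits if grade in e["grades"]]
    return [e["name"] for e in hits]

print(find(exercises, text="rhyming"))   # ['Select the rhyming word']
```

Combining filters (e.g., topic plus grade) narrows the long topic list down to the few exercises relevant to a specific child, which is exactly the navigation burden identified above.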

6.4 Edutainment App Localization

In our previous work [2], we highlighted that exercises should be specifically designed to address the linguistic skills required for each language. This is supported by the remote logging data analysis. Indeed, we note how the popular exercises in English and Italian differ. Most importantly, we notice that language-specific exercises, for example exercises on sight words in English, are among the most popular ones.
Furthermore, we also highlight that the majority of the Italian users of the app also accessed exercises in English. This is consistent with prior works that identify foreign language learning as a challenge for children with VIB [20]. Based on this finding, we think that WordMelodies could be an effective instrument for improving foreign language literacy.

7 Conclusions and Future Work

This article presents the evaluation of the WordMelodies mobile application [2] through a user study with 11 children and the analysis of remotely collected usage logs. Results show that the app is usable, with a SUS score of 79, which is higher than other educational mobile apps and considered excellent based on prior benchmarks. The participants were autonomous in performing most of the exercises, even those based on the drag&drop interaction, which has so far been considered challenging for users with VIB. We achieved this result by carefully designing the interface elements and the screen reader messages provided while performing this interaction. Our design could be adapted to be used in other mobile applications, thus improving the accessibility of the drag&drop interaction in other mobile apps.
We also uncover two key limitations of the current app. First, the table exercise family is found to be more difficult than the others, and often users need support to complete these exercises. Our intuition is that the users find it difficult to visualize and navigate table exercises due to the presence of too many interactive elements. We discuss a number of possible design principles that could improve the accessibility of table exercises, but more generalizable findings, related to the accessibility of table content on touchscreens, could be drawn from a thorough future investigation. Another possible improvement is to provide more detailed instructions for the interaction with tables, similar to what we have done for the drag&drop exercises. We will also investigate alternative table exercises that are easier to answer than word association exercises.
The second limitation is related to exercise selection: with the growing number of exercises available in WordMelodies, the current list-based selection is not effective. We propose a number of solutions aimed to simplify the search for a desired exercise, which we will explore as future work. Another major challenge that we wish to explore as future work is the evaluation of the educational efficacy of WordMelodies. We are planning to achieve this through a longitudinal study that involves multiple children, with and without VIB, and educators.
Furthermore, other functionalities and personalization capabilities are also being investigated. As future work, we will expand the creation of exercises in other languages and investigate the use of the system for foreign language teaching. We are also working on a web-based dashboard for exercise creation and response monitoring, to support teachers in using the platform in the classroom. Finally, we will add support for personalizing the app, including changing colors and verbal interaction messages.

Acknowledgments

We wish to thank I.Ri.Fo.R. Pisa and in particular Michele Materazzi for the invaluable support provided during the experiments. We also thank Diane Brauner for her important insight during the conceptualization of the system.

Footnotes

4. See our previous paper [2] for the full list.
5. We report means for multipoint items as they were found to better indicate central tendency than medians [35].

References

[1]
Dragan Ahmetovic, Valeria Alampi, Cristian Bernareggi, Andrea Gerino, and Sergio Mascetti. 2017. Math melodies: Supporting visually impaired primary school students in learning math. In Proceedings of the 14th Web for All Conference on The Future of Accessible Work. ACM, 26.
[2]
Dragan Ahmetovic, Cristian Bernareggi, Irene Mantegazza, and Sergio Mascetti. 2021. WordMelodies: An inclusive mobile app supporting the acquisition of literacy skills. In Proceedings of the International Web for All Conference. ACM.
[3]
Dragan Ahmetovic, Cristian Bernareggi, Sergio Mascetti, and Federico Pini. 2020. SoundLines: Exploration of line segments through sonification and multi-touch interaction. In Proceedings of the ACM SIGACCESS Conference on Computers and Accessibility (ASSETS’20).
[4]
APH. [n.d.]. Braille Buzz. Retrieved January 29, 2022 from https://www.aph.org/product/braille-buzz/.
[5]
Dorit Aram and Orit Chorowicz Bar-Am. 2016. Mothers helping their preschool children to spell words: A comparison between interactions using the computer vs. pencil and paper. Int. J. Child-Comput. Interact. 7 (2016), 15–21.
[6]
Abraham Arcavi. 2003. The role of visual representations in the learning of mathematics. Educ. Stud. Math. 52, 3 (2003), 215–241.
[7]
Aaron Bangor, Philip T. Kortum, and James T. Miller. 2008. An empirical evaluation of the system usability scale. Int. J. Hum.–Comput. Interact. 24, 6 (2008), 574–594.
[8]
Florence Bara, Edouard Gentaz, and Dannyelle Valente. 2018. The effect of tactile illustrations on comprehension of storybooks by three children with visual impairments: An exploratory study. J. Vis. Impair. Blindn. 112, 6 (2018), 759–765.
[9]
Cristian Bernareggi, Dragan Ahmetovic, and Sergio Mascetti. 2019. MuGraph: Haptic exploration and editing of 3D chemical diagrams. In Proceedings of the Conference on Computers & Accessibility. ACM.
[10]
Beth Beschorner and Amy C. Hutchison. 2013. iPads as a literacy teaching tool in early childhood. Int. J. Educ. Math. Sci. Technol. 1, 1 (2013), 16.
[11]
Virginia Bower. 2011. Creative Ways to Teach Literacy: Ideas for Children Aged 3 to 11. Sage.
[12]
Deborah Chen and Jamie Dote-Kwan. 2018. Promoting emergent literacy skills in toddlers with visual impairments. J. Vis. Impair. Blindn. 112, 5 (2018), 542–550.
[13]
Common Core State Standards Initiative. 2010. Common Core State Standards for English Language Arts & Literacy in History/Social Studies, Science, and Technical Subjects, Appendix A.
[14]
C. J. Craig. 1996. Family support of the emergent literacy of children with visual impairments. J. Vis. Impair. Blindn. 90, 3 (1996), 194–200.
[15]
Nicol Dalla Longa and Ornella Mich. 2013. Do animations in enhanced ebooks for children favour the reading comprehension process? A pilot study. In Proceedings of the 12th International Conference on Interaction Design and Children. 621–624.
[16]
Frances Mary D’Andrea and Carol Farrenkopf. 2000. Looking to Learn: Promoting Literacy for Students with Low Vision. American Foundation for the Blind.
[17]
Pauline Davis and Vicky Hopwood. 2002. Including children with a visual impairment in the mainstream primary school classroom. J. Res. Spec. Educ. Needs 2, 3 (2002), 1–11.
[18]
Amy G. Dell, Deborah A. Newton, and Jerry G. Petroff. 2012. Assistive Technology in the Classroom: Enhancing the School Experiences of Students with Disabilities. Pearson Boston, MA.
[19]
Thomas Dick and Evelyn Kubiak. 1997. Issues and aids for teaching mathematics to the blind. Math. Teach. 90, 5 (1997), 344–349.
[20]
Athanasia Efstathiou and Stavroula Polichronopoulou. 2015. Teaching English as a foreign language to visually impaired students: Teaching materials used by teachers of English. Enabl. Access Pers. Vis. Impair. 1, 67 (2015), 67–75.
[21]
Perkins eLearning. [n.d.]. Blog on WordMelodies. Retrieved January 29, 2022 from https://www.perkinselearning.org/technology/blog/word-melodies-emerging-reading-and-writing-app.
[22]
Karen A. Erickson and Deborah Hatton. 2007. Literacy and visual impairment. In Seminars in Speech and Language, Vol. 28. Thieme Medical Publishers, Inc., 058–068.
[23]
William W. Gaver. 1986. Auditory icons: Using sound in computer interfaces. Hum.-Comput. Interact. 2, 2 (1986), 167–177.
[24]
Andrea Gerino, Nicolo Alabastro, Cristian Bernareggi, Dragan Ahmetovic, and Sergio Mascetti. 2014. Mathmelodies: Inclusive design of a didactic game to practice mathematics. In Proceedings of the International Conference on Computers Helping People With Special Needs. Springer.
[25]
Nora Griffin-Shirley, Devender R. Banda, Paul M. Ajuwon, Jongpil Cheon, Jaehoon Lee, Hye Ran Park, and Sanpalei N. Lyngdoh. 2017. A survey on the use of mobile applications for people who are visually impaired. J. Vis. Impair. Blindn. 111, 4 (2017), 307–323.
[26]
Shayl F. Griffith, Mary B. Hagan, Perrine Heymann, Brynna H. Heflin, and Daniel M. Bagner. 2020. Apps as learning tools: A systematic review. Pediatrics 145, 1 (2020).
[27]
Kenneth A. Hanninen and Aaster Raynor. 1975. Teaching the Visually Handicapped.
[28]
Andrea Hathazi and Mihaela Bujor. 2013. Development of tactile strategies and use of tactile resources in emergent literacy at children with visual impairment. Psychol. Paedagog. 13, 2 (2013), 41.
[29]
Italian Ministry of Education. 2004. Indicazioni Nazionali Per i Piani di Studio Personalizzati Nella Scuola Primaria (a report).
[30]
Hernisa Kacorri, Sergio Mascetti, Andrea Gerino, Dragan Ahmetovic, Valeria Alampi, Hironobu Takagi, and Chieko Asakawa. 2018. Insights on assistive orientation and mobility of people with visual impairment based on large-scale longitudinal data. Trans. Access. Comput. (2018).
[31]
Jeeeun Kim, Abigale Stangl, and Tom Yeh. 2014. Using LEGO to model 3D tactile picture books by sighted children for blind children. In Proceedings of the 2nd ACM Symposium on Spatial User Interaction. 146–146.
[32]
Abhishek Kumar, K. Vengatesan, M. Rajesh, and Achintya Singhal. 2019. Teaching literacy through animation & multimedia. Int. J. Innov. Technol. Explor. Eng. 8, 5 (2019), 73–76.
[33]
Steven Landau and Karen Gourgey. 2001. Development of a talking tactile tablet. Inf. Technol. Disabil. 7, 2 (2001).
[34]
Barbara Leporini and Eleonora Palmucci. 2017. A mobile educational game accessible to all, including screen reading users on a touch-screen device. In Proceedings of the 16th World Conference on Mobile and Contextual Learning. 1–4.
[35]
James R. Lewis. 1993. Multipoint scales: Mean and median differences and observed significance levels. Int. J. Hum.-Comput. Interact. 5, 4 (1993), 383–392.
[36]
James R. Lewis. 2018. The system usability scale: Past, present, and future. Int. J. Hum.–Comput. Interact. 34, 7 (2018), 577–590.
[37]
James R. Lewis and Jeff Sauro. 2018. Item benchmarks for the system usability scale. J. Usabil. Stud. 13, 3 (2018), 423–451.
[38]
Sandra Lewis. 2020. Education for students with visual impairments in the time of coronavirus: An approach to education through videoconferencing. Journal of Visual Impairment & Blindness 114, 3 (2020), 171–172.
[39]
Sandra Lewis and Joan Tolla. 2003. Creating and using tactile experience books for young children with visual impairments. Teach. Except. Childr. 35, 3 (2003), 22–29.
[40]
Localytics. 2022. 25% of Users Abandon Apps After One Use. Retrieved January 29, 2022 from https://uplandsoftware.com/localytics/resources/blog/25-of-users-abandon-apps-after-one-use/.
[41]
Pedro Lopes, Alexandra Ion, Willi Mueller, Daniel Hoffmann, Patrik Jonell, and Patrick Baudisch. 2015. Proprioceptive interaction. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. 939–948.
[42]
Sergio Mascetti, Andrea Gerino, Cristian Bernareggi, and Lorenzo Picinali. 2017. On the evaluation of novel sonification techniques for non-visual shape exploration. ACM Trans. Access. Comput. 9, 4 (2017), 1–28.
[43]
Sergio Mascetti, Giovanni Leontini, Cristian Bernareggi, and Dragan Ahmetovic. 2019. WordMelodies: Supporting children with visual impairment in learning literacy. In Proceedings of the ACM SIGACCESS Conference on Computers and Accessibility. ACM.
[44]
Carolane Mascle, Christophe Jouffrais, Gwenaël Kaminski, and Florence Bara. 2022. Displaying easily recognizable tactile pictures: A comparison of three illustration techniques with blind and sighted children. J. Appl. Dev. Psychol. 78 (2022), 101–164.
[45]
Maureen McLaughlin and Brenda J. Overturf. 2012. The common core: Insights into the K–5 standards. Read. Teach. 66, 2 (2012), 153–164.
[46]
Joshua A. Miele, Steven Landau, and Deborah Gilden. 2006. Talking TMAP: Automated generation of audio-tactile maps using Smith-Kettlewell’s TMAP software. Br. J. Vis. Impair. 24, 2 (2006), 93–100.
[47]
Kathleen M. Minke, George G. Bear, Sandra A. Deemer, and Shaunna M. Griffin. 1996. Teachers’ experiences with inclusive classrooms: Implications for special education reform. J. Spec. Educ. 30, 2 (1996), 152–186.
[48]
John Morris and James Mueller. 2014. Blind and deaf consumer preferences for android and iOS smartphones. In Inclusive Designing. Springer, 69–79.
[49]
Taufik Muhtarom et al. 2019. The urgency of interactive animated learning media development for facilitating literate skills for the student of primary school. Journal of Physics: Conference Series 1254, 1 (2019), 012034.
[50]
Michelle M. Neumann and David L. Neumann. 2017. The use of touch-screen tablets at home and pre-school to foster emergent literacy. J. Early Childhood Lit. 17, 2 (2017), 203–220.
[51]
Denis Nikitenko and Daniel Gillis. 2014. Touching the data: Exploring data sonification on mobile touchscreen devices. Proced. Comput. Sci. 34 (2014), 360–367.
[52]
ObjectiveED. 2022. Braille Sheets. Retrieved January 29, 2022 from https://www.objectiveed.com/braille-sheets.
[53]
Cynthia Putnam, Melisa Puthenmadom, Marjorie Ann Cuerdo, Wanshu Wang, and Nathaniel Paul. 2020. Adaptation of the system usability scale for user testing with children. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. 1–7.
[54]
Anne Spencer Ross, Xiaoyi Zhang, James Fogarty, and Jacob O. Wobbrock. 2017. Epidemiology as a framework for large-scale mobile application accessibility assessment. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility. 2–11.
[55]
Martin Schmitz, Mohammadreza Khalilbeigi, Matthias Balwierz, Roman Lissermann, Max Mühlhäuser, and Jürgen Steimle. 2015. Capricate: A fabrication pipeline to design and 3D print capacitive touch sensors for interactive objects. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology. 253–258.
[56]
Lei Shi, Holly Lawson, Zhuohao Zhang, and Shiri Azenkot. 2019. Designing interactive 3D printed models with teachers of the visually impaired. In Proceedings of the CHI Conference on Human Factors in Computing Systems. 1–14.
[57]
Kristen Shinohara and Jacob O. Wobbrock. 2011. In the shadow of misperception: Assistive technology use and social interactions. In Proceedings of the Conference on Human Factors in Computing Systems. ACM.
[58]
Abigale Stangl, Jeeeun Kim, and Tom Yeh. 2014. Technology to support emergent literacy skills in young children with visual impairments. In Proceedings of the Extended Abstracts of the 32nd Annual ACM Conference on Human Factors in Computing Systems. ACM, 1249–1254.
[59]
Statcounter. 2022. Desktop vs Mobile vs. Tablet Market Share Worldwide. Retrieved January 29, 2022 from https://gs.statcounter.com/platform-market-share/desktop-mobile-table.
[60]
Edward Steinfeld and Jordana Maisel. 2012. Universal Design: Creating Inclusive Environments. John Wiley & Sons.
[61]
Josephine M. Stratton and Suzette Wright. 1991. On the way to literacy: Early experiences for young visually impaired children. RE: View 23, 2 (1991), 55–62.
[62]
Sensory Sun. 2022. Exploring Braille with Madilyn and Ruff. Retrieved January 29, 2022 from https://www.sensorysun.org/apps/.
[63]
Ann P. Turnbull et al. 1995. Exceptional Lives: Special Education in Today’s Schools. ERIC.
[64]
Prokopia Vlachogianni and Nikolaos Tselios. 2021. Perceived usability evaluation of educational technology using the System Usability Scale (SUS): A systematic review. J. Res. Technol. Educ. (2021), 1–18.
[65]
David Wray, Richard Fox, Jane Medwell, and Louise Poulson. 2002. Teaching Literacy Effectively in the Primary School. Psychology Press.
[66]
Dominic Wyse, Russell Jones, Helen Bradford, and Mary Anne Wolpert. 2018. Teaching English, Language and Literacy. Routledge.
[67]
Wai Yu, Ramesh Ramloll, and Stephen Brewster. 2000. Haptic graphs for blind computer users. In Proceedings of the International Workshop on Haptic Human-Computer Interaction. Springer.

Cited By

  • (2024) Exploring Blind and Low-Vision Youth’s Digital Access Needs in School: Toward Accessible Instructional Technologies. ACM Transactions on Accessible Computing 17, 3 (2024), 1–31. DOI: 10.1145/3688805. Online publication date: 14-Aug-2024.
  • (2024) Cuddling Up With a Print-Braille Book: How Intimacy and Access Shape Parents’ Reading Practices with Children. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–15. DOI: 10.1145/3613904.3642763. Online publication date: 11-May-2024.
  • (2024) Game accessibility for visually impaired people: a review. Soft Computing 28, 17–18 (2024), 10475–10489. DOI: 10.1007/s00500-024-09827-4. Online publication date: 20-Jul-2024.
  • (2023) Web-Based 3D Virtual Environments Utilization in Primary and Secondary Education of Children with Multiple Impairments. Electronics 12, 13 (2023), 2792. DOI: 10.3390/electronics12132792. Online publication date: 24-Jun-2023.
  • (2023) Identification of Challenges and Best Practices for Including Users with Disabilities in User-Based Testing. Applied Sciences 13, 9 (2023), 5498. DOI: 10.3390/app13095498. Online publication date: 28-Apr-2023.

Published In

ACM Transactions on Accessible Computing, Volume 16, Issue 1 (March 2023), 322 pages.
ISSN: 1936-7228, EISSN: 1936-7236. DOI: 10.1145/3587922

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 29 March 2023
Online AM: 20 October 2022
Accepted: 20 September 2022
Revised: 21 July 2022
Received: 08 February 2022
Published in TACCESS Volume 16, Issue 1


Author Tags

  1. Visual impairment
  2. literacy education
  3. mobile edutainment apps

Qualifiers

  • Research-article


Article Metrics

  • Downloads (last 12 months): 604
  • Downloads (last 6 weeks): 87

Reflects downloads up to 15 Oct 2024.
