
Critical Appraisal 1


Critical appraisal checklist for Randomized Controlled Trials (RCT)

Adapted from the JBI critical appraisal tools: https://joannabriggs.org/critical-appraisal-tools

Article Title: “Augmented Biofeedback Training with Physical Therapy Improves Visual-Motor
Integration, Visual Perception, and Motor Coordination in Children with Spastic Hemiplegic
Cerebral Palsy: A Randomized Control Trial”

Alwhaibi RM, Alsakhawi RS, ElKholi SM. Augmented Biofeedback Training with Physical
Therapy Improves Visual-Motor Integration, Visual Perception, and Motor Coordination in
Children with Spastic Hemiplegic Cerebral Palsy: A Randomised Control Trial. Phys Occup Ther
Pediatr. 2020;40(2):134-151. doi:10.1080/01942638.2019.1646375

Reviewer: Raneen Allos

Checklist responses (Yes / No / Unclear / NA):

1. Was true randomization used for assignment of participants to treatment groups? Unclear
2. Was allocation to treatment groups concealed? No
3. Were treatment groups similar at the baseline? Yes
4. Were participants blind to treatment assignment? Unclear
5. Were those delivering treatment blind to treatment assignment? No
6. Were outcomes assessors blind to treatment assignment? Unclear
7. Were treatment groups treated identically other than the intervention of interest? Yes
8. Was follow up complete and, if not, were differences between groups in terms of their follow up adequately described and analyzed? No
9. Were participants analyzed in the groups to which they were randomized? Yes
10. Were outcomes measured in the same way for treatment groups? Yes
11. Were the instruments used to measure outcomes reliable and valid? Yes
12. Was appropriate statistical analysis used? Yes
13. Was the trial design appropriate, and any deviations from the standard RCT design (individual randomization, parallel groups) accounted for in the conduct and analysis of the trial? Yes
Overall credibility of the article's results per your assessment, on a scale of 0-10:

I would give this study a seven.

© JBI, 2020. All rights reserved. JBI grants use of these tools for research purposes only. All other enquiries
should be sent to jbisynthesis@adelaide.edu.au.
Explanation of critical appraisal of a Randomized Controlled Trial (RCT)

Brief and structured summary of the article in the form of an abstract


• This study was attempting to answer the question “What is the efficacy of combining physical
therapy treatments and augmented biofeedback to improve motor function in children with
cerebral palsy?”
• This study was designed as a randomized controlled trial with three groups (A, B, C),
randomly assigned from a pool of 45 participants. All participants were young children
(ages 5-8) with cerebral palsy. Each group had 15 children, selected with no bias toward age
or sex. Group A received typical physical therapy interventions, Group B received
biofeedback training using the “E-Link Upper Limb Exerciser” (a set of computer
games), and Group C received both. Each participant went through the program for three
months, in one-hour sessions three times a week. Participants were pretested before the
study began and posttested at the twelve-week mark.
o The pre- and posttests were drawn from the Beery-Buktenica Developmental Test of
VMI (short form, 6th edition) (Beery, Buktenica, & Beery, 2010). The tests objectively
measure visual-motor integration (VMI), visual perception (VP), and motor
coordination (MC) by having children match shapes and trace them with a pencil.
These tests were timed and standardized.
o Physical therapy interventions included holding a marker between thumb and index
finger, buttoning and unbuttoning, drawing shapes on paper, stacking cubes, opening
and closing scissors, and stretching exercises of the affected upper limb.
o Biofeedback intervention included computer games that provided motivational feedback
to the players. The games included soccer, space shooting, driving, etc. which all
required the participants to move their upper limbs. The games were presented on a
screen and used different input tools to receive information from the player.
• This study used mixed MANOVA testing to compare pretest and posttest results of VMI, VP,
and MC scores of the three participant groups.
o VMI: significant changes were seen between group A and C, but not between A and B
or B and C. Each group scored higher in their posttest results for VMI.
o VP: significant changes were seen between group A and C, but not between A and B or
B and C. Each group scored higher in their posttest results for VP.
o MC: significant changes were seen between group A and C and between B and C, but
not between A and B. Each group scored higher in their posttest results for MC.
• The results above support the hypothesis that combining physical therapy
interventions with augmented biofeedback significantly improves functional coordination
in children with cerebral palsy. The authors noted limitations, such as the inability to
follow up with participants after the three months of treatment. The sample size was also
small, only 45 children, all within the same city (Riyadh, Saudi Arabia). Follow-up studies
should examine which type of augmented feedback (visual, auditory, or tactile) works best,
as this study did not distinguish between them.

1. Was true randomization used for assignment of participants to treatment groups?

The article does not explicitly state that true randomization was used to assign treatment
groups. The authors said that they split the children into three groups randomly, with “no
difference in age (p = 0.75) or sex (p = 0.18)”. The mean age of the groups was 6.4-6.6 years,
but the sex compositions do not look truly random: Group A had 5 boys and 10 girls, Group B
had 8 boys and 7 girls, and Group C had 10 boys and 5 girls. Those numbers seem too
convenient to have arisen by chance, and because the randomization process was not
explicitly described, I am not sure true randomization was used.
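The reported sex counts can be checked against the authors' p = 0.18 with a chi-square test of independence. The sketch below is my own verification, not the authors' reported method; it implements Pearson's chi-square by hand from the published counts (5/10, 8/7, 10/5 boys/girls), and for a 3x2 table (df = 2) the p-value reduces to exp(-chi2/2).

```python
import math

# Reported sex counts per group: (boys, girls)
observed = {"A": (5, 10), "B": (8, 7), "C": (10, 5)}

rows = list(observed.values())
n = sum(sum(r) for r in rows)                           # 45 participants
col_totals = [sum(r[j] for r in rows) for j in (0, 1)]  # 23 boys, 22 girls

# Pearson chi-square statistic: sum of (observed - expected)^2 / expected
chi2 = 0.0
for boys, girls in rows:
    row_total = boys + girls
    for j, obs in enumerate((boys, girls)):
        expected = row_total * col_totals[j] / n
        chi2 += (obs - expected) ** 2 / expected

# With df = (3 groups - 1) * (2 sexes - 1) = 2, the chi-square
# survival function simplifies to exp(-x / 2).
p_value = math.exp(-chi2 / 2)

print(f"chi2 = {chi2:.2f}, p = {p_value:.2f}")  # chi2 = 3.38, p = 0.18
```

The result reproduces the article's p = 0.18 for sex, so the reported statistic is at least internally consistent, even though a non-significant p-value cannot confirm that the allocation itself was truly random.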

2. Was allocation to groups concealed?

Allocation to groups could not be concealed because all three groups received different
treatments. The testers were physical therapists and staff members of the Disabled
Children's Association, from which the 45 participants were recruited. To carry out the
study, the team had to be aware of group assignments in order to deliver the correct form
of treatment.

3. Were treatment groups similar at the baseline?

The groups were all similar demographically: all participants were between the ages of 5
and 8 and had spastic hemiplegic cerebral palsy, and the inclusion and exclusion criteria
ensured they were very similar at baseline. The groups' pretest results were similar as
well. All three groups were pretested and posttested with standardized visual-motor
integration, visual perception, and motor coordination tests. Individual scores were not
reported, but the group averages were given and appear very similar; their standard
deviations did not exceed 5 points.

4. Were participants masked (or blinded) to treatment assignment?

The article does not make clear whether the children were masked to their assignment. As
children with cerebral palsy, some may have had cognitive impairments such that they were
not truly aware they were in a study and simply believed they were playing games.

5. Were those delivering treatment blind to treatment assignment?

The article does not state whether those delivering treatment were blind to assignment,
but because three different treatments were being delivered, it is reasonable to assume
they were not: they had to know each participant's group to deliver the correct
treatment.

6. Were outcomes assessors blind to treatment assignment?

The article provides no information about whether the outcome assessors were blinded to
treatment assignment.

7. Were treatment groups treated identically other than the intervention of interest?

Yes, the treatment groups were treated almost identically. Group C spent 30 minutes on
each of the two interventions, while Groups A and B spent the full 60 minutes on their
single intervention; Group C still received the same types of intervention, just half the
time for each.

8. Was follow up complete and if not, were differences between groups in terms of their
follow up adequately described and analyzed?

Follow-up after the study was not completed. The authors noted this in their limitations
section as a funding and timing constraint.

9. Were participants analyzed in the groups to which they were randomized?

The participants were analyzed at posttest in the groups to which they were randomized;
no interim testing was done during the three months of intervention.

10. Were outcomes measured in the same way for treatment groups?

Yes, outcomes were measured and recorded with the same three standardized tests for all
participants, and everyone was pretested and posttested in the same way.

11. Were the instruments used to measure outcomes reliable and valid?

The instrument used to measure outcomes was the Beery-Buktenica Developmental Test of
VMI (short form, 6th edition). This is a “standardized test with good reliability and
validity” that evaluates hand and finger movements. Scores are out of thirty points and
are converted to standardized scores based on the participant's age.

12. Was appropriate statistical analysis used?

The study used a mixed MANOVA to compare the three groups' pre- and posttest scores on
the three outcome measures, followed by post-hoc tests to locate the specific group
differences. These analyses were appropriate because the study had multiple independent
and dependent variables to compare.

13. Was the trial design appropriate for the topic, and any deviations from the standard RCT
design accounted for in the conduct and analysis?

I believe this design was appropriate for the topic. The researchers started from a
baseline knowledge of their topic, and the research was conducted well, though in a fairly
general manner. The results are available and may serve as the basis for a follow-up
study. The investment in this study was worthwhile because the results showed that the
combination of interventions helped the participants and can improve the lives of
children with CP.

Additional consideration

The article was published in Physical and Occupational Therapy in Pediatrics, which is a
credible journal. The authors received approval from the Deanship of Scientific Research
Council and the Research Ethics Committee of the Health and Rehabilitation Sciences
College in Riyadh, Saudi Arabia.

Why you should or should not use this evidence

The results of this study are sound and supported by the evidence provided, though there
are limitations and open questions about the randomization process and the scoring
results. The overall safety of the children was never noted, but given the type of
intervention and the setting (the Disabled Children's Association), I believe their safety
was prioritized even though it was not explicitly stated. The benefits outweighed the
risks, as the interventions carried few risk factors.
