Course: Educational Assessment and Evaluation: Code: 8602 Assignment No: 02
provoke. In advertising, projective tests are used to evaluate responses to advertisements. The tests have also
been used in management to assess achievement motivation and other drives, in sociology to assess the
adoption of innovations, and in anthropology to study cultural meaning. Responses are applied differently in these disciplines than in psychology: the organization commissioning the research groups the responses of multiple respondents together for analysis, rather than interpreting the meaning of the responses given by a single subject. We find them especially useful when researching student
populations who are often bored with being asked to participate in yet another research project. In this article
we describe the main types of projective techniques available and discuss their benefits and drawbacks in
some detail. We address concerns about their validity and reliability and ethical issues surrounding their use.
In doing so we draw from a limited and sometimes dated literature on using these techniques outside of the
clinical setting. We illustrate our discussion with examples drawn from projects where we have employed
them.
Types of Projective Techniques
Lindzey (1959) identified five categories of projective techniques based on the type of responses they elicit.
1. Associative techniques. Respondents are asked to respond to a stimulus with the first thing that comes
to mind. Word association is the most frequently used associative technique and is especially useful for
identifying respondents’ vocabulary (Gordon & Langmaid, 1988). It is best used in circumstances where the
subject can verbalise a response, such as in one-to-one or group interviews, as the immediacy of response is
important.
2. Construction techniques require respondents to construct a picture or story and are loosely based on the
clinical Thematic Apperception Test. They encourage the expression of imagination and creativity.
Respondents may be presented with a picture and asked to explain what is happening in the picture (Mick et
al., 1992; Sherry, et al., 1993). They can be asked to draw their own picture. Matthews (1996) asked
students and secondary school pupils to draw pictures of scientists at work. Market researchers often ask
subjects to personify products and brands in words or pictures: if Head and Shoulders shampoo were a
person, what would this person be like?
3. Completion techniques. The respondent is presented with an incomplete stimulus, such as the beginning
of a sentence, and is asked to complete it or to complete thought and speech bubbles in a cartoon drawing.
Completion techniques generate less complex and elaborate data than construction techniques, but they
demand less from respondents as the stimulus material has more structure.
4. Choice or ordering techniques. Respondents select one from a list of alternatives, or arrange materials,
such as pictures or statements, into some order, or group them into categories according to their similarities
and dissimilarities (Mostyn, 1978). Market researchers present consumers with a variety of different brands
within a product category and ask that these be placed into groups. Often consumers will place certain
brands together in ways that were not envisaged by their brand development and management teams.
5. Expressive techniques. Respondents incorporate some stimulus into a novel production such as a role-
play (Lannon & Cooper, 1983). Respondents might be asked to prepare and act out a mini play where the
characters are, say, the computer, the software and a new user. Role-plays are best undertaken when
respondents know and are comfortable with each other and the researcher.
Q No: 2 Elaborate the purpose of assuring reliability and validity of measurement tools. Also
highlight the relationship between validity and reliability.
Reliability is a measure of the consistency of a metric or a method. Every metric or method we use,
including things like methods for uncovering usability problems in an interface and expert judgment, must
be assessed for reliability. In fact, before you can establish validity, you need to establish reliability. Here
are the four most common ways of measuring reliability for any empirical method or metric:
• inter-rater reliability
• test-retest reliability
• parallel forms reliability
• internal consistency reliability
Because reliability has its roots in educational measurement (think standardized tests), many of the terms we use to assess it come from the testing lexicon. Don’t let bad memories of testing lead you to dismiss their relevance to measuring the customer experience. The four methods below are the most common ways of measuring reliability for any empirical method or metric.
Inter-Rater Reliability
The extent to which raters or observers respond the same way to a given phenomenon is one measure of
reliability. Where there’s judgment there’s disagreement. Even highly trained experts disagree among
themselves when observing the same phenomenon. Kappa and the correlation coefficient are two common
measures of inter-rater reliability. Some examples include:
• Evaluators identifying interface problems
• Experts rating the severity of a problem
For example, we found that the average inter-rater reliability of usability experts rating the severity of usability problems was r = 0.52. You can also measure intra-rater reliability, whereby you correlate multiple scores from one observer. In that same study, we found that the average intra-rater reliability when judging problem severity was r = 0.58 (which is generally considered low reliability).
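To make the idea concrete, the following minimal Python sketch shows how inter-rater agreement might be quantified with a correlation coefficient and Cohen's kappa. The severity ratings and the four-point scale are hypothetical, invented only for illustration.

```python
import numpy as np

# Hypothetical severity ratings (1 = cosmetic ... 4 = critical) given by two
# evaluators to the same ten usability problems.
rater_a = np.array([3, 2, 4, 1, 2, 3, 4, 2, 1, 3])
rater_b = np.array([3, 2, 3, 1, 2, 4, 4, 2, 2, 3])

# Pearson correlation between the two sets of ratings.
r = np.corrcoef(rater_a, rater_b)[0, 1]

# Cohen's kappa: observed agreement corrected for chance agreement.
categories = np.union1d(rater_a, rater_b)
p_observed = np.mean(rater_a == rater_b)
p_chance = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories)
kappa = (p_observed - p_chance) / (1 - p_chance)

print(f"inter-rater r = {r:.2f}, Cohen's kappa = {kappa:.2f}")
```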
Test-Retest Reliability
Do customers provide the same set of responses when nothing about their experience or their attitudes has
changed? You don’t want your measurement system to fluctuate when all other things are static. Have a set
of participants answer a set of questions (or perform a set of tasks). Later (by at least a few days, typically),
have them answer the same questions again. When you correlate the two sets of measures, look for very high
correlations (r > 0.7) to establish retest reliability. As you can see, there’s some effort and planning
involved: you need participants to agree to answer the same questions twice. Few questionnaires measure
test-retest reliability (mostly because of the logistics), but with the proliferation of online research, we
should encourage more of this type of measure.
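Here is a minimal sketch of the test-retest check, assuming the same participants answer the same items on two occasions; the scores below are hypothetical, and the r > 0.7 rule of thumb is the one given in the paragraph above.

```python
import numpy as np

# Hypothetical satisfaction scores (1-7 scale) from the same eight
# participants, collected about a week apart with nothing changed in between.
time_1 = np.array([5, 6, 4, 7, 5, 3, 6, 4])
time_2 = np.array([5, 6, 5, 7, 4, 3, 6, 4])

retest_r = np.corrcoef(time_1, time_2)[0, 1]
verdict = "acceptable" if retest_r > 0.7 else "questionable"
print(f"test-retest r = {retest_r:.2f} ({verdict})")
```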
Parallel Forms Reliability
Getting the same or very similar results from slight variations on the question or evaluation method also
establishes reliability. One way to achieve this is to have, say, 20 items that measure one construct
(satisfaction, loyalty, usability) and to administer 10 of the items to one group and the other 10 to another
group, and then correlate the results. You’re looking for high correlations and no systematic difference in
scores between the groups.
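The sketch below illustrates the idea with the closely related split-half variant, in which every respondent answers both ten-item forms and the two form totals are correlated; the response matrix is simulated rather than real data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 30 respondents answering 20 items (1-5 scale) that all tap the
# same construct: each person's answers scatter around a latent level.
latent = rng.normal(3.0, 1.0, size=(30, 1))
responses = np.clip(np.round(latent + rng.normal(0.0, 0.7, size=(30, 20))), 1, 5)

# Split the item pool into two parallel ten-item forms and score each form.
form_a = responses[:, 0::2].sum(axis=1)  # odd-numbered items
form_b = responses[:, 1::2].sum(axis=1)  # even-numbered items

print(f"parallel-forms (split-half) r = {np.corrcoef(form_a, form_b)[0, 1]:.2f}")
```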
Internal Consistency Reliability
This is by far the most commonly used measure of reliability in applied settings. It’s popular because it’s the
easiest to compute using software—it requires only one sample of data to estimate the internal consistency
reliability. This measure of reliability is described most often using Cronbach’s alpha (sometimes called
coefficient alpha).
It measures how consistently participants respond to one set of items. You can think of it as a sort of average
of the correlations between items. Cronbach’s alpha ranges from 0.0 to 1.0 (a negative alpha means you
probably need to reverse some items). Since the late 1960s, the minimally acceptable measure of reliability
has been 0.70; in practice, though, for high-stakes questionnaires, aim for greater than 0.90. For example,
the SUS has a Cronbach’s alpha of 0.92.
The more items you have, the more internally reliable the instrument, so to increase internal consistency
reliability, you would add items to your questionnaire. Since there’s often a strong need to have few items,
however, internal reliability usually suffers. When you have only a few items, and therefore usually lower
internal reliability, having a larger sample size helps offset the loss in reliability.
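The sketch below computes coefficient alpha from a respondents-by-items score matrix using the standard formula alpha = k/(k - 1) * (1 - sum of item variances / variance of total scores); the five respondents and four items are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for a respondents-by-items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: five participants answering a four-item scale (1-5).
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```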
In Summary
Here are a few things to keep in mind about measuring reliability:
• Reliability is the consistency of a measure or method over time.
• Reliability is necessary but not sufficient for establishing a method or metric as valid.
• There isn’t a single measure of reliability; instead, there are four common measures of consistent
responses.
• You’ll want to use as many measures of reliability as you can (although in most cases one is
sufficient to understand the reliability of your measurement system).
• Even if you can’t collect reliability data, be aware of the ways in which low reliability may affect the
validity of your measures, and ultimately the veracity of your decisions.
Validity
• Face validity: Face validity simply means that the validity is taken at face value. As a check on face
validity, test/survey items are sent to teachers or other subject matter experts to obtain suggestions
for modification. Because of its vagueness and subjectivity, psychometricians abandoned this concept long ago. However, outside the measurement arena, face validity has come back in
another form. While discussing the validity of a theory, Lacity and Jansen (1994) define validity as
making common sense, and being persuasive and seeming right to the reader. For Polkinghorne
(1988), validity of a theory refers to results that have the appearance of truth or reality.
The internal structure of things, however, may not match their appearance, and professional knowledge often runs counter to common sense. The criteria of validity in research should go beyond "face,"
"appearance," and "common sense."
• Content validity: In the context of content validity, we draw an inference from the test scores to a larger domain of items similar to those on the test. Thus, content validity is concerned with sample-population representativeness, i.e. the knowledge and skills covered by the test items should be representative of the larger domain of knowledge and skills. For example, computer literacy includes skills in operating systems, word processing, spreadsheets, databases, graphics, the internet, and many others. However, it is difficult, if not impossible, to administer a test covering all aspects of computing. Therefore, only a sample of tasks is drawn from the universe of computer skills.
Content validity is usually established by content experts. Take computer literacy as an example again. A test of computer literacy should be written or reviewed by computer science professors because it is assumed that computer scientists know what is important in their discipline. At first glance, this approach looks similar to the validation process of face validity, yet there is a subtle difference. In content validity,
evidence is obtained by looking for agreement in judgments by judges. In short, face validity can be
established by one person but content validity should be checked by a panel, and thus usually it goes hand in
hand with inter-rater reliability.
However, this approach has some drawbacks. Usually experts tend to take their knowledge for granted and
forget how little other people know. It is not uncommon that some tests written by content experts are
extremely difficult.
Second, very often content experts fail to identify the learning objectives of a subject. Take the following
question in a philosophy test as an example:
What is the time period of the philosopher Epicurus?
a. 341-270 BC
b. 331-232 BC
c. 280-207 BC
d. None of the above
This type of question tests the ability to memorize historical facts, not the ability to philosophize or any form of
logical reasoning. The content expert may argue that "historical facts" are important for a student to further
understand philosophy. Let's change the subject to computer science and statistics. Look at the following
two questions:
When was the founder and CEO of Microsoft, William Gates III, born?
a. 1949
b. 1953
c. 1957
d. None of the above
Which of the following statements is true about ANOVA?
a. It was invented by R. A. Fisher in 1914
b. It was invented by R. A. Fisher in 1920
c. It was invented by Karl Pearson in 1920
d. None of the above
Any computer scientist or statistician would be hard pressed to accept that the above questions fulfill
content validity. As a matter of fact, the memorization approach is still a common practice among
instructors.
Further, sampling knowledge from a larger domain of knowledge involves subjective values. For example, a
test regarding art history may include many questions on oil paintings, but fewer questions on watercolor
paintings and photography because of the perceived importance of oil paintings in art history.
Content validity is sample-oriented rather than sign-oriented. A behavior is viewed as a sample when it is a
subgroup of the same kind of behaviors. On the other hand, a behavior is considered a sign when it is an
indicator or a proxy of a construct (Goodenough, 1949). Construct validity and criterion validity, which will
be discussed later, are sign-oriented because both of them indicate behaviors that are different from those of
the test.
Relationship
Because a tool can be reliable but not valid, many people mistakenly believe that they are unrelated. But,
there is actually a relationship between reliability and validity. It's just a complicated one.
Reliability and validity are important concepts within psychometrics. Reliability is generally thought to be
necessary for validity, but it does not guarantee validity.
Reliability and validity are, conceptually, quite distinct and there need not be any necessary relationship
between the two. Be wary of statements which imply that a valid test or measure has to be reliable. Where
the measurement emphasis is on relatively stable and enduring characteristics of people (e.g. their
creativity), a measure should be consistent over time (reliable). It also ought to distinguish between
inventors and the rest of us if it is a valid measure of creativity. A measure of a characteristic which varies
quite rapidly over time will not be reliable over time - if it is then we might doubt its validity. For example, a
valid measure of suicide intention may not be particularly stable (reliable) over time though good at
identifying those at risk of suicide.
Validity is often expressed as a correlation between the measure and some criterion. This validity coefficient
will be limited or attenuated by the reliability of the test or measure. Thus, the maximum correlation of the
test or measure with any other variable has an upper limit determined by the internal reliability.
Within classical test theory, predictive or concurrent validity (correlation between the predictor and the
predicted) cannot exceed the square root of the correlation between two versions of the same measure; that is,
reliability limits validity.
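As a worked illustration of this bound, the sketch below uses assumed figures (a test reliability of 0.70, a criterion reliability of 0.90, and an observed validity coefficient of 0.45) to show the ceiling that reliability places on validity, together with the classical correction for attenuation.

```python
import math

reliability_test = 0.70       # assumed reliability of the predictor measure
reliability_criterion = 0.90  # assumed reliability of the criterion measure
observed_validity = 0.45      # assumed observed predictor-criterion correlation

# Reliability caps validity: the observed correlation cannot exceed this bound.
max_validity = math.sqrt(reliability_test)

# Classical correction for attenuation: the estimated correlation between the
# underlying constructs if both measures were perfectly reliable.
corrected_validity = observed_validity / math.sqrt(reliability_test * reliability_criterion)

print(f"upper bound on validity = {max_validity:.2f}")
print(f"attenuation-corrected validity = {corrected_validity:.2f}")
```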
With this in mind, it can be helpful to conceptualize the following four basic scenarios for the relation
between reliability and validity:
1. Reliable (consistent) and valid (measures what it's meant to measure, i.e., a stable construct)
2. Reliable (consistent) and not valid (measures something consistently, but not what it's meant to measure)
3. Unreliable (not consistent) and not valid (an inconsistent measure that doesn't measure what it's meant to measure)
4. Unreliable (not consistent) and valid (measures what it's meant to measure, i.e., an unstable construct)
It is important to distinguish between internal reliability and test-retest reliability. A measure of a fluctuating
phenomenon such as suicide intention may be valid but have low test-retest reliability (depending on how
much the phenomenon fluctuates and how far apart the test and retest are), but the measure should exhibit
good internal consistency on each occasion.
Q No: 3 Write down learning outcome for any unit of Science for 9th class and develop an Essay type
test item with rubric, 5 multiple choice questions and 5 short questions for the written learning
outcome.
Humans are linguistic animals. Language is the most fundamental and pervasive tool we have for
interpreting our world and communicating with others as we act in an attempt to transform that world.
Whether they pursue an emphasis in literature or writing, science majors gain a deeper understanding of the resources of the world. Both literature and writing courses help students explore how writers use the creative resources of language (in fiction, poetry, nonfiction prose, and drama) to explore the entire range of human experience. Science courses help students build skills for understanding living and non-living organisms; become careful and critical readers; practice writing, in a variety of genres, as a process of intellectual inquiry and creative expression; and ultimately become more effective thinkers and communicators who are well-equipped for a variety of careers in our information-intensive society.
Specific learning outcomes for Science courses include the following:
1. Reading: Students will become accomplished, active readers who appreciate ambiguity and complexity,
and who can articulate their own interpretations with an awareness and curiosity for other perspectives.
2. Writing skills and process: Students will be able to write effectively for a variety of professional and
social settings. They will practice writing as a process of motivated inquiry, engaging other writers’ ideas as
they explore and develop their own. They will demonstrate an ability to revise for content and edit for
grammatical and stylistic clarity. And they will develop an awareness of and confidence in their own voice
as a writer.
3. Sense of Genre: Students will develop an appreciation of the formal elements of living and non-living organisms. They will recognize how writers can transgress or subvert generic expectations, as well as
fulfill them. And they will develop a facility at writing in appropriate genres for a variety of purposes and
audiences.
4. Culture and History: Students will gain knowledge of the major traditions of living and non-living
organisms in Science, and an appreciation for the diversity of literary and social voices within–and
sometimes marginalized by–those traditions. They will develop an ability to read texts in relation to their
historical and cultural contexts, in order to gain a richer understanding of both text and context, and to
become more aware of themselves as situated historically and culturally.
5. Critical Approaches: Students will develop the ability to read works of literary, rhetorical, and cultural
criticism, and deploy ideas from these texts in their own reading and writing. They will express their own
ideas as informed opinions that are in dialogue with a larger community of interpreters, and understand how
their own approach compares to the variety of critical and theoretical approaches.
6. Research Skills: Students will be able to identify topics and formulate questions for productive inquiry;
they will identify appropriate methods and sources for research and evaluate critically the sources they find;
and they will use their chosen sources effectively in their own writing, citing all sources appropriately.
7. Oral communication skills: Students will demonstrate the skills needed to participate in a conversation
that builds knowledge collaboratively: listening carefully and respectfully to others’ viewpoints; articulating
their own ideas and questions clearly; and situating their own ideas in relation to other voices and ideas.
Students will be able to prepare, organize, and deliver an engaging oral presentation.
8. Valuing literature, language, and imagination: Students will develop a passion for literature and language.
They will appreciate literature’s ability to elicit feeling, cultivate the imagination, and call us to account as
humans. They will cultivate their capacity to judge the aesthetic and ethical value of literary texts–and be
able to articulate the standards behind their judgments. They will appreciate the expressive use of language
as a fundamental and sustaining human activity, preparing for a life of learning as readers and writers.
Q No: 4 Describe the measures of relationship and also elaborate how these measures can be utilized in the interpretation of test results; provide examples where necessary.
Measures of Relationship
Measures of relationship are statistical measures which show a relationship between two or more variables
or two or more sets of data. For example, generally there is a high relationship or correlation between
parent's education and academic achievement. On the other hand, there is generally no relationship or
correlation between a person's height and academic achievement. The major statistical measure of
relationship is the correlation coefficient.
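Here is a minimal sketch of how the correlation coefficient captures these two contrasting cases, using invented values for parental education, student height, and achievement scores.

```python
import numpy as np

# Hypothetical data for eight students.
parent_education = np.array([8, 10, 12, 12, 14, 16, 16, 18])    # years of schooling
height_cm = np.array([171, 158, 180, 150, 166, 162, 175, 169])  # unrelated trait
test_score = np.array([55, 60, 64, 70, 72, 80, 78, 85])         # achievement score

r_education = np.corrcoef(parent_education, test_score)[0, 1]
r_height = np.corrcoef(height_cm, test_score)[0, 1]

print(f"parental education vs. achievement: r = {r_education:.2f} (strong positive)")
print(f"height vs. achievement:             r = {r_height:.2f} (near zero)")
```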
The variety of interpersonal relationships in contemporary society necessitates the development of
brief, reliable measures of satisfaction that are applicable to many types of close relationships. This article
describes the development of such a measure. In Study I, the 7-item Relationship Assessment Scale (RAS)
was administered to 125 college students who reported themselves to be "in love." Analyses revealed a
unifactorial scale structure, substantial factor loadings, and moderate intercorrelations among the items. The
scale correlated significantly with measures of love, sexual attitudes, self-disclosure, commitment, and
investment in a relationship. In Study II, the scale was administered to 57 college student couples in ongoing
relationships. Analyses supported a single factor, alpha reliability of .86, and correlations with relevant
relationship measures. The scale correlated .80 with a longer criterion measure, the Dyadic Adjustment
Scale (G. B. Spanier, 1976), and both scales were effective (with a subsample) in discriminating couples
who stayed together from couples who broke up. It is concluded that the RAS is a brief, psychometrically
sound, generic measure of relationship satisfaction.
Relationship education promotes practices and principles of premarital education, relationship resources,
relationship restoration, relationship maintenance, and evidence-based marriage education.
The formal organization of relationship education in the United States began in the late 1970s with a diverse group of professionals concerned that conventional methods of marriage therapy had produced no appreciable reduction in the elevated rate of divorce and out-of-wedlock births. The
motivation for relationship education was found in numerous studied observations of the elevated rates of
marital and family breakdown, school drop-outs, incarceration, drug
addiction, unemployment, suicide, homicide, domestic abuse and other negative social factors when divorce
and/or out-of-wedlock pregnancy were noted. In all of the negative categories noted above, adults whose childhood did not involve both of their parents were statistically over-represented.
Initial planning for the field of relationship education involved the participation
of psychologists, counselors, family life educators, social workers, marriage and family
therapists, psychiatrists, clergy from various faith traditions, policy makers, academicians in the fields
of social science, attorneys, judges, and lay persons. The goal was to seek the broadest possible dispersal of
research and marriage education skills courses which could improve interpersonal relationship functioning,
especially with married and pre-marital couples.
Relationship education is about helping people find strategies and solutions that fit for their unique
circumstances, values and relationship goals. That includes respecting their own personal responsibility for
their success and the decisions they make for their lives. Evidence-based skills training provides techniques
that are easy to understand and use to surface greater awareness of what lies beneath the tip of the iceberg,
navigate typical relationship challenges, and overcome differences that are a natural part of any close
relationship. Relationship education provides safe, time-limited structures for conversations that matter,
which are often much more about listening than talking. Learning to actively listen with empathy and
respect to another person’s perspective and experience, without judgment, defensiveness, blame, or an effort to quickly try to “fix” the issue or the person, makes it safer for intimates to develop greater
awareness of themselves and each other.
Relationship education teaches practical, usable skills for better understanding and safely expressing the full
range of emotions, including anger, sadness and fear. Upsetting feelings held in eventually either implode or
explode. Confiding painful feelings to a significant other leaves more room to experience feelings of love,
pleasure and happiness. Just as the most powerful waves lose their energy when they break against the
shore, the same is generally true of emotions. Relationship education enables distressed couples — with
good will towards each other, openness to learning, and a desire for the relationship to succeed — to deal
with differences and problems in ways that often lead to greater closeness, understanding, acceptance and
commitment. The issues that surface are typically symptoms of communication breakdowns, hidden
assumptions and expectations, behaviors that come from holding in upsetting feelings, or lack of skills for
constructive conflict resolution. Relationship education helps people develop their emotional intelligence,
including understanding that feelings of love come from the anticipation of pleasure in our interactions with
others. If instead of anticipating pleasure, we expect pain, feelings of love are unlikely to survive, let alone
thrive. What’s a pleasure changes during different stages and passages of life. Sustaining feelings of love
requires learning what it takes in today’s circumstances to stay a pleasure in each other’s lives.
Relationship education recognizes that although nearly all traditional marriage vows include the promise to
“love ‘till death do us part,” the marriage contract itself cannot be dependent on “feelings” of love, which
naturally wax and wane. That doesn’t mean commitment or obligations wax and wane. Emotions are
affected by many factors, often unrelated to issues inside our closest relationships. Marriage is the glue
that’s meant to hold couples and families together during periods of growth, change and challenge that are a natural part of life. Relationship education is built on the understanding that what happens in our closest
relationships impacts quality of life, fulfillment, happiness, and the ability to pursue cherished dreams and
aspirations. Relationships take regular attention. Without intentionally nurturing relationships, it’s easy to
become strangers, for relationships to wither and become vulnerable. Beyond staying a pleasure in each
other’s lives, the work of an intimate relationship is to consistently meet each other’s needs for bonding
(emotional and physical closeness). Relationship education provides a road map and usable skills for
sustaining healthy relationships that are an ongoing source of love, pleasure, happiness, and fulfillment for
both partners.
The student-teacher relationship can have a profound impact on the social and academic development of a
child. Extensive research supports that statement; however, descriptions of the ideal student-teacher
relationship are inconsistent. This is because there are several different theories that researchers reference
when describing the relationships. Similarly, there are many questionnaires that researchers have used to
measure the quality of student-teacher relationships and sometimes they differ drastically in their content.
This study reviews 14 of the questionnaires from five theoretical perspectives, combines the collective 170
questionnaire items into one survey, and gathers data from 628 students. The findings provide insights
related to measurement of student-teacher relationships and further our understanding of how students think
about their relations with teachers.
Consider the following case: Mario is a 7-year-old boy who was admitted to an inpatient rehabilitation
facility after a sledding accident in which he acquired a nondisplaced fracture at C1 and a closed traumatic
brain injury. He has been in the inpatient rehabilitation unit for 3 weeks, and the facility routinely
administers standardized functional assessments at admission, during periods of rapid change, and at
discharge. Mario has progressed from being very dependent at admission to rapidly attaining some basic
motor skills. He is now medically stable, cooperative, and appears ready to make changes in motor function
that will allow him to return home with some transfer and self-mobility skills. He recently started sitting by
himself and is standing with minimal support. The physical therapist has administered a functional test of
mobility at admission and recently to determine progress. The child has a score of 6.1 at admission and then
most recently a score of 35.9 at 3 weeks after admission. The physical therapist providing intervention (and
others) may have a number of questions regarding how to interpret the functional test results described in the
case. For example: What do the summary scores from the outcome measures mean? How do we interpret the
change score? Has the child achieved “clinically significant change” up to this point in the hospitalization
and physical therapy episode of care? Is the change meaningful? Is the change score beyond measurement
error that would typically occur in the routine administration of this measure? How can these scores be used
to help examine the patterns of mobility changes that have taken place? Because the meanings of scores on a
standardized instrument are not intuitively apparent,1 there is a need to provide meaning to scores that result
from tests and measures used in physical therapist practice.
Physical therapy and other health care fields are beginning to explore, in increasing depth, the proper
interpretation of tests and measures and the clinical changes that score improvements represent. Measures to
detect important effects related to physical therapy intervention must be valid (i.e., measure what is
intended), responsive (i.e., able to detect an important change, even if that change is small), and interpretable (i.e., the intended audience must understand the magnitude of effect).1,2 At the center of this issue of
“interpretability” is the attempt to have a better understanding of a “clinically significant difference”
(CSD).3,4 Understanding CSD can be a bewildering endeavor, particularly with the myriad of terms and
acronyms that are used across different fields and traditions. A number of terms to describe the
phenomenon of CSD have been proposed, but different terms often have a similar meaning, such as “reliable
change index” (RCI) and “minimal detectable change” (MDC), or “minimal clinically important difference”
(MCID) and “minimal important difference” (MID).
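One common way to operationalize a minimal detectable change is through the standard error of measurement, SEM = SD * sqrt(1 - reliability), with MDC95 = 1.96 * sqrt(2) * SEM. The sketch below applies this to Mario's change score from the case above, using an illustrative test-retest reliability of 0.90 and a reference-sample standard deviation of 12 points; both figures are assumptions, not values reported for any actual instrument.

```python
import math

# Assumed psychometric properties of the mobility measure (illustrative only).
reliability = 0.90  # test-retest reliability (e.g., an ICC)
sd = 12.0           # standard deviation of scores in a reference sample

sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
mdc_95 = 1.96 * math.sqrt(2) * sem      # minimal detectable change at 95% confidence

observed_change = 35.9 - 6.1            # Mario's change score from the case
print(f"SEM = {sem:.1f} points, MDC95 = {mdc_95:.1f} points")
print("Change exceeds measurement error." if observed_change > mdc_95
      else "Change is within measurement error.")
```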
(Part A)
Explain different methods of assigning grades.
Assigning Grades
Test results can be used for a variety of purposes, such as informing students of their progress, evaluating
achievement, and assigning grades.
Formative evaluation = activities aimed at providing feedback to students
Summative evaluation = activities that determine the worth, value, or quality of an outcome
Before assigning grades, consider: Are the grades solely based on academic achievement, or are there other
factors to consider?
• Factors could include attendance, participation, attitudes, etc.
Most experts recommend making academic achievement the sole basis for assigning grades. If desired, the
recommendation is to keep a separate rating system for such non-achievement factors to keep achievement
grades unbiased. When comparing grades (5th grade to 6th grade, for example) it is critical to consider how
grades were calculated. Grades based heavily on homework will not be comparable to grades based heavily
on testing. Criterion-referenced grading involves comparing a student’s performance to a specified level of performance. One common
system is the percentage system (A = 90-100%, B = 80-89%, etc.), in which marks directly describe student performance. However, there may be considerable variability between teachers in how they assign grades
(lower vs. higher expectations). The end-of-course grades assigned by instructors are intended to convey the
level of achievement of each student in the class. These grades are used by students, other faculty, university
administrators, and prospective employers to make a multitude of different decisions. Unless instructors use
generally-accepted policies and practices in assigning grades, these grades are apt to convey misinformation
and lead the decision-maker astray. When grading policies and practices are carefully formulated and
reviewed periodically, they can serve well the many purposes for which they are used.
What might a faculty member consider to establish sound grading policies and practices? The issues which
contribute to making grading a controversial topic are primarily philosophical in nature. There are no
research studies that can answer questions like: What should an "A" grade mean? What percent of the
students in my class should receive a "C?" Should spelling and grammar be judged in assigning a grade to a
paper? What should a course grade represent? These "should" questions require value judgments rather than
an interpretation of research data; the answer to each will vary from instructor to instructor. But all
instructors must ask similar questions and find acceptable answers to them in establishing their own grading
policies. It is not sufficient to have some method of assigning grades--the method used must be defensible
by the user in terms of his or her beliefs about the goals of an American college education and tempered by
the realities of the setting in which grades are given. An instructor's view of the role of a university
education consciously or unwittingly affects grading plans. The instructor who believes that the end product
of a university education should be a "prestigious" group which has survived four or more years of culling
and sorting has different grading policies from the instructor who believes that most college-aged youths
should be able to earn a college degree in four or more years.
To calculate final grades according to the weighted categories method:
1. At the bottom of the assignments page, mark the “Use weighted categories” box or in
“Grades,” under the “Setup” tab, select the “Weighted categories” option.
2. In “Assignments” (either in “Grades” or “Course Home”), assign each existing category the
appropriate weight percentage. When you create a new category, you will be required to enter
in a category weight.
These percentages can be viewed in the “% of total” column on the assignments page. It works best if your
category weight percentages add up to 100, but BYU Learning Suite will accommodate totals of less or more than 100%.
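Here is a minimal sketch of the weighted-categories calculation, using hypothetical category weights and category averages; the weights are normalized by their sum, mirroring the way totals other than 100% are accommodated.

```python
# Hypothetical category weights (% of total) and a student's average in each.
weights = {"quizzes": 20, "homework": 30, "midterms": 30, "final": 20}
averages = {"quizzes": 88.0, "homework": 92.0, "midterms": 75.0, "final": 81.0}

# Normalize by the total weight so the formula still works if the weights do
# not sum to exactly 100.
total_weight = sum(weights.values())
final_percent = sum(weights[c] * averages[c] for c in weights) / total_weight

print(f"weighted final grade = {final_percent:.1f}%")
```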
The total points method means that the points assigned to each assignment will reflect the effect of that
assignment on the students' final grades.
Example:
Grades will be based on the following learning activities: In-class quizzes (2 points per lecture for a total of
60 points), homework (4 points per item for a total of 100 points), midterm exams (40 points each for a total
of 160 points) and the final exam (100 points). To receive an A, you must have 378 points, an A- requires
362 points, a B+ 349 points, and so on.
To calculate grades according to the total points method:
• At the bottom of the assignments page, leave the “Use weighted categories” box blank or in
“Grades” under the “Setup” tab, select the “Total Points” option.
When you have chosen to use the total points method, in “Assignments,” the “% of Total” column will be
hidden, and the option to assign a category weight percentage when creating or editing a category will also
be hidden.
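A small sketch of the total points method, using the category totals and grade cutoffs from the example syllabus above; the points earned by the student are hypothetical.

```python
# Maximum points per category, taken from the example syllabus above.
max_points = {"quizzes": 60, "homework": 100, "midterms": 160, "final": 100}

# Hypothetical points earned by one student in each category.
earned = {"quizzes": 52, "homework": 90, "midterms": 140, "final": 88}

# Grade cutoffs from the example (lower grades omitted for brevity).
cutoffs = [("A", 378), ("A-", 362), ("B+", 349)]

total = sum(earned.values())
grade = next((letter for letter, minimum in cutoffs if total >= minimum), "below B+")
print(f"total = {total} of {sum(max_points.values())} points, grade = {grade}")
```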
I think the worst method is to let the grade solely depend on one exam at the end of the semester. The
students will most likely put no effort into learning for this exam during the semester and will instead invest a couple of days before the exam to review the lecture's content.
The most promising method I came across was using (bi-)weekly assignments on which the student had to earn at least 50% of the points. This way, they have to engage with the topic and solve the assignments in order to be able to take part in the exam.
In addition to this method, some lecturers awarded students with more than 80% of the overall points a bonus in the exam, i.e., they would obtain (say) a 10-point bonus in the exam. I myself think this really motivated me to invest time in the assignments and consequently in the lecture content.
Finally, there was one lecture where students could pose questions with possible answers in an online
learning platform. The other students could use these questions (after review by an assistant or professor) to
learn for the final exam. Students posing very good questions gained a bonus for the exam, similar to the
above approach.
(Part B)
What are the purposes of assigning grades to student achievement, and what do these grades reflect?
Few educators receive any formal training in assigning marks to students’ work or in grading students’
performance and achievement. As a result, when required to do so, most simply reflect back on what was
done to them and then, based on those experiences, try to develop policies and practices that they believe are
fair, equitable, defensible, and educationally sound. Their personal experiences as students, therefore, may
have significant influence on the policies and practices they choose to employ.
The grades teachers and professors assign to students’ work and performance have long been identified by
those in the measurement community as prime examples of unreliable measurement (Brookhart, 1993;
Stiggins, Frisbie, & Griswold, 1989). What one teacher considers in determining students’ grades may differ
greatly from the criteria used by another teacher (Cizek, Fitzgerald, & Rachor, 1996; McMillan, Workman,
& Myran, 1999). Even in schools and colleges where established grading policies offer guidelines for
assigning grades, significant variation remains in the grading practices of individual teachers and professors
(Brookhart, 1994; McMillan, 2001). One reason for this variation is that few teachers or professors receive
any formal training on grading and reporting. Most have scant knowledge of the various grading methods,
the advantages and disadvantages of each, or the effects of different grading policies (Stiggins, 1993, 1999).
As a result, the majority of teachers and professors rely on traditional grading practices, often replicating
what they experienced as students (Frary, Cross, & Weber, 1993; Guskey & Bailey, 2001; Truog &
Friedman, 1996). Because personal recollections of these experiences vary among teachers and professors,
so too do the practices and policies they employ. Grading on standards for achievement means a shift from
thinking that grades are what students earn to thinking that grades show what students learn. Teachers
sometimes talk about grades as pay students earn by doing their work. This seems fine as a simple image.
After all, doing assignments, studying, and paying attention are the work students do when they're in school.
But if "earning" grades gives people the idea that grades are students' pay for punching a clock, for showing
up and being busy, and for following directions no matter what the outcome, then the image is harmful. The
object of all this business isn't just the doing of it. The idea is that once students do all these things, they will
learn something.
One time when I was conducting a district workshop on grading, a young teacher became agitated.
Everything her students did "counted," she said. Everything! That was what they were in school for, and that
was how she kept them in line. If behavior and work habits didn't count toward grades, her classroom would
fall apart. It would be easy to mock this teacher, but maybe we can learn something valuable from her. I
think she really believed what she was saying and genuinely could not imagine coping with her classroom,
much less being "successful" (according to her own judgment), without the heavy-handed grading policy she
described. In fact, her resistance to my ideas might have been rooted in a fear that she really wasn't a good
teacher yet, and she might have realized deep down that being an "enforcer" wasn't a very educative
teaching style.
For grading to support learning, grades should reflect student achievement of intended learning outcomes. In
schools today, these learning outcomes are usually stated as standards for achievement. Grades on both
individual assessments and report cards should reflect students' achievement of performance standards on
intended learning outcomes. It has to be both, because if grades on individual assessments don't reflect
achievement of intended learning outcomes, the report card grade derived from them can't, either. The report
card grade is a summary of the meaning of a set of individual grades. And for any of this to work, students
have to understand what it is they are trying to learn and what the criteria for success on these learning
targets are (Moss & Brookhart, 2012). It will do no good to base grades on achievement if students don't
understand what it is they are supposed to be achieving.
Before we go any further down this road, stop a minute and reflect on what you think about this principle.
Probably most of you reading this book do not have quite as extreme a view about "grading everything" as
my agitated teacher did. However, many of you may genuinely think students should be graded for practice
work, effort, and perhaps even attitude and can explain why you think so. Although I do not hold this view, I
acknowledge that if behavior, effort, and attitude are not included in the grade, they have to be assessed and
handled in some other way. Please take a moment to ask yourself the reflection questions that follow.
Reflect on your current grading beliefs and practices. If you teach more than one grade or subject, think
about them one at a time as you contemplate these questions. For each question, try to figure out the
rationale behind your answer. Why do you think or grade in the manner you describe for each question?