
AIL Midterm Exam


1. Clarity in learning targets means that objectives should be stated in behavioral terms that can
be observed.
2. Reliability refers to the extent to which a test consistently measures what it is supposed to
measure.
3. Equivalency reliability assesses the agreement of measuring instruments over time.
4. Internal consistency measures the precision between the observers or the measuring
instruments used in a study.
5. Reliability is the extent to which a test produces different results each time it is administered.
6. Fairness in assessment includes avoiding bias and stereotypes in assessment tasks and
procedures.
7. Positive consequences of assessments include providing feedback to students and potentially
improving their motivation.
8. Practicality of a test means it should be effective only in controlled laboratory settings.
9. Instructional objectives should state the specific audience for the educational activity.
10. The behavior component of a behavioral objective should describe the intended results rather
than the instructional process.
11. Behavioral objectives can include covert activities if they are not accompanied by directly
observable indicators.
12. Conditions in a behavioral objective describe what the learner will be allowed or denied to use
during performance.
13. Degree in a behavioral objective describes the conditions under which the performance should
occur.
14. The split-half method is used to calculate reliability by comparing two sets of test scores.
15. The Kuder-Richardson formulas are used to estimate internal consistency of a test.

Which cognitive level involves the acquisition of facts, concepts, and theories?
a) Comprehension
b) Application
c) Knowledge
d) Evaluation
Which cognitive level is characterized by understanding and awareness of interrelationships?
a) Comprehension
b) Analysis
c) Synthesis
d) Application
What cognitive process involves breaking down a concept into its components?
a) Analysis
b) Synthesis
c) Knowledge
d) Application
Which cognitive process involves putting together components to summarize a concept?
a) Synthesis
b) Evaluation
c) Comprehension
d) Analysis
Which cognitive level involves judging the worth or value of a concept or principle?
a) Evaluation
b) Knowledge
c) Application
d) Comprehension
What term describes specific activities or tasks that a student can proficiently perform?
a) Competencies
b) Abilities
c) Skills
d) Knowledge
Which term refers to a cluster of skills?
a) Abilities
b) Skills
c) Competencies
d) Knowledge
What term is used for a group of related competencies?
a) Skills
b) Competencies
c) Abilities
d) Knowledge
Which assessment method is appropriate for evaluating various levels of cognitive objectives?
a) Essays
b) Objective tests
c) Checklists
d) Product rating scales
Which assessment method is effective for testing higher-level cognitive skills?
a) Objective tests
b) Checklists
c) Essays
d) Product rating scales
What type of assessment tool involves a list of characteristics or activities to be checked?
a) Performance checklist
b) Objective test
c) Essay
d) Checklist
What assessment method is used to rate products like book reports and creative endeavors?
a) Performance tests
b) Product rating scales
c) Objective tests
d) Checklists
Which method is used to determine if an individual behaves as expected during a task?
a) Oral questioning
b) Performance checklist
c) Observation
d) Self reports
Which assessment method involves evaluating a student’s ability to communicate ideas
coherently?
a) Observation
b) Performance checklist
c) Oral questioning
d) Self reports
Which type of validity considers the outward appearance of the test?
a) Content validity
b) Construct validity
c) Criterion-related validity
d) Face validity
What type of validity assesses if a test measures what it is intended to measure?
a) Content validity
b) Face validity
c) Construct validity
d) Criterion-related validity
Which validity type involves comparing a test with a specific criterion?
a) Content validity
b) Criterion-related validity
c) Face validity
d) Construct validity
What validity type is concerned with the test measuring a particular construct or factor?
a) Construct validity
b) Criterion-related validity
c) Face validity
d) Content validity
Which domain involves the development of intellectual skills and knowledge?
a) Affective domain
b) Cognitive domain
c) Psychomotor domain
d) Emotional domain
Which domain encompasses feelings, emotions, and attitudes?
a) Cognitive domain
b) Psychomotor domain
c) Affective domain
d) Behavioral domain
What domain involves utilizing motor skills and their coordination?
a) Cognitive domain
b) Affective domain
c) Psychomotor domain
d) Emotional domain
Which psychomotor subdomain involves the ability to apply sensory information to motor
activity?
a) Mechanism
b) Adaptation
c) Perception
d) Guided response
What psychomotor subdomain involves performing tasks with a degree of proficiency and skill?
a) Perception
b) Complex overt response
c) Set
d) Adaptation
Which psychomotor subdomain refers to performing a task with minimal guidance?
a) Mechanism
b) Guided response
c) Adaptation
d) Origination
Which assessment method is used to supplement oral questioning and performance tests?
a) Observation
b) Product rating scales
c) Checklists
d) Self reports
Which cognitive level involves transferring knowledge from one field of study to another?
a) Application
b) Comprehension
c) Analysis
d) Synthesis
Which cognitive process entails the summarizing of concepts by putting together components?
a) Synthesis
b) Analysis
c) Application
d) Evaluation
Which term describes behaviors that are directly observable in an assessment?
a) Covert behaviors
b) Overt behaviors
c) Cognitive behaviors
d) Affective behaviors
Which type of validity evaluates if a test measures what was taught in the instruction?
a) Content validity
b) Criterion-related validity
c) Construct validity
d) Face validity
What method involves assessing students’ stock knowledge through verbal responses?
a) Oral questioning
b) Performance checklist
c) Product rating scales
d) Observation

61 – 110

Ms. Thompson has set clear learning targets for her math class. Which of the following best
demonstrates clarity in her targets?
a) Students will learn about algebra.
b) Students will solve algebraic equations involving variables and coefficients.
c) Students will understand algebra.
d) Students will study algebraic concepts.
Mr. Johnson is preparing a history test. Which type of validity does he need to consider when
ensuring that the test covers all topics taught?
a) Face validity
b) Content validity
c) Criterion-related validity
d) Construct validity
In Ms. Anderson’s biology class, she uses a test to assess students' understanding of the cell
cycle. What type of validity is assessed when the test accurately measures knowledge of the cell
cycle as taught?
a) Face validity
b) Content validity
c) Criterion-related validity
d) Construct validity
Mr. Lee is comparing his chemistry test results with those from a standardized test. Which type of
validity is he using?
a) Face validity
b) Content validity
c) Criterion-related validity
d) Construct validity
Ms. Patel designs an open-ended essay question for her English exam to ensure the test reflects
students' understanding of literary analysis. What type of validity does this strategy improve?
a) Face validity
b) Content validity
c) Criterion-related validity
d) Construct validity
Mr. Green wants to test whether a new math assessment tool is reliable. What method would
best assess the consistency of the tool over time?
a) Interrater reliability
b) Stability reliability
c) Internal consistency
d) Equivalency reliability
In Ms. Garcia's class, two different teachers grade the same set of essays. What type of reliability
are they testing?
a) Stability reliability
b) Internal consistency
c) Interrater reliability
d) Equivalency reliability
Mr. Brown uses a test with questions that measure various levels of cognitive skills. Which
method is best for evaluating these different levels?
a) Essays
b) Objective tests
c) Checklists
d) Product rating scales
Ms. Robinson’s students are given a quiz to measure their recall of historical facts. What type of
cognitive level is being tested?
a) Comprehension
b) Application
c) Knowledge
d) Evaluation
Mr. Smith’s assessment method involves a checklist to evaluate students’ presentation skills.
What type of assessment tool is he using?
a) Performance checklist
b) Objective test
c) Essay
d) Checklist
Ms. Davis is developing a test for her geography class. She needs to ensure that it fairly
measures all topics covered in the syllabus. Which type of validity is she focusing on?
a) Face validity
b) Content validity
c) Criterion-related validity
d) Construct validity
During an exam review, Ms. Johnson notices that students are struggling with certain questions.
Which aspect of fairness might she need to reconsider?
a) Content validity
b) Face validity
c) Opportunity to learn
d) Criterion-related validity
Mr. Adams implements a new method of grading that provides detailed feedback. What positive
consequence is this likely to enhance for students?
a) Motivation
b) Confidentiality
c) Equivalency reliability
d) Cost-effectiveness
Ms. Clark uses a performance checklist to evaluate her students' ability to conduct scientific
experiments. What is this tool assessing?
a) Knowledge
b) Skills
c) Comprehension
d) Evaluation
In Mr. Taylor's class, students are asked to provide self-assessments on their project work. What
type of assessment method is being used?
a) Oral questioning
b) Performance checklist
c) Observation
d) Self reports
Ms. Wilson’s students are given a task to create a presentation on a given topic. Which cognitive
level are they demonstrating if they put together various sources to summarize their findings?
a) Comprehension
b) Analysis
c) Synthesis
d) Application
Mr. Brown needs to assess if his students’ test results remain consistent over multiple test
administrations. What reliability method should he use?
a) Stability reliability
b) Internal consistency
c) Interrater reliability
d) Equivalency reliability
Ms. Lewis is evaluating her science test using students’ feedback on the test’s appearance.
Which type of validity is she assessing?
a) Content validity
b) Construct validity
c) Face validity
d) Criterion-related validity
Mr. Adams wants to ensure his test measures the intended learning outcomes effectively. What
aspect of validity is he focusing on?
a) Content validity
b) Face validity
c) Construct validity
d) Criterion-related validity
Ms. Evans uses a rubric to evaluate student projects. What type of assessment method is she
using?
a) Performance checklist
b) Objective test
c) Product rating scale
d) Self report
Mr. Miller has designed a math test that measures students’ ability to apply formulas in new
situations. Which cognitive level is being tested?
a) Knowledge
b) Application
c) Analysis
d) Synthesis
Ms. Foster reviews the consistency of her test results when scored by different teachers. Which
type of reliability is she evaluating?
a) Stability reliability
b) Interrater reliability
c) Internal consistency
d) Equivalency reliability
Mr. Cooper’s students are required to perform a task with minimal guidance. What psychomotor
subdomain does this represent?
a) Perception
b) Guided response
c) Mechanism
d) Adaptation
Ms. Turner designs a test that asks students to use their knowledge in real-world scenarios. What
type of validity is she aiming to improve?
a) Content validity
b) Face validity
c) Criterion-related validity
d) Construct validity
Mr. Nelson needs to ensure that his test measures what was taught in his course. Which type of
validity should he focus on?
a) Content validity
b) Face validity
c) Construct validity
d) Criterion-related validity
Ms. Allen is preparing an exam and wants to test students’ understanding of complex concepts.
What method is best for assessing higher-level cognitive skills?
a) Objective tests
b) Checklists
c) Essays
d) Product rating scales
Mr. Carter is creating a test for his students and needs to ensure it is practical and easy to
administer. What aspect should he consider?
a) Face validity
b) Cost
c) Content validity
d) Criterion-related validity
Ms. Wright is using a rubric to evaluate her students’ oral presentations. What type of
assessment tool is she using?
a) Performance checklist
b) Objective test
c) Product rating scale
d) Self report
Mr. Bennett is concerned that his test results vary when given at different times. What type of
reliability is he assessing?
a) Stability reliability
b) Internal consistency
c) Interrater reliability
d) Equivalency reliability
Ms. Thompson's students are asked to analyze a text and identify key themes. Which cognitive
level are they demonstrating?
a) Application
b) Comprehension
c) Analysis
d) Synthesis
Mr. Lewis asks his students to create a unique project combining different skills learned
throughout the course. What cognitive level does this task involve?
a) Knowledge
b) Synthesis
c) Comprehension
d) Analysis
Ms. Martinez is developing a new assessment and wants to ensure it accurately measures
students' understanding of complex concepts. Which type of validity is she focusing on?
a) Content validity
b) Face validity
c) Criterion-related validity
d) Construct validity
Mr. Harris uses an assessment tool to measure students’ skills in conducting experiments. What
aspect of practical assessment is he considering?
a) Cost
b) Face validity
c) Internal consistency
d) Criterion-related validity
Ms. Collins is evaluating a test and needs to determine how it aligns with the curriculum goals.
Which type of validity is she assessing?
a) Content validity
b) Construct validity
c) Face validity
d) Criterion-related validity
Mr. Scott wants to assess his students’ abilities in solving complex problems. What cognitive level
is he focusing on?
a) Knowledge
b) Comprehension
c) Analysis
d) Application
Ms. Taylor is using a checklist to evaluate her students' performance in a lab. What type of
assessment tool is this?
a) Performance checklist
b) Objective test
c) Product rating scale
d) Self report
Mr. Walker’s students are asked to demonstrate their understanding of a concept by applying it
to a new situation. Which cognitive level is being assessed?
a) Application
b) Comprehension
c) Analysis
d) Synthesis
Ms. Evans uses multiple-choice questions to assess students' recall of facts. What cognitive level
is being tested?
a) Knowledge
b) Comprehension
c) Application
d) Analysis
Mr. Fisher is evaluating the fairness of his test. What aspect of fairness should he consider?
a) Content validity
b) Face validity
c) Opportunity to learn
d) Criterion-related validity
Ms. Robinson wants to ensure that her test is easy to administer and score. Which practical
aspect is she focusing on?
a) Time required
b) Content validity
c) Face validity
d) Cost
Mr. Peterson uses an assessment tool that provides immediate feedback to students. What
positive consequence does this likely enhance?
a) Confidentiality
b) Motivation
c) Cost-effectiveness
d) Face validity
Ms. Jackson is concerned that her test might not measure students' abilities accurately. Which
aspect of validity is she addressing?
a) Face validity
b) Content validity
c) Criterion-related validity
d) Construct validity
Mr. White compares his students’ test scores with a national standardized test. What type of
validity is he evaluating?
a) Content validity
b) Criterion-related validity
c) Face validity
d) Construct validity
Ms. Harris is developing an exam and wants to ensure it covers all topics taught. What aspect of
validity is she most concerned with?
a) Face validity
b) Content validity
c) Criterion-related validity
d) Construct validity
Mr. Green assesses the reliability of his test by administering it twice to the same group of
students. What type of reliability is he measuring?
a) Stability reliability
b) Internal consistency
c) Interrater reliability
d) Equivalency reliability
Ms. Lee uses a scoring guide to assess students' essays. What type of assessment tool is she
employing?
a) Performance checklist
b) Product rating scale
c) Objective test
d) Self report
Mr. King is evaluating the impact of his assessment on student learning. What positive
consequence is he focusing on?
a) Motivation
b) Confidentiality
c) Cost-effectiveness
d) Equivalency reliability
Ms. Wilson’s students are asked to reflect on their learning process. What type of assessment is
she using?
a) Self report
b) Performance checklist
c) Product rating scale
d) Objective test
Mr. Collins ensures that his test questions align with the learning objectives he has set. What
type of validity is he addressing?
a) Content validity
b) Face validity
c) Criterion-related validity
d) Construct validity
Ms. Roberts designs a test that is easy to administer and score. What practical aspect is she
focusing on?
a) Cost
b) Face validity
c) Internal consistency
d) Criterion-related validity

111. What is the primary characteristic of a well-defined learning target?


a) It focuses on abstract concepts.
b) It is stated in behavioral terms.
c) It includes subjective criteria.
d) It is difficult to measure.
112. Which type of validity ensures that a test measures what it is intended to measure
based on its format and content coverage?
a) Criterion-related validity
b) Construct validity
c) Content validity
d) Face validity
113. A test is evaluated based on how it appears to students. This type of validity is
known as:
a) Content validity
b) Criterion-related validity
c) Face validity
d) Construct validity
114. Which type of validity is concerned with comparing a test's results to an established
external criterion?
a) Face validity
b) Content validity
c) Criterion-related validity
d) Construct validity
115. To enhance the validity of a test, which of the following methods can be used?
a) Using more controlled (objective) items
b) Increasing the number of open-ended items
c) Reducing the test length
d) Focusing on multiple-choice questions only
116. Which reliability method assesses the consistency of test results over time by
administering the test to the same group at different times?
a) Equivalency reliability
b) Stability reliability
c) Internal consistency
d) Interrater reliability
117. When different raters score the same test and the scores are compared for
consistency, which type of reliability is being measured?
a) Stability reliability
b) Internal consistency
c) Interrater reliability
d) Equivalency reliability
118. What does internal consistency reliability measure?
a) Consistency of test scores over time
b) Consistency between different raters
c) Consistency within the test items themselves
d) Consistency between different forms of the test
119. If a test includes a mix of multiple-choice questions and essays, how might this
affect the test's practicality?
a) It will be easier to administer.
b) It will be less costly.
c) It might increase the time required for scoring.
d) It will have higher content validity.
120. A teacher is concerned that her test might not cover all necessary topics. What type
of validity is she concerned about?
a) Construct validity
b) Face validity
c) Content validity
d) Criterion-related validity
121. What is the primary focus of fairness in assessment?
a) Accuracy of the test results
b) Consistency of test administration
c) Equal opportunity for all students
d) Cost of the assessment
122. If a teacher uses a rubric to evaluate student essays, which type of assessment tool
is being employed?
a) Performance checklist
b) Product rating scale
c) Objective test
d) Self report
123. The term "equivalency reliability" refers to:
a) The consistency of test results over time
b) The extent to which two items measure the same concept
c) The degree to which different raters agree
d) The consistency of results within a single test
124. A test measures a student’s ability to apply learned material to new situations. This
is an example of which cognitive domain level?
a) Knowledge
b) Comprehension
c) Application
d) Analysis
125. To ensure that a test is practical, a teacher should consider which of the following
factors?
a) Degree of construct validity
b) Time required for administration and scoring
c) Amount of open-ended questions
d) Number of test items
126. In the context of assessment, what does "construct validity" refer to?
a) The degree to which a test measures a theoretical construct
b) The appearance of the test to students
c) The comparison of test scores with an external criterion
d) The coverage of content taught
127. If an assessment tool is designed to measure student performance against a
predefined set of standards, it is likely evaluating:
a) Criterion-related validity
b) Face validity
c) Content validity
d) Construct validity
128. An assessment tool provides feedback immediately after a test. This likely
enhances:
a) Confidentiality
b) Motivation
c) Cost-effectiveness
d) Face validity
129. Which of the following describes the "condition" component of a behavioral
objective?
a) The observable behavior expected from the student
b) The degree to which the behavior must be performed
c) The circumstances under which the behavior will occur
d) The specific audience for the activity
130. Which of the following is a method to assess the "stability" of a test’s reliability?
a) Comparing scores from two different forms of the test
b) Administering the test to the same group on different occasions
c) Evaluating the consistency among different raters
d) Assessing the internal consistency of test items
131. If a test measures a specific skill at an identical level of difficulty across different
items, which type of reliability is being assessed?
a) Equivalency reliability
b) Stability reliability
c) Internal consistency
d) Interrater reliability
132. The term "face validity" is best described as:
a) The test's ability to predict future performance
b) The test's alignment with instructional content
c) The test's appearance and acceptability to students
d) The test's ability to measure a theoretical construct
133. If a test is easy to administer, score, and interpret, it is considered:
a) Valid
b) Reliable
c) Practical
d) Fair
134. What aspect of a test does "content validity" primarily assess?
a) The overall appearance of the test
b) The test’s ability to predict performance on other tests
c) The extent to which the test covers the material taught
d) The test’s consistency across different raters
135. When assessing a student’s ability to analyze complex problems, which cognitive
domain level is being targeted?
a) Knowledge
b) Comprehension
c) Application
d) Analysis
136. A teacher wants to avoid bias in assessment tasks. This concern relates to which
aspect of assessment?
a) Fairness
b) Validity
c) Reliability
d) Practicality
137. Which of the following best describes the difference between goals and objectives in
instructional planning?
a) Goals are specific and measurable; objectives are broad.
b) Goals provide a general direction; objectives are specific and measurable.
c) Goals are easier to measure than objectives.
d) Objectives are abstract; goals are concrete.
138. In an assessment, what does "degree" refer to?
a) The observable behavior expected from the student
b) The conditions under which the behavior is performed
c) The level of performance required to meet the objective
d) The specific audience for the activity
139. What is the role of a "performance checklist" in assessment?
a) To evaluate specific skills through a rating scale
b) To provide immediate feedback on test performance
c) To measure student knowledge through multiple-choice questions
d) To assess specific criteria in a performance task
140. Which type of validity involves comparing test results with scores from a national
standardized test?
a) Content validity
b) Criterion-related validity
c) Face validity
d) Construct validity
141. A teacher needs to ensure that all students had the opportunity to learn the
material covered in a test. This is related to:
a) Fairness
b) Validity
c) Reliability
d) Practicality
142. Which type of reliability is measured by correlating scores from two different forms
of the same test?
a) Stability reliability
b) Equivalency reliability
c) Internal consistency
d) Interrater reliability
143. If a test evaluates the extent to which it measures a specific theoretical construct,
this is an example of:
a) Content validity
b) Construct validity
c) Face validity
d) Criterion-related validity
144. A teacher is designing an assessment that will be easy to score and interpret. Which
aspect of practical assessments is she focusing on?
a) Cost
b) Ease of administration
c) Time required
d) Ease of scoring and interpretation
145. Which of the following is an example of a behavior in the psychomotor domain?
a) Memorizing a list of terms
b) Demonstrating a physical skill, such as swimming
c) Understanding a theoretical concept
d) Analyzing a historical event
146. What is an important consideration when evaluating the ethics of an assessment?
a) The cost of the assessment tool
b) The consistency of test results
c) Ensuring confidentiality of student data
d) The time required for administration
147. A teacher wants to ensure that a test measures the skills taught in the lesson.
Which type of validity should she consider?
a) Content validity
b) Criterion-related validity
c) Face validity
d) Construct validity
148. To ensure that the test results are dependable across different testing conditions, a
teacher should focus on:
a) Validity
b) Reliability
c) Fairness
d) Practicality
149. If a test measures the ability to apply learned material in new contexts, which
Bloom’s Taxonomy level is being assessed?
a) Knowledge
b) Comprehension
c) Application
d) Evaluation
150. What is a key factor in assessing the "practicality" of a test?
a) The degree to which it measures learning outcomes
b) The ability to administer and score it efficiently
c) The number of behavioral objectives included
d) The level of construct validity
1. True. Learning targets should be clearly stated in behavioral terms to ensure they are
observable and measurable.
2. True. Reliability refers to a test's consistency in measuring what it is intended to measure.
3. False. Equivalency reliability assesses whether two items measure identical concepts at
the same level of difficulty, not agreement over time.
4. True. Internal consistency measures how well different items on a test assess the same
characteristic or quality.
5. False. Reliability refers to consistency in results, not variability.
6. True. Fairness includes avoiding bias and ensuring that assessments are equitable for all
students.
7. True. Assessments provide feedback and can improve student motivation and self-esteem.
8. False. Practicality refers to how effectively a test can be administered in real-world
situations, not just controlled settings.
9. True. Instructional objectives must specify the audience to ensure that the intended
outcomes are clear.
10. True. Behavioral objectives should focus on the results of learning, not the instructional
methods used.
11. False. Covert activities should not be included unless they are paired with observable
behaviors.
12. True. Conditions in objectives outline what resources or restrictions apply during the
performance of the behavior.
13. False. Degree describes the level of performance required, not the conditions of
performance.
14. True. The split-half method involves splitting one test into two halves and correlating the two
sets of scores to assess reliability.
15. True. The Kuder-Richardson formulas are used to estimate the internal consistency of a
test, particularly for dichotomous items. (A worked sketch of both estimates follows this key.)
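
As a supplement to items 14 and 15: both estimates can be computed directly from an item-score matrix. The sketch below is illustrative only, assuming dichotomous (0/1) item scores with one row per student and one column per item; the function names and data are not from the exam.

```python
# Minimal illustrative sketch (not part of the exam): estimating
# split-half reliability and KR-20 from a matrix of dichotomous (0/1)
# item scores, one row per student, one column per item.
import numpy as np

def split_half_reliability(scores: np.ndarray) -> float:
    """Split-half reliability with the Spearman-Brown correction.

    Correlates totals on the odd- and even-numbered items, then steps
    the half-test correlation r up to full length: 2r / (1 + r).
    """
    odd = scores[:, 0::2].sum(axis=1)    # totals on odd-numbered items
    even = scores[:, 1::2].sum(axis=1)   # totals on even-numbered items
    r = np.corrcoef(odd, even)[0, 1]     # correlation between the halves
    return 2 * r / (1 + r)

def kr20(scores: np.ndarray) -> float:
    """Kuder-Richardson formula 20 for dichotomous items:
    KR-20 = (k / (k - 1)) * (1 - sum(p * q) / var(total)),
    where p is each item's proportion correct and q = 1 - p.
    """
    k = scores.shape[1]                          # number of items
    p = scores.mean(axis=0)                      # proportion correct per item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)

# Illustrative data: six students answering eight items (1 = correct).
answers = np.array([
    [1, 1, 1, 0, 1, 1, 0, 1],
    [1, 0, 1, 1, 1, 0, 1, 1],
    [0, 1, 0, 1, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 1, 0, 0, 0, 1, 0],
    [1, 1, 0, 1, 1, 1, 1, 1],
])
print(f"Split-half (Spearman-Brown): {split_half_reliability(answers):.2f}")
print(f"KR-20: {kr20(answers):.2f}")
```

The Spearman-Brown step is needed because a correlation between two half-length tests understates the reliability of the full-length test.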

31. c) Knowledge – Knowledge involves acquiring facts, concepts, and theories.


32. a) Comprehension – Comprehension involves understanding and awareness of interrelationships.
33. a) Analysis – Analysis involves breaking down a concept into its components.
34. a) Synthesis – Synthesis involves putting together components to summarize a concept.
35. a) Evaluation – Evaluation involves judging the worth or value of a concept or principle.
36. c) Skills – Skills refer to specific activities or tasks a student can perform proficiently.
37. c) Competencies – Competencies are clusters of skills.
38. c) Abilities – Abilities are made up of related competencies categorized as cognitive, affective, and
psychomotor.
39. b) Objective tests – Objective tests assess various levels of cognitive objectives.
40. c) Essays – Essays test higher-level cognitive skills.
41. d) Checklist – A checklist involves a list of characteristics or activities to be checked.
42. b) Product rating scales – Product rating scales are used to rate products like book reports and creative
endeavors.
43. b) Performance checklist – A performance checklist is used to determine if an individual behaves as
expected during a task.
44. c) Oral questioning – Oral questioning assesses a student’s ability to communicate ideas coherently.
45. d) Face validity – Face validity refers to the outward appearance of the test.
46. a) Content validity – Content validity assesses if the test measures what it is intended to measure.
47. b) Criterion-related validity – Criterion-related validity involves comparing the test with a specific
criterion.
48. a) Construct validity – Construct validity involves the test measuring a particular construct or factor.
49. b) Cognitive domain – The cognitive domain involves intellectual skills and knowledge.
50. c) Affective domain – The affective domain encompasses feelings, emotions, and attitudes.
51. c) Psychomotor domain – The psychomotor domain involves motor skills and their coordination.
52. c) Perception – Perception involves applying sensory information to motor activity.
53. b) Complex overt response – Complex overt response refers to performing tasks with proficiency and
skill.
54. b) Guided response – Guided response involves performing a task with minimal guidance.
55. a) Observation – Observation is used to supplement oral questioning and performance tests.
56. a) Application – Application involves transferring knowledge from one field of study to another.
57. a) Synthesis – Synthesis involves summarizing concepts by putting together components.
58. b) Overt behaviors – Overt behaviors are directly observable in assessments.
59. a) Content validity – Content validity evaluates if a test measures what was taught in instruction.
60. a) Oral questioning – Oral questioning involves assessing students’ stock knowledge through verbal
responses.
61.  b) Students will solve algebraic equations involving variables and coefficients. Explanation:
This option specifies actions that can be directly observed and measured.
62.  b) Content validity ensures the test covers all topics taught. Explanation: Content validity assesses
whether the test includes all relevant material from the curriculum.
63.  b) Content validity checks if the test measures the specific content taught. Explanation: This ensures
that the test accurately reflects the topics covered in instruction.
64.  c) Criterion-related validity is assessed by comparing test results with an established test.
Explanation: This method evaluates how well the test correlates with an already validated benchmark.
65.  d) Construct validity is improved by using open-ended items to assess alignment with how the
concept was taught. Explanation: Open-ended items can provide insights into whether the test aligns
with the instructional objectives.
66.  b) Stability reliability assesses if the test results are consistent over time. Explanation: This type of
reliability measures the consistency of test results across different time points.
67.  c) Interrater reliability evaluates consistency between different graders. Explanation: This ensures
that different graders provide similar evaluations of the same work.
68.  c) Essays are best for assessing complex cognitive skills. Explanation: Essays allow for the
demonstration of higher-order thinking and complex problem-solving abilities.
69.  c) Recall of historical facts tests the knowledge cognitive level. Explanation: This tests the ability to
remember and retrieve factual information.
70.  a) A performance checklist evaluates specific performance criteria. Explanation: A checklist is used
to assess whether specific performance standards are met.
71.  b) Content validity ensures the test covers all the syllabus topics. Explanation: This checks if the test
reflects the entirety of the syllabus content.
72.  c) Fairness includes ensuring students had the opportunity to learn all test content. Explanation:
Ensuring that all students had the chance to learn the material is crucial for a fair assessment.
73.  a) Detailed feedback enhances student motivation and engagement. Explanation: Providing
comprehensive feedback helps students understand their progress and motivates them to improve.
74.  a) A performance checklist is used to evaluate specific skills in performance tasks. Explanation: It
assesses whether the student meets predefined performance criteria.
75.  d) Students assess their own performance through self-reports. Explanation: Self-reports involve
students reflecting on their own learning and performance.
76.  c) Combining sources to create new understanding involves synthesis. Explanation: Synthesis is the
process of integrating information from multiple sources to form new insights.
77.  a) Stability reliability assesses consistency of results over time. Explanation: This type of reliability
ensures that test results remain stable when administered at different times.
78.  c) Face validity assesses how the test appears to students. Explanation: Face validity is concerned
with whether the test seems relevant and appropriate to those taking it.
79.  a) Content validity ensures the test measures what was taught. Explanation: This verifies that the
test accurately reflects the material covered in instruction.
80.  c) A product rating scale evaluates the quality of student products. Explanation: A rubric assesses
the quality and effectiveness of student work based on specific criteria.
81.  b) Applying formulas in new situations tests the application cognitive level. Explanation: This
involves using learned formulas in novel contexts to test application skills.
82.  b) Interrater reliability evaluates consistency between different scorers. Explanation: It ensures that
different individuals grading the same work reach similar conclusions.
83.  d) Adaptation involves using motor skills in new situations. Explanation: Adaptation refers to
applying learned motor skills to different contexts or tasks.
84.  d) Construct validity ensures the test measures the intended construct or concept. Explanation: This
type of validity confirms that the test accurately measures the theoretical concept it is intended to assess.
85.  a) Content validity ensures the test covers all content taught. Explanation: This checks if the test
includes all the material covered in the curriculum.
86.  c) Essays are suitable for assessing complex cognitive skills. Explanation: Essays allow for a
detailed evaluation of complex thought processes and understanding.
87.  b) Practical assessments consider cost and ease of administration. Explanation: Cost and practicality
are important factors in determining the feasibility of assessments.
88.  c) A product rating scale evaluates the quality of student presentations. Explanation: A rubric
assesses how well student presentations meet specific quality criteria.
89.  a) Stability reliability ensures consistency of test results over time. Explanation: This type of
reliability checks if the test results are consistent when repeated.
90.  c) Identifying themes involves analyzing text. Explanation: Analysis involves breaking down text to
identify key themes and patterns.
91.  b) Combining different skills to create a new project involves synthesis. Explanation: Synthesis is
the integration of various skills and knowledge to produce a new project.
92.  d) Construct validity ensures the test aligns with the intended construct. Explanation: This confirms
that the test measures the theoretical concept it is supposed to.
93.  a) Practicality includes evaluating costs associated with assessments. Explanation: Practicality takes
into account the costs and ease of implementing assessments.
94.  a) Content validity ensures alignment with curriculum goals. Explanation: This ensures that the test
content is consistent with the goals of the curriculum.
95.  c) Solving complex problems involves analysis. Explanation: Analysis is required to break down
complex problems into manageable parts.
96.  a) A performance checklist evaluates specific performance criteria in lab tasks. Explanation: It
assesses whether specific criteria for performance in lab tasks are met.
97.  a) Applying concepts to new situations tests the application level. Explanation: This involves using
learned concepts in different contexts to assess application skills.
98.  a) Multiple-choice questions test recall of facts. Explanation: These questions assess the ability to
remember and retrieve factual information.
99.  c) Ensuring fairness by evaluating if students had a chance to learn the content involves opportunity
to learn. Explanation: Fairness includes verifying that all students had equal opportunity to learn the
material.
100.  a) Practical aspects include time efficiency for administration and scoring. Explanation:
Time efficiency is a key consideration in the practical implementation of assessments.
101.  b) Immediate feedback enhances student motivation. Explanation: Providing prompt
feedback helps students stay motivated and engaged in their learning.
102.  d) Construct validity ensures the test accurately measures the intended concept. Explanation:
This validity confirms that the test measures what it is supposed to, based on the theoretical construct.
103.  b) Comparing with a standardized test assesses criterion-related validity. Explanation: This
method checks how well the test results align with those of a standardized benchmark.
104.  b) Content validity ensures the test covers all topics taught. Explanation: This verifies that
the test encompasses all relevant material from the syllabus.
105.  a) Stability reliability assesses consistency of test results over time. Explanation: This type
of reliability measures whether the test yields consistent results across different occasions.
106.  b) A product rating scale evaluates the quality of student essays. Explanation: A rubric
assesses the quality and effectiveness of student essays based on specific criteria.
107.  a) Positive feedback enhances student motivation and engagement. Explanation:
Constructive and encouraging feedback boosts students' motivation and involvement in their studies.
108.  a) Students reflect on their learning process through self-reports. Explanation: Self-reports
involve students evaluating their own learning and performance.
109.  a) Content validity ensures alignment with learning objectives. Explanation: This verifies
that the test measures the objectives outlined in the learning goals.
110.  a) Practical aspects consider the cost and ease of administration. Explanation: Practical
considerations include evaluating the financial and logistical feasibility of assessments.
111.  b) It is stated in behavioral terms.
Explanation: Learning targets should be stated in behavioral terms to ensure they are observable and
measurable.
112.  c) Content validity
Explanation: Content validity assesses whether the test covers the material taught and the format of the
instrument.
113.  c) Face validity
Explanation: Face validity refers to how a test appears to students and its perceived relevance.
114.  c) Criterion-related validity
Explanation: Criterion-related validity compares test results with an established external criterion to
assess accuracy.
115.  b) Increasing the number of open-ended items
Explanation: Open-ended items can enhance the validity by allowing more subjective responses.
116.  b) Stability reliability
Explanation: Stability reliability measures consistency over time by retesting the same group.
117.  c) Interrater reliability
Explanation: Interrater reliability assesses the consistency of scores from different raters. (Both
checks are sketched in code after this answer key.)
118.  c) Internal consistency
Explanation: Internal consistency measures how well test items assess the same characteristic.
119.  c) It might increase the time required for scoring.
Explanation: Mixing question types can complicate and lengthen the scoring process.
120.  c) Content validity
Explanation: Content validity ensures that the test covers all relevant topics taught.
121.  c) Equal opportunity for all students
Explanation: Fairness involves providing all students with an equal opportunity to succeed.
122.  b) Product rating scale
Explanation: A rubric is used to evaluate the quality of student products against specific criteria.
123.  b) The extent to which two items measure the same concept
Explanation: Equivalency reliability involves comparing items that measure the same concept.
124.  c) Application
Explanation: Application involves using learned material in new situations.
125.  b) Time required for administration and scoring
Explanation: Practicality involves the efficiency and feasibility of administering and scoring the test.
126.  a) The degree to which a test measures a theoretical construct
Explanation: Construct validity assesses whether the test measures a theoretical construct.
127.  a) Criterion-related validity
Explanation: Criterion-related validity evaluates the test against a specific criterion or benchmark.
128.  b) Motivation
Explanation: Immediate feedback can enhance student motivation and learning.
129.  c) The circumstances under which the behavior will occur
Explanation: Conditions describe the context and resources available during performance.
130.  b) Administering the test to the same group on different occasions
Explanation: Stability reliability measures consistency over time by retesting.
131.  a) Equivalency reliability
Explanation: Equivalency reliability assesses whether different test items measure the same concept.
132.  c) The test's appearance and acceptability to students
Explanation: Face validity concerns the test's appearance and perceived relevance to students.
133.  c) Practical
Explanation: Practicality involves ease of administration, scoring, and interpretation.
134.  c) The extent to which the test covers the material taught
Explanation: Content validity assesses whether the test covers all relevant content.
135.  d) Analysis
Explanation: Analysis involves breaking down and understanding complex problems.
136.  a) Fairness
Explanation: Fairness addresses issues such as avoiding bias and providing equal opportunities.
137.  b) Goals provide a general direction; objectives are specific and measurable.
Explanation: Goals set the overall direction, while objectives outline specific, measurable steps.
138.  c) The level of performance required to meet the objective
Explanation: Degree refers to the level of performance or criteria needed for acceptable performance.
139.  d) To assess specific criteria in a performance task
Explanation: A performance checklist evaluates specific criteria related to a task.
140.  b) Criterion-related validity
Explanation: Criterion-related validity involves comparing the test results with scores from a standard
test.
141.  a) Fairness
Explanation: Ensuring all students had the opportunity to learn relates to fairness in assessment.
142.  b) Equivalency reliability
Explanation: Equivalency reliability is measured by comparing scores from two different forms of the
test.
143.  b) Construct validity
Explanation: Construct validity assesses if the test measures a specific theoretical construct.
144.  d) Ease of scoring and interpretation
Explanation: Practicality involves factors such as ease of scoring and interpretation.
145.  b) Demonstrating a physical skill, such as swimming
Explanation: Psychomotor domain includes motor skills and physical coordination.
146.  c) Ensuring confidentiality of student data
Explanation: Ethics in assessment involves protecting student data confidentiality.
147.  a) Content validity
Explanation: Content validity ensures the test measures what was taught.
148.  b) Reliability
Explanation: Reliability focuses on the consistency and dependability of test results.
149.  c) Application
Explanation: Application involves using learned material in new contexts.
150.  b) The ability to administer and score it efficiently
Explanation: Practicality concerns the feasibility of administering and scoring the test.
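
Items 116, 117, 130, and 142 above all describe reliability checks that reduce to simple computations. The sketch below is illustrative, not from the exam: stability (test-retest) reliability is a Pearson correlation between two administrations, and interrater agreement is summarized here with Cohen's kappa. Equivalency reliability would reuse the same correlation, applied to two parallel forms of the test instead of two administrations.

```python
# Illustrative sketch (names and data are hypothetical): stability
# reliability via test-retest correlation, and interrater reliability
# via Cohen's kappa.
import numpy as np

def stability_reliability(first: np.ndarray, second: np.ndarray) -> float:
    """Test-retest (stability) reliability: Pearson correlation between
    the same students' scores on two administrations of the same test."""
    return float(np.corrcoef(first, second)[0, 1])

def cohens_kappa(rater_a: np.ndarray, rater_b: np.ndarray) -> float:
    """Cohen's kappa: agreement between two raters, corrected for chance.
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    p_obs = float(np.mean(rater_a == rater_b))    # observed agreement
    p_exp = sum(                                  # agreement expected by chance
        float(np.mean(rater_a == c)) * float(np.mean(rater_b == c))
        for c in np.union1d(rater_a, rater_b)
    )
    return (p_obs - p_exp) / (1 - p_exp)

# Illustrative data: six students tested twice, and two teachers
# grading the same six essays.
week_1 = np.array([78, 85, 62, 90, 71, 88])
week_3 = np.array([80, 83, 65, 92, 70, 85])
print(f"Stability (test-retest): {stability_reliability(week_1, week_3):.2f}")

teacher_1 = np.array(list("ABBCAD"))
teacher_2 = np.array(list("ABCCAD"))
print(f"Interrater (Cohen's kappa): {cohens_kappa(teacher_1, teacher_2):.2f}")
```

A kappa of 1 means perfect agreement; values near 0 mean agreement no better than chance.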
6. The revised taxonomy focuses on six levels: remember, understand, apply, analyze, evaluate, and
create.
7. The objectives should include all important outcomes of the course or subject matter.
8. The psychomotor domain includes utilizing motor skills and the ability to coordinate them.
9. The objectives should not be in harmony with sound principles of learning.
10. The affective domain involves our feelings, emotions, and attitudes.
6. TRUE
7. TRUE
8. TRUE
9. FALSE
10. TRUE

6. Which of the following is an example of the "Internalizes Values" category in the affective
domain?
A. Listening attentively to a speaker
B. Demonstrating ethical behavior in all situations
C. Identifying different types of plants
D. Completing a complex math equation
ANSWER: B. Demonstrating ethical behavior in all situations
7. The psychomotor domain focuses primarily on:
A. Intellectual understanding
B. Physical movement and skills
C. Emotional responses
D. Attitudes and values
ANSWER: B. Physical movement and skills.
8. Analyzing is an example of:
A. Psychomotor Domain
B. Affective Domain
C. Cognitive Domain
D. All of the above
ANSWER: C. Cognitive Domain
9. In the affective domain, which category refers to organizing values into priorities by resolving
conflicts between them?
A. Valuing
B. Organization
C. Responding
D. Internalizing values
Answer: B. Organization
10. Which category in the psychomotor domain emphasizes using sensory cues to guide motor
activity?
A. Adaptation
B. Set
C. Perception
D. Origination
Answer: C. Perception
6. Builds a structure or pattern from diverse elements.
7. The behavior is pervasive, consistent, predictable, and, most importantly, characteristic of the
learner.
8. Proficiency is indicated by a quick, accurate, and highly coordinated performance requiring a
minimum of energy.
9. Coordinating and adapting a series of actions to achieve harmony and internal consistency.
10. Uses effective body language, such as gestures and facial expressions.
6. Creating
7. Internalizes values
8. Complex overt response (expert)
9. Articulation
10. Nondiscursive communication
True 1. Application is the transfer of knowledge from one field of study to another, or from
one concept to another concept in the same discipline.
False 2. Knowledge, validity, reliability, checklists, and competencies are part of the
hierarchy of educational objectives proposed by Bloom. (knowledge,
comprehension, application, analysis, …)
True 3. The written-response instrument that can best assess the students’ grasp of
higher-level cognitive skills is the essay.
True 4. Reliability, consistency, dependability, and stability can be estimated by the
split-half method.
True 5. Criterion-related validity is the type of validity that answers the question
“How does this compare with an existing valid test?”

1. It is the concept that assessment should be fair, covering a number of aspects such as the
students’ opportunity to learn, avoiding teacher stereotypes, etc.
A. Practicality B. Validity C. Reliability D. Fairness

2. Informed consent, anonymity, and confidentiality in gathering, recording, and reporting data
should be considered under what concept?
A. Reliability B. Privacy C. Fairness D. Ethics

3. The objective of this assessment method is to assess the students’ stock knowledge.
A. Performance Test B. Oral Questioning C. Reliability D. Observations and Self Reports
4. Abilities are made up of related competencies categorized as the following:

A. Physiological B. Psychomotor C. Affective D. Cognitive

5. This is the extent to which tests or procedures assess the same characteristic, skill, or
quality.
A. Competencies B. Construct validity C. Internal consistency D. Equivalency reliability
Analysis 1. Breaking down of a concept or idea into its components and
explaining the concept as a composition of these components.
Validity 2. Appropriateness, correctness, meaningfulness, and usefulness of
the specific conclusions that a teacher reaches regarding the
teaching-learning situation.
Comprehension 3. Understanding; involves cognition or awareness of the
interrelationships.
Knowledge 4. It is the acquisition of facts, concepts, and theories.
Evaluation and Reasoning 5. Valuing and judgment, or putting worth on a concept or
principle.

1. Goals are broad, overarching statements of what an organization wants to achieve in
the long run. (TRUE)
2. In General Educational Program Objectives, Behavior is measurable, short-term,
specific, and observable. (FALSE)
3. The general goals that a university instructor aims to accomplish are outlined in the
General Educational Program Objectives. (FALSE)
4. Purpose, Goals, and Objectives are static and never change over time. (FALSE)
5. Goals are usually considered to be broad statements about what a
university, college, program, or instructor would like students to achieve. (TRUE)

6. What refers to the ability to assess the value or purpose of something based on
specific criteria and to back that judgment with reasoning?
A. Evaluation
B. Application
C. Synthesis
D. Comprehension
Answer: A

7. What refers to the breaking down of components, involving understanding organization,
clarifying key points, and drawing conclusions?
A. Evaluation
B. Synthesis
C. Application
D. Analysis
Answer: D

8. These are the four main things that an objective should specify, except:
A. Audience
B. Behavior
C. Conflict
D. Degree
Answer: C

9. What are the two concepts that are different but related to each other?
A. Purposes of Instructional Goals and Objectives
B. Goals and Objectives
C. Practicality and Efficiency
D. Learning and Outcomes
Answer: B

10. In conforming to the standards of conduct of a given profession or group, there are
four ethical issues that may be raised, except:
A. Possible harm to the participants
B. Confidentiality and temptation to assist students
C. Presence of concealment or deception
D. Awareness of learning
Answer: D

6. Ability to judge value for a purpose, based on criteria, and to support the judgment with reasons.
— Evaluation
7. To break material down into parts; understanding organization, clarifying, and concluding.
— Analysis
8. Grasping the meaning of material; interpreting, predicting outcomes and effects.
— Comprehension
9. An ability to use learned material in a new situation; applying rules, laws, methods, and
theories.
— Application
10. A statement that specifies in behavioral (measurable) terms what a learner will
be able to do as a result of instruction.
— Learning Objectives