
505 Evaluation Chapter Questions


Brian Mitchell EDTECH 505 Chapter Questions Assignment

Chapter 1 Questions

1. What do all evaluations have in common? All evaluations involve, or should involve, the systematic collection and analysis of data to determine the effectiveness of a program. The data can be used in a variety of ways, from determining whether goals have been met to informing decisions, but data are the common denominator of all evaluations.

2. How would you characterize the differences between the efficiency, effectiveness, and impact of a program? Efficiency, effectiveness, and impact are all important aspects of determining the success of a program, as discovered through evaluation. Efficiency refers to the return on investment that results from a program; in many cases, the other factors don't matter if a program costs more than it returns. Effectiveness is how well a program accomplishes the specific goals set for it, and can be thought of as the short-term results of the program. Impact captures the long-term results of a program, as exhibited by a change in behavior among the program participants. A program may be deemed effective by an ending evaluation of participants' attitudes or skills, but if it does not lead the participants to change the way they see or do things on a daily basis, it may not be considered to have much impact.

3. Why evaluate in the first place? There are several reasons for evaluation. Evaluation may be necessary to continue funding a program or to compare the effectiveness of one program against another. Evaluation may also be used to assess a pilot program in order to decide whether a similar program should be developed or continued. Lastly, evaluation is very effective for identifying the strengths of a program and recommending changes that may be necessary to improve it.

Chapter 2 Questions

1. What are the benefits and limitations of an evaluation? There are many benefits to evaluation. One is that the sponsors of a program can tell whether they are getting a good return on their investment and whether they want to continue that investment. Another is that the staff involved can become more proactive in working together for the good of the program, even if only through discussion and improved working relationships. Evaluation can also help identify new audiences the program may benefit, or opportunities for growth in the program. Finally, and perhaps most importantly, evaluation can give an organization a better sense of a program's outcomes, whether intentional or not. There are also several limitations to evaluation. One is that evaluation may indicate changes that are not really feasible. Another is that evaluation shines more light on a program, which could result in greater criticism or scrutiny, which in turn could affect participants' responses to evaluative questioning. Lastly, evaluation can sometimes result in trivial changes that may not be best for the program as a whole. This is especially true when a program is so well done that it may not need changes, yet the evaluator feels a need to include certain recommendations for improvement.

2. What factors ensure that an evaluation will be successful? There are several keys to a successful evaluation. One is that all stakeholders be involved; this is necessary to ensure the completeness and accuracy of the evaluation. Another factor is the person or group conducting the evaluation: it is vital that the evaluation be done objectively, using properly executed methods of data collection and analysis. Lastly, a successful evaluation depends on the willingness of those involved to use the findings effectively. If participants know that an evaluation will not be used, they are unlikely to respond with accurate results.

3. How might one use evaluation results? Evaluation results can be used in many ways. They might be used to assess the return on investment of funds or to determine whether funding should continue. The results may also be used to compare the program with alternative programs, or to decide whether a test program should be implemented, expanded, or ended. Future planning for the program is one of the biggest uses for evaluation results.

Chapter 3 Questions

1. What is the connection between monitoring and evaluation? Monitoring and evaluation are two distinct parts of a process that are often confused. Monitoring is basically the process of collecting data about a project as the project goes through its cycle. Some of the data gathered during monitoring can be used to evaluate the project. Formative evaluation takes place while the project is ongoing; the difference between monitoring and formative evaluation is that monitoring is not itself used to make changes to a program, whereas formative evaluation makes judgments about a program as it happens and sometimes results in changes during the process. Summative evaluation takes place after the project is complete and may also draw on monitoring data to reach its conclusions.

2. How does evaluation fit into planning? Evaluation is very important to planning because the results of an evaluation can be used to make changes to a project before its next cycle of implementation begins. Evaluation can be done as the project progresses and/or after it finishes. The results of a successful evaluation will be incorporated into the planning process to make the next cycle of the project better than the previous one.

3. How can evaluation results be used in making decisions? The biggest advantage of evaluation results for decision-making is that evaluation produces real and presumably accurate data. Mere monitoring or observation may only produce theories or feelings that may or may not be accurate. With accurate and concrete data, a person or group can be assured that a decision is being made using the best and most complete information available.

Chapter 4 Questions

1. What is the purpose of an EPD? An EPD, or Evaluator's Program Description, looks at how well a program is achieving its objectives. In other words, it captures what the program is doing and how well it is doing it.

2. What are the components of an EPD? The focus of an EPD may vary, depending on who will receive the results of the evaluation. However, every EPD should lay out a program's goals and objectives, describe the plan for accomplishing those objectives, and show what methods of measurement are in place.

3. How does an evaluator develop and use the EPD? An evaluator must first determine the stakeholders who will be using the EPD. He or she should then research the needs of that group in relation to the program and the organization, and use that research to gain a basic understanding of the project's purpose. Next, the evaluator meets with the targeted stakeholders and asks them about their desired outcomes for the project and what they hope to learn through evaluation. Combining what the evaluator already knows with the information gained from the targeted stakeholders, he or she can then assemble an EPD. The EPD becomes the basis for the evaluation process and for determining whether the desired objectives were met.

Chapter 5 Questions

1. What are some of the more popular evaluation models? There are several different evaluation models, including:

Discrepancy Model - In a nutshell, this model looks at the differences between how the program is supposed to be and how it actually is. The evaluation is done from the outset of the program, and everyone involved in the program is involved in the evaluation.

Goal-Free Model - In this model, the evaluator has no knowledge of project goals. Rather, he or she makes observations about how different aspects of the program are working and then draws conclusions based on those observations. In this way, unintended results are more readily identifiable.

Transaction Model - In this model, the evaluator is an active participant in the program and constantly interacts with those involved. The evaluator observes the various aspects of the program and gives and receives constant feedback on those aspects in order to form a complete picture of the program.

Decision-Making Model - This model looks at the long-term effects of a program more than at how the program is currently operating. It is used for making a specific decision about the future of a program, and its methods are all used to support that decision.

Goal-Based Model - This is the most common model. It is simply an evaluation of the project goals and how well those goals were or were not achieved.

2. What are the component parts of an evaluation format? An evaluation format can include the following components: evaluation questions, program objectives, activities observed, data sources, population samples, data collection design, responsibility, data analysis, and audience. A format may or may not include all of these components.

3. What are the basic differences between research and evaluation? There are distinct differences between research and evaluation. In research, the people conducting the study start with a hypothesis that they try to prove or disprove through the scientific method. Their process involves controlled circumstances that help eliminate alternate explanations and arrive at an answer to their question. In evaluation, the people involved still collect and analyze data, but the purpose is to determine the effectiveness of, or to improve, a program. All stakeholders should be involved in an evaluation, whereas in research the participants are unaware of anything outside their individual roles and the researchers tend to be more objective observers.

Chapter 6 Questions

1. What are the differences between qualitative and quantitative approaches? Qualitative data look at the present state of the program, as opposed to the final outcome, and at how well things are functioning within a program. They are gathered through observations, interviews, case studies, and existing documents. Quantitative data are numeric; patterns and conclusions are drawn from the numbers. This type of data is gathered through tests, counts, measures, and the like.

2. What are the levels of data that you might encounter? There are four main levels of data. Nominal data assign individual classifications that are mutually exclusive, such as race, gender, or job title; a person falls cleanly into one category, and no numbers or rankings are assigned. Ordinal data assign a classification but rank the data in some order. Interval data are ranked like ordinal data, but with a set interval between each of the classifications. Ratio data are like ordinal and interval data, but with an absolute zero value; examples include height or age.

3. What are some instruments that you might use or develop? There are many ways of collecting data. These could include existing data, such as public voting, marriage, birth, and death records, or newly collected data taken from interviews, surveys, tests, observations, and so on.

Chapter 7 Questions

1. What are data? Data are results collected for analysis from numerical statistics, interview results, or observations. Data can take many different forms, as long as they can be coded or analyzed in an objective manner.

2. What are the main terms evaluators need to know in order to analyze data? The terms an evaluator needs to know are the measures of central tendency: mean, median, and mode. These are useful for finding the mid-points of a data set. There are also the measures of variability: range, quartiles, percentiles, standard deviation, and variance. These allow an evaluator to look at how data are spread and grouped in various ways. Finally, there are the four levels of data: nominal, ordinal, interval, and ratio.

3. How do mean, median, and mode (measures of central tendency) offer different perspectives on data sets? Each offers a mid-point for a set of data, but in a different way: the mode is the most frequent result, the mean is the average of a group of results, and the median is the middle point of a set of results. These offer similar, but slightly different, ways of looking at the general tendency of a data set.
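As a small sketch of the measures named above, Python's standard statistics module can compute them directly. The survey scores here are invented purely for illustration, not taken from any actual evaluation:

```python
import statistics

# Hypothetical post-program survey scores on a 1-10 scale (made-up data)
scores = [7, 8, 8, 9, 5, 8, 6, 10, 7, 8]

# Measures of central tendency
mean = statistics.mean(scores)      # average of all scores -> 7.6
median = statistics.median(scores)  # middle value of the sorted scores -> 8.0
mode = statistics.mode(scores)      # most frequent score -> 8

# Measures of variability
spread = max(scores) - min(scores)       # range -> 5
stdev = statistics.stdev(scores)         # sample standard deviation
variance = statistics.variance(scores)   # sample variance

print(mean, median, mode, spread)
```

Note how the mean (7.6) and median (8.0) differ here: the single low score of 5 pulls the mean down but leaves the median untouched, which is exactly the kind of differing perspective question 3 describes.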

Chapter 8 Questions

1. What are the differences between evaluation and research? Evaluation looks at the process taking place, where research is interested primarily in the results. Research is also done with complete control over the elements, factors, and variables, whereas evaluation may encounter unexpected surprises. Research is also concerned with how the results apply to a whole body of knowledge, whereas evaluation looks only at that one program.

2. Why might an evaluator want to use a sample of individuals instead of all individuals? When the number of individuals becomes too large and complex to analyze each one, sampling is needed. A sample takes a small representation of the whole group, and generalizations about the whole group are drawn from the sample. The larger the sample, the more accurate the generalization.

3. What are the kinds of sampling? There are two kinds of sampling. Probability sampling is random sampling and includes simple, stratified, systematic, cluster, and multistage random sampling. Non-probability sampling includes judgment, quota, and purposive sampling. Some of these are more accurate than others, depending on the situation and the desired results.
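The difference between simple and stratified random sampling can be sketched with Python's standard random module. The roster, the group names ("teachers" and "admins"), and the split are hypothetical, chosen only to show the mechanics:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical roster of 100 program participants, split into two strata
participants = [f"p{i}" for i in range(100)]
groups = {"teachers": participants[:60], "admins": participants[60:]}

# Simple random sampling: every individual has an equal chance of selection
simple_sample = random.sample(participants, 10)

# Stratified random sampling: sample each subgroup in proportion to its size,
# guaranteeing both strata are represented in the final sample
stratified_sample = []
for name, members in groups.items():
    k = round(len(members) / len(participants) * 10)  # proportional share
    stratified_sample.extend(random.sample(members, k))

print(len(simple_sample), len(stratified_sample))
```

With this 60/40 split, the stratified sample always contains exactly 6 teachers and 4 admins, whereas the simple random sample could by chance over- or under-represent either group, which is why stratification is preferred when subgroup representation matters.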

Chapter 9 Questions

1. Do I need to write a final evaluation report? Yes. A final evaluation report is a way for someone who hasn't been part of the data collection and analysis to receive a summary of the findings. Final reports are essentially the final goal of an evaluation. They provide a concise description of the evaluation results in an easy-to-read format.

2. For whom do I need to write the evaluation report? The intended audience of the final report depends on who commissioned the evaluation in the first place. Generally, that is to whom the final report should be geared. However, the program participants may also receive the report and may need the results summarized in a different way. Really, there could be several report recipients, each with different needs, and each would need their own report, with the summary geared toward their particular interest in the results.

3. What are the key components of the evaluation report? Evaluation reports typically have seven components. The first is a summary, which should be written last. Next comes the evaluation purpose, which describes the intent. Then comes background information about the program, followed by a description of the evaluation design. The next two sections present the results of the evaluation and a discussion of those results. Finally, the report should include a conclusion and recommendations, based on the purpose or goals of the evaluation.
