Saturday, September 19, 2009

Case Study Assignment 2 - Hillary's Response

Using the CIPP Model to Evaluate the Case Study on the Student Services Program

The CIPP model would be the most appropriate model to evaluate the Student Services Program. CIPP was proposed by Daniel Stufflebeam in 1969 and is an acronym for Context, Input, Process, and Product (Guskey, 2000). The model evaluates each of these components: context evaluation is used in planning decisions and identifying the problem, while input evaluation involves the steps and resources needed to meet the goals and objectives of the program (Guskey, 2000). Process evaluation provides information on whether the program is being carried out according to plan and on how well it is being implemented. Product evaluation focuses on the outcomes, the success, and the need for modification or continuation of a program (Guskey, 2000).

This model is quite flexible, as it can be used for formative as well as summative evaluation (Stufflebeam, 2002). Because of this flexibility, the Student Services Program can be evaluated continuously to follow the progress of each student. Prior to acceptance into the program, an assessment will have to be conducted to determine the eligibility of a child with a severe disability. The program supervisors will be able to adjust the program where they see it necessary to meet the needs of each individual. The ability to make changes within the program is also important, since the evaluation process may require changes to be made and the program's progress to be tracked. Although the CIPP model is ideal for long-term programs, I am of the view that three years is sufficient for this method to be used.

The CIPP model uses a decision-focused approach to evaluation; as a result, communication will be a key factor, given that teachers are expected to conduct home visits and to have parents involved in the process. Home visits should be documented and reports shared with parents. Providing information to the different stakeholders is considered important; in this case, the stakeholders would be the parents and the funding agencies. For the CIPP model to be effective, there has to be some form of contractual agreement between the stakeholders, especially since they will be working with students diagnosed with severe or profound disabilities.

References

Stufflebeam, D. L. (2002, June). CIPP evaluation model checklist: A tool for applying the fifth installment of the CIPP model to assess long-term enterprises. Retrieved September 18, 2009, from http://www.wmich.edu/evalctr/checklists/cippchecklist.htm

Guskey, T. R. (2000). Evaluating Professional Development. Corwin Press.

Saturday, September 12, 2009

Program Evaluation - Hillary's Comments

An Overview of the Program

The Interactive Mathematics Program (IMP) is a four-year, problem-based mathematics curriculum for secondary-school students. Its approach to mathematics education is designed to prepare students for the future workplace. The IMP curriculum uses an interactive approach to teaching mathematics and focuses on the use of manipulatives and calculators. The program was conceived for students being prepared for college and to fulfil the mathematics standards developed by the National Council of Teachers of Mathematics (Resek, 2007). The program was designed in 1989 and is currently being used in 250 high schools in 21 states in the United States. In its early stages, the program was funded by the California Postsecondary Education Commission and later received funding from the National Science Foundation for curriculum development, evaluation, and dissemination (Resek, 2007).

Evaluation Overview

The evaluation of the Interactive Mathematics Program was carried out by Norman Webb at the Wisconsin Center for Education Research (WCER) and was prepared for the American Educational Research Association (AERA) (Resek, 2007). In evaluating the program, tests were conducted at a number of IMP locations across the United States. Comparative data were collected on students' SAT performance, as well as their performance on activities involving probability, statistics, quantitative reasoning, and problem solving. Apart from these comparative data, several studies were conducted by researchers who compared IMP students with students who were not part of the program (Resek, 2007). Students were compared on high school grades and retention, performance on standardized tests, performance on other tests, attitudes, and performance after secondary school (Resek, 2007).

Personal Feedback

The IMP seems like a good program; however, the extent to which it actually worked was not stated, simply because a large number of students from IMP locations across the United States were used in the assessment of the program. This made it difficult to get a true picture of the actual progress of the mathematics students. There also appears to be no means of tracking students' performance in mathematics after they left high school. The feedback from the evaluation of the program was not easily understood because of the numerical data and the complex explanations; the language used was not simple, which made comprehension difficult. The feedback from the evaluation also seems quite inconclusive, as there was no clear indication of whether or not the program achieved its objectives. From the evaluation, it would appear that the program did not produce much success. Clearer recommendations are needed for the improvement and evaluation of the program.

Despite this, I think it was a good idea to compare different groups to get an idea of how well students did. I like the interactive approach that was taken toward the teaching of mathematics, and I think this is a method I would use in my classroom.

Reference

Resek, D. (2007). Evaluation of the Interactive Mathematics Program. Key Curriculum Press. Retrieved from www.mathimp.org/research/AERA_paper.html