The evaluation plan is designed to address five key points:
The evaluation will be directed by two external, Ph.D.-level evaluators: a quantitative research psychologist and a qualitative research educator. Together they will collect both qualitative and quantitative data, conduct surveys, interview teachers, make onsite visits to teacher-participants’ classrooms, analyze test score data, write evaluation reports, and provide input and feedback to the Advisory Board, Teacher Support Teams, and the project director to help monitor and adjust activities and enhance the effectiveness of the Developing Master Teachers grant.
1. Methods of evaluation that provide for examining the effectiveness of project implementation strategies
To assess program effectiveness with the most rigorous methodological design that is practical, a pre-post non-equivalent control group design will be used to assess participant and student achievement. Evaluators propose a quasi-experimental design with random assignment in which teacher-participants form the experimental group and teachers wait-listed to participate the following year form the control group. The incentive for control group participants will be guaranteed inclusion in the following year’s summer seminar; subsequently, additional schools can be promised inclusion in the intervention if they agree to serve first as control schools. Because both groups consist of teachers who applied to the program, this design controls for self-selection, and demographic information across participating schools should be roughly matched. Teacher-participants will be randomly assigned before the intervention begins, and schools will likewise be randomly assigned to experimental or control conditions; it would be impractical, however, to randomly assign students to groups. Longitudinal data on all participants will also be examined to provide an overall summary of program performance. We believe that progress toward outcome goals will be clearer when results are compared against a control group of teachers not currently receiving the intervention.
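The school-level random assignment described above can be sketched in a few lines of Python; the school names and the fixed seed below are hypothetical, chosen only for illustration:

```python
import random

def assign_groups(schools, seed=2007):
    """Randomly split a list of schools into experimental and
    control (wait-listed) groups of as equal size as possible."""
    rng = random.Random(seed)   # fixed seed makes the draw reproducible/auditable
    shuffled = schools[:]       # copy so the original roster is untouched
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical roster of participating schools.
experimental, control = assign_groups(
    ["School A", "School B", "School C", "School D"]
)
```

Fixing the seed lets the evaluators document and reproduce the assignment; the control list doubles as the wait list for the following year’s seminar.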
2. How methods of evaluation provide performance feedback and permit periodic assessment of progress toward achieving intended outcomes
A quasi-experimental design using pre- and post-test measures will allow for more reliable and valid assessment of process and outcome evaluation goals. Progress toward key goals will be evaluated with statistical analyses such as independent t-tests and other descriptive, univariate, and multivariate statistics. Control and experimental groups will receive pre- and post-test measures of intended goals. Surveys, observations, and interviews will be used in all assessments to examine whether significant differences emerge in the experimental group. Further, participant demographic information and rates of attrition will be carefully monitored to assess mortality rates and possible correlates.
3. Measuring the extent to which the methods of evaluation are thorough, feasible, and appropriate to the goals, objectives, and outcomes of the proposed project
Evaluators already have quantitative and qualitative measures designed to assess project goals. Using this multiple-methods approach to data collection, quantitative measures will be compared with qualitative measures. Statistical analyses will examine the average responses of participants, while qualitative data will provide a richer account of the program. In addition, quantitative measures will be examined using factor analyses to investigate the reliability and validity of the instruments.
4. Methods of evaluation that use objective performance assessments clearly related to the intended outcomes of the project and that produce qualitative and quantitative data to the greatest extent possible
Evaluators will gather descriptive information on the development and implementation of the project. At this stage, we will also provide an in-depth analysis of program participants (experimental and control). Using observations, interviews, and surveys, we will produce descriptive and statistically rigorous information on proposed versus actual implementation of the project, critical issues encountered by providers as they attempt to implement the project, descriptions of participants, and lessons learned from the project. Further, when possible, evaluators will collect data through assistants who are blind to the hypotheses, so the data will be less susceptible to observer bias. Supplementing the quantitative analyses with qualitative data under this multiple-methods approach will enrich the findings. These methods can identify problems with any specific aspect of the project’s implementation, allowing the Program Director and staff to make necessary adjustments and continue to observe their results. Furthermore, documenting project adjustments and lessons learned along the way will provide a rich resource for others who may want to replicate this project. Data collection formats include: (1) a review of comments made by program participants, (2) observations, and (3) surveys for all program participants.
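As one concrete illustration of the instrument-reliability checks described above, internal consistency of a survey can be summarized with Cronbach’s alpha, a common reliability statistic (offered here as an example, not necessarily the specific analysis the evaluators will run); the four-item teacher responses below are invented for illustration:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha: item_scores[i][j] is respondent i's
    score on survey item j. Values near 1 suggest the items
    measure a single construct consistently."""
    k = len(item_scores[0])                                   # number of items
    item_vars = [variance([row[j] for row in item_scores])    # per-item variance
                 for j in range(k)]
    total_var = variance([sum(row) for row in item_scores])   # variance of totals
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical responses from five teachers to a four-item survey (1-5 scale).
responses = [
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
alpha = cronbach_alpha(responses)   # ≈ 0.93 for these invented data
```

A conventional rule of thumb treats alpha above roughly 0.7 as acceptable internal consistency for research instruments.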
Use of additional resources will also be examined and compared between control and experimental groups with regard to number and type of resources used to cover content materials. It is expected that 75% of teacher-participants will report utilizing additional resources compared to control participants.
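A between-group difference in proportions like the expected 75% resource use among teacher-participants versus the control rate can be tested with a two-proportion z-test (one simple alternative to the t-tests mentioned earlier); the group sizes and control-group count below are invented for illustration:

```python
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two independent proportions,
    using the pooled estimate for the standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: 30 of 40 experimental teachers (75%) versus
# 12 of 40 control teachers report using additional resources.
z = two_proportion_z(30, 40, 12, 40)   # |z| > 1.96 → significant at the .05 level
```

With these invented counts the statistic comfortably exceeds the 1.96 critical value, the pattern the evaluators would expect if the 75% target is met against a much lower control rate.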
E. Information sharing among teacher-participants, peers and mentors
“Have teachers been sharing and discussing their new content knowledge with others?”
“How often do teacher-participants meet with mentors and colleagues to discuss content knowledge of that time period of American history?”
Information sharing among participants, peers, and master teachers will be compared between control and experimental groups. It is expected that 100% of teacher-participants will exhibit an increase in information sharing with peers and master teachers. Teacher-participants’ use of website and library materials is also expected to increase significantly (by 60%) compared to control participants. While the program facilitates information sharing among peers, mentors, and master teachers, one serious concern is contamination, or “spillover” effects, from the experimental group to the control group. To control this problem, teacher-participants will be instructed to share information only within their own school systems.
5. The extent to which the evaluation will provide guidance about effective strategies for replication or testing in other settings
All results will be presented to the Director and Advisory Council quarterly. Products generated by the evaluation team include reports to management staff on program progress and data findings, presented in APA format (title, abstract, literature review, methods, results, and discussion) via mid-year assessments, participant interviews, and a final evaluation report describing the findings. Based on what is learned from these assessments, the evaluation team will describe the ongoing process and implementation of the project throughout the year; identify critical implementation issues raised in participant assessments; determine the extent to which the program has been implemented as planned; and address direct outcome evaluation components. As planned, the following year will include the original control group, and the process can be replicated with a different group of participants each year, with another school then chosen as a control group. This process provides roughly matched control and experimental groups and gives evaluators a way to replicate results in additional populations and settings.
Further Evaluative Objectives:
Evaluation Plan and Execution
The evaluation plan focuses on providing information about the implementation of the program and the outcome of project participants. This information will be used to inform ongoing project development and to determine whether the project is meeting its stated objectives. Through a combination of qualitative and quantitative approaches, the evaluation team will address the extent to which the preceding goals have been attained.
The evaluation has three components. The first is a performance-monitoring system that will produce quarterly data on participant characteristics, project implementation indicators, and participant outcome indicators. The second is a qualitative component that will describe the development of the project, identify critical issues during implementation, and gather information needed for follow-up and replication. The third is the collection of outcome data on project participants using standardized measures.
A synopsis of time periods and execution of the evaluation plan can be seen in the following table.
Note: In this proposed evaluation design, CRE’s budget for a rigorous, independent external evaluation is set at 12% of the total project budget. As an incentive to promote full-scale participation, and to help demonstrate the effectiveness of the project, teachers are paid a fee by CRE in two installments (at the start and finish of each year) to complete pre- and post-test data collection instruments on their perceptions and knowledge, to administer the student survey and assessment instruments, and to score students’ results on the Document-Based Questions (pre and post) using a common rubric. Additionally, each participating school is paid a fee by CRE at the end of the project to compensate the system for assistance with data collection.
Copyright 2007, Teaching American History