Tuesday, October 22, 2013
Accountability
I was excited to read Chapter 7 because I assumed it would shed more light on a topic we address often in this course. Given the intense focus on accountability in recent years, I hope that we will not find ourselves in a situation like that of the counselors at Mt. Data (the school referenced in the case study at the beginning of the chapter), as the schools where we serve will likely already have a functioning data collection method in place. If not, I think we will be well prepared to begin such a program because of the skills and knowledge we have accumulated through our involvement in this program.
I appreciated the discussion of disaggregated data at the beginning of the chapter, which the authors define as data “broken down by specific demographics of interest, which presents patterns, themes, and trends within the aggregate data” (Dollarhide & Saginak, 2012, p. 110). By studying these trends in detail, teachers and counselors can develop and modify interventions for particular groups of students. The authors then discuss the importance of disseminating the data to stakeholder groups. In this section, I liked that the authors reminded us to proceed with caution when deciding how much information to share, advising us to be especially selective about the results we release during the early stages of the program, as premature or poorly framed results may damage the program’s credibility.
In the next section, the authors describe five main types of evaluation: 1) needs assessments, 2) outcome research, 3) formative evaluation, 4) implementation evaluations, and 5) outcomes evaluations. I’m familiar with all of these except outcome research, and I’m still a bit unclear about what exactly it entails, even after reading the chapter. I liked the discussion of needs assessments; I had always thought they involved formal surveys, but I learned that they can also draw on observations and conversations with teachers, parents, administrators, and students, which can then be used to generate questions for more formal survey methods.
I was not aware of the breadth of data-driven accountability models, but after reviewing the steps of each model, it appears that most of them are quite similar and follow the same general prescriptive steps, including developing a vision, committing to goals, selecting and evaluating interventions, and monitoring and reporting results. Gysbers (2004) reviews the history of evaluation methods for counseling programs since the 1920s. It was interesting to learn that many of the student outcomes historically used to evaluate programs, such as dropout rates, disciplinary referrals, and improved study habits, are still relevant and applicable in schools today.
Dollarhide, C., & Saginak, K. (2012). Comprehensive school counseling programs: K-12 delivery systems in action (2nd ed.). Upper Saddle River, NJ: Pearson Education.

Gysbers, N. (2004). Comprehensive guidance and counseling programs: The evolution of accountability. Professional School Counseling, 8(1), 1-14.