Iowa study vs C.D.Howe “best schools” report

How do we know what makes a good school? Is it on the basis of many variables and a wide variety of social and academic factors? Or is it based solely on a single measurement, such as the results of the annual reading and math standardized tests administered by Ontario's Education Quality and Accountability Office (EQAO)?

Well, to read the Fraser Institute's annual school rankings report cards or this week's C.D. Howe "Ontario's Best Public Schools" report, you'd have to think that EQAO scores and percentiles were all that mattered.

To view the e-brief, which was authored by Dr. David Johnson, an economist with Wilfrid Laurier University, simply scroll down the page a bit and click on "What's New" to get to the PDF file.

Well, given what the Iowa Association of School Boards found nearly a decade ago, I would have to conclude that, although test scores may be part of the whole picture, they are NOT all that matters — even when demographic details have been screened or adjusted.

The Iowa Study

The Iowa "Lighthouse" study — which is still very relevant today — set out to find out how some school districts were able to create higher student achievement than others. However, to find that out, the researchers didn't focus on one single variable or on what made a single school effective.

Rather, they looked at the whole school board experience and decision-making process — what everyone thought and did, from trustees, board administrators, school principals and non-teaching staff to parents and classroom teachers.

What they found was that high-achieving school districts had a generalized "no excuses" attitude and a belief that all students could achieve their educational goals.

In addition, the high achieving districts:

(1) were more knowledgeable about teaching and learning issues, including school improvement goals, curriculum, instruction, assessment and staff development;

(2) were able to clearly describe the purposes and processes of school improvement efforts and identify the board’s role in supporting those efforts;

(3) were able to give specific examples of how district goals were being carried out by administrators and teachers;

(4) used data and other information on student needs and results (e.g., standardized test results) to make decisions, regularly monitored progress on improvement efforts, and modified direction as a result;

(5) were able to create a positive workplace for staff with supportive and regular staff development to help teachers be more effective;

(6) supported shared leadership and decision making among staff, and regularly expressed appreciation for staff members; and

(7) involved their communities: board members identified how they connected with and listened to their communities, and focused on involving parents in education.

So, not surprisingly, the people involved in low-achieving school districts had limited expectations and often focused on factors that they believed kept students from learning, such as poverty, lack of parental support or societal factors.

Which brings me to Dr. Johnson’s paper.

C.D. Howe “Ontario’s Best Public Schools” Report

While I found the e-brief of the paper interesting, particularly when Dr. Johnson concluded that Catholic schools outperformed their public counterparts, I found the assumptions questionable and the results not generalizable.

For example, Dr. Johnson starts off with three overriding objectives (at the top of page one of the e-brief), two of which are very promising. The third, however, is puzzling to say the least. He states:

“The resulting school ratings by percentile are useful not only to parents, but also to school board administrators and education officials who wish to identify schools whose practices deserve imitation.” (My italics.)

"Deserve imitation?" How on earth does Dr. Johnson propose to find out which educational practices can be imitated on the basis of standardized test results — unless the agenda is for schools to teach only to the test? Because, as the Iowa study showed, there are just too many variables to be able to pinpoint one "signpost."

Yet, that is exactly what he does in his summary. On page 4, for instance, he identifies St. Cecilia School as having a percentile score of 96, whereas:

"The percentile score for Mount Hope [School] is 5, which suggests that 95% of schools do better … in Grade 3. There is room for improvement at schools like Mount Hope."

Obviously improvement is needed at Mount Hope. However, therein lies the danger with school rankings — they place ALL their emphasis on one variable to the exclusion of all others, making generalizations that cannot really be made.

While the St. Cecilia community is no doubt happy with the way Dr. Johnson has characterised their school, the parents, staff and students at Mount Hope must feel very differently.

Moreover, even though Dr. Johnson expresses school performance as a percentile, as he did in the St. Cecilia and Mount Hope examples, he cannot identify which teaching practices deserve imitation because he cannot know what to imitate. In other words, in research terminology, he can answer the statistical "what" question but he cannot answer the qualitative "how" or "why."

So, readers, what makes a good school?