School Governance

New York teacher sues the State after being rated ‘ineffective’

Written by Ideagen CompliSpace | Aug 12, 2015 2:00:00 PM

Sheri G Lederman, a long-serving primary school teacher in New York, is suing the State over the methods it uses to rate teachers. Ms Lederman has a doctorate in education and over 17 years of teaching experience. Yet despite her classes consistently ranking above state averages, and despite being ‘highly regarded as an educator with a flawless record’, she was rated ‘ineffective’ for the 2013-14 school year. The Washington Post reports that the case could have a large impact on the methods used to evaluate teachers across the country. Ms Lederman’s submissions voice a major concern among educators: that the system used to review and rank teachers is incomprehensible. Her rating had been ‘effective’ the previous school year, based on very similar student scores to those in 2013-14.

How does this evaluation method work?

Many states in the U.S., including New York, use an evaluation method known as the value-added model (VAM) which brings together data in a complex computer model that predicts student performance over the upcoming school year. The difference between the predicted performance and the actual performance of the students represents the value-added by the teacher’s instruction and a rating is given to reflect this. The lawsuit alleges that this method ‘punishes excellence in education through a statistical black box which no rational educator or fact finder could see as fair, accurate or reliable’.
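The actual VAM is a proprietary statistical model, but the core calculation described above can be sketched in a few lines. The function names, scores and rating thresholds below are invented for illustration; only the actual-minus-predicted structure comes from the description above.

```python
# Illustrative sketch of a value-added calculation. The real VAM is a
# complex proprietary model; the thresholds and scores here are hypothetical.

def value_added(predicted: float, actual: float) -> float:
    """Difference between a student's actual and predicted performance."""
    return actual - predicted

def rating(avg_value_added: float) -> str:
    """Map a class's average value-added score to a rating band
    (band names mirror those in the article; cut-offs are invented)."""
    if avg_value_added >= 5:
        return "highly effective"
    if avg_value_added >= 0:
        return "effective"
    if avg_value_added >= -5:
        return "developing"
    return "ineffective"

# A hypothetical class of three students: (predicted score, actual score)
scores = [(70.0, 75.0), (65.0, 66.0), (80.0, 78.0)]
gains = [value_added(p, a) for p, a in scores]
avg = sum(gains) / len(gains)
print(rating(avg))  # average gain of about +1.33 lands in the "effective" band
```

The point of the sketch is that the teacher's rating hinges entirely on the quality of the prediction: if the model's predicted scores are off, the "value added" is off by exactly the same amount.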

What are the problems with a VAM evaluation?

As VAM relies on data compiled from standardised, high-stakes tests in literacy and numeracy, the data does not always reflect a teacher’s ability to teach their own subject. Art teacher Jake Jacobs explained that 20% of his evaluation was based on maths scores from standardised tests, which saw his rating drop from ‘effective’ to ‘developing’. This has consequences for teachers’ job security, as repeated ‘ineffective’ ratings can lead to dismissal.

The use of test predictions can also distort the data. Luke Flynt, a Florida teacher who was also evaluated under this system, discovered that his highest-performing students were judged against predicted scores that exceeded the maximum possible score. One of his students, who achieved 100%, impacted negatively on his rating because she had failed to achieve a predicted result of 286.3 out of a maximum of 283.
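The arithmetic behind the Flynt example makes the distortion concrete. The 286.3 and 283 figures come from the article; the simple actual-minus-predicted difference is assumed for illustration.

```python
# Sketch of the distortion in the Flynt example: when the model's prediction
# exceeds the test's maximum possible score, even a perfect result counts
# against the teacher. Figures are from the article; the plain
# actual-minus-predicted formula is an assumption for illustration.

MAX_SCORE = 283.0   # highest score the test allows
predicted = 286.3   # model's prediction for a high-performing student
actual = MAX_SCORE  # the student answered every question correctly (100%)

value_added = actual - predicted
print(round(value_added, 1))  # -3.3: a perfect score still registers as a shortfall
```

No student, however well taught, could have met that prediction, so the negative contribution to the teacher's rating was guaranteed before the test was even sat.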

Consequences for Australian schools

Following the introduction of the NAPLAN tests in Australian schools, the ways in which this data can be used have raised numerous concerns. School Governance has previously reported on the problematic incentives created by NAPLAN and fears that the data may not reflect the actual performance of the class.

Currently, NAPLAN data is used only to measure performance and to help students, teachers and parents track long-term progress and identify any issues. This week, The Age reported that the Victorian government is considering using this data as one of the factors when allocating funding. This would involve using Year 5 NAPLAN results to allocate ‘equity funding’, which is currently used to assist disadvantaged students. At present, Victorian secondary schools can be allocated small amounts of equity funding if their Year 7 students are in the bottom 15% in the state. The proposed changes would expand this greatly and could affect funding state-wide.

The New York case demonstrates the dangers of relying on narrowly sourced computer data to reflect the effectiveness and expertise of teachers. Used in conjunction with other data sets, it may be a useful tool for pinpointing areas where more support is needed. However, given that this model would award funding based on lower scores, steps should be taken to account for the fact that some students from non-English-speaking backgrounds or with disabilities do not sit the test. If Australian governments wish to use NAPLAN data as the foundation for determinations such as the allocation of funding, the limitations and problems with the data must first be addressed.