As you may have heard, a government investigation discovered massive cheating on standardized tests in Atlanta Public Schools. But it wasn't the students who were cheating; it was the entire institution, from teachers up to principals and even the superintendent's office. In this particular case, many of the guilty educators, who changed standardized test answers before submitting them for grading, attributed the cheating to a hostile work environment. They claim that Superintendent Beverly Hall refused to accept responsibility for anything negative but was quick to take credit for successes.
This case may have been about pressure from the top so Hall could maintain a great image, but the first place my mind jumped when guessing the primary reason for the cheating was funding. Schools receive funding based on student performance as determined by standardized tests, and the pressure to improve test scores seems to increase every year. This is not unlike the increasing pressure on professors to provide quantitative evidence of their productivity.
It seems like more and more state governments are demanding accountability from professors (probably so governors can justify spending cuts for higher ed). But the million-dollar question is: how exactly can a professor's productivity be translated into numerical statistics? At large public research universities, which make up most, if not all, state flagship universities, professors are expected to split their time roughly equally among teaching, research, and service. Service is fairly easy to judge because the department knows which committees a professor serves on. Research can also be quantified through a combination of publications and grant money, as compared to other faculty in the same field. Teaching is where the whole system falls apart, and, of course, this is the category politicians want to scrutinize the most.
So how exactly can quality of teaching be quantified? Most universities rely on teaching evaluation forms filled out by students near the end of the term. These can be useful for spotting trends in teaching style and quality, but they are hardly perfect. A demanding professor who happens to be great at engaging students can still earn top scores, but it is far easier for a professor who wants a good evaluation to simply make the course an easy A. This assumes, of course, that the majority of students even fill out the evaluation with any amount of thought. In my experience, most students would just mark all 5s or 3s or whatever, just so they could finish in 30 seconds and leave class 14 minutes and 30 seconds early. Also, the questions on the evaluation forms at my undergrad made it unusually difficult to point out serious teaching flaws, because they focused on things like my interest in the material, the instructor's enthusiasm, and the instructor's respect for students' race, religion, and gender. These flaws must be addressed if student evaluations are to serve as the quantitative representation of the teaching portion of a professor's work.
I think it would be most effective to have faculty evaluate other faculty by observing lectures, though this has its own problems: the instructor under evaluation may prepare more than usual and deliver an unusually good lecture. Perhaps peer observation could be combined with student evaluations. In the end, however, there is no clear-cut method of quantitatively evaluating faculty productivity. I believe most professors work hard and do not need extra pressure to keep up the good work. In my next post, I'll explain why I do not believe the public needs to be so critical of faculty.