Think Tanks in Education – a Grattan Gripe


The Grattan Institute, a Melbourne-based think tank, has produced three reports on
teacher quality, the latest of which has had high visibility across the media and has been
quoted by Julia Gillard and Peter Garrett. The Grattan Institute prides itself on work
that is independent, rigorous and practical. The latest report on teacher feedback and
appraisal does not live up to these three standards and has the potential to result in the
devaluing of teachers and their work.

Introduction
It is concerning when poor research about teachers is made public. When that poor research is
taken up by the media there is cause for greater concern. When poor research is taken up by
policy makers then it is time to take serious notice. A case in point is the latest report from
the Grattan Institute’s School Education Program Better Teacher Appraisal and Feedback:
Improving Performance (the Report) (Jensen & Reichl, 2011). While the Grattan Institute
(the Institute) is to be congratulated for its interest in promoting teachers and the quality of
teachers’ work, there are serious limitations in the Report which potentially impact poorly on
the perception of teachers and their work.[1]

The Institute’s School Education Program aims to improve school outcomes across sectors,
with a focus on teacher quality (Grattan Institute, 2010-2011b). The Institute prides itself on
producing work that is independent, rigorous and practical, and in the critique of the Report
presented here, using these three standards, it is found wanting. The focus first, though, is to
unpack the threads in the authors’ main argument in the Report.

A critique of the main line of argument in the Report
The main argument in the Report is that systems of teacher feedback and appraisal will lead
to more effective teaching, and more effective teaching will lead to increased student
learning. An increase in student learning is needed, the Report argues, because “Australia is
lagging in vital areas of school education” (Jensen & Reichl, 2011, p. 3). The Report
contends that Australian students should be the best in the world. Overall, this line of
argument has a general ‘feel good’ appeal; however, a deeper analysis of the assumptions
behind each thread in the argument reveals consequences for the ways in which teachers and
their work are perceived.

[1] In April 2011, this author offered the Grattan Institute a draft of this paper for comment but there has been no reply to date.

The first thread in the Report’s argument is that systems of teacher feedback and appraisal
will lead to more effective teaching. Three elements within this first thread of their argument
are unvoiced by the authors but highlighted here. The first element is the Report’s
unquestioning emphasis on measurement within teaching. It is ironic that many of the
researchers quoted by the Report’s authors are open about the limitations of their research,
acknowledging that it focuses on a narrow range of academic outcomes which do not
characterise the wider mission of modern schools. The danger from this Report is not the
measurement emphasis itself; the danger is that test scores become THE proxy measure for
truly effective teaching. Teachers value the contributions they make to a student’s life that
cannot be measured – instilling in their students a love of learning; developing their ability to
question; encouraging their curiosity. The danger from this Report is that it values teachers’
work only through those contributions which can be measured. By ignoring the non-measurable
contributions made by teachers to students’ lives, teachers and their work are
devalued. Those teachers who work effectively with their students contribute to the whole
child in their care, not just their test scores.

The second element to be highlighted is that the Report’s authors do not define ‘effective
teaching’. A close reading of the Report, and knowledge of previous reports authored by Dr
Jensen, leads to the conclusion that ‘effective teaching’ is meant in the sense used in one very
particular field of research which uses student test scores to identify teacher effectiveness.
‘Effective teachers’ are those whose students have good test scores, and the research field is
interested in determining just how much of the students’ test scores can be allocated to the
effects of the students’ teachers, as opposed to the effects of other inputs such as the students’
family background and socio-economic status. By not acknowledging the research base from
which the Report’s authors have lifted this expression of ‘effective teachers’, the Report
seems to be arguing that ‘effective teachers’ are the cause of the students’ high test scores.
This flawed reasoning implies that the reason Australian students don’t have higher test
scores is because we have ineffective teachers. With successful marketing of the Report, this
message is then taken up by the media, the Prime Minister and the Minister for School
Education when they refer to it. Those teachers who work effectively with their students,
regardless of their test scoring potential, have reason to be concerned if this message is not
countered.

The final element of this thread of the authors’ argument is that they do not define ‘student
learning’. It could be assumed, given the history of the Institute’s School Education reports
(see Measuring What Matters: Student Progress (Jensen, 2010b)), that the authors define
student learning through test scores. Once again the danger here is not in test scores per se;
the danger is that test scores become THE proxy measure for student learning. Teachers
know that the children they are teaching today will be employed in jobs that currently do not
exist and that while they will need good literacy and numeracy skills, many of the other skills
they will need to perform well in their future working lives are neither measurable nor
‘testable’. Those teachers who work effectively with their students contribute to
comprehensive student learnings, not just student test scores.

The next thread in the Report’s argument is that an increase in teacher effectiveness is needed
because Australian students are lagging in international scores. This claim is difficult to
assess as the authors do not state where these data come from! Previous Institute reports have
relied on Australia’s results in the OECD’s Programme for International Student Assessment
(PISA). PISA’s assessments are rigorously controlled standardised tests administered to a
selection of 15 year olds. The 2009 results show Australia to have moved from a 2006 score
of 513 to 515 in Reading, from 520 to 514 in Mathematics, and to have maintained 527 in
Science (Thomson, Bortoli, Nicholas, Hillman, & Buckley, 2009). These results are certainly
well above OECD averages and are arguably stable. Australia’s rank has dropped in 2009
compared to 2006 due to the entry of 20 countries/economies, some of which have produced
very high results such as Shanghai-China (Thomson, et al., 2009). The authors of the Report
seem to have confused test score with test rank, and thereby created a sense of disquiet about
the test performance of 15 year old students and their teachers that may not be warranted.

The last thread in the authors’ argument is that adoption of their system of teacher feedback
and appraisal will lift the performance of Australian students to the best in the world. The
authors have missed the point that being ‘best’ tells us nothing of quality and indeed nothing
of equity. While not ignoring the deep issues of educational disadvantage across Australia,
pre-service teachers, indeed probably many senior secondary students, know that ranks and
scores not only tell very different diagnostic stories about test performance, but also very
little about what has been learned.

It is time now to use the Institute’s measures of its work – independence, rigour and
practicality – to determine the quality of the latest Report from its School Education Program.

1. Independence
The independence of the Institute is realised, according to its website, through its governance:
a Board of Directors appointed by a council of members (Grattan Institute, 2010-2011a).
While the names of the Board members are published on the website, those of the council of
members are not; it is unclear who sits on the council. The Institute does make its
foundational members clear: the University of Melbourne, BHP Billiton, the Australian
Government and the State Government of Victoria. The question of the Institute’s
independence from the Australian Labor Party (ALP) and the University of Melbourne is
taken up next.

The Institute’s strong links to the ALP and the University of Melbourne were made clear at
its launch when John Brumby, then Labor Premier of Victoria, stated that “we’ve been very
strong supporters of the idea of a new think tank for Australia, one based ... at Melbourne
University” (Department of Employment and Work Relations, 2008, p. 1). The Institute’s
initial funding was also announced at this time by the officials at the launch including Julia
Gillard (then Deputy Prime Minister), John Brumby and Allan Myers (Queen’s Counsel).
The funding details announced were $15 million from each of the Federal and Victorian
Labor governments and another $10 million from the University of Melbourne who also
supplied the Institute with its home. Amongst the Institute’s staff (as at July 2011) are
numerous graduates from the University: the current Chief Executive Officer; 1 of 3 Program
Directors; 2 of 3 Research Fellows; and 5 of 6 Research Associates. The Vice Chancellor and
President of the University of Melbourne along with 2 Honorary Professorial Fellows are
amongst the 10 Board Members.

The continued confluence of the Institute and the ALP can be inferred from the following two
images: one from the ALP’s School Reform website (Australian Labor Party, 2011) and the
other the front cover of the Report in question here.

2. Rigour
The rigour of the Report will be critiqued through an analysis of its referencing, the
production of original research and the relevance of others’ research cited within the Report.[2]

The minimal requirement of academic rigour is proper referencing, as this allows the
credibility of the claims made to be independently assessed. Unfortunately the rigour of the
Report is immediately questionable as this minimum requirement is not fulfilled. The
Reference list (pp 46-49) is replete with errors: duplicates; omission of author names;
omission of journal issue numbers; book chapters cited as books; unexpanded acronyms;
omission of webpage addresses, as well as incorrect authors and journal titles.

The Report claims that independent research is carried out as it purports to have conducted
extensive interviews with principals and teachers (pp 3 and 8). Additionally, numerous school
principals, industry officials and participants were thanked for their input (p 2), and yet the
Report does not contain any analyses, quotes or anecdotes from these supposed primary data
sources, unless one is to assume the various text boxes, suggestive of case studies, emerge
from these sources. The Report certainly presents secondary data. One example is the use of
the combined voice of over 2 200 Australian secondary teachers from the OECD’s Teaching
and Learning International Survey (TALIS) (p 3 and pp 7-8). However this ‘voice’ is an
aggregated response to strictly formulated international survey questions. As is shown below,
this source of data does not provide evidence of any claim made by the authors, and so its
presence, it could be argued, is to add a credibility borrowed from the status of the OECD’s
programs to the Report as a whole.
[2] All references in this and the next 2 sections of the paper are to page numbers from the Report unless otherwise referenced.

The last area of critique is in the use of others’ research. In academically rigorous reports,
others’ research may provide contextual background; support hypotheses being made or
detail previous work in the field. In academically rigorous reports, others’ research is
surveyed across the whole field of research, not cherry picked to substantiate claims being
made. This Report, like many influential think tank reports from overseas which have been
reviewed, does not “provide either a comprehensive review of the literature or a defensible
interpretation of the findings of whatever scant research is cited” (Welner & Molnar, 2007, p.
1). Chapter 1 of the Report is where the bulk of others’ research is cited. Ignoring the polemics
at the beginning of the chapter, not one of the references cited (p 6) provides directly
measured evidence to back up the authors’ claim that teacher effectiveness accounts for a
larger proportion of differences in student outcomes than effects from differences between
schools. While these data may exist, such findings are not to be found in the research the
authors cite.

The next claim made is that teacher effectiveness is the largest factor influencing student
outcomes outside of family background (p 6). Once again while data to support this may
exist, the Report cites data from TALIS and TALIS does not collect data on student
outcomes! Given the major author of the Report also authored the cited chapter from the
TALIS Report, it seems that he has confused an underlying assumption of the claim in the
TALIS survey with evidence for that claim.

The second paragraph starts with another polemic: “[t]he evidence from Australia and
overseas is remarkably consistent” (p. 6). The reader is left wondering just what this
consistent evidence might be and what it is consistent with, as the authors’ citations do not
provide any further information. The reader must surmise that the consistency lies in the
positive relationship found between teacher effectiveness and student achievement scores.
The consistency is not surprising since the field of research the authors rely on as evidence
for their claim creates correlations between teachers and student test scores in order to
determine what contribution teachers make to the test scores!

The ‘new system’ of feedback and appraisal for teachers outlined in the Report is claimed to
be able to improve teacher effectiveness by 20-30% (p 6). Recall that while the authors do
not define teacher effectiveness, its research origins tell us it is linked purely to student test
scores. This author has been unable to find any reference to magnitudes of 20-30% in either
of the papers cited in the Report to substantiate the claim (p 6). More problematic is the
relevance of these papers to the authors’ claim since the two papers were concerned with
developing instructional programs for handicapped pupils. John Hattie’s book (2009) is also
cited as evidence; however, the reader is left questioning which part of the synthesis of over
800 meta-analyses is to be searched as validation of the authors’ claim.

The problems associated with not defining teacher effectiveness and citing works whose
relevance is questionable, grow in significance when the authors’ claim is publicly repeated
by Prime Minister Gillard who said: “we know independent research conducted as recently as
this year shows that a system of meaningful appraisal and feedback for teachers can increase
their effectiveness by 20 to 30 percent” (Thompson, 2011). In Gillard’s reference to
‘independent research’ she appears to mistakenly link the research purportedly carried out by
the Report’s authors to secondary research they cite.

Chapter 1 of the Report concludes with a summary of the authors’ argument and a startling
claim that their new system will have a greater impact on Australia’s economic growth than
any other reform currently before Australian governments (p 6). Even if they had actually
meant educational reform, such a claim is founded on one of the author’s previous reports for
the Institute (Jensen, 2010a) which contained econometric analyses showing that an increase
in literacy and numeracy will increase the nation’s long-term GDP growth. Three important
points are highlighted here. The first is worth repeating – this link between test
scores and teacher effectiveness comes about by the very definition of teacher effectiveness
in a certain area of educational research studies and so adds little to any argument about
teacher quality.

The second point to highlight is that GDP, as a measure of economic activity, is neither
questioned nor justified by the authors as an appropriate measure to link to student outcomes,
however defined. Indeed it could be argued to be an outdated notion: according to Stavros
Dimas, the European Commissioner for Environment, the main achievement of a conference
held in 2007, hosted by the European Commission, European Parliament, Club of Rome,
OECD and WWF, was a demonstration of “the political consensus on the need to go beyond
GDP” (Environment Commission, 2007) as a measure of a socially inclusive and sustainable
economy. The argument could also be extended to question why the measurements need to be
economically based at all. The Australian Bureau of Statistics has been looking since 1999 to
establish appropriate measures of progress, which they see as encompassing “more than
improvements in the material standard of living or other changes in the economic aspects of
life; it also includes changes in the social and environmental circumstances” (Australian
Bureau of Statistics, 2005). Prosperity is just one of many goals stated in The Melbourne
Declaration on Educational Goals for Young Australians: “As a nation Australia
values the central role of education in building a democratic, equitable and just society— a
society that is prosperous, cohesive and culturally diverse, and that values Australia’s
Indigenous cultures as a key part of the nation’s history, present and future” (The Melbourne
Declaration) (Ministerial Council on Education Early Childhood Development and Youth
Affairs, 2008, p. 4).

The third and final point to highlight is that the authors’ focus on ‘teacher effectiveness’, as
defined by the research field they subscribe to in the Report, devalues the other benefits of
education extolled in The Melbourne Declaration. This devaluation then can be said to extend
to a devaluation of the work done by teachers which is not just directed at improving test
scores, but rather reflects the spirit of The Melbourne Declaration. Indeed if a school defines
‘effective teaching’ as producing confident and creative individuals or active and informed
citizens as per The Melbourne Declaration, much of this Report, predicated on test scores
as the measure of education, is irrelevant.

It is time now to turn to the last of the three criteria by which the Institute judges its work –
practicality.

3. Practicality
The practicality of the ‘new system’ of teacher feedback and appraisal proposed by the
Report’s authors relies on schools choosing from among eight methods of teacher feedback
and appraisal (p 10). Nowhere in the Report is the value of these particular eight methods
either justified or questioned relative to any other choice of methods. The authors do justify
their assertion that the variety of methods is the key to the success of their system by alluding
to a “balanced scorecard” (p 10), which is never defined in the Report. The reference cited
shows that the balanced scorecard arose in 1992 research as a complement to financial
measures of business unit performance – its relevance to education is questionable.

The Report’s authors exhort their readers to only choose those methods that reflect fact based
evidence (p 9), so it seems peculiar that their ‘new system’ of eight methods includes:

   •   a method which, the authors admit, is neither an accurate nor a complete measure of
       teacher effectiveness (p 12), yet which is deemed sufficiently important by the
       authors to be a required element of their system (p 9);

   •   three separate methods whose difference is only in observer status or geography (pp
       13, 15 and 21); and

   •   another method which the authors claim has had limited research in schools (p 18).

From this brief summary of the presentation and rationale of the ‘new system’ it is difficult to
argue that it provides a practical product for schools.

Visibility and Influence of the Report
This section of the paper outlines the media visibility and potential political influence of both
the Report and the Institute’s School Education Program. The traditional approach of
academics to think tank style reports is to ignore them as their quality does not merit analysis
( ). The poor quality of this Report has not stopped it gaining visibility and influence –
academics can no longer afford to ignore reports such as this.

The Report has been highly visible across mainstream media such as ABC News, ABC
Radio National, the Sydney Morning Herald, The Australian and Channel 9 TV. Its presence
in the virtual environment has allowed it access to audiences of parents, principals, teachers,
policy experts and other educators. The profile of the School Education Program is evidenced
by the calibre of people who cite it – Julia Gillard and Peter Garrett are two examples – and by
the documents in which it appears: the Overview for the National Professional Standards for
Teachers (Australian Institute of Teaching and School Leadership, 2011) and a submission to
the Senate from the head of the Department of Education and Training in the ACT
(Watterston, 2010).

The Institute’s influence on public policy was heralded at its launch by Julia Gillard who said
that it was to be a national leader on public policy (Department of Employment and Work
Relations, 2008). After two years in operation, its success in fulfilling its self-defined role
within public policy was made evident when (the then) Prime Minister Kevin Rudd claimed
that think tanks, such as the Institute, were able to provide the best analytical skills and access
to the best research that the public service lacked (Rudd, 2010). While this author is not privy
to the capabilities of the public service, the foregoing critique of the Report questions the
veracity of Rudd’s claim.

Conclusion
It is concerning when poor research, such as that from the Grattan Institute’s School
Education Program, is made public. When that poor research is taken up by the media there is
cause for greater concern since the media neither understand, nor take the time to understand,
why that research is poor. When poor research, such as the Report, is taken up by policy makers
such as the Prime Minister and Peter Garrett, Minister for School Education, then it is time to
take serious notice. To reiterate, the Institute is to be congratulated for its interest in
promoting teachers and the quality of their work. The Institute’s Report has a narrow focus
on what constitutes teacher effectiveness and student learning. The authors do not
acknowledge the research base from which they lift these terms and so imply causality
between the two that other research does not support. In not making that narrow focus clear
to readers of their Report and through the high media and political visibility the Report
gained, any voice which privileges a different sort of ‘effective’ teacher and a different sort of
‘student learning’ is effectively silenced. In this way, rather than supporting teachers and the
quality of their work, the Institute has done them a disservice.

References

Australian Bureau of Statistics. (2005). Is life in Australia getting better? Beyond GDP: Measures of
         economic, social and environmental progress. Year Book Australia, 2005. Canberra: Australian
         Bureau of Statistics. Retrieved from
         http://www.abs.gov.au/ausstats/abs@.nsf/Previousproducts/1301.0Feature%20Article3620
         05?opendocument&tabname=Summary&prodno=1301.0&issue=2005&num=&view=.
Australian Institute of Teaching and School Leadership. (2011). National Professional Standards for
         Teachers. Retrieved May 17, 2011, from http://www.aitsl.edu.au/ta/go/home/pid/713
Australian Labor Party. (2011). School Reform. Retrieved July 17, 2011, from
         http://www.alp.org.au/agenda/school-reform/
Department of Employment and Work Relations. (2008). Joint Press Conference: Australian Institute
         of Public Policy. Canberra: Australian Commonwealth Government Retrieved from
         http://mediacentre.dewr.gov.au/mediacentre/Gillard/Releases/AustralianInstituteofPublicP
         olicy.htm.
Environment Commission. (2007). Summary notes from the Beyond GDP conference: Highlights from
         the presentations and the discussion. Beyond GDP: Measuring Progress, True Wealth, and
         the Well-being of Nations. Brussels: European Commission.
Grattan Institute. (2010-2011a). Grattan Institute. Retrieved April 22, 2011, from
         http://www.grattan.edu.au
Grattan Institute. (2010-2011b). Grattan Institute: School Education. Retrieved August 14, 2011,
         from http://www.grattan.edu.au/programs/education.php
Hattie, J. A. C. (2009). Visible Learning. A synthesis of over 800 Meta-Analyses Relating to
         Achievement. Abingdon: Routledge.
Jensen, B. (2010a). Investing in our teachers, Investing in our Economy. Melbourne: Grattan
        Institute.
Jensen, B. (2010b). Measuring What Matters: Student Progress. Melbourne: Grattan Institute.
Jensen, B., & Reichl, J. (2011). Better Teacher Appraisal and Feedback: Improving Performance.
        Melbourne: Grattan Institute.
Ministerial Council on Education Early Childhood Development and Youth Affairs. (2008). Melbourne
        Declaration on Educational Goals for Young Australians. Retrieved from
        http://www.mceecdya.edu.au/verve/_resources/National_Declaration_on_the_Educational
        _Goals_for_Young_Australians.pdf
Rudd, K. (2010). A new era for the Australian Public Service and the ANU. Canberra: Australian
        National University News.
Thompson, J. (2011). Top Teachers to get financial rewards. Retrieved from
        www.abc.net.au/news/stories/2011/05/02/320516.htm
Thomson, S., Bortoli, L. D., Nicholas, M., Hillman, K., & Buckley, S. (2009). PISA in Brief. Highlights
        from the full Australian Report. Challenges for Australian Education: Results from PISA 2009.
        Melbourne: ACER.
Watterston, J. (2010). Submission to the Senate Inquiry into the Administration and Reporting of
        NAPLAN testing. Canberra: ACT Department of Education and Training.
Welner, K. G., & Molnar, A. (2007). Truthiness in Education. Education Week, 26(25), 44-32.

                                                                                    Robyn Faulkner
                                                              PhD Candidate University of Canberra
                                                             Contact: robyn.faulkner@canberra.edu.au