Viewpoints discussion board | University of Illinois

National Institute for Learning Outcomes Assessment


Posts from October 2012

Blog Posts

  • The Culture Change Imperative for Learning Assessment


    The September 2012 issue of the NILOA Newsletter included NILOA’s 15th Occasional Paper, “The Seven Red Herrings About Standardized Assessments in Higher Education,” written by Roger Benjamin, with a foreword by Peter Ewell, and including commentaries by Margaret Miller, Terrel Rhodes, Trudy Banta, Gary Pike, and Gordon Davies. The points made in that paper for and against standardized tests of student learning are provocative and clarifying but, as Ewell noted, they are arguments with which we are already quite familiar. Ultimately, how best to assess learning for the purposes of furthering learning and accountability is a question for empirical inquiry that can draw on powerful resources in higher education expertise.


    Much assessment of learning already exists on campuses. The vexing problem, however, is that little of it consistently and coherently signals to students the institution’s expectations and the standards shared by faculty and staff. While there are faculty members on every campus who practice exceptionally inventive and effective assessment, such practice is rarely pervasive enough both to contribute purposefully to all students’ learning and to inform an institution-level portrait of student learning and development.


    Why is systemic assessment still so rare? The answer is embedded in the papers mentioned above: Resistance to learning assessment is in the DNA of the academy’s current culture. Benjamin speaks of this charitably as “institutional inertia.” Davies, more bluntly, notes that despite rhetoric to the contrary, neither higher education’s values nor its rewards for the individuals within it have changed. The academy’s incentive and reward system is not about student learning but about institutional prestige measured by selective admissions, endowment, and research prowess. As Davies put it, “Colleges and universities have a huge investment in the status quo, and they are not likely to support changes that may be needed in what and how they do it.”


    But as considerable research and many critics point out, the status quo is no longer tenable. A culture change in higher education is imperative. Far too many college graduates have not achieved widely accepted and significant higher learning outcomes such as the ability to think critically and creatively, speak and write cogently and clearly, solve problems, comprehend complex issues, accept responsibility and accountability, or understand the perspective of others. The central contributor to this learning crisis is culture—both the larger culture surrounding the academy and that within colleges and universities themselves. With regard to the latter, the shared norms, values, standards, expectations, and priorities of teaching and learning on most campuses are not powerful enough to support true higher learning. We do not demand enough from students; our standards are not high enough; we accept half-hearted work from students who have not asked enough of themselves; and we do not support students in asking for more from their teachers. Degrees have become deliverables (purchased, not earned); credit hours are accumulated and courses passed with little concern for coherence or quality because we are not willing to make students work hard to attain shared high standards to earn them. As a result, students do not experience the kind of integrated, holistic, developmental, rigorous undergraduate education they absolutely must have for truly transformative higher learning to occur.


    To put student learning at the center of each institution’s work demands that we know the extent to which learning is occurring and that we provide timely and appropriate feedback to students and teachers. To change institutional culture requires that we recognize and embrace the cumulative and collective nature of higher learning and the powerful role that learning assessment plays in outcomes of that nature. Thinking critically and writing creatively, for example, are skills learned cumulatively over the span of the entire undergraduate program. Objectives and standards for excellence in these skills must be shared—intentionally articulated, planned around, and assessed by faculty and staff across all courses and programs. Higher learning requires far more instruction, practice, assessment, and feedback than is currently provided or expected within single courses or other isolated learning experiences.


    The assessment challenge of cumulative learning is that it requires faculty to come together—collectively—and to agree on which outcomes, expectations, and standards they share and endorse, and then, throughout their various courses and programs, to reinforce these outcomes, expectations, and standards. The assessment of cumulative learning demands change in the institutional culture of learning, change that requires faculty to significantly raise their expectations and standards for learning outcomes and that ensures the adequate formative and summative assessment of those outcomes. Outcomes, expectations, and standards, moreover, must be transparent. When students engage with faculty and staff in pursuing transparent, institution-wide outcomes, expectations, and standards, and when they receive frequent and appropriate feedback, higher learning improves. In this sense, learning assessment is best understood not as an external imposition by the state or administration but rather as a powerful dimension of teaching and learning derived, practiced, and promoted by faculty and staff to improve the quality and quantity of undergraduate learning.


    Given the cumulative and collective nature of higher learning, establishing and sustaining a conscientious, diligent, rigorous, campus-wide regime of learning assessment requires changes not just in attitudes but also in campus policies and commonly agreed practices to advance and sustain a more intentional learning culture. Learning assessment, for example, should not be the burden of a small knot of dedicated faculty and staff who understand its benefits and are willing to suffer its additional costs; when that happens, exhaustion, disenchantment, and frustration are inevitable.


    To say that academic culture change—however imperative—is hard is an understatement. The work culture of academia rightfully offers each individual faculty member a great deal of freedom to make independent judgments about the aims and content of learning. Yet relationships, not just between faculty and administration but also among faculty members themselves, create cultural and power barriers that are difficult to overcome. Constructing shared outcomes, expectations, standards, and assessment tools, and conducting effective learning assessment, requires precious time and effort. Incentive and reward systems are currently skewed against such change. Reappointment, promotion, and tenure criteria need to be adjusted to align with these greater expectations for teaching and for the more time-consuming engagement with students that effective learning assessment requires. Given the limits of most doctoral programs, faculty and staff need better opportunities to learn more about appropriate assessment and how to implement it. And, of course, myriad pros and cons arise with the issue of comparing similar institutions to develop learning benchmarks.


    The task list above is hardly exhaustive. This kind of change, ultimately, may be less about expertise and more about will. Changing the academic culture requires sustained, shared, courageous leadership by faculty, staff, administration, and governing boards. Anything less invites those outside the academy to act as referees, which is never good for either the academy or the NFL.


    Richard H. Hersh, formerly president of Hobart and William Smith Colleges and Trinity College (Hartford), currently serves as senior consultant for Keeling & Associates, LLC, a higher education consulting practice. Richard P. Keeling, formerly a faculty member and senior student affairs administrator at the University of Virginia and the University of Wisconsin–Madison, leads Keeling & Associates. Hersh and Keeling are the authors of a new book, We’re Losing Our Minds: Rethinking American Higher Education (Palgrave Macmillan, 2012).

  • Comments on the Commentaries about 'Seven Red Herrings'

    I am pleased to accept the invitation to briefly respond to some of the points made by those who commented on my “Seven Red Herrings” paper, which appeared in the September 2012 issue of the NILOA monthly newsletter. In his Foreword, Peter Ewell predicted that the merits and role of standardized testing will almost certainly continue to be debated. With this in mind, I also offer a few thoughts about what to expect in the future.

    Trudy Banta, Gary Pike, and Terrel Rhodes view the promise and potential of standardized testing differently than Margaret Miller and Gordon Davies do. Miller sees standardized measures as essential because the field demands highly reliable and valid assessment tools. At the same time, she believes formative assessment is important as well, albeit for different purposes. Davies goes a step further by saying that colleges and universities must use standardized student learning outcomes measures to assure the public that these institutions continue to make meaningful, valued contributions both to individuals and to the larger society.

    Banta and Pike represent the formative end of the assessment continuum. Most of the arguments they presented in their commentary about standardized assessment measures, particularly the Collegiate Learning Assessment (CLA), have appeared previously. Many of their points have been addressed by CLA staff, the Educational Testing Service (ETS), and other researchers, including in a summary of approximately 90 studies (Benjamin et al., 2012). Although my paper was not about the CLA per se, it is worth summarizing several cogent responses available elsewhere to Banta and Pike’s main arguments.

    For example, average CLA value-added scores are highly reliable, especially at the institution level (freshmen = .94; seniors = .86). Aggregate student motivation is not a significant predictor of aggregate CLA performance and does not invalidate comparisons of colleges based upon CLA scores. Moreover, the types of incentives that students seem to prefer are not related to motivation and performance.
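
    One reason institution-level averages can be far more reliable than individual scores is simple aggregation: the reliability of a group mean grows with the number of students sampled, following the Spearman-Brown formula applied to the intraclass correlation. The sketch below is a minimal illustration of that logic, not CAE's actual estimation procedure, and the ICC value and sample sizes in it are hypothetical:

        # Reliability of an institution-level mean built from n student scores,
        # via the Spearman-Brown formula applied to the intraclass correlation:
        #   ICC(n) = n * ICC(1) / (1 + (n - 1) * ICC(1))
        # The ICC(1) value and sample sizes below are hypothetical.
        def group_mean_reliability(icc1: float, n: int) -> float:
            return (n * icc1) / (1 + (n - 1) * icc1)

        for n in (25, 100, 300):
            print(f"n={n:>3}: reliability of the institution mean = "
                  f"{group_mean_reliability(0.10, n):.2f}")

    Even a modest between-institution ICC produces highly reliable institution means once enough students are sampled, which is why institution-level figures in the .86–.94 range are plausible.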

    Although we continue to believe that a no-stakes approach is appropriate for the value-added model in higher education, motivation is a problem for individual student results. CAE (Council for Aid to Education) now offers a version of the CLA protocol, CLA+, which is reliable and valid for individual student performance, as do the Educational Testing Service with its Proficiency Profile and the American College Testing Program with its Collegiate Assessment of Academic Proficiency. It may well be appropriate in the future to attach stakes to the CLA, which, in turn, will likely increase student motivation to do well.

    There is no interaction between CLA task content and field of study. Our researchers find that the CLA protocol measures 30% of the knowledge and skills faculty desire. Results are improved significantly if a representative sample is drawn. Finally, that the CLA is highly correlated with the SAT does not mean the two tests measure the same thing. High school grades combined with the CLA predict freshman and senior GPA at about the same level as the SAT alone. High school grades plus the SAT and CLA generate a higher prediction than either test alone. This would not be true if the SAT and CLA measured the same thing.
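
    The incremental-validity argument in the last three sentences can be made concrete with a toy simulation: if the SAT and CLA measured exactly the same construct, adding the CLA to a model that already contains the SAT would not improve prediction; if the CLA taps something partly distinct, it does. The sketch below uses synthetic data and hypothetical coefficients purely to illustrate the logic, not the actual CLA validity studies:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 5_000

        # Two latent abilities: one shared by both tests, one unique to the
        # CLA-style construct. All coefficients are hypothetical.
        shared = rng.normal(size=n)
        unique = rng.normal(size=n)
        sat = shared + rng.normal(scale=0.4, size=n)                       # shared factor only
        cla = 0.9 * shared + 0.5 * unique + rng.normal(scale=0.4, size=n)  # both factors
        gpa = 0.6 * shared + 0.5 * unique + rng.normal(scale=0.7, size=n)  # criterion

        def r_squared(X, y):
            X = np.column_stack([np.ones(len(y)), X])       # add intercept
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # OLS fit
            return 1 - (y - X @ beta).var() / y.var()

        print(f"corr(SAT, CLA) = {np.corrcoef(sat, cla)[0, 1]:.2f}")  # high, yet...
        print(f"R^2, SAT alone = {r_squared(sat, gpa):.3f}")
        print(f"R^2, SAT + CLA = {r_squared(np.column_stack([sat, cla]), gpa):.3f}")

    The point of the toy model is that a high correlation between the two predictors is fully compatible with the combined model outperforming either test alone, which is exactly the pattern described above.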

    Banta and Pike are correct in advocating a focus on disciplines, but stray off track by rejecting the claim that standardized tests can accurately measure generic cognitive skills (Benjamin et al., 2012). The mean effect size of the growth in student learning outcomes for all colleges testing annually over the past eight years is approximately .73 standard deviations, demonstrating that college attendance is associated with improvement in these skills.
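
    For readers less familiar with this metric, the .73 figure is a standardized mean difference: the senior-freshman gap divided by the standard deviation of scores. A minimal illustration of the arithmetic, with hypothetical scale scores rather than actual CLA numbers:

        # Standardized mean difference (Cohen's d style). All numbers are
        # hypothetical, chosen only to show how a .73 effect size is computed.
        freshman_mean = 1050.0   # hypothetical freshman scale score
        senior_mean = 1160.0     # hypothetical senior scale score
        pooled_sd = 150.0        # hypothetical pooled standard deviation

        d = (senior_mean - freshman_mean) / pooled_sd
        print(f"effect size = {d:.2f} standard deviations")  # -> 0.73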

    Banta and Pike suggest there is qualitative evidence to buttress their claims. It would be helpful to know the evidence to which they refer. Measurement scientists privilege statistically based evidence. This makes conversation between the two groups difficult. Elsewhere I (Benjamin, 2012) explained what I call the assumption of the equality of fields of inquiry. Faculty members are reluctant to question the legitimacy of fields of inquiry with which they may not be familiar. There are solid reasons for this assumption. For example, an obscure field of molecular biology in veterinary medicine focusing on retroviruses in monkeys was critical in helping researchers develop treatments for AIDS. Breakthroughs in one scientific field may lead to startling breakthroughs in others. Measurement science is a field of inquiry that is too well established to be dismissed by colleagues arguing for formative assessments only. For example, Banta, Pike, and Rhodes make good arguments for using e-portfolios to assess student learning. However, e-portfolios do not yet pass muster as tools sufficiently reliable and valid to obviate the need for appropriate standardized tests for decisions with stakes attached.

    Both Davies and Miller want testing organizations to make student outcome test results public. What I should have said was that external demands will require institutions to make their student learning outcomes transparent and that peer review principles aligned with the core values of the academy will provide foundational support for higher education leaders creating assessment reporting systems.

    Peter Ewell noted that faculty prefer to keep assessment results confidential, for internal use only. It is worth noting that testing organizations can achieve greater economies of scale in test development, which lowers the price of individual assessments. Aided by recent developments in education technology, there appears to be a burst of innovation in creating assessments for direct use by faculty as instructional tools. Finally, samples of students tested at individual institutions are seldom large enough for the results to be considered sufficiently reliable. More widely used standardized assessments can boost confidence in the results found at individual institutions.

    What We Can Expect

    The competency-based model now gaining considerable traction will require assessments that corroborate the efficacy of the student learning claimed. Many of those assessments will be standardized tests. There is, and will continue to be, ample room for both formative and standardized tests in postsecondary education. The issue is how best to leverage the virtues of both for the benefit of improved teaching and learning and for the larger societal goals Davies posited.

    This, then, is not the time to defend the status quo.  Many colleagues may be comfortable in defending positions that marginalize assessment in postsecondary education.  Because increasing numbers of private and public leaders believe human capital is the nation’s principal resource, debates about how to improve education will continue to grow.  The rise of Internet-based education and concerns for the quality of higher education provided by more traditional means are fueling external demands for increased transparency, restructuring, and accountability.

    External demands for benchmarking student learning outcomes are destined to increase.  However, higher education institutions possess a high level of legitimacy and relative autonomy anchored by department-based governance.  The initial challenges for increased transparency of student learning outcomes will come from external forces.  Responses to these demands will be developed by innovators within the higher education community.  We need all hands on deck to experiment with ways to improve teaching and learning.

    Finally, higher education institutions must respond to persistent external demands for more systematic evidence about student learning outcomes.  In doing so, the enterprise must also maintain faculty autonomy in determining appropriate assessment approaches; reject college and university ranking systems; privilege efforts to improve student learning; develop assessment protocols that combine standardized and formative assessments; and adhere to peer review principles when constructing accountability systems.  About this last observation, there seems little to debate.

    References

    Benjamin, R. (2012). The new limits of education policy: Avoiding a tragedy of the commons. London: Edward Elgar.

    Benjamin, R., Elliot, S., Klein, S., Steedle, J., Zahner, D., & Patterson, J. (2012). The case for generic skills and performance assessment in the United States and international settings. New York: Council for Aid to Education.