Viewpoints discussion board | University of Illinois

National Coalition for Learning Outcomes Assessment



  • What if the VSA Morphed into the VST?

    Good on Doug Lederman at Inside Higher Ed for bringing us up to speed on recent developments related to the six-year-old Voluntary System of Accountability (VSA).  Even though much of what we find today in the way of assessment tools and approaches was either being used on college campuses or on the drawing board prior to 2007, the VSA undoubtedly pushed some aspects of the work further along than would have happened if matters had been left to individual institutions.  This is surely the case with regard to transparency, a feature of public accountability to which I will return later.


    As Lederman reminded us, the VSA was a timely political response by a postsecondary enterprise under what felt like unprecedented scrutiny as to its value.  But it was also a much-needed substantive stab at what universities could do to be more forthcoming about their performance in response to the interests of various parties on and off the campus.  Of course, some institutions were well out in front in such efforts.  And some national projects, such as the National Survey of Student Engagement, were designed with similar purposes in mind.  But in launching the VSA, the Association of Public and Land-Grant Universities (APLU) and the American Association of State Colleges and Universities (AASCU) became the first national institutional membership organizations to lend their imprimaturs to a vehicle designed to encourage universities to measure student learning and to report the findings.  Recall that at the time the long-delayed reauthorization of the Higher Education Act was looming.  The vast majority of colleges and universities had little to show related to their performance other than what many considered unacceptably low graduation rates (which were artificially dampened by the unfair IPEDS calculation algorithm).


    David Shulenburger, the long-serving provost at the University of Kansas, had by then joined the APLU staff and was one of the architects of the first VSA draft, along with Peter McPherson, president of APLU (then called the National Association of State Universities and Land-Grant Colleges).  McPherson introduced the VSA at the Spellings Commission public hearing in Indianapolis during his testimony as part of a panel on which yours truly also served.  I dare say no one else on the panel, or among the members of the Spellings Commission, knew what was coming.  Indeed, most of the media attention during and subsequently flowing from that April 2007 event focused on the VSA.


    Assessment work has come a long way since the VSA was introduced.  And, to its credit, the VSA has both ushered in and attempted to reflect the advances.  As Lederman’s article makes plain, the VSA was and continues to be an imperfect solution to a pressing but complicated problem.  At the least, its continued presence on the accountability and improvement landscape has prompted others to launch their own transparency efforts, such as the American Association of Community Colleges’ Voluntary Framework of Accountability (VFA), the private sector’s U-CAN, and the short-lived Transparency by Design effort.


    To their credit, those responsible for the VSA are contemplating additional changes to make it more useful and, therefore, more attractive to the institutions that must use it if the VSA itself is to be of value.  One of the more noteworthy changes is allowing universities to populate the VSA’s College Portrait website with multiple forms of evidence of student accomplishment, something that many APLU and AASCU member schools wanted.


    Here are four additional challenges the VSA and other similar transparency efforts must address.


    1. Determine which audiences want what kinds of information.  External groups such as parents and prospective students have trouble making meaning of test score numbers that supposedly represent the so-called “typical student” enrolled at different universities.  They are far more interested in knowing about the experiences of people who are like a particular type of student (oh yes, and they are very interested in cost data!).  Internal audiences such as faculty and staff, on the other hand, may want (or at least expect to see) detailed information, perhaps even dense data displays, accompanied by a careful analysis of which conclusions can be drawn from the data.
    2. Present the information of interest to each audience in language that is clear and meaningful.  Prospective students, for example, would find it helpful to see and hear someone explain what a student like them could expect to do after enrolling at the institution, including the odds that during their studies they would engage in one or more high-impact practices such as study abroad or an internship.  Such information, coupled with contextualized interpretations of outcomes measures, will be far more instructive than rows and columns of numbers in the abstract.  This suggests that one welcome approach would be a template with links to portals customized for various groups (governing boards, parents, prospective, current, and former students, etc.) featuring video snippets from faculty and students.  Some institutions do this now, but they are few and far between.
    3. Make accessible information representing both student performance in individual programs or major fields and program-level data “rolled up” to represent institution-level performance.  One of the criticisms of the VSA (fair, but one that also applies to other templates) was that a single number produced by any given test is woefully inadequate to represent the range and depth of learning that occurs on a college campus.  In addition, such a number provides little guidance for what faculty and staff could do to improve teaching and learning on the ground, in program or major-field classes, labs, and studios, where much of the learning is induced via well-designed assignments and other educationally purposeful tasks.  Experiments to aggregate rubric scores at the institutional level, such as the University of Kansas initiative described in Lederman’s article, merit our attention.  This is a promising, but still challenging, frontier for assessment work on campus.
    4. Adopt a qualifications framework to present evidence of student accomplishment across a range of desired outcomes.  The Degree Qualifications Profile, inspired by and building on degree qualifications frameworks from other countries, informed by AAC&U’s Essential Learning Outcomes, and advanced by Lumina Foundation for Education, is one such example.  This would nudge the assessment agenda forward by providing some coherence and continuity across individual institutional reports while still allowing a college or university to emphasize distinctive patterns of student proficiencies.  Such data would also be much more useful for individual programs and majors committed to identifying places where instruction and student performance need attention.  And such an approach could serve as a foundation for documenting the “competencies” (however defined) presented by students whose learning is the product of a combination of, for example, self-selected MOOCs, other delivery systems, and life experiences.


    To be sure, we are a long way from doing well, at least at scale, the four things I’ve briefly described.  Taken together, however, these modifications can help de-emphasize the accountability function of learning outcomes assessment and put more weight on using what we are learning to improve the outcomes we seek.  This would require, among other things, that institutions be more forthcoming about what they do know about student and institutional performance, experiment with different ways of reporting this information, and show how they have used it and to what effect.


    A year ago, the Kettering Foundation released a report suggesting that the public is less interested in the blunt edges of accountability and more interested in having trustworthy information about such societal institutions as schools and hospitals.  After all, the report concluded, most people who pay attention to such matters understand that numbers can be manipulated to tell different stories.  What the public wants is assurance that the institution under scrutiny is doing the right things the right way with an eye toward earning and sustaining the public trust.  This does not imply ignoring the accountability function of assessment.  It does, though, mean that we must be much more transparent about what we are doing.  Granted, much remains to be done to develop the kinds of tools and approaches required by both the accountability and improvement purposes of assessment.  In the meantime, perhaps if the VSA were to be renamed the Voluntary System of Transparency, it would help focus us more clearly on what we need to do in the near term to enhance student learning and institutional effectiveness.


    For additional information about learning outcomes and assessing student learning, see the AAC&U publications Assessing College Student Learning and Assessing Outcomes and Improving Achievement, and AAC&U’s initiative, Quality Collaboratives.