Viewpoints discussion board | University of Illinois

National Coalition for Learning Outcomes Assessment


  • Why Assess Student Learning? What the Measuring Stick Series Revealed

Comments

cbeyer@u.washington.edu Nov 21, 2011 4:09 pm

1. How is quality defined in the assessment of student learning outcomes in higher education? How should it be defined?

A large and growing body of research on undergraduate learning has provided strong evidence that undergraduate learning is discipline-specific. (For an interesting range of examples, see Bransford et al., 2000; Beyer et al., 2007; Donald, 2002; Pace & Middendorf, 2004; Wineburg, 2001, 1991; Neumann et al., 2002; Menges & Austin, 2001; Shulman, 1988; Biglan, 1973.) Everyone accepts that the disciplinary nature of learning in college means that the content students learn differs across the disciplines. We do not expect a history major to know how to track errors in accounting, or a biology major to be able to write a sestina that is not boring. However, the disciplinary nature of learning in college, as the research clearly shows, extends to the skills students learn as well. In other words, what an electrical engineering major learns about thinking, writing, quantitative analysis, and research in college differs from what an art major or a political science major learns in these same skill areas. Therefore, because both content and skills are mediated by the academic disciplines in which they are taught, the methods, quality, and extent of that learning can only be defined and assessed by the experts in those fields: faculty members and their designates.

2. Who is responsible for the quality of student learning outcomes in higher education? Who should be responsible?

Everyone at the university is responsible for learning outcomes, because learning in college is bigger and broader than learning in the major. Students in college should be and are transformed by their college experience in multiple ways, all of which are important to their future lives and learning.
If students do not learn new things in their college experience about who they are, what they might aspire to, what they value, and how they are the same as and different from people from other places and backgrounds, then we have failed them. Good assessment tracks those changes, as well as academic growth. I believe that the more we study the broader changes in students' growth, the more we are likely to identify disciplinary patterns in it. Regarding academic development, however, responsibility for the quality of student learning begins with departments and faculty; extends to Deans, Provosts, Presidents, and Regents, who should support assessment with both money and words; involves staff members responsible for programs related to academic development, such as service learning and study abroad; and includes students' commitment to their own learning. To say that only the faculty or only the students or only the administrators are responsible ignores the rich complexity of what learning in college is and does.

3. What practical ways are you measuring and improving quality?

The University of Washington (UW) uses the standard methods everyone else uses: alumni surveys, the NSSE, departmental exit surveys, course evaluations, accreditation opportunities, a 10-year departmental review process that includes assessment of learning, and biennial assessment reports from departments that list departmental learning goals and methods for assessing them (see http://www.washington.edu/oea/pdfs/reports/OEAReport1102.pdf). In addition, before the budget cuts that have gutted hiring, raises, promotions, and morale at our institution and others across the country, the UW worked with individual departments on identifying learning goals for majors and on ways to assess those goals that were consistent with departmental culture and practice.

4. How should we measure the tools and assessments used to evaluate quality?
If the tools we are using to assess learning are not valid measures of the learning we know characterizes colleges and universities (if, for example, they cannot help us understand how well art majors and biology majors learn the practices, values, and conventions of their disciplines), then we should not throw our money away by using them. Furthermore, if those measures cannot give art and biology departments meaningful information to guide the direction of curricular change, then we should not use them. This question also raises another question about assessment: What assessment scale leads to better student learning, and how do we know? Are students attending institutions that do a lot of assessment learning more than those at institutions that do only a little? How do we know? Also, if we have only a few bucks, should we spend them on program-based assessment or on classroom-based assessment? Which has a bigger impact on student learning? It's time that we in the assessment business did some research on our own effectiveness.

5. How can we navigate the tensions between assessment for improvement and assessment for accountability?

We should keep tracking the easily acquired assessment-for-accountability numbers from our institutions: graduation, retention, gender and ethnicity of faculty and students, average grades, and so on. Those aspects of our work matter, and they are not separate from improvement. All of us need to improve graduation rates, for example, and some institutions should focus on that. But we should not pretend that we can track learning and teaching in the same simple way that we count heads at graduation.
Meaningful assessment of teaching and learning requires considering authentic student work in the major, creating a curricular map that helps departments see where their learning goals for majors are being taught and practiced, and conducting conversations with faculty who care about their subjects and their students and who are empowered to make decisions about them both by their expertise and their positions on the ground. Furthermore and finally, we need to stand against the one-number-fits-all assessment disaster that K-12 is experiencing in its assessment processes. Diversity is strength. We should honor the diversity of our institutions and our programs. That is what has made us the best system of higher education in the world. We standardize it and reduce it to single numbers at our own peril.

References

Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.), for the National Research Council. (2000). How people learn: Brain, mind, experience, and school. Washington, D.C.: National Academy Press.

Beyer, C. H., Gillmore, G. M., & Fisher, A. T. (2007). Inside the undergraduate experience: The University of Washington's Study of Undergraduate Learning. San Francisco: Jossey-Bass.

Biglan, A. (1973). The characteristics of subject matter in different academic areas. Journal of Applied Psychology, 57(3), 195-203.

Donald, J. G. (2002). Learning to think: Disciplinary perspectives. San Francisco: Jossey-Bass.

Menges, R. J., & Austin, A. E. (2001). Teaching in higher education. In V. Richardson (Ed.), Handbook of research on teaching (4th ed., pp. 1066-1101, 1122-1156). Washington, D.C.: American Educational Research Association.

Neumann, R., Parry, S., & Becher, T. (2002). Teaching and learning in their disciplinary contexts: A conceptual analysis. Studies in Higher Education, 27, 405-417.

Pace, D., & Middendorf, J. (Eds.). (2004). Decoding the disciplines: Helping students learn disciplinary ways of thinking. San Francisco: Jossey-Bass.

Shulman, L. S. (1988). A union of insufficiencies: Strategies for teacher assessment in a period of educational reform. Educational Leadership, 46(3), 36-42.

Wineburg, S. (2001). Interview with Randy Bass. Visible Knowledge Project, Georgetown University. Retrieved from http://crossroads.georgetown.edu/vkp/conversations/participants/html. Accessed 10/12/06.

Wineburg, S. (1991). On the reading of historical texts: Notes on the breach between school and academy. American Educational Research Journal, 28(3), 495-519.

gjea2@illinois.edu Nov 28, 2011 8:54 am

Thanks for your thoughtful response, Cathy. I agree, we have seen a lot of growth in department- and discipline-specific ways of measuring student learning outcomes (please see http://www.learningoutcomeassessment.org/CollegesUniversityPrograms.html and the NILOA Down and In report, http://www.learningoutcomeassessment.org/DownAndIn.htm, for more information). Yet I think this growth has come mainly from within higher education, and those external to it, such as policy makers, still want some way to comparatively examine student learning outcomes for all students. Many institutions have campus-wide, university-level learning outcomes statements that departments align with or reference in their own specific outcomes.

You raise some good questions: What assessment scale leads to better student learning, and how do we know? Are students attending institutions that do a lot of assessment learning more than those at institutions that do only a little? How do we know? Also, if we have only a few bucks, should we spend them on program-based assessment or on classroom-based assessment? Which has a bigger impact on student learning? It's time that we in the assessment business did some research on our own effectiveness.

Anyone else want to jump in? Thanks for your contribution to this conversation.
