
  • Guidelines to Consider in Being Strategic about Assessment

    In the 1960s, when the first major push for formal evaluation in education occurred, the goal was to make decision making more rational and objective. The movement stressed quantitative evidence and reflected modernism. “Outcomes,” often couched in terms of measurable behavioral objectives, were all the rage at the time. In this paper, however, we argue that constructive assessment must incorporate three important guiding themes: inclusiveness, “sitting beside,” and usefulness – themes which influenced and structured our guidelines for assessing student learning.

    The first theme of our work was shaped by two influential scholars who highlighted the importance of inclusiveness in their perspectives on assessment. Robert Stake, one of the pioneers in qualitative inquiry, wrote in 1967 of the countenance of evaluation as consisting of antecedents, transactions, and outcomes. In 1993, Alexander Astin, writing from the perspective of student development, stressed that assessment ought to include inputs, experiences, and outcomes. In sum, the first theme is that assessment needs to be inclusive of more than just outcomes to be effective.

    The second theme of our work is based on our definition of assessment and its guiding purpose – to sit beside – which emphasizes the Latin root of the word, assidere, meaning “to sit beside” (and to assist). Assessment as “sitting beside” reinforces the human element. “Sitting beside” as an image highlights exchanges and shared responsibility among members of the academy. To “sit beside” brings to mind such verbs as to engage, to involve, to interact, to share, and to trust. It conjures up team learning, working together, discussing, reflecting, helping, building, collaborating, and recognizing positively the political nature of assessment. It points to cooperative learning, community, communication, coaching, caring, and consultation. In short, “sitting beside,” as opposed to “standing over,” is the primary implementation goal of assessment.

    Sitting beside brings into sharp relief the different and sometimes conflicting roles of assessment that Peter Ewell (2009) discussed in a recent NILOA Occasional Paper. Both formative, developmental, improvement-oriented assessment and summative, evaluative, accountability-oriented assessment are needed. But assessment focused on improvement is more apt to be used within the academy, and it tends to give assessment a focus, perhaps in the form of a question or issue that is calling out for improvement.

    The third theme is that assessment needs to be useful. In the 1970s there was a major push on the utility of assessment, driven partly by the frustration of those in the field of evaluation that their work was not being used. The same issue remains true today. Michael Patton, a leader in assessment four decades ago, is still a voice worth listening to in this area. We have often stated that “Assessment not being used is at best useless.”

    The question becomes: how can we encourage and facilitate more productive use of assessment results? We have argued that good assessment is based on commitment rather than control – on active ownership of assessment in an environment that is problem-solving oriented and integrated into the normal policy and decision-making processes of the relevant stakeholders. In short, assessment needs to be strategic (Braskamp & Braskamp, 2007). Our own guidelines emphasize the importance of being strategic in heightening the usefulness of assessment efforts, building on earlier work by Robert Brown and Larry Braskamp (1980), who offered a list of 50 items to be considered in judging the usefulness of assessment, as well as a more recent list by Trudy Banta in her 2002 book.

    Our focus on these themes over many years culminated in our latest work, funded by The Teagle Foundation, titled “Guidelines for judging the effectiveness of assessing student learning.” The aim of this initiative was to test our attempt to write guidelines that would amplify the argument that assessment needs to be strategic, not necessarily more abundant – that is, to receive feedback from IR professionals, assessment professionals, faculty, administrators, and academic leaders on campus. In our current work, we decided to update the Brown and Braskamp (1980) list of 50 items, and in the process we discovered and rediscovered a few bits of wisdom and examples of good practice, which follow.

    First, while a focus on student learning outcomes is an important part of assessment, it is incomplete, especially if the goal of assessment is to assist stakeholders in improving student learning and development. What is needed is a more inclusive strategy that includes student characteristics and dispositions (inputs) and student experiences, as well as outcomes and indicators of student learning. It goes almost without stating that NSSE has made a significant contribution in stressing the importance of engagement in the journey of college students (Kuh, Kinzie, Schuh, Whitt, & Associates, 2010).

    Second, the integration of inclusiveness, “sitting beside,” and usefulness is critical but also very difficult to achieve. Stakeholders are often challenged by the complexity of these three components, which extend beyond the proverbial goal of developing students’ critical thinking and do not occur naturally. Thus, nonuse of evidence is too often the result. The importance of the human factor cannot be overestimated.

    Third, the need to account for “self-selection” in the assessment of learning experiences is often overlooked. This generalization has become more salient as we have focused our work on assessing the influences of different types of learning environments that reflect an increasingly global and international perspective (Green, 2012). When students take the Global Perspective Inventory (GPI) (Braskamp, Braskamp, & Engberg, 2013), they are measured on their development of a global perspective (outcomes); their experiences in the curriculum, co-curriculum, and campus community (experiences); as well as their demographic characteristics and entering dispositions. Engberg and Jourian (2013) have noted, for instance, that while many students grow along the different dimensions of the GPI during a semester abroad, these differences are best explained by examining factors such as student-faculty relationships, pedagogical processes, intercultural wonderment, and student background characteristics.

    In sum, scores on an outcome measure do not tell the whole story. This principle is important in recognizing the influence of students’ ethnic, racial, religious, social, and cultural backgrounds on their journey while in college. We are pleased that NILOA included in its impressive set of Viewpoints articles an announcement of the recently initiated Center for Culturally Responsive Evaluation and Assessment (CREA) at the University of Illinois at Urbana-Champaign.

    Fourth, the process of assessment needs to pay more attention to the stakeholders, whether they are policymakers, administrators, faculty teaching a course, or students. Strategic planning needs to involve the people with power and influence if actual use of evidence and improvement are to occur. Working with stakeholders may help define and refine the focus of assessment and provide a natural bridge that allows findings to be applied in ways that lead to improvement. Ownership is a critical factor.

    We thus offer once again a set of 50 guidelines organized into five areas: having a clear purpose and readiness for assessment; involving stakeholders throughout the assessment process; what and how to assess is critical; assessment is telling a story; and improvement and follow-up are an integral part of the assessment process. We welcome you to read these if you are planning, implementing, and/or judging the benefits and contributions of your assessment efforts. We suggest that you first give each a score from 1 to 5, depending on how important you think a particular guideline is to you. Then, we recommend sharing your judgments with your colleagues, especially with different stakeholders (i.e., “sit beside”). Finally, please contact us if you decide to use the guidelines and let us know your experience and whether there are additional items you wish to add to the list. (The feedback we have received, however, is that the list is succinct and thus more useful, so adding more guidelines is not the solution; we have many on the cutting-room floor now.)

    We end with a rather personal note. Our advice is to take a rather long view of the role and impact of assessment. About the only conclusion we can make with certainty is that immediate impact is very seldom achieved. Colleagues have commented on this set, and one asked how long it took us to write these guidelines. Larry replied, “About 50 years.”

     

    References

    Astin, A. (1993). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. Washington, DC: Oryx Press.

    Banta, T. W., & Associates (2002). Building a scholarship of assessment. San Francisco, CA: Jossey-Bass.

    Braskamp, L. A. (1989). So, what’s the use? In P. J. Gray (Ed.), Achieving assessment goals using evaluation techniques (New Directions for Higher Education, No. 67, pp. 43-50). San Francisco, CA: Jossey-Bass.

    Braskamp, L. A., & Braskamp, D. C. (2007, Fall). Fostering holistic student learning and development of college students: A strategic way to think about it. The Department Chair, 18(2), 1-3.

    Braskamp, L. A., Braskamp, D. C., & Engberg, M. E. (2013). Global Perspective Inventory. Retrieved from https://gpi.central.edu/supportDocs/manual.pdf

    Brown, R. D., & Braskamp, L. A. (1980). Summary: Common themes and a checklist. In L. A. Braskamp & R. D. Brown (Eds.), Utilization of evaluative information (New Directions for Program Evaluation, No. 5, pp. 91-97). San Francisco, CA: Jossey-Bass.

    Engberg, M. E., & Jourian, T. J. (2013, November). Intercultural wonderment and study abroad. Presentation at the annual meeting of the Association for the Study of Higher Education, St. Louis, MO.

    Ewell, P. T. (2009, November). Assessment, accountability, and improvement: Revisiting the tension (NILOA Occasional Paper No. 1). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.

    Green, M. F. (2012). Measuring and assessing internationalization. New York, NY: NAFSA Association of International Educators.

    Kuh, G. D., Kinzie, J., Schuh, J. H., Whitt, E. J. & Associates (2010). Student success in college: Creating conditions that matter. San Francisco, CA: Jossey-Bass.  

    National Institute for Learning Outcomes Assessment. Champaign, IL: University of Illinois at Urbana-Champaign. Retrieved from www.learningoutcomeassessment.org/

    Patton, M.Q. (2012). Essentials of utilization-focused evaluation. Los Angeles, CA: Sage Publications.

    Stake, R.E. (1967).  The countenance of educational evaluation. Teachers College Record, 68, 523-540.