THE
BULLETIN
Volume 81 | Issue 6
November 2013

Grounded in Reality: Writing Learning Outcomes

Jennifer Pelletier, D’Arcy Oaks, & Lance Kennedy-Phillips

Higher education leaders have long argued for the importance of both articulating and measuring outcomes to demonstrate that services and programming contribute to student development. Similarly, the ACUI core competencies call for professionals to develop knowledge and skills in identifying and assessing outcomes for student learning. Articulation and measurement serve multiple purposes: communicating our value to stakeholders, helping celebrate our successes, holding decision-makers and program coordinators accountable, and contributing to broad visioning such as strategic planning for the division or the institution.

But while the importance of an outcomes orientation is well established, much less attention has been paid to what constitutes good and appropriate outcomes. Outcomes function as our “contract” with students, empowering them to hold the institution accountable for the programs and services offered, which are often funded through student fees. Given those stakes, the quality of the articulated outcomes is paramount.

One of the fundamental lessons related to outcomes-based assessment in student affairs is that good outcomes reflect the work we do, not our aspirations. Using this distinction, student affairs units can better situate their activities in the broader context of the division and institution, articulate those activities with appropriate outcomes, and apply those outcomes to propel a reasonable and useful assessment plan. The reality check is this: Are the outcomes appropriate given the scope of the program, activity, or service, and given the organizational level? Only after confirming outcomes are scaled accordingly can we use them to plan programs and services effectively. 

Organizational Effectiveness Model

Ohio State University’s organizational effectiveness model is a road map for departments as they align their major activities and expected outcomes with the mission, goals, and vision of the division. The key to the success of this model has been its focus on the departments. Identifying the major departmental activities, outcomes, and performance indicators that measure success and fulfill the department’s mission and goals in turn supports the fulfillment of the division’s mission and goals.

Major Activities
Major activities define what a department does on a day-to-day basis. The mission for each department should assist in defining major activities. Major activities are not usually one program, service, or project; they are defined broadly and could include multiple items.

For the purpose of illustrating the effectiveness model, let’s examine the campus programming board, a fairly common fixture in many college unions. Assume the campus programming board is responsible for planning and implementing campus-wide programming that is directed by student members and advised by professional and/or graduate staff. Major activities for a campus programming board could include: 1) offering a diverse programming calendar to meet the needs of a diverse student population, and 2) offering leadership development opportunities for student leaders.

Outcome Statements
Outcome statements are directly related to the major activities. These statements can be characterized as learning, developmental, or operational, depending on the nature of the activity. The goal is to develop two to four outcome statements per major activity that can help guide the outcomes-based assessment.

One popular approach to developing outcomes is the A-B-C-D method in which: “A,” for Audience, describes the “who” that is performing the action, most often a student learner. “B” is the Behavior that is expected as a result of “C,” the Condition, which is most often participation in a service or program. “D,” the toughest to delineate, is the threshold, or Degree, to which the behavior would be performed to satisfy the outcome.
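To make the method concrete, the four components can be captured as structured data by departments that track outcomes in a spreadsheet or database. Below is a minimal sketch in Python; the class name, fields, and example values are ours and purely hypothetical, not part of any published A-B-C-D tooling.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One outcome statement, decomposed with the A-B-C-D method."""
    audience: str   # A: who performs the action (often a student learner)
    behavior: str   # B: what they are expected to do
    condition: str  # C: the program or service prompting the behavior
    degree: str     # D: the threshold that satisfies the outcome

    def statement(self) -> str:
        """Assemble the four components into a readable sentence."""
        return (f"{self.audience}, {self.condition}, "
                f"will {self.behavior}, {self.degree}.")

# Hypothetical example for a lecture program:
lecture = Outcome(
    audience="Students",
    condition="after attending the fall keynote lecture",
    behavior="recall and explain three key points from the talk",
    degree="stating at least two of them accurately",
)
print(lecture.statement())
```

Writing the degree as its own field is a useful discipline: if the threshold cannot be filled in, the outcome is probably not yet measurable.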

Continuing with the campus programming board example, one outcome related to the first major activity—a diverse programming calendar—could be that students will recall and explain key points after attending a specific lecture program. Related to the second major activity—student leadership development—one outcome could be that student leaders will demonstrate both competence and confidence in interpersonal communication as a result of regular communication with peers, campus stakeholders, and professional staff.

Performance Indicators

Performance indicators are measures of success for each major activity and its associated expected outcomes. They should be measurable items for which both baseline and current data can be collected. These magnitude measures capture the effect the major activity has on the university community (e.g., the number of students participating or the number of programs facilitated by the department), and they measure the expected outcomes. The resulting data can be used to make changes and improvements, which in turn enable programs to achieve their intended outcomes.

Performance indicators for the first outcome statement—recall and explain key points from the lecture—could be the number of total attendees, percentage of tickets redeemed, and cost per attendee. For the second outcome statement—competence and confidence in interpersonal communication—performance indicators could be the number of skill-based training workshops offered for board leaders and student hours spent per week planning programs.
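Because magnitude measures like these reduce to simple counts and ratios, the underlying arithmetic is easy to automate and to compare against a baseline. A minimal sketch in Python follows; all figures are hypothetical and are not data from any actual program.

```python
# Hypothetical attendance and budget figures for one lecture program,
# with a prior-year baseline for comparison.
tickets_issued = 500
tickets_redeemed = 410
program_cost = 8_200.00            # total cost, in dollars
baseline_cost_per_attendee = 22.50

# Two of the indicators named above: redemption rate and cost per attendee.
redemption_rate = tickets_redeemed / tickets_issued    # 0.82
cost_per_attendee = program_cost / tickets_redeemed    # 20.00
change = cost_per_attendee - baseline_cost_per_attendee

print(f"Redemption rate: {redemption_rate:.0%}")
print(f"Cost per attendee: ${cost_per_attendee:.2f} ({change:+.2f} vs. baseline)")
```

The baseline comparison is the point: a single year’s cost per attendee says little until it is set against prior data.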

Outcomes-Based Assessment
Assessment authorities John Schuh and Lee Upcraft have asserted that the purpose of assessment is to improve practice. Beyond grounding practice in outcomes, it is also important to make sure that outcomes assessment is done rigorously and with sound methods. Departments should focus on conducting high-quality assessment that produces the most actionable evidence of practice and success. To this end, student affairs departments new to the assessment process might be best served by concentrating on only one outcome of one major activity during a given time period.

Outcomes-based assessment for the first outcome statement in our example—recall and explain key points from the lecture—could include a learning-based survey of lecture attendees. Outcomes-based assessment for the second outcome statement—competence and confidence in interpersonal communication—could be a qualitative analysis of student leader reflections that is merged with advisors’ behavioral observations of student performance in communication-based activities.

A good assessment plan is focused on student outcomes that are aligned with the goals and purpose of the program. Thus, outcomes for a program or service should be developed when the program itself is developed. Articulating clear outcomes at the beginning will guide the scope of services needed to achieve them. When developing outcomes, practitioners should be mindful of the type of data that will be necessary to show fulfillment of the program, as this determines the appropriate performance indicators. For example, if the assessment plan calls for a review of student self-reflections, that review must be included in the planning of the program, and the instructions to students must be consistent to allow for effective analysis and interpretation.

Levels of Outcomes

It is important to distinguish between levels of outcomes: there are program- or activity-level outcomes, there are division- or institution-level outcomes, and in the middle, there are unit- or office-level outcomes. The most specific are the program- or activity-level outcomes, and the broadest and most general are the division-level outcomes. In her book Assessing for Learning, author Peggy Maki stated: “There is an underlying coherence [between levels of outcomes] that contributes to student learning.” Assessment professionals from DePaul University also have argued that there are levels of reach and specificity, represented in their diagram (Figure 1).

Division-level and department-level outcomes provide a broad macro view of student outcomes across multiple programs and sometimes over multiple years. While they will reflect the department’s mission and purpose, they also broadly describe a higher level of student achievement than can be measured in any one program or service. Sometimes, the truest evidence emerges at the program or activity level. This is where staff members are actively engaged with students and where a department can begin to measure specific and targeted outcomes. By measuring specific outcomes at the activity or programmatic level, practitioners can begin to map those results to the broader, more general departmental and divisional goals. With that mapping in place, departments and divisions will have evidence to show effectiveness and to demonstrate that the division’s goals align with the larger university mission and goals.

There are several useful student development theories and paradigms that undergird this work; among them are Alexander Astin’s work with involvement; William Perry’s work with cognitive, intellectual, and ethical development; David Kolb’s work with the experiential aspects of student learning; and Marcia Baxter Magolda’s work with self-authorship. Many of the particular conceptions about learning at Ohio State come from the late psychologist Urie Bronfenbrenner’s conceptions of the ecology of human development, especially his explication of various embedded human systems that interact and promote development in individuals. He explains that an individual operates within a “microsystem,” such as family and peers; that these groups interact within a “mesosystem”; that an “exosystem” affects the individual indirectly; and that all of these are embedded in a “macrosystem” of cultural norms and values. Bronfenbrenner’s work is useful because it describes the influence of groups, organizations, and culture on an individual, and also how groups and organizations influence each other.

But it is Benjamin Bloom’s taxonomy that lends itself particularly to assessing student development in cocurricular realms. Stage models have many flaws, but the cognitive and affective domains (and to a lesser extent, the psychomotor domain) of Bloom’s taxonomy are useful to student affairs assessment for several reasons. Among them, the domains give a starting point for thinking about and developing specific outcomes for a program or service because the taxonomies relate, not to an individual learner operating in a vacuum, but to some educational system or process, as in our case of cocurricular activities. The domains describe the progress, or development, of the learner, and also describe intended influences or effects of the contexts in which the learners are operating and interacting, which often are designed and organized by student affairs staff members. Therefore, these domains and outcomes can be used to appropriately challenge and support students. If professionals do this, we will have done the real work of developing outcomes: declaring specific and achievable aspects of value added by the cocurriculum.

The levels of the cognitive domain are, lowest to highest: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation. The levels of the affective domain are, lowest to highest: Receiving, Responding, Valuing, Organizing/Conceptualizing, and Internalizing Values. The model of the domains assumes that learning at the higher levels depends on learning within the lower levels.

As described, there are two kinds of “levels” relevant to outcomes: a level of domain (for our purposes, either cognitive or affective) and a level of reach or specificity. These two kinds of levels can be loosely mapped to each other. When mapped together, they suggest a pairing between reach/specificity and development: Activity-level outcomes should be associated with the “lower” levels of the domains (Knowledge and Comprehension, and/or Receiving and Responding), while at the other end of the spectrum, division- or institution-level outcomes should be associated with the “higher” levels of the domains (Synthesis and Evaluation, and/or Organizing/Conceptualizing and Internalizing Values). Mapping the two sets of levels together helps practitioners write outcomes that reflect the work they actually do.
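One way to operationalize this loose mapping is as a lookup table that suggests which domain levels fit an outcome of a given reach. The Python sketch below encodes the pairings described above; the unit/department row is our interpolation between the two ends of the spectrum, an assumption rather than something the model prescribes.

```python
# A loose mapping of outcome reach to Bloom taxonomy levels, following
# the pairings described in the text. The unit/department entry is an
# assumed midpoint, not stated explicitly in the model.
LEVEL_MAP = {
    "activity/program": {
        "cognitive": ["Knowledge", "Comprehension"],
        "affective": ["Receiving", "Responding"],
    },
    "unit/department": {
        "cognitive": ["Application", "Analysis"],
        "affective": ["Valuing"],
    },
    "division/institution": {
        "cognitive": ["Synthesis", "Evaluation"],
        "affective": ["Organizing/Conceptualizing", "Internalizing Values"],
    },
}

def domain_levels(reach: str, domain: str) -> list[str]:
    """Suggest taxonomy levels appropriate to an outcome's reach."""
    return LEVEL_MAP[reach][domain]

print(domain_levels("activity/program", "cognitive"))
# ['Knowledge', 'Comprehension']
```

A table like this can serve as a quick check when drafting an outcome: if an activity-level outcome uses a verb like “synthesize” or “evaluate,” it is probably scaled too broadly.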

For example, a college union may identify an outcome focused on building community: “Our college union will foster connections to the campus and local community.” Such an outcome is lofty, general, and difficult or impossible to measure. A more focused outcome, tied to a specific program or service, would be more appropriate: “Participants in a community immersion program will be able to describe neighborhood assets and needs.” This adjusted outcome can be measured, and if the data show the desired outcome is not being met, programming can be altered. In this example, we certainly hope that students will feel connected to our community, but what we can measure is their ability to describe the state of their local community. In this manner, the outcome reflects the work we do, not lofty goals and aspirations.

To plan and implement services and programs effectively, we need to articulate outcomes with the appropriate scope, situated in the broader educational and organizational context. As college unions exist in a competitive higher education landscape, the need for outcomes-based assessment is clear. College union professionals want to be able to demonstrate how programs and services contribute to the larger institutional mission. We can make this case with more robust evidence when outcomes are written on a measurable scale and then collectively mapped to the broad university goals.

Contributors

Jennifer Pelletier is an assistant director for Ohio Union Events and Student Activities at The Ohio State University. In this role, she develops and implements training and learning initiatives for student employees, professional staff, and graduate students. Pelletier has been involved with ACUI as a presenter at annual and regional conferences, in online learning programs, and on regional conference program teams.

D’Arcy John Oaks, Ph.D., is the associate director for planning and assessment at the Center for the Study of Student Life in the Office of Student Life at The Ohio State University. His role includes assessing cocurricular learning and other “value-added” aspects of student affairs programming and services and guiding unit-level strategic planning. He serves on the Board of Directors of the Student Affairs Assessment Leaders (S.A.A.L.).

Lance Kennedy-Phillips, Ph.D., is executive director of the Center for the Study of Student Life at The Ohio State University. In this role, he leads assessment, research, and evaluation of cocurricular learning and programs. Kennedy-Phillips served as national co-chair for the NASPA Assessment, Research, and Evaluation Knowledge Community from 2010–13. In addition, he served as director of the Foundations I Institute for the Association for Institutional Research from 2008–12.