Assessment is imperative if we want to make informed decisions about future programs. The tools we use to inform those decisions are varied: satisfaction surveys, pre- and post-tests, raw data collection, checklists, rubrics, portfolios, focus groups, and interviews, among many others. In my two years at OSU, I have been involved with the design, implementation, and interpretation of several of these tools, and I will carry this assessment experience with me to my next institution and position to help my department improve its programming. As a place of learning and growth for me, I have also been attending the Student Affairs Assessment Council meetings to hear what other departments are doing and to strategize the many ways I can design and implement assessment now and in the future.

In Career Services’ programs, I have had input on multiple surveys to assess learning and satisfaction. We piloted a satisfaction survey in spring term of 2012 that covered students’ impressions of all of the services we offer, including counseling appointments, drop-in resume and cover letter critiques, and events. We also tried a learning-outcome survey, sent to students several weeks after their counseling appointments, to assess whether they had learned resources and actions that were useful in their career development. Each year we also conduct a senior survey, asking graduating students about their career development process while at OSU. I was able to review these surveys ahead of time and evaluate the questions to make sure we were asking what we actually wanted to know. We have developed surveys for various programs, including the Nonprofit and Volunteering Fair, the Sigi3 software program, and our Career Fairs. These surveys give us a sense of how a particular program is working for students: whether they find it worthwhile, what they knew about it beforehand, what was not useful to them, and so on. This type of surveying can tell us an immense amount about how to improve a program for the following term or year. For instance, from the Nonprofit and Volunteering Fair survey we learned that most people who came to the event and took the survey did not know about the breakout sessions we had held that morning. Next year, the event planning committee will need to do a better job of marketing those sessions.

Raw data collection, including numbers of attendees, services utilized, GPAs, retention rates, and graduation rates, can also be useful for spotting trends over time and for program evaluation. At the Student Enrichment Program at WOU, I have been involved in analyzing raw data on a supplemental instruction (SI) program for their developmental math courses. Over several years, they saw no difference in course grades between students who took the supplemental instruction and those who did not. They acknowledged that some of this could be due to self-selection: students who chose SI may have already known they were not strong in math. Still, they wanted to try a different model to see whether a change would emerge, so they moved to one-on-one math tutoring, hiring several student tutors to work within the already established tutoring center. Because the new program began in the fall, the data correlating students’ use of tutoring services with their math grades are only now beginning to emerge. The process is slow, since record-keeping on who has used the tutoring services for math has not been systematic. The department hopes to improve that data collection in winter and spring and to have meaningful data soon so they can compare the success of supplemental instruction with that of tutoring.

Last year, one of the goals for Career Services was to improve our online services tool, Beaver JobNet. As part of this improvement process, I organized a focus group of volunteers to come into our office, perform a sequence of tasks in Beaver JobNet, and then speak with us about their experiences. I designed a worksheet that led them through a series of common functions in the system (search for a particular job or upload your resume, for example) to get a sense of which functions were intuitive and which were confusing. This process created the impetus for many changes in the system, such as providing more guideposts for students, and it identified changes that would need to be made by the software company. I can say that we have steadily improved our messaging and guidance since we learned of the problems students were having in the system.

We want to make data-driven, not arbitrary, decisions in higher education, doing the things we have shown to work well. To gather that data, we need to design and use tools that accurately capture what we want to learn, asking not only whether students enjoy a program but whether it is having the desired impact on their learning and development. This is an area in which I have a great deal of growth and development ahead of me throughout my career; my natural inclination is not to look at the numbers but to evaluate how I feel about something. I know that tendency is not acceptable in this new age of accountability, and I understand why it is not enough to feel that something is going well or poorly. We need to prove it, and we do that by designing and implementing smart assessment tools.
