Collect Assessment Data

Programs can design rubrics to assess student learning at annual events or at graduate student milestones, such as the thesis or dissertation defense, candidacy exam, or final oral exam. In addition to rubrics, programs can identify key assignments or exam questions from core classes and systematically collect data from all students in the program.

Rubrics are scoring guides that faculty develop and use to assess performance on student assignments, such as essays, papers, theses, and dissertations. Programs that have never used rubrics to assess student learning can follow the simple steps below. This information is adapted from “Assessing Student Learning, 2nd Edition” by Linda Suskie.

Step 1: Pick an assignment that all graduate students will complete (for example, a qualifying exam, candidacy exam, or final oral exam). Assemble a team of faculty members who are interested in graduate program assessment to meet and develop a rubric for the assignment.

Step 2: Determine the type of rubric that will be developed for this assignment: a checklist rubric, rating scale rubric, descriptive rubric, or structured observation guide.

A checklist rubric is a simple list that an evaluator uses to record whether the assignment includes each item on the list. It is easy to create and apply, but it may not adequately assess the quality of a written assignment.

A rating scale rubric includes an assessment of quality. Typically, characteristics of the paper or presentation are rated on a "strongly agree" to "strongly disagree" scale or an "outstanding" to "inadequate" scale. One limitation of this type of rubric is that each rater may interpret labels such as "outstanding" and "very good" differently. A second limitation is that a rating alone gives students little useful feedback.

Descriptive rubrics are similar to rating scale rubrics, but the ratings are accompanied by brief descriptions of what each rating means. For example, an assessment of “outstanding” for the organization of a paper may have the following descriptive explanation: “Clearly, concisely written. Logical, intuitive progression of ideas and supporting information. Clear and direct cues to all information.” While more time-consuming to develop, descriptive rubrics address some of the limitations of rating scale rubrics.

Step 3: Create the rubric. There is no need to start from scratch: many good models are available on the Internet, and examples are included on the Graduate School’s assessment page. Programs that wish to develop their own rubric, however, should consider the following questions. First, how does the assignment align with the program’s learning goals? Second, what skills should students demonstrate when completing the assignment? Third, what does good student work look like in terms of writing or presentation?

Tips for Developing an Effective Rubric

  1. Include at least three levels (e.g., inadequate, adequate, excellent/outstanding), but no more than five, because raters have difficulty reliably distinguishing between adjacent levels on longer scales.
  2. Label each level with a descriptive name, such as "exceeds expectations," "meets expectations," "approaching expectations," and "below expectations."
  3. If this is a descriptive rubric, summarize what each rating implies in terms of quality.
  4. Pilot test the rubric with samples of student work.