Assessment Myths

Find out the facts behind common assessment myths

Myth 1

Assessment is just the latest fad in higher education

The Reality

Assessment has been

  • Common practice in higher education for over thirty years
  • The emphasis of accreditors for at least twenty-five years
  • A formal expectation of all regional accreditors since 2004


Myth 2

Assessment is just about collecting evidence in case the Higher Learning Commission wants to see it at some point

The Reality

  • Collecting evidence isn’t enough
  • Assessment needs systems and strategies that produce actionable evidence
  • Assessment is closer to standardized, coordinated individual practice than to bureaucracy


Myth 3

Assessment strategies need to be objective and uniform

The Reality

  • Most judgments about student performance are subjective
  • What makes strategies “valid” is the consistent application of criteria
  • Consistency builds over time, with practice


Myth 4

Grades ought to be a good enough indicator of student performance; additional assessment isn’t really necessary

The Reality

  • Desired outcomes (course, program, and institutional) may not be measured by the grading process
  • An assignment or course may support multiple learning outcomes
  • A course grade often measures much more than intended outcomes


    A Digression on Outcomes

    • Outcomes are the intended or desired results of student work
    • Outcomes describe what students can do with what they know
    • Outcomes are performance-based
    • Outcomes define the essential abilities of a graduate or course completer – what a student carries forward to the next level


  • Many variables affect grades: participation, attendance, group work with students not in the program, and so on
  • Individual assignments may have objectives that connect only tangentially with outcomes, even when the rest of the course does not address them
  • Outcomes may be developed and measured only partially in some courses; a single outcome is often measured across multiple courses


Myth 5

Assessment tools must be validated before they are used

The Reality

  • Externally validated tools ignore nuanced differences between and among disciplines
  • “Home-grown” tools can reflect institutional and departmental values and can be validated over time
  • The value is in the process: a tool becomes validated as it is used continuously and revised over time


Myth 6

There is no valid way to assess abstract qualities like critical thinking

The Reality

  • Commercial instruments may be useful if they align with institutional outcomes or values
  • The AAC&U VALUE rubrics might be a starting point
  • Institutions can develop rubrics that cross disciplines while still respecting disciplinary differences by aiming at core elements, the “deep learning” that students carry with them
  • Graduate Candidacy and Defense exam rubrics work very well


Myth 7

Once you’ve defined outcomes and mapped them through the curriculum, you can be relatively sure that students will develop them

The Reality

  • Taking a course doesn’t guarantee learning
  • Students must demonstrate the achievement of the learning outcome
  • Assessing formative learning is as important as assessing summative learning


Myth 8

Programs only need to assess a final test or capstone course

The Reality

  • A single final test or capstone is “high stakes” for both the student and the institution
  • Challenging to measure all outcomes at once
  • No opportunity for mid-course corrections


Myth 9

Assessment is just another sneaky way of evaluating faculty

The Reality

  • Assessment results should not be used to evaluate individual faculty
  • Participating in the assessment process can be a job expectation and therefore be evaluated, but the results of an assessment should not be


Myth 10

Assessment is unnecessary busy work

The Reality

  • Assessment emphasizes improving student learning and not completing reports
  • Assessment should connect to existing program processes and structures
  • The best approach is organic and local


Myth 11

Technology makes it all easier; all we need for good assessment is a good data management system

The Reality

  • Different disciplines require different strategies
  • Technology should not drive assessment and reporting methodologies
  • Technology cannot make judgments
  • Disciplinary learning experts still need to analyze the data and plan improvements
  • Data structures may limit intuitive judgments


Myth 12

Responsibility for assessing general education or institutional outcomes belongs to specific service departments, such as the Institutional Assessment Office

The Reality

  • Institutional outcomes are an institutional responsibility, and programmatic outcomes belong solely to the programs
  • Programs assess what graduates can do later in their careers in their respective fields


Myth 13

Assessment is easy and/or involves no extra work

The Reality

  • Assessment is an intuitive process for faculty
  • Assessment involves intentional curriculum planning
  • Assessment requires more coordination than most current curricula
  • Assessment involves public conversations about outcomes and criteria and public analysis of results
  • Assessment may involve both aggregated and individual analyses
  • Conversations can lead to clearer, less anecdotal, more concrete decision-making about curriculum, pedagogy, and resources
  • Results inform external stakeholders of institutional and programmatic progress
  • “Assessment is the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development” – Marchese, in Palomba and Banta, Assessment Essentials, 1999