Assessing Student Learning in Higher Education

Types of Assessment Data and Assessments

These definitions are paired for emphasis and contrast. Skim them now and refer back to them as needed.

Evidence of program and institutional outcomes performance. Quantitative or qualitative, direct or indirect data that provide information about the extent to which an institution meets the goals and outcomes it has established and publicized to its stakeholders.

Direct data. Data that measure the value of interest itself. For instance, a math test directly measures a student's learning in math by defining criteria and a standard, then having the student analyze a problem.

Indirect data. Data that measure a variable related to the intended value. For instance, a person's math skills may be indirectly measured through an employer's questionnaire asking about the computational skills of graduating students.

Qualitative data. Data collected as descriptive information, such as a narrative or portfolio. These types of data, often collected through open-ended questions, feedback surveys, or summary reports, are more difficult to compare, reproduce, and generalize. They are bulky to store and to report; however, they are often the most valuable and insightful data generated, frequently providing potential solutions or modifications in the form of feedback.

Quantitative data. Data collected as numerical or statistical values. These data use actual numbers (scores, rates, etc.) to express quantities of a variable. Qualitative data, such as opinions, can be displayed as numerical data by using Likert-scaled responses, which assign a numerical value to each response (e.g., 5 = strongly agree to 1 = strongly disagree). These data are easy to store and manage, and can be generalized and reproduced, but they have limited value due to the rigidity of the responses and must be carefully constructed to be valid.
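As a small illustration of the Likert coding described above, responses can be mapped to numbers and summarized. The specific labels and the 1–5 mapping here are illustrative assumptions, not a prescribed scale:

```python
# Map Likert response labels to numeric values (5 = strongly agree ... 1 = strongly disagree).
# The labels and scale here are illustrative assumptions.
LIKERT_SCALE = {
    "strongly agree": 5,
    "agree": 4,
    "neutral": 3,
    "disagree": 2,
    "strongly disagree": 1,
}

def code_responses(responses):
    """Convert a list of Likert response labels to their numeric values."""
    return [LIKERT_SCALE[r.lower()] for r in responses]

def mean_score(responses):
    """Average numeric score for a set of responses."""
    values = code_responses(responses)
    return sum(values) / len(values)

survey = ["Agree", "Strongly agree", "Neutral", "Agree"]
print(code_responses(survey))  # [4, 5, 3, 4]
print(mean_score(survey))      # 4.0
```

Note that the resulting mean treats the scale as if the distance between "agree" and "neutral" equals the distance between "strongly agree" and "agree", which is one source of the rigidity mentioned above.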

Formative assessment. Formative assessment generates useful feedback for development and improvement. The purpose is to provide an opportunity to perform and receive guidance (such as in-class assignments, quizzes, discussions, lab activities, etc.) that will improve or shape a final performance. This stands in contrast to summative assessment, where the final result is a verdict and the participant may never receive feedback for improvement, such as on a standardized test, licensing exam, or final exam.


Summative assessment. A summative assessment is a final determination of knowledge, skills, and abilities. This could be exemplified by exit or licensing exams, senior recitals, or any final evaluation that is not created to provide feedback for improvement but is used for final judgments. Some midterm exams may fit in this category if the exam is the last opportunity the student has to be evaluated on specific material.

Criterion-based assessments. Assessments evaluated or scored using a set of criteria to appraise the work. Criterion-referenced evaluation is based on proficiency, not on subjective measures such as improvement.

Norm-referenced assessment. An individual's performance is compared to that of another individual, or to the same individual's improvement over time. Individuals are commonly ranked to determine a median or average. This technique addresses overall mastery but provides little detail about specific skills.

Embedded assessment. Embedded assessment occurs within the regular class or curricular activity. Class assignments linked to student learning outcomes through primary trait analysis serve as both grading and assessment instruments. Individual questions on exams can be embedded in numerous classes to provide departmental, program, or institutional assessment information. An additional benefit of embedded assessment is immediate feedback on the pedagogy and on student needs.

Standardized assessment. Assessments created, tested, and usually sold by an educational testing company (e.g., the GRE, SAT, and ACT) for broad public use and data comparison; usually scored normatively.

Homegrown or local assessment. This type of assessment is developed and validated for a specific purpose, course, or function and is usually criterion-referenced to promote validity.

The next section will discuss some of the advantages and disadvantages of standardized assessments as compared to local or homegrown assessments.


Resources and Links

Beyond Confusion: An Assessment Glossary, by Leskes (AAC&U)

Report from the Project on Accreditation and Assessment: Peter Ewell's standard definitions, described by John Nichols

Norm- and Criterion-Referenced Testing, Bond (1996)

Janet Fulks
Assessing Student Learning in Community Colleges (2004), Bakersfield College