Affective outcomes. Outcomes related to changes in beliefs or the development of certain values.
Authentic assessment. Assessment that evaluates the
student’s ability to use their knowledge and to perform tasks that
approximate those found in the workplace or other venues outside of the
classroom.

Classroom assessment techniques. Classroom assessment techniques (CATs) are
“simple tools for collecting data on student learning in order to
improve it” (Angelo & Cross, 1993, p. 26). CATs are short, flexible,
classroom techniques that provide rapid, informative feedback to improve
classroom dynamics by monitoring learning, from the student’s
perspective, throughout the semester. Data from CATs are evaluated and
used to facilitate continuous modifications and improvement in the
classroom.

Classroom-based assessment. Classroom-based assessment is the formative and summative evaluation of
student learning within a single course. This assessment involves
evaluating the curriculum as designed, taught, and learned. It involves
the collection of data aimed at measuring successful learning in the
individual course and improving instruction, with the goal of improving
student learning.

Collegiality. Mutually respectful discussion that
leads to participative decision making.
Competencies. Competencies refer to the
specific levels of performance that students are expected to master.
Criterion-referenced assessment. Assessment evaluated or scored using a set of criteria to appraise or
evaluate work. Criterion-referenced evaluation is based on proficiency,
not on subjective measures such as improvement.
Culture of evidence. The
term culture of evidence refers to an institutional culture that
supports and integrates research, data analysis, evaluation, and planned
change as a result of assessment (Pacheco, 1999). This culture is marked
by the generation and valuing of quantitative and qualitative data
providing accountability for institutionally defined outcomes (Wright).
Direct data. Data that measure the exact value. For instance, a math test directly
measures a student's learning in math. (Contrast with indirect data.)
Embedded assessment. Embedded assessment occurs within the regular class or
curricular activity. Class assignments linked to student learning outcomes
through primary trait analysis serve as grading and assessment
instruments. Individual questions on exams can be embedded in numerous
classes to provide departmental, program, or institutional assessment
information. An additional benefit of embedded assessment is immediate
feedback on pedagogy and student needs.
Evidence of program and institutional performance.
Quantitative or qualitative, direct or indirect data that provide
information concerning the extent to which an institution meets the goals
it has established and publicized to its stakeholders.
Formative assessment. Formative
assessment generates useful feedback for development and improvement.
The purpose is to provide an opportunity to perform and receive guidance
(such as in-class assignments, quizzes, discussions, lab activities, etc.)
that will improve or shape a final performance.
This stands in contrast to summative assessment, where the final result is
a verdict and the participant may never receive feedback for improvement,
such as on a standardized test, licensing exam, or final exam.
Homegrown (local) assessment. This type of assessment is developed and validated for a specific purpose,
course, or function and is usually criterion-referenced to promote
validity.
Indirect data. Data that measure a variable related to the intended value. For instance,
a person's math skills may be indirectly measured through an employer's
questionnaire asking about the computational skills of graduating
students.

Information competency. The ability to
access, analyze, and determine the reliability of information on a
given topic.
Knowledge. Particular areas of disciplinary or professional content that students
can recall, relate, and appropriately deploy.
Learning outcomes. Particular levels of knowledge, skills, and abilities that a student has
attained at the end of engagement in a particular set of collegiate
experiences.
Likert scale. The Likert scale assigns a numerical value to responses in order
to quantify subjective data. The responses usually lie along a continuum,
such as strongly disagree, disagree, neutral, agree, and strongly agree,
and are assigned values such as 1 to 5. This allows easy manipulation of
the data, but attention must be given to the validity and reliability of
the tool.
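The coding described above can be sketched in a few lines: response labels are mapped to the values 1 to 5 and then averaged. The survey responses and variable names below are invented for illustration.

```python
# Hypothetical sketch of Likert-scale coding. The label-to-value mapping
# follows the 1-5 continuum in the entry above; the responses are made up.

LIKERT_SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]

# Convert each subjective response into its numerical value.
scores = [LIKERT_SCALE[r] for r in responses]
mean_score = sum(scores) / len(scores)

print(scores)      # [4, 5, 3, 4, 2]
print(mean_score)  # 3.6
```

Once coded, the data are easy to manipulate, but, as the entry notes, the numbers are only as valid and reliable as the instrument that produced them.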
Metacognition. Metacognition is the act of thinking about
one's own thinking and regulating one's own learning. It involves critical
analysis of how decisions are made and how vital material is consciously
learned and acted upon.
Norm-referenced assessment. In norm-referenced assessment, an individual's performance
is compared with that of other individuals. Individuals are commonly ranked to
determine a median or average. This technique addresses overall mastery
but provides little detail about specific skills. It can also be used to
track an individual's own improvement over time.
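The ranking described above can be sketched as follows: each score is compared with the group, and the group's median serves as the norm. All names and scores below are hypothetical.

```python
# Hypothetical sketch of norm-referenced ranking: individuals are ranked
# against one another and compared to the group median. Data are invented.
import statistics

scores = {"Ana": 71, "Ben": 88, "Chris": 64, "Dana": 88, "Eli": 75}

median = statistics.median(scores.values())        # the norm the group sets
ranked = sorted(scores, key=scores.get, reverse=True)

for rank, name in enumerate(ranked, start=1):
    relation = "above" if scores[name] > median else "at or below"
    print(f"{rank}. {name}: {scores[name]} ({relation} the median of {median})")
```

Note that the ranking says where each person stands relative to the group, but, as the entry observes, nothing about which specific skills they have or lack.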
Outcomes. Learning outcomes are defined in higher education assessment practice as something that happens to an individual student as a result of
attendance at a higher education institution.
Pedagogy. Pedagogy is the art and science of how something
is taught and how students learn it. Pedagogy includes how the teaching
occurs, the approach to teaching and learning, the way the content is
delivered and what the students learn as a result of the process. In some
cases, pedagogy is applied to children and andragogy to adults, but
pedagogy is commonly used in reference to any aspect of teaching and
learning in any classroom.
Primary Trait Analysis
(PTA) is the process of identifying major traits or characteristics that
are expected in student work. After the primary traits are identified,
specific criteria with performance standards are defined for each trait.
Qualitative data. Data collected as descriptive information, such as a narrative or
portfolio. These types of data, often collected through open-ended questions,
feedback surveys, or summary reports, are more difficult to compare,
reproduce, and generalize. They are bulky to store and to report; however,
they are often the most valuable and insightful data generated, often providing
potential solutions or modifications in the form of feedback.
Quantitative data. Data collected as numerical or statistical values. These data use actual
numbers (scores, rates, etc.) to express quantities of a variable.
Qualitative data, such as opinions, can be displayed as numerical data
through Likert-scaled responses, which assign a numerical value to each
response (e.g., 5 = strongly agree to 1 = strongly disagree). These data are
easy to store and manage; they can be generalized and reproduced, but they have
limited value due to the rigidity of the responses, and the instrument must be
carefully constructed to be valid.
Reliability. Reliability refers to the reproducibility of results over time, or a measure of the
consistency when an assessment tool is used multiple times. In other words, if the same
person took the test five times, the results should be consistent. This
refers not only to reproducible results from the same participant, but
also to repeated scoring by the same or multiple evaluators.
Rubric. A rubric is a
set of criteria used to determine scoring for an assignment, performance,
or product. Rubrics may be holistic, providing general guidance, or
analytical, assigning specific scoring point values.
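An analytic rubric of the kind just described can be sketched as a data structure: each primary trait carries its own criterion and point value, and the per-trait scores sum to an overall result. The traits, criteria, and scores below are invented for illustration.

```python
# Hypothetical sketch of an analytic rubric built from primary traits.
# Each trait has its own criterion and maximum point value; an
# evaluator scores each trait separately. All content is invented.

rubric = {
    "thesis":       {"max_points": 4, "criterion": "clear, arguable claim"},
    "evidence":     {"max_points": 4, "criterion": "relevant, cited support"},
    "organization": {"max_points": 2, "criterion": "logical structure"},
}

# One evaluator's per-trait scores for a single student essay.
scores = {"thesis": 3, "evidence": 4, "organization": 2}

total = sum(scores.values())
possible = sum(t["max_points"] for t in rubric.values())
print(f"analytic score: {total}/{possible}")  # analytic score: 9/10
```

A holistic rubric would instead assign one overall judgment; the analytic form keeps the per-trait detail, which is what makes primary trait analysis usable for both grading and assessment.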
Skill. A learned capacity to do something.
Standardized assessment. Assessments created, tested, and usually sold by an educational testing
company (e.g., the GRE, SAT, or ACT) for broad public use and data comparison,
usually scored normatively.
Student Learning Outcomes (SLO). Student learning outcomes
are the specific measurable goals and results that are expected subsequent
to a learning experience. These outcomes may involve knowledge
(cognitive), skills (behavioral), or attitudes (affective) that provide
evidence that learning has occurred as a result of a specified course,
program activity, or process.
Summative assessment. A summative
assessment is a final determination of knowledge, skills, and
abilities. This could be exemplified by exit or licensing exams,
senior recitals, or any final evaluation that is not created to provide
feedback for improvement but is used for final judgments. Some midterm
exams may fit in this category if they are the last time the student has an
opportunity to be evaluated on specific material.
Validity. An indication that an assessment
method accurately measures what it is designed to measure with limited
effect from extraneous data or variables. To some extent this must also
relate to the integrity of inferences made from the data.