Assessing Student Learning in Community Colleges


Definition of Terms

Abilities. This level of accomplishment relates to the integration of knowledge, skills, and attitudes in complex ways that require multiple elements of learning.

 

Active learning. Active learning is an approach in which students participate in learning beyond passively absorbing knowledge, as in a didactic lecture. Actively learning students are involved in solving problems, applying knowledge, working with other students, and engaging the material to construct their own understanding and use of the information. In active learning methods, deeper thinking and analysis are the responsibility of the student, and the faculty member acts as a coach or facilitator to achieve specified outcomes. Examples of active learning include inquiry-based learning, case-study methods, project development, modeling, collaborative learning, problem-based learning, buzz groups or brainstorming, and simulations.

Assessment. Assessment refers to a process in which a faculty member, department, program, or institution generates and collects data to evaluate processes, courses, and programs, with the ultimate purpose of evaluating overall educational quality and improving student learning. The term refers to any method used to gather evidence and evaluate quality and may include both quantitative and qualitative data.

Attitudinal outcomes. Outcomes related to changes in beliefs or development of certain values.

Authentic assessment. Assessment that evaluates students' ability to use their knowledge and to perform tasks that approximate those found in the workplace or other venues outside the classroom setting.

Classroom assessment techniques. Classroom assessment techniques (CATs) are “simple tools for collecting data on student learning in order to improve it” (Angelo & Cross, 1993, p. 26). CATs are short, flexible classroom techniques that provide rapid, informative feedback to improve classroom dynamics by monitoring learning, from the student’s perspective, throughout the semester. Data from CATs are evaluated and used to facilitate continuous modification and improvement in the classroom.

Classroom-based assessment. Classroom-based assessment is the formative and summative evaluation of student learning within a single course. This assessment involves evaluating the curriculum as designed, taught, and learned. It involves the collection of data aimed at measuring successful learning in the individual course and improving instruction, with the goal of improving learning.

 

Collegiality. Mutually respectful discussion that leads to participative decision making.

 

Competencies. Competencies refer to the specific level of performance that students are expected to master.

 

Criterion-based assessments. Assessment evaluated or scored using a set of criteria to appraise the work. Criterion-referenced evaluation is based on proficiency, not on subjective measures such as improvement.

Culture of evidence. The term culture of evidence refers to an institutional culture that supports and integrates research, data analysis, evaluation, and planned change as a result of assessment (Pacheco, 1999). This culture is marked by the generation and valuing of quantitative and qualitative data providing accountability for institutionally defined outcomes (Wright, 1999).

 

Direct data. Data that directly measure the intended value. For instance, a math test directly measures a student's learning in math. (Contrast with indirect data below.)

 

Embedded assessment. Embedded assessment occurs within the regular class or curricular activity. Class assignments linked to student learning outcomes through primary trait analysis serve as both grading and assessment instruments. Individual questions on exams can be embedded in numerous classes to provide departmental, program, or institutional assessment information. An additional benefit of embedded assessment is immediate feedback on the pedagogy and student needs.

 

Evidence of program and institutional performance. Quantitative or qualitative, direct or indirect data that provides information concerning the extent to which an institution meets the goals it has established and publicized to its stakeholders.

 

Formative assessment. Formative assessment generates useful feedback for development and improvement. The purpose is to provide an opportunity to perform and receive guidance (such as in-class assignments, quizzes, discussion, lab activities, etc.) that will improve or shape a final performance. This stands in contrast to summative assessment, where the final result is a verdict and the participant may never receive feedback for improvement, such as on a standardized test, licensing exam, or final exam.

 

Homegrown or local assessment. This type of assessment is developed and validated for a specific purpose, course, or function and is usually criterion-referenced to promote validity.

 

Indirect data. Data that measure a variable related to the intended value. For instance, a person's math skills may be indirectly measured through an employer's questionnaire asking about the computational skills of graduating students.

 

Information competency. The ability to access, analyze, and determine the reliability of information on a given topic.

 

Knowledge. Particular areas of disciplinary or professional content that students can recall, relate, and appropriately deploy.

 

Learning. Particular levels of knowledge, skills, and abilities that a student has attained at the end of engagement in a particular set of collegiate experiences.

 

Likert scale. The Likert scale assigns a numerical value to responses in order to quantify subjective data. The responses usually lie along a continuum (e.g., strongly disagree, disagree, neutral, agree, or strongly agree) and are assigned values such as 1 through 5. This allows easy manipulation of the data, but attention must be given to the validity and reliability of the tool.
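
For illustration, here is a minimal sketch in Python of how Likert responses might be coded numerically and summarized; the response labels, the 1-5 coding, and the sample data are all hypothetical:

```python
from statistics import mean, stdev

# Hypothetical 1-5 coding for a five-point Likert item
SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

# Hypothetical responses to a single survey item
responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]

scores = [SCALE[r] for r in responses]
print(f"n={len(scores)}  mean={mean(scores):.2f}  sd={stdev(scores):.2f}")
```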

 

Metacognition. Metacognition is the act of thinking about one's own thinking and regulating one's own learning. It involves critical analysis of how decisions are made and how vital material is consciously learned and acted upon.

 

Norm-referenced assessment. In norm-referenced assessment, an individual's performance is compared to that of other individuals. Individuals are commonly ranked to determine a median or average. This technique addresses overall mastery, but provides little detail about specific skills. It can also be used to track an individual's own improvement over time.
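
As a minimal sketch (hypothetical scores; Python), ranking a cohort to find the median and each individual's percentile rank might look like this:

```python
from statistics import median

# Hypothetical exam scores for a class of eight students
scores = [62, 74, 88, 91, 55, 79, 84, 70]

def percentile_rank(score, cohort):
    """Percent of the cohort scoring below the given score."""
    below = sum(1 for s in cohort if s < score)
    return 100 * below / len(cohort)

print(f"median = {median(scores)}")
for s in sorted(scores, reverse=True):
    print(f"score {s}: percentile rank {percentile_rank(s, scores):.1f}")
```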

 

Outcomes. Learning outcomes are defined in higher education assessment practice as the changes that occur in an individual student as a result of attendance at a higher education institution.

 

Pedagogy. Pedagogy is the art and science of how something is taught and how students learn it. Pedagogy includes how the teaching occurs, the approach to teaching and learning, the way the content is delivered, and what the students learn as a result of the process. In some cases pedagogy is applied to children and andragogy to adults, but pedagogy is commonly used in reference to any aspect of teaching and learning in any classroom.

 

Primary trait analysis. Primary trait analysis (PTA) is the process of identifying the major traits or characteristics expected in student work. After the primary traits are identified, specific criteria with performance standards are defined for each trait.

 

Qualitative data. Data collected as descriptive information, such as a narrative or portfolio. These types of data, often collected through open-ended questions, feedback surveys, or summary reports, are more difficult to compare, reproduce, and generalize. They are bulky to store and to report; however, they are often the most valuable and insightful data generated, often providing potential solutions or modifications in the form of feedback.

 

Quantitative data. Data collected as numerical or statistical values. These data use actual numbers (scores, rates, etc.) to express quantities of a variable. Qualitative data, such as opinions, can be displayed as numerical data by using Likert-scaled responses, which assign a numerical value to each response (e.g., 5 = strongly agree to 1 = strongly disagree). These data are easy to store and manage and can be generalized and reproduced, but they have limited value due to the rigidity of the responses, and the instrument must be carefully constructed to be valid.

Reliability. Reliability refers to the reproducibility of results over time, or the consistency of a measure when an assessment tool is used multiple times. In other words, if the same person took the test five times, the data should be consistent. This refers not only to reproducible results from the same participant, but also to repeated scoring by the same or multiple evaluators.
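
One common way to quantify test-retest reliability is the correlation between two administrations of the same instrument. A minimal sketch with hypothetical scores (Python 3.10+ is assumed for statistics.correlation):

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical scores for the same five students on two
# administrations of the same test, several weeks apart
first_attempt = [78, 85, 62, 90, 71]
second_attempt = [80, 83, 65, 92, 69]

# A high correlation between administrations suggests consistent,
# reproducible results (test-retest reliability).
r = correlation(first_attempt, second_attempt)
print(f"test-retest correlation: r = {r:.2f}")
```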

 

Rubric. A rubric is a set of criteria used to determine scoring for an assignment, performance, or product. Rubrics may be holistic, providing general guidance, or analytical, assigning specific scoring point values.
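
A minimal sketch of an analytical rubric in Python; the criteria, point values, and scores are hypothetical:

```python
# Hypothetical analytic rubric for an essay assignment:
# each criterion (primary trait) carries its own point value.
rubric = {
    "thesis": {"max": 4, "description": "Clear, arguable thesis"},
    "evidence": {"max": 4, "description": "Relevant supporting evidence"},
    "organization": {"max": 4, "description": "Logical structure and flow"},
    "mechanics": {"max": 4, "description": "Grammar, spelling, citations"},
}

# Hypothetical scores an evaluator assigned to one student's essay
scores = {"thesis": 4, "evidence": 3, "organization": 3, "mechanics": 4}

total = sum(scores.values())
possible = sum(criterion["max"] for criterion in rubric.values())
print(f"analytic rubric score: {total}/{possible}")
```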

 

Skills. The learned capacity to do something.

 

Standardized assessment. Assessments created, tested, and usually sold by an educational testing company (e.g., the GRE, SAT, or ACT) for broad public use and data comparison; usually scored normatively.

 

Student learning outcomes (SLO). Student learning outcomes are the specific, measurable goals and results that are expected subsequent to a learning experience. These outcomes may involve knowledge (cognitive), skills (behavioral), or attitudes (affective) that provide evidence that learning has occurred as a result of a specified course, program, activity, or process.

 

Summative assessment. A summative assessment is a final determination of knowledge, skills, and abilities. This could be exemplified by exit or licensing exams, senior recitals, or any final evaluation that is not created to provide feedback for improvement but is used for final judgments. Some midterm exams may fit in this category if they are the last time the student has an opportunity to be evaluated on specific material.

 

Validity. An indication that an assessment method accurately measures what it is designed to measure, with limited effect from extraneous data or variables. To some extent this must also relate to the integrity of inferences made from the data.

 


Resources and Links

Active Learning Links
Bonwell's Active Learning Website

San Diego State University Active Learning Website

University of Minnesota Examples of Active Learning

 

Research Terminology and Methods
 

Trochim, W.M. The Research Methods Knowledge Base.

Astin, A.W. (1993). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education.

Angelo, T.A., & Cross, K.P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco: Jossey-Bass.


Pacheco, 1999. Culture of evidence.


Janet Fulks
Assessing Student Learning in Community Colleges (2004), Bakersfield College
jfulks@bakersfieldcollege.edu
07/12/2004