Student, Context and Instructional Features Assessment
Assessment is frequently thought of only as the process of finding out what students know and are able to do. However, student learning does not take place in a vacuum; it is therefore important also to understand the context within which the learning is taking place, i.e., to assess the demographic, social, economic, technological, and other factors about the world in which the students and the institution find themselves. Another factor that affects students is the nature of the curriculum, and whether instructional methods are aligned with learning outcomes and current research about effective instructional practices.
The following two sections provide some suggestions and resources for assessing student learning as well as the context and instructional features.
Figure 1 provides a diagram showing the types of assessment addressed below.
Figure 1 - Assessment Methods
Assessment of Student Learning
There are many books and other materials on student assessment methods. We have selected two sources to highlight here that will provide you with very useful information and references to other materials.
Rick Stiggins, Director of the Assessment Training Institute, has published a book (Stiggins, 2001) and a set of videos (Professional Development Package: Comprehensive Training Materials for Student-Involved Classroom Assessment) with excellent information. The major limitation of these materials is that they focus on K-12 classrooms so they take a bit of translating to the community college context. However, the general information about types of assessment and his very clear ways of explaining the many assessment concepts that exist make his materials worthwhile.
Rick’s interactive instructional videotapes (each with a small self-instruction booklet) have the following titles:
- Imagine! Assessments that Energize Students
- Creating Sound Classroom Assessments
- Assessing Reasoning in the Classroom
- Common Sense Paper and Pencil Assessments
- Report Card Grading: Strategies and Solutions
- Student-Involved Conferences
- Student-Involved Performance Assessment
In his book, Rick organizes classroom assessment techniques into four basic categories:
- Selected Response Assessment: This is traditionally called objective
testing and typically involves questions such as multiple choice,
true-false, matching and fill-in.
- Essay Assessment: Assessment that elicits brief original written
responses to essay exercises posed by the teacher who then reads the
response and judges quality.
- Performance Assessment: Students involved in activities that require
them to demonstrate mastery of certain performance skills or their ability
to create products that meet certain standards of quality.
- Personal Communications Assessment: Any personal communication between
student and teacher that communicates to the teacher valuable information
about the student's achievement.
Another excellent source of examples and information specific to Classroom Assessment Techniques (CATs) in STEM disciplines at the postsecondary level is the Field-Tested Learning Assessment Guide (FLAG) for Science, Math, Engineering, and Technology Instructors (www.flaguide.org). This website provides excellent self-instructional, web-based modules that introduce classroom assessment techniques of value in STEM courses. Each example was written by a college or university instructor who uses the technique.
We have abstracted descriptions of their techniques to answer the questions: “What is it?” and “Why use it?” We have organized their techniques into Stiggins’ four general categories to facilitate crosswalks between these two resources.
- Selected Response Assessment: This is traditionally called
objective testing and typically involves questions such as multiple choice,
true-false, matching and fill-in.
The FLAG group has examples of multiple choice tests as well as two other
techniques—Conceptual Diagnostic Tests and Student Assessment of Learning
Gains (SALG). Here is information from their website on these techniques.
(Information from their website is similarly provided for the other three
broad categories of student assessment below.)
- Multiple Choice Test
What? Multiple-choice tests are fundamentally recognition tasks
where students must identify the correct response. They can be used to
measure knowledge, skills, abilities, values, thinking skills, etc.
Multiple-choice tests consist of a number of items that pose a question
for which students select an answer from among a number of choices.
Items can also be statements to which students find the best completion.
Why? Multiple choice testing is an efficient and effective way to
assess a wide range of knowledge, skills, attitudes and abilities. When
done well, it allows broad and even deep coverage of content in a
relatively efficient way.
- Conceptual Diagnostic Test
What? A conceptual diagnostic test aims to assess students'
conceptual understanding of key ideas in a discipline, especially those
that are prone to misconceptions. They are discipline-specific. Although
the format typically is multiple-choice, unlike traditional
multiple-choice items the distractors are designed to elicit known
misconceptions.
Why? It is used to assess how well students understand key
concepts and what misconceptions they have in a STEM field prior to,
during, and after instruction.
- Student Assessment of Learning Gains (SALG)
What? The SALG is a web-based instrument consisting of statements
about the degree of "gain" (on a five-point scale) which students
perceive they've made in specific aspects of the class. Instructors can
add, delete, or edit questions. The instrument is administered on-line,
and typically takes 10-15 minutes. A summary of results is instantly
available in both statistical and graphical form.
Why? The SALG instrument can spotlight those elements in the
course that best support student learning and those that need
improvement.
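As an illustration of the kind of "statistical summary" the SALG description mentions (this is not the SALG tool itself, and the course aspects and ratings below are hypothetical), five-point gain ratings can be summarized per item with a mean and a rating distribution:

```python
from collections import Counter
from statistics import mean

# Hypothetical SALG-style data: for each aspect of the course,
# each student reports a perceived gain on a 1-5 scale.
responses = {
    "Understanding of main concepts": [4, 5, 3, 4, 4, 5],
    "Skill in analyzing data":        [2, 3, 3, 2, 4, 3],
}

def summarize(item_scores):
    """Return the mean gain and the distribution of ratings for one item."""
    return {
        "mean_gain": round(mean(item_scores), 2),
        "distribution": dict(sorted(Counter(item_scores).items())),
    }

for item, scores in responses.items():
    s = summarize(scores)
    print(f"{item}: mean gain {s['mean_gain']}, rating counts {s['distribution']}")
```

A real instrument would also render the distribution graphically; the point here is only that a five-point "gain" scale reduces naturally to simple per-item summaries.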
- Essay Assessment: Assessment that elicits brief original written
responses to essay exercises posed by the teacher who then reads the
response and judges quality.
- Minute Paper
What? A Minute Paper is a concise note (taking one minute),
written by students (individually or in groups), that focuses on a short
question presented by the instructor to the class, usually at the end of
a class session.
Why? The Minute Paper provides real-time feedback from a class to
find out if students recognized—or were confused by— the main points of
a class session.
- Weekly Reports
What? Weekly Reports are papers written by students each week, in
which they address three questions: What did I learn this week? What
questions remain unclear? What questions would you ask your students,
if you were the professor, to find out whether they understood the
material?
Why? Weekly Reports provide rapid feedback about what students
think they are learning and what conceptual difficulties they are
experiencing.
- Performance Assessment: Students involved in activities that
require them to demonstrate mastery of certain performance skills or their
ability to create products that meet certain standards of quality.
- Performance Assessment
What? Performance assessments are designed to judge student
abilities to USE specific knowledge and research skills. Most
performance assessments require the student to manipulate equipment to
solve a problem or make an analysis. Rich performance assessments reveal
a variety of problem-solving approaches, thus providing insight into a
student's level of conceptual and procedural knowledge. Scoring rubrics
are typically needed to evaluate performance assessments.
Why? Facts and concepts that can be measured with multiple-choice
tests are fundamental in any undergraduate STEM course. However,
knowledge of methods, procedures and analysis skills are equally
important. Student growth in these latter areas is difficult to
evaluate with conventional multiple-choice examinations. Performance
assessments, used along with more traditional forms of assessment,
provide a more complete picture of student achievement.
- Concept Maps
What? A concept map is a diagram of nodes, each containing
concept labels, which are linked together with directional lines, also
labeled. The concept nodes are arranged in hierarchical levels that move
from general to specific concepts.
Why? Concept maps assess how well students see the "big picture."
They provide a useful and visually appealing way of illustrating
students' conceptual knowledge.
- Mathematical Thinking
What? The Mathematical Thinking Classroom Assessment Techniques
(Math CATs) are designed to promote and assess thinking skills in
mathematics. They help students know what to do when faced with problems
which are not identical to the technical exercises commonly encountered
in mathematics classes. The FLAG website provides five mathematical
thinking assessments focused on: (1) fault finding and fixing;
(2) plausible estimation; (3) creating measures; (4) convincing and
proving; and (5) reasoning from evidence.
Why? The Math CATs offer ways to assess and instill a broad range
of mathematical thinking skills. These skills include: checking results
and correcting mistakes; making plausible estimates of quantities which
are not known; modeling and defining new concepts; judging statements
and creating proof; organizing unsorted data and drawing conclusions.
- Portfolios
What? Student portfolios are a collection of evidence, prepared
by the student and evaluated by the faculty member, to demonstrate
mastery, comprehension, application, and synthesis of a given set of
concepts. In a high quality portfolio, students organize, synthesize,
and clearly describe their achievements and effectively communicate what
they have learned.
Why? Portfolio assessment strategies provide a structure for
long-duration, in-depth assignments. The use of portfolios makes students
responsible for demonstrating mastery of concepts.
- Personal Communications Assessment: Any personal communication
between student and teacher that communicates to the teacher valuable
information about the student's achievement.
- ConcepTests
What? During class, the instructor presents questions about key
concepts, along with several possible answers. Students in the class
indicate by, for example, a show of hands, which answer they think is
correct. If most of the class does not identify the correct answer,
the instructor gives students a short time to try to persuade their
neighbor(s) that their answer is correct. The question is asked a second
time by the instructor to gauge class mastery. Many variations on this
general method exist.
Why? ConcepTests allow the instructor to obtain immediate
feedback on the level of class understanding. Students obtain immediate
practice in using STEM terminology and concepts. Students have an
opportunity to enhance teamwork and communication skills. Instructors
have reported substantial improvements in class attendance and attitude
toward the course.
- Interviews
What? A formal interview consists of a series of well-chosen
questions (and often a set of tasks or problems) designed to elicit a
picture of a student's understanding about a scientific concept or set
of related concepts. The interview may be videotaped or audiotaped for
later analysis.
Why? The interview may be used to assess students’ understanding
for purposes of grading or may be designed to provide the instructor
with feedback about how to improve their teaching and the organization
of their courses.
Resources on Student Assessment
- Field-Tested Learning Assessment Guide (FLAG) for Science, Math, Engineering, and Technology Instructors. www.flaguide.org
- Assessment Training Institute, 317 SW Alder Street, Suite 1200, Portland, OR 97204, Tel: 800-480-3060, Fax: 503-228-3014
- Stiggins, R. (2001). Student-involved classroom assessment.
Upper Saddle River, NJ: Prentice-Hall, Inc. (This book is available in BC’s
Professional Growth Center.)
- Wiggins, G. & McTighe, J. (1998). Understanding by design.
Alexandria, VA: Association for Supervision and Curriculum Development.
The following four books are focused specifically on post-secondary assessment:
- Maki, P. (2004). Assessing for learning: Building a sustainable
commitment across the institution. Sterling, VA: Stylus.
- Walvoord, B. (2004). Assessment clear and simple: A practical guide for
institutions, departments, and general education. San Francisco, CA: Jossey-Bass.
- Schuh, J., Upcraft, L., & Associates. (2001). Assessment practice in
student affairs: An applications manual. San Francisco, CA: Jossey-Bass.
- Walvoord, B. & Anderson, V. (1998). Effective grading: A tool for
learning and assessment. San Francisco, CA: Jossey-Bass.
- Assessment Training Institute. Professional Development Package:
Comprehensive Training Materials for Student-Involved Classroom Assessment
This comes with 7 videos and 3 books. In addition, there are booklets that
match each of the videos. These materials are available in BC’s Professional
Growth Center from Sarah Phinney, Instructional Web Specialist.
Assessment of Context and Instructional Features
This section focuses on methods of assessing (a) the context in which the student learning is occurring and (b) the instructional methods/materials and other program features. These are two categories of factors that affect student learning. (See Student, Context, and Instructional Features Assessment Methods for a diagram showing the relationship of student learning assessment to the context and program features assessment.)
The context is often a factor over which instructors have little control, whereas they have much more control over program features. Context and program features are two separate categories of factors that can differ significantly in how susceptible they are to change. However, we have grouped them together here because the methods of data collection are often similar for the two. Figure 1 above shows which methods are most likely to be used for each of these categories.
Context Assessment. Context assessment refers to gathering data about demographic, social, economic, technological and other factors related to the world in which the students and institution find themselves. It may mean gathering information about needs of employers, career interests of students, social structures of the community, growth projections, and many other factors.
Instructional Features Assessment. Assessing the features of the instructional program may mean looking at the curriculum being used and its match to the learning outcomes, whether instructional methods are aligned with learning outcomes or current research about effective instructional practices. It may mean looking at how programs are scheduled and how courses fit together to make up a program.
There are a wide variety of data-gathering and analysis methods that can be used for context and program features assessment. Here we present six commonly used methods that provide good bang for the buck. These are largely qualitative data-gathering techniques. See the Institutional Research component of the BC website for many fine examples of quantitative data that are already available. These can be used as is, or additional analyses could be done to look at subpopulations of students (see Trend Analysis below).
- Interviews
Interviews are particularly useful for getting the story behind a
participant's experiences. The interviewer can pursue in-depth information
around a topic. Interviews may be useful as follow-up to certain respondents
to questionnaires, e.g., to further investigate their responses. Usually
open-ended questions are asked during interviews. An interview schedule or
protocol is used by the interviewer. It specifies the questions to be asked,
the sequence of the questions, and guidelines for the interviewer to use at
the beginning and end of the interview.
- Focus Groups
Basically, focus groups are interviews, but of 6-10 people at the same time
in the same group who have some similar nature, e.g., similar age group,
status in a program, etc. The members of the focus group are free to talk
with and influence each other in the process of sharing their ideas and
views on the topics being focused on. Focus groups are a powerful means to
evaluate services or test new ideas. One can get a great deal of information
during a focus group session. See Focus Group Methods.
- Questionnaires
Questionnaires present a set of written questions to which everyone in a
sample is asked to respond. They are a way to reach large numbers of people.
They can elicit either qualitative or quantitative information. Before you
start to design your questions, clearly articulate what problem or need is
to be addressed using the information to be gathered by the questions.
Review why you're using the questionnaire and what you hope to accomplish by
it. This provides focus on what information you need and, ultimately, on
what questions should be used.
An excellent resource for developing questionnaires is:
Cox, J. (1996). Your opinion, please! How to build the best
questionnaires in the field of education. Thousand Oaks, CA: Corwin.
(Check out this book with Sarah Phinney.)
Questionnaires are often referred to as surveys. However, the term “survey”
is broader. It can also refer to interviews that are conducted with a
sample of people. Survey research is the term for the general type of
research in which questionnaires or interviews are used as the method of
data collection.
For a very detailed, easy-to-use and informative source about surveys, see:
Fink, A. (ed.) (2003). The survey kit. Thousand Oaks, CA: Sage.
It has 10 small booklets on topics ranging from how to ask survey questions
to how to analyze and report on surveys. It covers both in-person and
self-administered surveys.
- Observations
Sometimes it is much more useful to actually observe a situation than to
have those involved report on it via questionnaires or interviews. Trained
observers make ratings of conditions or events by comparing their perception
of a situation to a pre-specified rating scale. The rating scales usually
have quite detailed written descriptions or pictures that are used to
compare to what the observer sees, hears, touches or in other ways senses.
The observer is usually quite knowledgeable about what they are observing
(e.g., someone who is observing a classroom with a focus on instructional
methods is familiar with a range of teaching techniques). At times
open-ended observation guides are also used so the rater can identify
instances of a certain type of general behavior or event.
Classroom observations are often very valuable. It is often especially
helpful in evaluative inquiry and action research to have a colleague
observe a class. The instructor may ask the observer to watch for particular
actions on the part of the instructor and/or the students. For example, the
instructor may ask the observer to note if he/she has some type of bias in
which students he/she calls on in class or the amount of time spent in
lecture versus interaction with students.
The Reformed Teaching Observation Protocol (RTOP) website has an observation
guide for looking at math/science classroom practice to see how it corresponds
to the latest in research on teaching techniques. There is an observation form
along with video clips of science/math secondary and postsecondary instructors
teaching a lesson. You can rate the lessons and then compare your ratings to
those of the developers of the materials.
The website is: www.ecept.net/purcell/RTOP_full/index.htm
Also, see the following book for more general information:
Wholey, J., Hatry, H., & Newcomer, K. (eds). (2004). Handbook of practical
program evaluation. San Francisco, CA: Jossey-Bass.
- Journaling
Journaling is a reflective activity. When journaling is used in evaluation,
participants in an inquiry process are asked to write in a journal at certain
intervals to capture their current thinking, behaviors, and/or feelings about
designated topics. They may be given a specific set of questions to guide their
journaling. The idea is that subtle changes occur in our thinking, behavior, and
feelings as we are learning and changing. The journal allows one to capture
these subtle changes.
Typically, however, the journal is read only by the person who keeps that
journal. At certain points, the journaller is asked to review his/her journal
and provide a summary related to the topic of interest and, if desired, provide
some excerpts from the journal. It is important that the journaller knows that
the journal will not be seen by others so he/she can be very honest and
self-revealing. Then the journaller has the option to summarize his/her
reflections at the level of revelation that seems appropriate when reporting for
the evaluation process.
- Trend Analysis
Trend analysis refers to looking at the same or similar data collected over a
particular time period. The analysis is designed to look at shifts over time.
Data collected about student success, student retention, differences by ethnic
group, gender, age, and other variables is often very useful.
Quantitative data is most easily analyzed for trends.
Bakersfield College’s Institutional Research office provides much data on its
website (www.bakersfieldcollege.edu/irp/IRP_Home.asp) that can be, or has been,
analyzed to look at trends among students and student subpopulations.
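As a minimal sketch of what a trend analysis might look like computationally (the terms, retention rates, and subgroup below are hypothetical, not actual Bakersfield College data), one can line up the same measure across successive terms and compute the term-over-term shifts:

```python
# Hypothetical retention rates (percent) by term, for all students
# and for one hypothetical subpopulation (students age 25+).
retention = {
    "Fall 2001": {"all": 68.0, "age 25+": 61.0},
    "Fall 2002": {"all": 70.5, "age 25+": 63.5},
    "Fall 2003": {"all": 72.0, "age 25+": 67.0},
}

def term_over_term_change(series):
    """Change from each term to the next, in percentage points."""
    return [round(later - earlier, 1) for earlier, later in zip(series, series[1:])]

for group in ("all", "age 25+"):
    # Dict insertion order preserves the chronological sequence of terms.
    rates = [retention[term][group] for term in retention]
    print(f"{group}: rates {rates}, shifts {term_over_term_change(rates)}")
```

Comparing the shift series for the whole student body against those for a subgroup is one simple way to see whether a subpopulation is moving with, ahead of, or behind the overall trend.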