Indirect measures provide evidence that suggests but does not demonstrate student learning. These methods are typically less time- and labor-intensive than direct measures. Indirect methods include several types of surveys, exit interviews, and curriculum analysis. Each is described below, along with the trade-off of benefits and disadvantages inherent in each method.
The surveys listed below may be either locally or commercially developed. Existing college-wide surveys (such as the Graduating Senior Survey) may already be collecting information that directly relates to intended program-level learning outcomes. Sometimes, additional local questions can be added to university-wide surveys such as the National Survey of Student Engagement (NSSE), avoiding the need for a separate data collection. Staff members from the Office of Assessment can provide information about other university-wide surveys and the possibility of adding local questions.
While several of the sections below describe surveys or interviews with individual students, it is important to note that in certain circumstances the same information may be collected by focus groups. It may be easier, for example, to conduct exit interviews in groups rather than with individuals. When the particular issues to be addressed by an assessment are not yet well articulated, focus groups of students, alumni, or employers may be used to crystallize the salient concepts. These can then be used as the basis for subsequent surveys. On the other hand, when an assessment begins with a survey which yields results that are difficult to interpret, follow-up focus groups may be helpful to elucidate the findings. Or for very small programs, focus groups may simply be more feasible to conduct than surveys.
Students have a sense of their own competence. Student self-efficacy surveys ask students to rate their perception of their own achievement of particular learning outcomes. Research shows a significant, although imperfect, correlation between actual and perceived competence. Gender and demographic differences in the accuracy of self-efficacy can be problematic, however. For example, certain groups of students may rate their quantitative skills at a level below that indicated by standardized tests. Also, unless answers are anonymous, students are likely to overrate their abilities; the same is true if students perceive they can be penalized for their answers.
Benefits include the inexpensive nature of the tool: a relatively simple survey can be constructed that asks students to rate their competence in different areas. Pre- and post-test assessment can also be conducted to examine changes both in self-efficacy and in the perceived importance of a topical area. Another benefit is that all learning outcomes can be assessed simultaneously, in a single survey.
Disadvantages include an imperfect relationship between self-efficacy judgments and actual competence; student self-reporting may not always be congruent with their actual level of achievement.
Because student satisfaction with a program or course is not itself a learning outcome, satisfaction may or may not relate to outcomes assessment; it may, however, correlate with other variables of interest. For this reason, the student satisfaction survey is a common component of assessment systems. Such surveys may consider the extent to which students are satisfied with their interactions with faculty members, with their introductory or advanced courses, or with their preparedness upon completing the program. Individual course evaluations are not program-level student satisfaction surveys: they typically do not address the program as a whole and should not be used in program assessment.
Benefits include the relative simplicity of administering this type of survey. Standardized, commercial surveys are available that provide comparison data from other institutions. Institutional satisfaction surveys may also provide information for the program without requiring additional surveying.
Disadvantages include the difficulty of designing questions appropriately and, again, the hazard of equating student satisfaction with achievement of learning outcomes.
A good example of a student satisfaction survey is the Noel-Levitz Student Satisfaction Inventory.
If learning outcomes include elements of appreciation or understanding of particular issues of concern, student attitudes can be measured as part of the assessment program. For example, informed appreciation for the arts may be assessed using an attitudinal survey. Another example may be students' empathy toward disadvantaged groups, which can be measured in an attitudinal survey. A further example would be attitudes toward learning or toward the profession. Both standardized tests and locally designed surveys can be used for this purpose, although the responses are very sensitive to the wording of the questions.
Benefits include the simplicity of administering such a survey.
Disadvantages include the challenge of determining student attitudes in a reliable manner.
Rather than assess students' attitudes, self-efficacy, or satisfaction through the use of surveys, students may be interviewed directly in individual or focus-group settings. Such interviews allow a more thorough, free-form exploration of the issues through the use of follow-up questions that depend on students' responses.
Benefits include the depth and richness of information that can be obtained through interviews and focus groups. Focus groups may be preferred for programs with small numbers of students.
Disadvantages include the time- and labor-intensive nature of conducting interviews and of analyzing the responses for comparison across participants. Also, student anonymity must be protected, and stray comments about individual faculty members must not become part of the assessment data.
The perspective that students have on their education may change significantly after time away from school. Some learning outcomes lend themselves more naturally to questions posed some time after graduation. For example, an outcome involving preparation for professional practice can best be assessed, albeit indirectly, after the student has graduated and been employed in the job market.
Benefits include the real-world perspective that can be obtained from alumni.
Disadvantages include the difficulty of finding and reaching alumni, the possibly self-selective nature of those who choose to respond, and the relatively narrow scope of learning outcomes that can be assessed in this manner.
Some of the knowledge and skills that students acquire may be evident to the employers who rely on them. Thus, some accrediting bodies either require or encourage programs to conduct assessment through the major employers of their students. Such assessments may range from information as basic as hiring data, to site-supervisor evaluations, to detailed surveys of the characteristics that employers perceive in program graduates. Advisory boards, anecdotal information, and placement data may be used in place of formal surveys.
Benefits of this tool include the real-world perspective that employers might be able to provide.
Disadvantages include the potentially limited ability of employers to assess their employees in terms of specific learning outcomes, and the difficulty of isolating graduates of a particular program from other employees. This tool also depends on surveying employers who have hired sufficient numbers of graduates; in large corporations, it may even be difficult to find the right person to contact for this information. In addition, former students may object to having their employers surveyed in this way.
Historically, accrediting bodies have required institutions or programs to document the information that students are receiving and the content that the program delivers in its courses. Documentation can be obtained from the curriculum and syllabi of individual courses.
With the focus on learning-outcomes assessment, programs are required to show that students actually exhibit the skills and qualities that the program wishes to develop. However, a curriculum analysis may still be relevant and is often included in accreditation documents. For example, some accrediting bodies may require the documentation of the number of hours devoted to a particular subject in the curriculum.
Benefits include the relatively straightforward task of analyzing the content of the curriculum, for which only course syllabi may be needed.
Disadvantages include the potential gap between documenting that material was delivered and demonstrating that specific learning outcomes were achieved.
Last Modified: October 19, 2012