The intent of this glossary is to provide a common vocabulary for the terms frequently used in the context of college outcome assessment. It is a work in progress by the Assessment Committee. The definitions provided in this list were adapted from various sources.
Academic Course Map: The specific order of courses a student needs to take each semester to finish a degree. At CCA, there are two-, three-, and four-year academic course maps.
Accreditation:
1. An outward-focused activity in which an institution reports on its financial health; physical and technological infrastructure; staff and faculty capacities; and educational effectiveness. The purpose of accreditation is to provide public accountability to external audiences. The accrediting body of CCA is the Higher Learning Commission (HLC).
2. Accreditation is the recognition that an institution maintains standards requisite for its graduates to gain admission to other reputable institutions of higher learning or to achieve credentials for professional practice. The goal of accreditation is to ensure that education provided by institutions of higher education meets acceptable levels of quality.
Alignment: The process of intentionally connecting course, program, general education, and institutional learning outcomes. At the program level, alignment represents the ideal cohesive relationship between curriculum and outcomes. Checking alignment allows program faculty to determine whether the curriculum provides sufficient and appropriately sequenced opportunities for students to develop the knowledge, skills, and dispositions identified in SLOs.
Anchor (also called Exemplar): A sample of student work (product or performance) used to illustrate each level of a scoring rubric; critical for training scorers of performances, since it serves as a standard against which other student work is compared.
Assessment: One or more processes that identify, collect, and prepare data to evaluate the attainment of student outcomes. Effective assessment uses relevant direct, indirect, quantitative and qualitative measures as appropriate to the outcome being measured. Appropriate sampling methods may be used as part of an assessment process.
Assessment Artifact: Product or performance collected from students as evidence of learning achievement.
Assessment Cycle: An assessment cycle includes a period for planning and submission of an assessment plan, a period for data collection, and lastly, data analysis and the preparation of an assessment report.
Assessment Instrument: The tool used to evaluate or measure the product or performance collected from students as evidence of learning achievement.
Assessment Method: The tool used to gather evidence of learning. The method can be direct or indirect.
Assessment Plan: A collaboratively developed planning document that establishes a multi-year plan for outcomes assessment. Assessment plans articulate when each student learning outcome will be assessed; the types of direct and indirect evidence (aligned to each learning outcome) that will be collected and analyzed; plans for analyzing the data; procedures to guide discussion and application of results; and timelines and responsibilities.
Assessment Report: A document that discusses assessment data, analysis, and application of results.
Backward Design (also called Reverse Design): Backward design is a method of designing educational curriculum by setting goals before choosing instructional methods and forms of assessment. Backward design of curriculum typically involves three stages: identifying outcomes, determining assessments, and developing learning activities and lesson plans to achieve outcomes.
Benchmark (also called Threshold):
1. A standard or point of reference against which gathered data may be compared or assessed.
2. The rate representing an accepted level of performance or success for a given outcome. The expected level is often expressed as a percentage in relation to the criterion.
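As a hypothetical illustration of a benchmark expressed as a percentage in relation to a criterion (all scores and thresholds below are invented, not CCA data), consider a benchmark of "75% of students score 3 or higher on a 4-point rubric":

```python
# Invented example: check whether a benchmark was met.
rubric_scores = [4, 3, 2, 3, 4, 1, 3, 3]  # made-up scores for eight students

benchmark = 0.75   # expected level: share of students who should succeed
criterion = 3      # minimum rubric score counted as success

met_criterion = sum(1 for s in rubric_scores if s >= criterion)
success_rate = met_criterion / len(rubric_scores)

print(f"Success rate: {success_rate:.0%}")  # 6 of 8 students -> 75%
print("Benchmark met" if success_rate >= benchmark else "Benchmark not met")
```

Here the criterion is the standard applied to each student, while the benchmark is the expected rate of students meeting that criterion.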
Budget Process: The process by which budgets are created and approved.
Capstone: A course or experience toward the end of a program in which students have the opportunity to demonstrate their cumulative knowledge, skills, and dispositions related to some or all of the learning outcomes. In capstone courses/experiences, students produce direct evidence of their learning. Examples of capstone assignments include: standardized assessments, exhibitions, presentations, performances, and/or research papers.
Common Course Numbering System (CCNS): Established by the Colorado Community College System (CCCS), the CCNS establishes shared course numbers, course descriptions, and course outcomes across the colleges in the system.
Closing the Loop: Closing the Loop encompasses analyzing results from outcome assessments, using results to make changes to improve student learning, and re-assessing outcomes in order to determine the effect those changes had on student learning.
Constituents: People the institution serves, advocates for, or organizes.
Content or Content Area: The subject matter of a discipline.
Course: A unit of instruction that has the following: a formalized syllabus; a description; a condensed outline or statement; an approval in accordance with board policy; and an instructor of record.
Course-Level Outcomes (CLO) (also called objectives, competencies, goals): Statements which articulate, in measurable terms, what students should know and be able to demonstrate as a result of and at the conclusion of a course. These are established using CCNS outcomes, GT requirements, and departmental standards. These outcomes communicate course goals explicitly and foster transfer of responsibility for learning from faculty to students.
Course-Level Assessment: The intentional collection of evidence of student learning with which the instructor can assess mastery of one or more course-level outcomes. Through course-level assessment, faculty provide timely and useful feedback to students, use data to assign grades, and record data related to students’ achievement of the CLOs in question. Course-embedded assessment that occurs towards the end of a program can also yield data for program outcomes assessment efforts.
Criteria: The discrete domains of a subject against which a learning performance is rated. For example, criteria included in an assessment of student writing might include accuracy of content, appropriate use of evidence to support argument, organization, and adherence to the conventions of academic English.
Criterion-Referenced Testing: Refers to evaluating students against an absolute standard of achievement, rather than evaluating them in comparison with the performance of other students. A standard of performance is set to represent a level of expertise or mastery of skills or knowledge.
Curriculum Mapping: The analytic process in which faculty examine the alignment between course-level, program-level, and institution-level outcomes. The primary purpose of curriculum mapping is to identify courses in which program- and institution-level outcomes are introduced (I), practiced (P), or should be demonstrated (D). Ideally, this analytic process results in a publicly available visual representation (a curriculum map); in addition to promoting transparency, curriculum mapping helps faculty identify courses from which to gather student work for the assessment of a particular learning outcome.
Curriculum Map: A document that shows the alignment of learning outcomes at different levels. It is the outcome of curriculum mapping.
Degrees with Designation (DWDs): A Colorado Department of Higher Education (CDHE) approved associate of arts or associate of science degree with a prearranged program path within a specific academic discipline, which allows students to transfer their degrees and enroll as juniors at any Colorado public four-year college or university.
Diagnostic Assessment: Information gathering at the beginning of a course or program. Diagnostic assessment can yield actionable information about students’ prior knowledge; additionally, diagnostic assessment activities provide information for students about what they will be expected to know and do at the conclusion of a course or program. Often takes the form of a “pre-test.”
Direct Assessment of Learning: Gathers evidence, based on student performance, which demonstrates the learning itself rather than just the collection of grades data. Can be value-added, related to standards, qualitative or quantitative, embedded or not, using local or external criteria. Examples: most classroom testing for grades is direct assessment (in this instance within the confines of a course), as is the evaluation of a research paper in terms of the discriminating use of sources. The latter example could assess learning accomplished within a single course or, if part of a senior requirement, could also assess cumulative learning.
Embedded Assessment: A means of gathering information about student learning that is built into, and a natural part of, the teaching-learning process. Often uses, for assessment purposes, classroom assignments that are already evaluated to assign students a grade. Can assess individual student performance or aggregate the information to provide information about the course or program; can be formative or summative, quantitative or qualitative.
Evaluation: A judgment about whether course/program/institutional goals were achieved.
External Assessment: Use of criteria (rubric) or an instrument developed by an individual or organization external to the one being assessed. This kind of assessment is usually summative, quantitative, and often high-stakes, such as the SAT or GRE exams. In CTE programs, this type of assessment is often part of the credentialing process for the student.
Formative Assessment: Information gathering strategies that provide actionable evidence related to students’ progress toward mastery of the learning outcomes during the term (or class period). An integral part of excellent instruction, regular formative assessment provides valuable information to faculty regarding instructional strategies that are/aren’t producing student learning. Formative assessment also provides students with information about their progress in a course.
GT Competencies: Learning outcomes established by the Colorado Department of Higher Education (CDHE) that are required in courses that are part of the GT (Guaranteed Transfer) pathways. https://highered.colorado.gov/academics/transfers/gtPathways/Criteria/competency.html
GT Pathways: GT Pathways (Guaranteed Transfer) courses, in which the student earns a C- or higher, will always transfer and apply to GT Pathways requirements in AA, AS, and most bachelor’s degrees at every public Colorado college and university. GT Pathways do not apply to some degrees (such as many engineering, computer science, and nursing degrees).
Guided Pathways: The whole package of academic pathways, programs, advising, and other support services that get students from application to graduation. Degrees and programs at CCA are organized into six Guided Pathways.
High Stakes Assessment: The decision to use the results of assessment to set a hurdle that needs to be cleared for completing a program of study, receiving certification, or moving to the next level. Most often, the assessment so used is externally developed, based on set standards, carried out in a secure testing situation, and administered at a single point in time. Examples: at the secondary school level, statewide exams required for graduation; in postgraduate education, the bar exam.
Holistic Scoring Method: A scoring method which assigns a single score based on an overall appraisal or impression of performance rather than analyzing the various dimensions separately. A holistic scoring rubric can be specifically linked to focused (written) or implied (general impression) criteria. Some forms of holistic assessment do not use written criteria at all but rely solely on anchor papers for training and scoring.
Indirect Evidence: Data from which it is possible to make inferences about student learning. Sources of indirect evidence include students’ perceptions of their own learning gathered through self-report surveys; focus groups; exit interviews; alumni and current student surveys; and graduation and retention data and reports. Indirect evidence alone is insufficient to make meaningful decisions about program or institutional effectiveness.
Institutional Outcome: The knowledge, skills, abilities, and attitudes that students are expected to develop as a result of their overall experiences with any aspect of the college, including courses, programs, and student services.
Institution-Level Assessment: Uses the institution as the level of analysis and focuses on Institutional Outcomes. Can be quantitative or qualitative, formative or summative, standards-based or value added, and used for improvement or for accountability.
Inter-Rater Reliability: Also known as inter-rater agreement or concordance; the degree of agreement among raters, expressed as a score of how much homogeneity, or consensus, exists in the ratings given by judges.
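As a hypothetical sketch of how agreement among raters can be scored (the raters, artifacts, and scores below are invented), one common pair of measures is observed percent agreement and Cohen's kappa, which corrects observed agreement for agreement expected by chance:

```python
from collections import Counter

# Invented example: two raters independently score the same eight
# student artifacts on a 4-point rubric.
rater_a = [3, 4, 2, 3, 1, 4, 3, 2]
rater_b = [3, 4, 2, 2, 1, 4, 3, 3]
n = len(rater_a)

# Observed agreement: share of artifacts given identical scores.
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: probability both raters assign the same score,
# given how often each rater actually used each score.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(counts_a[s] * counts_b[s] for s in counts_a) / n**2

# Cohen's kappa corrects observed agreement for chance agreement.
kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement: {observed:.0%}, Cohen's kappa: {kappa:.2f}")
```

Low agreement on a measure like this is a signal that raters should re-norm on the rubric before scoring continues.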
LEAP: Launched in 2005, Liberal Education and America’s Promise (LEAP) is a national public advocacy and campus action initiative. LEAP essential outcomes and rubrics were used during the GT competencies revision process. https://www.aacu.org/leap
Learning Outcome (goal, competency, objective): Statements that describe the skills or knowledge learned by the students in a course, program, activity, etc. which can be demonstrated and measured.
Local Assessment: Means and methods that are developed by an institution’s faculty or staff based on their teaching approaches, students, and learning goals. An example would be an English Department’s construction and use of a writing rubric to assess incoming freshmen’s writing samples, which might then be used to assign students to appropriate writing courses or might be compared to senior writing samples to get a measure of value-added.
Metric: The end product of measurement, also known as results. Must be contextualized to carry meaning.
Norming: A process of conversation and analysis through which assessors reach consistent agreement about the meaning and applicability of assessment criteria, such as a rubric. When such agreement is reached, the readers are said to be “normed” to the particular instrument. It is important to check for Inter-rater Agreement and re-norm as needed. Also known as calibration, this process promotes consistent application of assessment standards.
Norm-Referenced Assessment: Measurement of relative performance; e.g., a measure of how well one student performs in comparison to other students completing the same evaluative activity. The usefulness of norm-referencing is limited for program assessment because it does not provide information about students’ performances on criteria related to Program or Course Learning Outcomes.
Performance Criteria: The standards by which student performance is evaluated. Performance criteria help assessors maintain objectivity and provide students with important information about expectations, giving them a target or goal to strive for.
Performance Indicator: A sign that something has happened (i.e., an indicator of learning). A performance indicator provides examples or concrete descriptions of what is expected at varying levels of mastery.
Program:
1. Each and every individual degree, certificate, and/or department, including all of the core, required, and elective courses on the academic side as well as the college services on the non-academic side that support a student’s completion efforts.
2. A coherent sequence of courses designed to prepare individuals for employment or further education in a specific occupational area. A certificate program is a program requiring less than 60 credit hours (usually less than 45 credit hours) and a degree program requires 60 or more college-level credit hours. A degree program can be designed for either employment or transfer. Certificate programs lead to employment or are stackable towards a degree program which allows students to complete a certificate while completing a degree.
At CCA, the term program may be defined as a department, service, degree, or certificate.
Program Assessment, Co-Curricular: Assessment of the co-curricular programs, activities, and learning experiences that complement the student learning experience.
Program Assessment, Curricular: Assessment of the programs, activities, and learning that occur in the classroom leading toward completion of a certificate or degree.
Program Review (also called Department Review in some cases): A cyclical process through which program staff/faculty engage in inquiry to support evidence-based decision making. The purpose of program review is to generate actionable and meaningful results to inform discussions about program effectiveness, sustainability, budget, and strategic planning. Best practices for program review call for the inclusion of multiple sources of indirect and direct evidence, gathered and analyzed over time, rather than all at once in advance of a self-study.
Program-Level Outcomes (PLO): Statements which articulate, in measurable terms, what students should know and be able to demonstrate as a result of and at the conclusion of a program. PLOs communicate program goals explicitly and foster transfer of responsibility for learning from faculty to students.
Program-Level Assessment: The systematic and intentional collection and analysis of aggregated evidence (direct and indirect) of student learning to inform conversations about program effectiveness. The results of program-level assessment can be used in self-studies prepared as part of regular Program Review. Program-level assessment is inquiry-driven. For CCA assessment purposes, Programs may be defined as departments, services, degrees, or certificates.
Qualitative Assessment: Collects data that does not lend itself to quantitative methods but rather to interpretive criteria.
Quantitative Assessment: Collects data that can be analyzed using quantitative methods.
Rubric: An instrument that describes the knowledge and skills required to demonstrate mastery in an assignment. Rubrics often use scales that include four or more categories. Unlike checklists, rubrics are designed as scoring guides, which clearly articulate what mastery looks like at each performance level. Rubrics communicate the expectations of a given assignment or task, if shared beforehand, and structure how student work is evaluated.
Stakeholder: Any group or individual who can affect or is affected by the achievement of the college’s objective.
Stakeholders, External: Alumni, accrediting bodies, employers, governing entities, the community served, donors.
Stakeholders, Internal: Students, parents, staff, faculty, instructors, administration.
Strategic Planning: An organization’s process of defining its strategy, or direction, and making decisions on allocating its resources to pursue this strategy.
Student Involvement: The process of engaging students in every facet of the educational process for the purpose of strengthening their commitment to the college experience.
Summative Assessment: A snapshot of student learning at a particular point-in-time, usually at the end of a course or program. Data from summative assessment can inform an individual faculty member’s planning for the next quarter or program faculty interested in assessing students’ mastery of program learning outcomes at a particular time.
Validity: Describes how well an instrument measures what it is intended to measure. Also refers to the trustworthiness of conclusions drawn from analyses. It is important to consider validity when making claims about the effectiveness of a particular program or instructional approach.
Value-Added: The effects educational providers have on students during their programs of study comprise the value-added feature of academics. Participation in higher education has value-added impact when student learning and development occur at levels above those that occur through natural maturation, usually measured as longitudinal change or difference between pretest and posttest.
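The pretest/posttest measurement mentioned above can be sketched as follows (a minimal hypothetical illustration; all scores are invented, and real value-added models typically also control for maturation and student characteristics):

```python
# Invented example: value-added as the mean pretest-to-posttest gain
# for a small cohort of five students.
pretest = [52, 61, 48, 70, 55]
posttest = [68, 72, 63, 78, 70]

# Per-student gain, then the cohort's average gain.
gains = [post - pre for pre, post in zip(pretest, posttest)]
mean_gain = sum(gains) / len(gains)
print(f"Mean pre/post gain: {mean_gain:.1f} points")
```

In practice, the mean gain would be compared against an estimate of expected growth through natural maturation before being attributed to the program.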