Below we’ve compiled some definitions of terms used in the Assessment & You series that will help you achieve a higher level of competency in assessment, evaluation, and storytelling. If you have any questions about any of these terms, or would like to suggest new terms to include, please leave a comment below.
New terms will be added monthly for each new Assessment & You post.
| Assessment |
Any effort to compile, analyze, and interpret data in order to improve your work.
| Anecdotal Data |
Any evidence based on anecdotes rather than systematic assessment. It might be information from a focus group that wasn’t fully representative, a few conversations that make you think there’s a trend going on, or just plain observations. This type of data shouldn’t be used to drive change or support generalizations, but it can show the need for more assessment.
| Cycle of Assessment |
The act of moving through the stages of gathering, analyzing, and interpreting evidence and then implementing change based on your findings.
| Data |
Facts, known or assumed, that are collected together to provide the basis for reasoning and calculation.
| Documenting |
The act of recording evidence, opinions, experiences, feedback, and results. It may be as formal as a professional report or as informal as draft notes.
| Quantitative Assessment |
The kind of assessment that gives you numbers: attendance counts, money spent, usage statistics, and the like. This kind of data is fairly easy to sort and interpret since it’s there in black and white, but it can be hard to inspire people with it.
| Qualitative Assessment |
The kind of assessment that gives you stories, drawing on focus groups, reflection papers, exit interviews, and similar sources. This kind of assessment takes longer and is more difficult to interpret than quantitative assessment, but it often gives us very human and transformational results that can provide inspiration. It’s important to note that qualitative assessment shouldn’t be used as a basis to generalize trends, since the data usually comes from a small group of individuals.
| Correlative Data |
A measurable relationship between two things, but without evidence for causation (one thing is not known to be responsible for the other). For example, we might have correlative data showing that student involvement is related to higher academic achievement, but we can’t say that involvement causes a student to get higher grades.
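To make the distinction concrete, here is a minimal Python sketch using entirely made-up numbers for involvement hours and GPA (illustrative only, not real findings). It computes a Pearson correlation coefficient; even a strong coefficient, on its own, says nothing about causation.

```python
# Made-up data: weekly involvement hours and GPA for ten hypothetical students.
hours = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
gpa = [2.4, 2.6, 2.5, 2.9, 3.0, 3.2, 3.1, 3.4, 3.5, 3.6]

def pearson(xs, ys):
    """Pearson correlation: covariance divided by the product of the spreads."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(hours, gpa)
print(f"r = {r:.2f}")  # strong positive correlation, but not proof of causation
```

A coefficient near 1 here only tells us the two measures move together; a third factor (say, motivation) could be driving both.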
| Rubric |
A document that assigns a numeric score to learning. It does this by describing levels that list associated criteria, behaviours, and demonstrated knowledge or skills.
| Focus Group |
A qualitative method of assessment that uses a small sample of people brought together in an interactive group setting to gather their perspectives about issues, programs, or ideas. A facilitator poses prepared questions and records their responses.
| Narrative |
A system of stories. Narrative and story are often used interchangeably, but the difference is that a narrative organizes the elements of multiple stories into a sequence similar to how a novel consists of multiple chapters.
| Learning Objective |
A statement outlining the intended result of an experience that speaks to the content of the program in question. These are usually broader than learning outcomes.
| Learning Outcome |
A statement outlining the measured or actual result of an experience. Learning outcomes should be specific examples of what students can do as a result of their learning.
| Bias |
An inclination or preconceived opinion about something or someone. For example, you might want a workshop you ran to have been a success, which can bias you toward seeing it as a good program.
| Professional Standard |
Specifications that are designed by a professional body to make the work of the professionals more effective. The CAS Standards of Professional Practice are a great example of standards for Student Affairs.
| SMART |
A mnemonic that defines the elements of a good goal. For our purposes, it stands for Specific, Measurable, Achievable, Relevant, and Timely.
| Raw Data |
Data that has not yet been processed or prepared in any way for analysis.
| Scope Assessment |
Assessment that studies the extent or range of impact. For example, attendance or usage numbers can tell us how many people in the community we reach.
| Satisfaction Assessment |
Assessment that studies the level to which respondents are happy with what they have received. Feedback surveys are a typical example of a satisfaction assessment tool.
| Demographics |
Social statistics that relate to a population. Things like age, gender, ethnicity, etc. would be considered demographic information.
| Infographic |
An image designed to present information so that it can be read at a glance. For example, check out this infographic from Student Learning Support.
| Indirect Assessment |
Tools that ask students to report on their own knowledge and skills. This gives a picture of student perceptions, satisfaction, and frame of mind, but not necessarily of what they actually know.
| Direct Assessment |
Tools that test the actual knowledge and/or skills of students. Used pre- and post-experience, they can give us compelling evidence that learning happened as a result of the designed experience.
| Formative Assessment |
Assessment activities used during the learning process to gauge how it is proceeding, so that activities can be modified to improve the chances of reaching the set outcomes.
| Summative Assessment |
Assessment activities carried out to summarize and report on development at a particular time, typically at the end of a program or course.
| Evaluation |
Evaluative assessment activities are used to judge how well the learning experience achieved the set outcomes.
| Research |
Systematic investigation for the purpose of establishing facts and generating new knowledge in a specific discipline.
| Private Institutions |
Universities or colleges that do not receive core funding from government sources; they may operate on a non-profit or for-profit basis.
| Public Institutions |
Universities or colleges that receive partial funding from government sources and do not work to generate profit.
| Completion Rate |
The percentage of respondents who go on to complete the entire survey.
| Likert Scale |
An ordinal response scale that helps assign a quantitative value to a respondent’s opinion. Scales can have varying numbers of points, but 4 or 5 are most common. Whether to include a mid-point on the scale is debated, since it can affect the validity of the data.
| Ordinal |
Data that is ordered. Ranking from oldest to newest, the order of winners in a race, or, for our purposes, on a scale of 1 to 5. Ordinal data cannot tell us if the difference between scores is equal. For example, if someone rates a program as a 4 and another as a 2, it doesn’t necessarily mean that they liked one program twice as much.
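As a quick illustration (the responses below are made up), ordinal data like Likert ratings is safely summarized with a median, which relies only on order. A mean assumes the gaps between scale points are equal, which ordinal data does not guarantee.

```python
from statistics import median

# Hypothetical Likert responses: 1 = strongly disagree ... 5 = strongly agree
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

mid = median(responses)  # order-based summary, safe for ordinal data
avg = sum(responses) / len(responses)  # assumes equal spacing; interpret with care

print(mid, avg)
```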
| Preamble |
An introduction to your survey. It is best practice to include who you are, the purpose of the survey, and how you plan to use the data provided. This would also be the place where you would describe any incentives and how the respondent might receive them.
| Response Rate |
The percentage of your sample (the group of people you sent the survey to) that starts the survey.
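The two rates can be shown with some made-up survey numbers: the response rate is taken against the whole sample, while the completion rate is taken against those who actually started.

```python
# Hypothetical numbers, for illustration only.
sample_size = 400  # people the survey was sent to
started = 120      # people who began the survey
completed = 90     # people who finished the entire survey

response_rate = started / sample_size * 100    # 30.0 (percent of the sample)
completion_rate = completed / started * 100    # 75.0 (percent of respondents)

print(response_rate, completion_rate)
```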
| Validated Survey |
A survey judged, by the researcher or other experts, to accurately measure what it was designed to measure. If you are validating a survey you designed on your own, you would need to test it until you are confident it is getting you the data you want. Otherwise, you could put it through an ethics review or ask a research expert to weigh in.
| Objective Standard |
The legal definition of objective means how something would be perceived by a reasonable neutral observer. For our purposes, it refers to the standard created by grouping together valid samples of similar data, thus negating any subjective opinions, so that we can compare ourselves to it.
| Professional Association |
A (usually) non-profit organization that exists to further a profession through advocacy, development of its members, dialogue, and community support.
| Internal Benchmarking |
Making a comparison between teams, groups, or programs within the same institution to better understand how the subject is performing.
| External Benchmarking |
Making a comparison between one or more similar programs at various institutions to establish how each data set is ranked in terms of performance.
| Best Practice |
A method, technique, or design that consistently proves to be the most effective option; in other words, an approach that has been tried several times in different places and that we now recognize as the best way to do it.
| Minimum Standard of Practice |
The basic standard of performance that is ethically required by a profession for something to be considered part of the field’s work. It just means that this is the absolute minimum you should be doing.
ARTICLE 8: What Is A “Culture of Assessment”?
| No glossary terms. |
ARTICLE 9: Stories They Can Feel: Using Data To Make A Real Difference
| Open Data |
Data that is shared freely with anyone. The idea is that gathering data and sharing it publicly enables more people to use that data to inform and fuel innovation.
| Closed Data |
Data that has restricted access. It might be available for a charge, only to members of a certain organization, or confidential due to privacy concerns.
| Data Visualization |
Any graphic representation of data. This includes graphs, charts, infographics, word clouds, diagrams, and many more. Their purpose is to compile insights from data into a simple representation that provides context.