FSSE’s Commitment to Data Quality
As part of FSSE’s commitment to transparency and continuous improvement, we routinely assess the quality of our survey and the resulting data, and we embrace our responsibility to share the results with the higher education community. This psychometric portfolio is a framework for presenting our studies of the validity, reliability, and other indicators of quality in FSSE data. The information presented here serves higher education leaders, researchers, and professionals who have an interest in using FSSE data and trusting its results. Within the portfolio, each study is described in a brief technical report that includes research questions, data, methods, results, selected references, and more. Your feedback is most welcome. Please contact us at email@example.com.
Suggested citation for overall portfolio reference: Faculty Survey of Student Engagement. (2016). FSSE Psychometric Portfolio. Retrieved from fsse.indiana.edu.
Does the survey measure what it is intended to measure?
In a general sense, validity is the degree to which a test measures what it is intended to measure. According to Messick (1989), it is “the degree to which evidence and theory support the interpretations of test scores entailed by the proposed uses of tests.” Validity is therefore about the inferences we make from the evidence, not a property of the instrument itself. Below are forms of validity and links to FSSE's corresponding studies.
RESPONSE PROCESS VALIDITY
Do respondents understand the questions to mean what we intend them to mean?
Response process validity is the extent to which the actions and thought processes of survey respondents demonstrate that they understand the construct in the same way it is defined by the researchers. There is no statistical test for this type of validity; rather, it is assessed through observation of respondents, interviews, and feedback.
- Write-in analysis: 2018
- Measurement invariance: 2018
CONSTRUCT VALIDITY
How well does this group of items measure the theoretical concept?
Construct validity is the extent to which a measure correlates with the theorized construct that it purports to measure. The measure is intended to operationalize the concept by gathering observable details that reflect the underlying phenomenon.
KNOWN GROUPS VALIDITY
Do the results of various subgroups differ or not as expected?
Known groups validity is the extent to which a measurement is sensitive to differences and similarities in various groups (e.g., men and women, faculty in different disciplines, or faculty employed at different types of institutions). Known groups validity is demonstrated when a measure can discriminate between two groups known to differ on the variable of interest or, similarly, using groups of individuals with differing levels/severities of a trait.
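As an illustration of how such a group comparison might be quantified (a hypothetical sketch, not FSSE's own analysis code), a standardized mean difference indicates how far apart two known groups score on a scale. The group names and scores below are invented:

```python
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Standardized mean difference (Cohen's d) using a pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    va, vb = group_a.var(ddof=1), group_b.var(ddof=1)
    pooled_sd = np.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (group_a.mean() - group_b.mean()) / pooled_sd

# Hypothetical scale scores for two disciplinary groups expected to differ
arts = np.array([3.8, 4.1, 3.9, 4.4, 4.0])
stem = np.array([3.1, 3.4, 3.0, 3.6, 3.2])
print(round(cohens_d(arts, stem), 2))
```

A large standardized difference between groups that theory says should differ (and a small one between groups that should not) is evidence of known groups validity.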
Reliability refers to the consistency or stability of measurement. The reliability evidence presented here assesses the extent to which items within a scale are internally consistent, or homogeneous, and the extent to which results are similar across periods of time or different forms of the FSSE survey. Use of a reliable instrument or scale implies that data and results are reproducible.
INTERNAL CONSISTENCY
Do the items within a scale correlate well with each other?
Internal consistency is the extent to which a group of items measure the same construct, as evidenced by how well they vary together, or intercorrelate.
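Internal consistency is commonly summarized with Cronbach's alpha. The sketch below, using hypothetical response data rather than FSSE results, shows one way to compute it:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scale responses."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 4 respondents x 3 items on a 1-4 scale
responses = np.array([
    [4, 4, 3],
    [3, 3, 3],
    [2, 2, 1],
    [1, 2, 1],
])
print(round(cronbach_alpha(responses), 2))
```

Values closer to 1 indicate that the items vary together and likely tap the same construct.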
TEMPORAL STABILITY
How stable are the results for institutions upon repeated administrations?
Temporal stability, as the name implies, refers to the consistency of scores over time, as evidenced by the correlation between scores on two occasions. At the institution level, this can be assessed by correlating institutions' aggregate results from one year to the next.
- Temporal stability: 2014-2015
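As a rough sketch with hypothetical institution-level data (not FSSE's own figures), temporal stability can be estimated as the correlation between aggregate results from two administrations:

```python
import numpy as np

# Hypothetical institution-level scale means for two consecutive administrations
year_1 = np.array([3.2, 3.8, 2.9, 3.5, 4.1, 3.0])
year_2 = np.array([3.3, 3.7, 3.0, 3.4, 4.0, 3.1])

# Pearson correlation between years; values near 1 indicate stable results
stability = np.corrcoef(year_1, year_2)[0, 1]
print(round(stability, 2))
```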
EQUIVALENCE RELIABILITY
Do results correlate well with those of a similar measure on the same population?
Equivalence reliability is measured by the correlation of scores between different versions of the same instrument, or between instruments that measure the same or similar constructs, such that results from one instrument can be reproduced by the other.
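A minimal sketch, again with invented data: equivalence reliability can be estimated by correlating the same respondents' scores on two forms of a scale:

```python
import numpy as np

# Hypothetical scores from the same respondents on two versions of a scale
form_a = np.array([2.8, 3.5, 4.0, 3.1, 2.5, 3.9])
form_b = np.array([2.9, 3.6, 3.8, 3.0, 2.7, 4.0])

# A high correlation suggests the two forms measure the same construct
equivalence = np.corrcoef(form_a, form_b)[0, 1]
print(round(equivalence, 2))
```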
“Other Quality Indicators” includes procedures, standards, and other evaluations implemented by FSSE to reduce error and bias, and to increase the precision and rigor of the data. These studies assess FSSE’s adherence to the best practices in survey design, and cover various stages of the survey, including sampling, survey administration, and reporting.
NONRESPONSE BIAS
Do faculty who respond to FSSE differ from those who choose not to respond?
Nonresponse bias arises when people who choose to participate in a survey are systematically different from those who do not. Nonresponse bias can reduce the generalizability of the results.
- Nonresponse bias: 2014
SOCIAL DESIRABILITY
Are FSSE scores influenced by a desire to respond in a socially desirable manner?
Social desirability refers to the tendency of respondents to provide answers that they think are more socially acceptable, even if they are not true. Social desirability bias is more likely to occur when the questions are sensitive.
- Social desirability: 2014
FSSE is committed to transparency and sharing aggregate findings with the higher education research community. To support that commitment, the Content Summaries below compile brief reviews of findings from the 10 FSSE scales. Each Content Summary presents counts, means, standard deviations, and factor loadings, as well as frequencies for each scale item. It also includes statistical information about the creation of the scale, how instructional staff responses differ by discipline, correlations between all FSSE scales, and significant instructional staff and course characteristic predictors for behaviors measured by the scale.
HIGHER-ORDER LEARNING
Challenging intellectual and creative work is central to student learning and collegiate quality. Colleges and universities promote high levels of student achievement by calling on students to engage in complex cognitive tasks requiring more than mere memorization of facts. This content area captures how much students' coursework emphasizes challenging cognitive tasks such as application, analysis, judgment, and synthesis.
- Higher-Order Learning: 2014-2017
REFLECTIVE AND INTEGRATIVE LEARNING
Personally connecting with course material requires students to relate their understanding and experiences to the content at hand. Instructors emphasizing reflective and integrative learning motivate students to make connections between their learning and the world around them, reexamining their own beliefs and considering issues and ideas from others’ perspectives.
- Reflective and Integrative Learning: 2013-2015
LEARNING STRATEGIES
College students enhance their learning and retention by actively engaging with and analyzing course material rather than approaching learning as passive absorption. Examples of effective learning strategies include identifying key information in readings, reviewing notes after class, and summarizing course material. Instructors emphasizing these learning strategies in their courses help students encode key information to build long-term memory and retention.
- Learning Strategies: 2013-2015
COLLABORATIVE LEARNING
Learning is collaborative work. Collaborative learning requires students to mutually raise questions, seek understandings, and search for solutions in interactive group settings. Instructors emphasizing collaborative learning motivate students to learn from each other through peer teaching and knowledge exchange.
- Collaborative Learning: 2013-2015
STUDENT-FACULTY INTERACTION
Interactions with faculty can positively influence the cognitive growth, development, and persistence of college students. Through their formal and informal roles as teachers, advisors, and mentors, faculty members model intellectual work, promote mastery of knowledge and skills, and help students make connections between their studies and their future plans.
- Student-Faculty Interaction: 2013-2015
SUPPORTIVE ENVIRONMENT
Providing students with opportunities and support across a variety of domains, including the cognitive, interpersonal, and physical, is a trait of institutions committed to student success. Instructors can give insight into the extent to which institutions emphasize services and activities that support student learning and development.
- Supportive Environment: 2013-2015