Blog Post Author: Derek Hottell
Date: November 2014
Surveys and questionnaires are ubiquitous in our modern lives. From consumer product reports to political polls to workplace assessments, our behavioral patterns, values, and internal states are continually assessed. The most cynical explanation for the prevalence of such instruments sees them as a means for capitalist forces to manipulate and control marketplace choices, but surveys are utilized for far less Machiavellian purposes. Surveys and questionnaires are used by marginalized groups to seek social justice, researchers to understand complex phenomena, and educators to inform practice.
A cursory review of higher education-related academic and trade journals highlights the use of surveys to better understand college student experiences and outcomes. The topics studied through surveys range from the average number of hours students study to the number of alcoholic beverages consumed during an average weekend to the frequency of sexual assaults and other violent crimes occurring on campus. In fact, almost all of the information about student outcomes and environments is derived from self-reported survey instruments. Surveys are relatively low-cost and require little time to implement, which makes them attractive options for collecting data about the student experience. With increasing focus from public figures on implementing evidence-based practices, surveys are seen by many administrators and academics as essential tools for obtaining such evidence. But what does the information garnered from these instruments actually reveal about the student experience?
Reports from students about their behavioral frequency patterns are often taken at face value, as valid indicators of what students actually do with their time, but recent studies from Brenner (2012) and others have started to illuminate the key role identity plays in the survey response process. Beyond simply recalling and reporting, individuals engage in a reconstructive process when supplying information, and this process is influenced by how individuals understand and perceive themselves. With self-reported surveys, participants are asked to accurately observe and report their own behavior, but any information garnered through such a process involves some amount of filtering by the respondent, who is acting as the observer. Consequently, how individuals understand their personal identities in relation to their particular environments will in some way inform how they respond on surveys, not because they are intentionally being duplicitous, but because we all have a need to maintain and uphold our personally held identity, or conception of self. Additionally, we may be more or less attuned to certain events in our lives based upon our identities.
While many events are likely to be encoded for future memory retrieval if the individual perceives them as landmark life events (e.g., weddings, incidents of severe trauma, etc.), the types of experiences most often of interest to higher education researchers and practitioners are common, everyday occurrences, such as studying. As an example, think about how many hours you spent reading for leisure in the last week. In considering this question, you would have to determine what qualified as reading for leisure, what the time parameter of the last week entailed, and then attempt to recall all time intervals matching those descriptors. Many of us, the author certainly included, would struggle to accurately report this number, so why would we expect student responses to be any different when they answer how many hours they spent studying, engaging in co-curricular activities, or working for pay?
Since this recall process is often a struggle for individuals, respondents rely upon strategies to ascertain a probable answer (i.e., what likely occurred). What individuals probably did is inextricably linked to how they see themselves as people. Am I the type of person who would read for leisure for several hours in an average week? Furthermore, respondents also utilize response options to situate themselves based upon what they perceive as norm references. In a commonly cited study, the number of hours of television viewing that respondents reported differed dramatically based upon the response options presented to them. When lower-range options were presented, respondents overwhelmingly reported watching less than two and a half hours of television a night, but when the range was raised, almost all respondents reported watching over two and a half hours. Why? Did these individuals’ television viewership change? Doubtful. More likely, the individuals norm-referenced themselves: they asked, “Do I watch an average amount of television?” and then mapped their responses onto the available categories.
Similarly, Brenner (2012) found significant differences in how individuals report church-going behavior in the United States based upon the method of instrumentation. According to time-use diaries, approximately 24% of individuals go to church every Sunday, but stylized surveys would place this figure closer to 50% of US Americans attending religious services every week. Why the discrepancy? Brenner (2012) suggests these differences are related to how individuals respond on surveys – not with what they do, but with how they see themselves. They are not answering whether they literally sit in the pew every week, but rather whether they are the type of people who think such attendance is important.
Instead of viewing such differential response patterns as evidence of survey response bias, researchers may be well-advised to consider how such response patterns are evidence of culturally situated behavior. An adage often attributed to Mark Twain holds, “To a man with a hammer, everything looks like a nail,” and I would suggest many higher education researchers have fallen into the same trap of viewing surveys as the best tool to study every issue and of accepting these instruments as “objective” measures. Surveys provide a great deal of valuable information, and they have been beneficial in shaping our understanding of how and why students persist to graduation and succeed academically in higher education. However, the information surveys provide may be less evidence of what students do than of how they see themselves within the institutional environment. Contextualizing and deepening our understanding of the information provided by surveys will only help practitioners as they attempt to implement evidence-based practices.
Brenner, P. (2012). Investigating the effect of bias in survey measures of church attendance. Sociology of Religion: A Quarterly Review, 74(1), 361-383. doi:10.1093/socrel/srs042
Bronfenbrenner, U. (2005). Making human beings human: Bioecological perspectives on human development. Thousand Oaks, CA: Sage Publications.
Converse, J. M. (2010). Survey research in the United States: Roots and emergence, 1890–1960. Berkeley, CA: University of California Press.
Fowler, Jr., F. J. (1995). Improving survey questions: Design and evaluation. Thousand Oaks, CA: Sage Publications.
Gonyea, R. M. (2005). Self-reported data in institutional research: Review and recommendations. In P. D. Umbach (Ed.), Survey Research: Emerging Issues (pp. 73-90). San Francisco: Wiley Periodicals.
Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. New York, NY: Cambridge University Press.
Willis, G. B. (2005). Cognitive interviewing: A tool for improving questionnaire design. Thousand Oaks, CA: Sage Publications.