
16 - Question and Questionnaire Design

from Part III - Self-Report Measures

Published online by Cambridge University Press: 12 December 2024

John E. Edlund, Rochester Institute of Technology, New York
Austin Lee Nichols, Central European University, Vienna

Summary

For decades, investigators have designed questionnaires intuitively, despite a large literature available to guide the process toward maximally reliable and valid measurement tools. This chapter offers two conceptual frameworks, involving (1) the cognitive processes involved in answering questions optimally and (2) the conversational conventions that govern everyday communication. We use these frameworks to explain a range of empirical evidence documenting the impact of question manipulations on responses. Topics covered include open vs. closed questions, rating vs. ranking, rating scale length and scale point labels, acquiescence response bias, multiple-select questions, response order effects, the treatment of non-substantive response options, social desirability response bias, question wording and order, questionnaire length, and considerations for internet surveys. In all, we provide a set of best practices that should be useful to all researchers.

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2024


