Abstract

We rely on classic as well as recently published sources to offer a review of theory, research design, and measurement issues that should be considered prior to conducting any empirical study. First, we examine theory-related issues that should be addressed before research design and measurement considerations. Specifically, we discuss how to make meaningful theoretical progress (including through the use of inductive and deductive approaches), address an important issue, and conduct research with a practical end in mind. Second, we offer recommendations regarding research design, including how to address the challenge of low statistical power, design studies that strengthen inferences about causal relationships, and use control variables appropriately. Finally, we address measurement issues. Specifically, we discuss how to improve the link between underlying constructs and their observable indicators. Our review offers a checklist that researchers can use to improve research quality prior to data collection, and that journal editors and reviewers can use to evaluate the quality of submitted manuscripts.
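The abstract's point about low statistical power can be made concrete with a simple a priori power calculation. The sketch below is our illustration, not material from the article: it uses a normal approximation to the two-sided, two-sample t test at the conventional alpha of .05, with effect size expressed as Cohen's d. The function names (`two_sample_power`, `min_n_for_power`) are hypothetical.

```python
import math


def _phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def two_sample_power(d, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided, two-sample t test at alpha = .05,
    using a normal approximation, for Cohen's d and a given per-group n."""
    nc = d * math.sqrt(n_per_group / 2.0)  # noncentrality parameter
    # Probability of rejecting in either tail under the alternative
    return _phi(nc - z_crit) + _phi(-nc - z_crit)


def min_n_for_power(d, target=0.80):
    """Smallest per-group sample size whose approximate power
    reaches the target (conventionally .80)."""
    n = 2
    while two_sample_power(d, n) < target:
        n += 1
    return n


# A medium effect (d = .50) requires about 63 participants per group
# for 80% power; a study with only 20 per group is badly underpowered.
print(min_n_for_power(0.50))                  # 63
print(round(two_sample_power(0.50, 20), 2))   # 0.35
```

Planning sample size this way, before any data are collected, is one concrete response to the low-power problem the review describes; dedicated tools (e.g., power-analysis routines in statistical packages) refine the approximation used here.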

Associated Article

There are media items related to this article:
Improving Research Quality Before Data Collection

/content/journals/10.1146/annurev-orgpsych-031413-091231
2014-03-21
2024-03-29


Supplemental Material

In this lecture, Dr. Aguinis and Dr. Vandenberg discuss the various steps that researchers in organizational science can take to ensure that their work is of high quality and makes a lasting impact.

  • Article Type: Review Article