
Start Small, not Random: Why does Justifying your Time-Lag Matter?

Published online by Cambridge University Press:  13 September 2021

Yannick Griep*
Affiliation:
Radboud Universiteit (The Netherlands) Stockholm University (Sweden)
Ivana Vranjes
Affiliation:
Tilburg University (The Netherlands)
Johannes M. Kraak
Affiliation:
KEDGE Business School (France)
Leonie Dudda
Affiliation:
Radboud Universiteit (The Netherlands)
Yingjie Li
Affiliation:
Radboud Universiteit (The Netherlands)
*
Correspondence concerning this article should be addressed to Yannick Griep. Radboud Universiteit. Behavioural Science Institute. Thomas van Aquinostraat 4, 6525GD. Nijmegen (The Netherlands). Stockholm University. Stress Research Institute. Stockholm (Sweden). Phone: +31-02436115524. E-mail: [email protected]

Abstract

Repeated measurement designs have been growing in popularity in the fields of Organizational Behavior and Work and Organizational Psychology. This raises questions regarding the appropriateness of time-lag choices and the validity of the justifications used to make time-lag decisions in the current literature. We start by describing how time-lag choices are typically made and explaining the issues associated with these approaches. Next, we provide some insights into how an optimal time-lag decision should be made and into the importance of time-sensitive theory building in guiding these decisions. Finally, we end with some brief suggestions as to how authors can move forward, by urging them to explicitly address temporal dynamics in their research and by advocating for descriptive studies with short time-lags, which are needed to uncover how changes unfold over time.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s), 2021. Published by Cambridge University Press

It has become increasingly common in the fields of Organizational Behavior (OB; see Bolger et al., 2003; Fisher & To, 2012; Spector & Meier, 2014) and Work and Organizational Psychology (WOP; see Beal & Weiss, 2003; Klumb et al., 2009; Ohly et al., 2010; van Eerde et al., 2005) to conduct some type of repeated measurement study. Among the most common of these designs are diary and experience sampling studies, in which respondents complete a questionnaire at specific measurement times set at intervals of weeks, days, or even several times per day. These designs allow researchers to study novel research questions, such as how a psychological phenomenon fluctuates within the same person over time. However, they also raise important new methodological questions: What time-lags between measurements are appropriate, and how can these time-lags be justified?

Although the selection of variables and their proposed relationships are often driven by theory, the length of time-lags is not. In their seminal article, Mitchell and James (2001, p. 533; see also Cole & Maxwell, 2003, for a similar argument) already noted the problematic nature of not having time-sensitive psychological theories to justify the selection of time-lags: “With impoverished theory about issues such as when events occur, when they change, or how quickly they change, the empirical researcher is in a quandary. Decisions about when to measure and how frequently to measure critical variables are left to intuition, chance, convenience, or tradition. None of these are particularly reliable guides.” Given the arbitrary nature of this decision, it may come as no surprise that the duration of time-lags varies considerably across studies aiming to answer the same research question. Additionally, and equally problematic, the same arbitrary time-lags (e.g., six months) are used to study widely different phenomena, ranging from emotions (e.g., Vranjes et al., 2018) to mental states (e.g., Paez et al., 2020). The lack of a systematic method to determine when and how frequently to measure a phenomenon is highly problematic because time-lags and effect sizes are strongly related to one another. Effect sizes are assumed to fluctuate as a gradual linear or polynomial function of time (e.g., Cohen et al., 2003; Dormann & Griffin, 2015; Voelkle et al., 2012), ultimately contributing to wildly different conclusions across studies. Because sampling in research is merely taking a snapshot of an existing continuous process, both “too short” and “too long” time-lags give us misleading information about the effect (see Dormann et al., 2020), ultimately hiding the “true shape” of a phenomenon or of the relationship between different phenomena.
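To make this point concrete, consider a stylized illustration of our own (in the spirit of the continuous-time modeling work of Voelkle et al., 2012, and Dormann & Griffin, 2015, but not taken from either source). Assume a bivariate first-order continuous-time model with drift matrix A in which X influences Y but not vice versa, and in which both variables regress to their baselines (negative diagonal drift entries). The cross-lagged effect of X on Y implied for a time-lag of length Δt, controlling for the earlier value of Y, is the corresponding element of the matrix exponential:

\beta_{YX}(\Delta t) \;=\; \left[e^{A\Delta t}\right]_{YX} \;=\; a_{YX}\,\frac{e^{a_{XX}\Delta t} - e^{a_{YY}\Delta t}}{a_{XX} - a_{YY}}, \qquad a_{XX}, a_{YY} < 0,\; a_{XX} \neq a_{YY}.

This effect is zero at \Delta t = 0, rises to its maximum at

\Delta t^{*} \;=\; \frac{\ln\!\left(a_{YY}/a_{XX}\right)}{a_{XX} - a_{YY}},

and then decays back toward zero as \Delta t grows. In such a model, a “too short” lag catches the effect before it has unfolded, a “too long” lag catches it after it has largely dissipated, and only lags near \Delta t^{*} reveal its full size. With feedback from Y to X, measurement error, or more complex dynamics, the curve changes shape, but the dependence of the observed effect on the chosen lag remains.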

In this short paper, we will briefly review some of the issues associated with choosing appropriate time-lags. Next, we will discuss the currently available rules of thumb for selecting time-lags and discuss the issues associated with this approach. Finally, we will derive recommendations that may serve as a guideline for future repeated measurement studies and for the creation of more time-sensitive theory and methods.

What Are the Issues?

Although repeated measurement studies are on the rise, little attention (for some exceptions with regard to traditional longitudinal studies, see Dormann & Griffin, 2015; Zapf et al., 1996) has been devoted to the question of how long time-lags should be. As previously mentioned, statements such as “not too short or too long” are very common (Boker & Nesselroade, 2002), but they are not very specific and lack theoretical grounding. As a field, we have come to develop our own set of “reasons” for selecting and “justifying” the duration of time-lags, but these “reasons” are merely rules of thumb and/or arguments by proxy based on the empirical work of others who have used a similarly unjustified time-lag. Combined, these “reasons” can be summarized (see also Dormann & van de Ven, 2014) as relating to (a) the phenomena under investigation or the way these phenomena are operationalized, (b) the mechanisms under study, (c) the methodology used, (d) the epistemological stance taken, or (e) researcher preferences or omissions. All of these are atheoretical, and they are often added post hoc to provide “a reason” as to why specific time-lags were used.

The first group of arguments relates to the phenomenon or its operationalization and is tied either to the phenomenon one tries to measure (e.g., a 1-year time-lag to assess the effects of annual performance appraisals) or to the very nature of the measurement (e.g., a 6-month time-lag to assess the effects of workplace bullying as per the definition of bullying). The second group relates to the mechanisms under study. An often-used justification in this regard refers to the idea that it takes time for an effect to unfold and that a longer time-lag is therefore justified (e.g., Gorgievski-Duijvesteijn et al., 2005; Sacco & Schmitt, 2005). However, the duration of these time-lags varies greatly and is an idiosyncratic choice based on the researcher’s personal opinion. Third, there might be various methodological reasons for the selection of certain time-lags, such as the ability to control for autoregressive effects (e.g., Griep et al., 2021; Selig & Little, 2012) or to capture within-day or within-person fluctuations (e.g., de Lange et al., 2003). Fourth—and related to the hype surrounding repeated measurement studies that investigate research questions similar to those of previous cross-sectional or longitudinal studies—there might be epistemological reasons to select specific time-lags. Authors justify such choices by stressing the novelty of the approach to push the field beyond what was already known about the phenomenon under a different time-lag (e.g., a 6-month time-lag that had not yet been studied in the relationship between exhaustion, safety working conditions, and injury frequency and severity; Halbesleben, 2010). Finally, there might be researcher preferences or omissions. In this regard, researchers often (a) want to demonstrate the sustainability of effects to show that they are theoretically or practically important over a longer period of time (selecting long time-lags; Kinnunen et al., 2005), or (b) have already conducted the study and need a reference to justify their selected time-lags. The latter often leads researchers to cite a study that used a similar time-lag in a somewhat similar domain (e.g., Griep et al., 2016, citing Bakker and Bal’s 2010 study on the use of a weekly time-lag).

Although this brief overview suggests that there are a multitude of reasons to “justify” almost any time-lag for any particular research question, the fields of OB and WOP currently lack a sound scientific and theoretical basis for choosing adequate time-lags. Indeed, Cole and Maxwell (2003) observed that the timing of measurement in the social sciences was almost invariably determined by convenience and tradition rather than theory.

When to Measure: Defining the Optimal Time-Lag

Although there is indeed no systematic research investigating what constitutes appropriate time-lags in repeated measurement studies, some authors have explored the question of optimal time-lags in the past. For example, Dwyer (1983) demonstrated that longer time-lags resulted in an underestimation of the effect between X and Y. More recently, Cole and Maxwell (2003, 2009) demonstrated that lagged effects vary with time, and that the shape of the distribution of these effects over time also varies in such a way that researchers “will grossly underestimate the relation between risk and the outcome” (Cole & Maxwell, 2009, p. 50) when they select anything other than the optimal time-lag between X and Y. Furthermore, Voelkle and colleagues (2012) argued that the optimal time-lag between X and Y is often far shorter than the time-lags frequently found in the literature.

The advice of these studies can be summarized under the following rule of thumb: “Effects decline as time-lags become longer and effects increase as time-lags become shorter.” However, this rule of thumb is still very simplistic and leaves too many degrees of freedom open to interpretation: “What is short and what is long?”, “Is short for one psychological phenomenon also short for another psychological phenomenon?”, “Does the specific duration of short or long depend on the relationship that is being studied between different psychological phenomena?”. It therefore seems that, despite the apparent appeal of having such rules of thumb, researchers currently have little guidance for defining, selecting, and theoretically justifying optimal time-lags in repeated measurement studies. Indeed, Cohen and colleagues (2003) concluded that no generalizations could be made about the optimal interval for examining causal effects of one variable on another based on the currently available rules of thumb.

We agree with Cohen and colleagues (2003) and argue that, in order to move away from these generic rules of thumb, we need (a) a clear description of what is meant by optimal time-lags, and (b) time-sensitive psychological theories that incorporate, among other things, a description of the time dynamics underlying a phenomenon or the relationship between phenomena as it unfolds over time. First, following the seminal work of Dormann and Griffin (2015, p. 3), the optimal time-lag should be defined as “the lag that is required to yield the maximum effect of X predicting Y at a later time, while statistically controlling for prior values of Y”. The optimal time-lag thus represents the amount of objective clock time that should elapse between the occurrence of X and the subsequent occurrence of Y (Collins, 2006). This amount of elapsed clock time can be determined from a theoretical understanding of how fast Y is expected to change in relation to, or as a function of, X. Second, selecting optimal time-lags should be considered within the broader question of “when events occur, when they change, and how quickly they change” (Mitchell & James, 2001, p. 533). Understanding and justifying the selection of time-lags requires time-sensitive theories in which the element of time is inherently present. We currently lack such theories. Indeed, theories within OB and WOP have been rather ambiguous in their use of the terms “time” and “temporal dynamics”. This leads to the following question: “Which elements do ‘good’ time-sensitive theories need so that researchers are able to theoretically justify their time-lags?”
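Continuing the stylized continuous-time illustration above, this definition can be made operational in a few lines of code. The sketch below (with hypothetical drift values chosen purely for illustration, not estimates from any of the studies cited here) locates the lag at which the model-implied effect of X on Y, controlling for prior Y, is largest:

# Minimal sketch: find the lag that maximizes the model-implied effect of X on Y
# while controlling for prior Y. The drift values are hypothetical and serve only
# to illustrate the logic of choosing an optimal time-lag.
import numpy as np
from scipy.linalg import expm

# Continuous-time drift matrix (per week): row/column 0 = X, row/column 1 = Y.
# Negative diagonal entries mean both variables regress to baseline; A[1, 0] is
# the continuous-time effect of X on Y.
A = np.array([[-0.50, 0.00],
              [0.30, -0.40]])

lags = np.linspace(0.25, 52, 208)  # candidate time-lags in weeks
effects = np.array([expm(A * dt)[1, 0] for dt in lags])  # implied cross-lagged effects

optimal_lag = lags[effects.argmax()]
print(f"The implied effect of X on Y peaks at a lag of about {optimal_lag:.1f} weeks "
      f"({effects.max():.2f}); after 52 weeks it has shrunk to {effects[-1]:.3f}.")

Such a sketch is, of course, only as informative as the drift values fed into it, which is exactly why time-sensitive theory is needed to supply them.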

Current State of Affairs and the Next Goal: The Time is Now for Time-Sensitive Theory!

The literature’s current state of affairs is one in which scholars have neglected the role of time in theory building, measurement, data analyses, interpretation and discussion of results, and propositions of theoretical and practical relevance (e.g., Albert, 2013; Ancona et al., 2001; George & Jones, 2000; Griep & Hansen, 2020; Mitchell & James, 2001; Zacher & Rudolph, 2020). Most theories in OB and WOP—and related domains—deny the role of time, either explicitly (i.e., they reject the role of time or embrace the notion of stability) or implicitly (i.e., they ignore the possible effect of time or prefer to develop a “one size fits all” theory; see also Griep & Zacher, 2021). This includes some of the well-known “process theories”: Vroom’s Valence-Expectancy Theory of Motivation (1964), Locke and Latham’s Goal-Setting Theory (1990), Ajzen’s Theory of Planned Behavior (1991), and Kelley’s Causal Attribution Theory (1973).

Such theories either do not reference time at all or explicitly stress the “ahistorical” (i.e., static) nature of their propositions. The assumption that phenomena are stable over time (i.e., that once formed there is relatively little change within persons over time) is very common. Think, for example, of theories dealing with topics such as personality traits, personnel selection, self-regulation, motivation, goal orientations, work design, and leadership (e.g., Bandura, 1991; Bass & Avolio, 1993; Deci & Ryan, 2012; Locke & Latham, 1990; Oldham & Hackman, 2010; Tsaousis & Nikolaou, 2001), which typically assume that these phenomena are stable, either within a person/situation or across persons/situations. As a corollary, these theories—and the empirical studies derived from them—make no explicit mention of when a phenomenon happens, how long it lasts, or how and why it changes. Consequently, they offer researchers no guidance for theoretically justifying their time-lags. In contrast, we argue that it is of crucial importance to the future success of the fields of OB and WOP to include the element of time: the information that is directly relevant to the selection and justification of a time-lag (e.g., the total length or duration of the time span: one week, six months, three decades), and the regularity or irregularity of the time intervals at which measurements are collected (e.g., minutes, months, years). As previously mentioned, such decisions strongly influence our power to observe, and to more accurately describe, different effects in OB and WOP research. Moreover, theories that are more precise help us make more accurate predictions about the unfolding of events over time, which is a crucial part of the applied organizational sciences.

So, what would be the next step? In their review of the literature on the role of time in the field of WOP, Griep and Zacher (2021) argued that the literature continues to struggle with defining the necessary aspects and characteristics of time-sensitive theory, which undermines scholars’ ability to theoretically justify their choice of time-lag. Specifically, they propose four essential elements of time-sensitive theory that, when incorporated, allow scholars to select optimal and theoretically justified time-lags:

  1. Psychological phenomena should be defined with reference to the time window within which the phenomenon is expected to fluctuate and/or change (e.g., seconds, minutes, hours, days, weeks).

  2. The unfolding nature of relationships between phenomena should be defined in relation to time (e.g., the relationship between phenomenon X and phenomenon Y is expected to unfold over the course of a week and dissipate after approximately three weeks).

  3. Temporal features should be defined and described in detail: When do phenomena occur (i.e., what is the starting point)? How long are they expected to last, and how fast are they expected to change (e.g., seconds, minutes, hours, days, weeks)? Which developmental form are they expected to take (e.g., linear upon exposure, delayed exposure, curvilinear, lingering effect)? Which type of change are they expected to follow (e.g., incremental versus discontinuous, stabilization versus destabilization)? And are phenomena expected to follow phases, rhythms, cycles, spirals, or other more complex forms of change?

  4. Temporal metrics should be defined with reference to the specific time-scales, time frames, and time-lags to be used to measure the phenomenon (i.e., a theoretical reference to what time frames and time-lags should be used to measure the proposed theoretical model in order to facilitate falsification and/or temporal correction of the proposed model); one illustrative way of recording such choices is sketched after this list.
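As a purely illustrative device of our own (not a tool proposed by Griep and Zacher, 2021), the temporal assumptions demanded by these four elements could be recorded in a small, explicit declaration that accompanies a study, so that readers can check, and future studies can falsify, the temporal claims being made. All field names and example values below are hypothetical:

# Hypothetical sketch of an explicit "temporal design declaration" for a study.
# The structure and example values are illustrative only; the fields map onto the
# four elements of time-sensitive theory listed above.
from dataclasses import dataclass

@dataclass
class TemporalDesign:
    phenomenon: str          # element 1: what is expected to fluctuate or change
    fluctuation_window: str  # element 1: time window of expected fluctuation
    unfolding_claim: str     # element 2: how the X-Y relationship unfolds over time
    change_form: str         # element 3: developmental form / type of change
    time_lag: str            # element 4: lag the theory implies for measurement
    n_waves: int             # element 4: planned number of measurement occasions

design = TemporalDesign(
    phenomenon="weekly work engagement",
    fluctuation_window="within weeks",
    unfolding_claim="weekly job resources predict engagement in the following week; "
                    "the effect is expected to dissipate after roughly three weeks",
    change_form="rapid increase followed by gradual decay",
    time_lag="1 week",
    n_waves=10,
)
print(design)

Whether recorded in code, in a table, or in prose, the point is the same: the temporal claims become explicit enough to guide, and to falsify, the choice of time-lag.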

Combining Time-Sensitive Theory and Research

Now that we are aware of the issues with arbitrary time-lag choices and of the importance of time-sensitive theory, how can we use this knowledge to further both theory and research? A first step, as described above, is the development of time-sensitive theories that can serve as a basis for further empirical research. When such theory building is not feasible, we argue that, at a minimum, each study applying any type of repeated measurement design should contain explicit arguments specifying the reasoning behind the temporal choices made. Second, to aid this process, we argue that empirical studies with short time-lags are needed to ensure that we do not miss the first (presumably rapid) increase of an effect and that we do catch the moment of maximum effect. Considering that, in psychology, the time-lag of (both local and global) maximum effect is unknown, that optimal time-lags can vary between individuals, and that the act of measuring can affect the phenomenon of interest, our best strategy is to take the shortest time-lag needed to capture a particular effect, with the least possible intrusion into the natural process. This is also in line with the conclusion of Dormann and Griffin (2015), who found that optimal time-lags for panel designs are usually quite short and who therefore called for more “shortitudinal” studies in panel research. The accumulation of such empirical knowledge regarding different short-term effects can help inform future theory development, creating a mutually reinforcing process that benefits the fields of OB and WOP in terms of both their theoretical and practical relevance.

There are indeed several promising directions for the fields of OB and WOP to pay more attention to the role of optimal time-lags in future research. First, research designs should adhere to a synergy of theory, method, and application. For example, when researchers aim to investigate a process, they should avoid cross-sectional and/or between-person designs. Moreover, it is strongly advisable to test mediation models only when scholars have collected data on all phenomena at three (or ideally more) separate time points from the same individuals. At the same time, it is important to use reliable and valid (multi-item) measures that are “time-invariant,” meaning that, for instance, the indicator loadings of latent variables do not differ significantly across measurement occasions; this issue is directly connected to the length of the time-lag that is being used (Griep & Zacher, 2021). Second, when scholars aim to develop a new theory or refine an existing theory, it is imperative that they explicitly incorporate temporal dynamics—including changes in phenomena over time as well as the mechanisms and boundary conditions of the proposed changes in said phenomena over time—into their theory. For example, scholars should explicitly state why, when, and for how long certain changes in phenomena occur. In doing so, they provide an initial theoretically justified indication of the optimal time-lags needed for empirical research.
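To return to the argument for short time-lags, the following simulation sketch (with entirely hypothetical parameter values, not a re-analysis of any study cited here) generates a daily bivariate process in which X influences Y, then “samples” it at increasingly long lags and re-estimates the effect of X on Y while controlling for prior Y:

# Simulation sketch with hypothetical parameters: a continuous daily process is
# "sampled" at different lags, and the cross-lagged effect of X on Y (controlling
# for prior Y) is re-estimated for each lag. Longer lags increasingly miss the effect.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# Hypothetical continuous-time drift (per day): X affects Y, and both effects decay.
A = np.array([[-0.10, 0.00],
              [0.06, -0.08]])
phi_day = expm(A)  # implied day-to-day transition matrix

n_days = 730  # two years of daily observations for one person
data = np.zeros((n_days, 2))
for t in range(1, n_days):
    data[t] = phi_day @ data[t - 1] + rng.normal(size=2)

def cross_lagged_effect(series, lag):
    """Regress Y at t+lag on X at t and Y at t; return the coefficient of X."""
    predictors = np.column_stack([series[:-lag, 0],             # X at time t
                                  series[:-lag, 1],             # Y at time t
                                  np.ones(len(series) - lag)])  # intercept
    outcome = series[lag:, 1]                                   # Y at time t + lag
    coefs, *_ = np.linalg.lstsq(predictors, outcome, rcond=None)
    return coefs[0]

for lag in (7, 30, 90, 180):  # one week, one month, three months, six months
    print(f"lag = {lag:3d} days: estimated effect of X on Y = "
          f"{cross_lagged_effect(data, lag):.3f}")

In this toy process the model-implied effect peaks after roughly a week and a half and has practically vanished after a few months, so a 6-month snapshot recovers close to nothing; only the short lags come close to the maximum effect. Real data are, of course, far messier, but the logic of preferring short, theoretically informed lags is the same.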

Conclusion

We argue that it is time for researchers in the fields of OB, WOP, and beyond to explicitly acknowledge the issue of time in their research. When researchers develop a new theory or refine an existing theory, it is imperative that they explicitly address and incorporate temporal dynamics—including changes in key constructs over time as well as the mechanisms and boundary conditions of the proposed changes—into their theory. To do so, descriptive studies with short time-lags are needed to uncover how psychological changes happen over time. Such a time-sensitive approach will not only aid theory development but will also strongly improve the predictive power of our theoretical models, increasing the practical relevance of our field along the way.

Footnotes

Funding Statement: This research received no specific grant from any funding agency, commercial or not-for-profit sectors.

Conflicts of Interest: None

Author Note: Both Yannick Griep and Ivana Vranjes contributed equally to this manuscript.

References

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211. http://doi.org/10.1016/0749-5978(91)90020-T
Albert, S. (2013). When: The art of perfect timing. Jossey-Bass.
Ancona, D. G., Goodman, P. S., Lawrence, B. S., & Tushman, M. L. (2001). Time: A new research lens. Academy of Management Review, 26(4), 645–663. http://doi.org/10.5465/amr.2001.5393903
Bakker, A. B., & Bal, M. P. (2010). Weekly work engagement and performance: A study among starting teachers. Journal of Occupational and Organizational Psychology, 83(1), 189–206. https://doi.org/10.1348/096317909X402596
Bandura, A. (1991). Social cognitive theory of self-regulation. Organizational Behavior and Human Decision Processes, 50(2), 248–287. https://doi.org/10.1016/0749-5978(91)90022-L
Bass, B. M., & Avolio, B. J. (1993). Transformational leadership and organizational culture. Public Administration Quarterly, 17(1), 112–121.
Beal, D. J., & Weiss, H. M. (2003). Methods of ecological momentary assessment in organizational research. Organizational Research Methods, 6(4), 440–464. https://doi.org/10.1177/1094428103257361
Boker, S. M., & Nesselroade, J. R. (2002). A method for modeling the intrinsic dynamics of intraindividual variability: Recovering the parameters of simulated oscillators in multi-wave panel data. Multivariate Behavioral Research, 37(1), 127–160. https://doi.org/10.1207/S15327906MBR3701_06
Bolger, N., Davis, A., & Rafaeli, E. (2003). Diary methods: Capturing life as it is lived. Annual Review of Psychology, 54(1), 579–616. https://doi.org/10.1146/annurev.psych.54.101601.145030
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Erlbaum.
Cole, D. A., & Maxwell, S. E. (2003). Testing mediational models with longitudinal data: Questions and tips in the use of structural equation modeling. Journal of Abnormal Psychology, 112(4), 558–577. https://doi.org/10.1037/0021-843X.112.4.558
Cole, D. A., & Maxwell, S. E. (2009). Statistical methods for risk-outcome research: Being sensitive to longitudinal structure. Annual Review of Clinical Psychology, 5, 71–96. https://doi.org/10.1146/annurev-clinpsy-060508-130357
Collins, L. M. (2006). Analysis of longitudinal data: The integration of theoretical model, temporal design, and statistical model. Annual Review of Psychology, 57, 505–528. http://doi.org/10.1146/annurev.psych.57.102904.190146
de Lange, A. H., Taris, T. W., Kompier, M. A. J., Houtman, I. L. D., & Bongers, P. M. (2003). "The very best of the millennium": Longitudinal research and the demand-control-(support) model. Journal of Occupational Health Psychology, 8(4), 282–305. https://doi.org/10.1037/1076-8998.8.4.282
Deci, E. L., & Ryan, R. M. (2012). Self-determination theory. In van Lange, P. A., Kruglanski, A. W., & Higgins, E. T. (Eds.), Handbook of theories of social psychology: Volume 1 (pp. 416–436). Sage Publications Ltd. http://doi.org/10.4135/9781446249215.n21
Dormann, C., & Griffin, M. A. (2015). Optimal time lags in panel studies. Psychological Methods, 20, 489–505. http://doi.org/10.1037/met0000041
Dormann, C., & van de Ven, B. (2014). Timing in methods for studying psychosocial factors at work. In Dollard, M., Shimazu, A., Nordin, R. B., Brough, P., & Tuckey, M. (Eds.), Psychosocial factors at work in the Asia Pacific (pp. 89–116). Springer. http://doi.org/10.1007/978-94-017-8975-2_4
Dormann, C., Guthier, C., & Cortina, J. M. (2020). Introducing Continuous Time Meta-Analysis (CoTiMA). Organizational Research Methods, 23(4), 620–650. https://doi.org/10.1177/1094428119847277
Dwyer, J. E. (1983). Statistical models for the social and behavioral sciences. Oxford University Press.
Fisher, C. D., & To, M. L. (2012). Using experience sampling methodology in organizational behavior. Journal of Organizational Behavior, 33(7), 865–877. https://doi.org/10.1002/job.1803
George, J. M., & Jones, G. R. (2000). The role of time in theory and theory building. Journal of Management, 26(4), 657–684. http://doi.org/10.1177/014920630002600404
Gorgievski-Duijvesteijn, M. J., Bakker, A. B., Schaufeli, W. B., & van der Heijden, P. G. M. (2005). Finances and well-being: A dynamic equilibrium model of resources. Journal of Occupational Health Psychology, 10(3), 210–224. https://doi.org/10.1037/1076-8998.10.3.210
Griep, Y., & Hansen, S. D. (Eds.). (2020). Handbook on the temporal dynamics of organizational behavior. Edward Elgar. https://doi.org/10.4337/9781788974387
Griep, Y., & Zacher, H. (2021). Temporal dynamics in organizational psychology. In Oxford Research Encyclopedia of Psychology. Oxford University Press. https://doi.org/10.1093/acrefore/9780190236557.013.32
Griep, Y., Germeys, L., & Kraak, J. M. (2021). Unpacking the relationship between organizational citizenship behavior and counterproductive work behavior: Moral licensing and temporal focus. Group & Organization Management. Advance online publication. https://doi.org/10.1177/1059601121995366
Griep, Y., Vantilborgh, T., Baillien, E., & Pepermans, R. (2016). The mitigating role of leader–member exchange when perceiving psychological contract violation: A diary survey study among volunteers. European Journal of Work and Organizational Psychology, 25(2), 254–271. https://doi.org/10.1080/1359432X.2015.1046048
Halbesleben, J. R. (2010). The role of exhaustion and workarounds in predicting occupational injuries: A cross-lagged panel study of health care professionals. Journal of Occupational Health Psychology, 15(1), 1–16. https://doi.org/10.1037/a0017634
Kelley, H. H. (1973). The processes of causal attribution. American Psychologist, 28(2), 107–128. https://doi.org/10.1037/h0034225
Kinnunen, M.-L., Kokkonen, M., Kaprio, J., & Pulkkinen, L. (2005). The associations of emotion regulation and dysregulation with the metabolic syndrome factor. Journal of Psychosomatic Research, 58(6), 513–521. https://doi.org/10.1016/j.jpsychores.2005.02.004
Klumb, P., Elfering, A., & Herre, C. (2009). Ambulatory assessment in industrial/organizational psychology: Fruitful examples and methodological issues. European Psychologist, 14(2), 120–131. https://doi.org/10.1027/1016-9040.14.2.120
Locke, E. A., & Latham, G. P. (1990). A theory of goal setting & task performance. Prentice Hall.
Mitchell, T. R., & James, L. R. (2001). Building better theory: Time and the specification of when things happen. Academy of Management Review, 26(4), 530–547. http://doi.org/10.5465/amr.2001.5393889
Ohly, S., Sonnentag, S., Niessen, C., & Zapf, D. (2010). Diary studies in organizational research. Journal of Personnel Psychology, 9(2), 79–93. https://doi.org/10.1027/1866-5888/a000009
Oldham, G. R., & Hackman, J. R. (2010). Not what it was and not what it will be: The future of job design research. Journal of Organizational Behavior, 31(2–3), 463–479. http://doi.org/10.1002/job.678
Paez, D., Delfino, G., Vargas-Salfate, S., Liu, J. H., Gil De Zúñiga, H., Khan, S., & Garaigordobil, M. (2020). A longitudinal study of the effects of internet use on subjective well-being. Media Psychology, 23(5), 676–710. https://doi.org/10.1080/15213269.2019.1624177
Sacco, J. M., & Schmitt, N. (2005). A dynamic multilevel model of demographic diversity and misfit effects. Journal of Applied Psychology, 90(2), 203–231. https://doi.org/10.1037/0021-9010.90.2.203
Selig, J. P., & Little, T. D. (2012). Autoregressive and cross-lagged panel analysis for longitudinal data. In Laursen, B., Little, T. D., & Card, N. A. (Eds.), Handbook of developmental research methods (pp. 265–278). The Guilford Press.
Spector, P. E., & Meier, L. L. (2014). Methodologies for the study of organizational behavior processes: How to find your keys in the dark. Journal of Organizational Behavior, 35(8), 1109–1119. https://doi.org/10.1002/job.1966
Tsaousis, I., & Nikolaou, I. E. (2001). The stability of the five-factor model of personality in personnel selection and assessment in Greece. International Journal of Selection and Assessment, 9(4), 290–301. http://doi.org/10.1111/1468-2389.00181
van Eerde, W., Holman, D., & Totterdell, P. (2005). Editorial. Journal of Occupational and Organizational Psychology, 78(2), 151–154. https://doi.org/10.1348/096317905X40826
Voelkle, M. C., Oud, J. H. L., Davidov, E., & Schmidt, P. (2012). An SEM approach to continuous time modeling of panel data: Relating authoritarianism and anomia. Psychological Methods, 17, 176–192. https://doi.org/10.1037/a0027543
Vranjes, I., Baillien, E., Vandebosch, H., Erreygers, S., & De Witte, H. (2018). Kicking someone in cyberspace when they are down: Testing the role of stressor evoked emotions on exposure to workplace cyberbullying. Work & Stress, 32(4), 379–399. https://doi.org/10.1080/02678373.2018.1437233
Vroom, V. H. (1964). Work and motivation. Wiley.
Zacher, H., & Rudolph, C. W. (2020). How a dynamic way of thinking can challenge existing knowledge in organizational behavior. In Griep, Y. & Hansen, S. D. (Eds.), Handbook on the temporal dynamics of organizational behavior (pp. 8–25). Edward Elgar. https://doi.org/10.4337/9781788974387
Zapf, D., Dormann, C., & Frese, M. (1996). Longitudinal studies in organizational stress research: A review of the literature with reference to methodological issues. Journal of Occupational Health Psychology, 1(2), 145–169. https://doi.org/10.1037/1076-8998.1.2.145