
The Short Maximization Inventory

Published online by Cambridge University Press:  01 January 2023

Michal Ďuriník
Affiliation:
Faculty of Economics and Administration, Masaryk University; Macquarie Graduate School of Management
Jakub Procházka
Affiliation:
Faculty of Social Studies and Faculty of Economics and Administration, Masaryk University
Hynek Cígler
Affiliation:
Faculty of Social Studies, Masaryk University

Abstract

We developed the Short Maximization Inventory (SMI) by shortening the Maximization Inventory (Turner, Rim, Betz & Nygren, 2012) from 34 items to 15 items. Using the Item Response Theory framework, we identified and removed the items of the Maximization Inventory that contributed least to the performance of the original scale. The construct validity of the SMI is similar to that of the full MI and is in line with predictions from the literature: the Satisficing subscale is positively related to the indices of well-being, while the Decision Difficulty and Alternative Search subscales are negatively related to well-being. The new scale retains the good psychometric properties of the original scale. Furthermore, its brevity allows researchers to use the scale in studies in which maximization is not the primary focus. Although the SMI, like the original MI, lacks a High Standards subscale, we believe that SMI is a step towards developing a reliable and conceptually sound measure of maximizing that can be used in various research designs.

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2018] This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

In economics and other social sciences, humans are often modeled as homo economicus. Homo economicus is an all-knowing individual who is flawless in calculating expected utilities from individual alternatives and choosing the one that provides the highest utility. Homo economicus maximizes. Simon (1955), however, argues that we are often unable to fulfill this goal of perfect optimization. Instead, we satisfice: we choose options that meet a certain threshold of acceptability. When our sub-perfect knowledge and abilities prevent us from opting for the best, we resort to choosing what is “good enough”.

Half a century later, Schwartz, Ward, Monterosso, Lyubomirsky, White and Lehman (2002) revisited Simon’s work and proposed maximizing to be a stable personality trait. According to Schwartz et al., each individual falls somewhere on a continuous scale between being a Maximizer (one who tries to find the best of all alternatives) and a Satisficer (one who is comfortable selecting a “good enough” alternative). Nenkov, Morrin, Ward, Hulland and Schwartz (2008) later proposed that maximizing has three dimensions: Decision Difficulty (the extent to which one experiences difficulty selecting from a range of options), Alternative Search (the tendency to exert effort and time exploring available alternatives) and High Standards (the tendency to hold high standards for oneself and one’s choices). Recently, Cheek and Schwartz (2016) proposed maximizing to have two components: the goal of choosing the best and the strategy of alternative search.

Extensive literature has found that maximizers are more likely than satisficers to report low self-esteem (Schwartz et al., 2002) and less likely to feel happy (Larsen & McKibban, 2008; Polman, 2010; Schwartz et al., 2002) and to be satisfied with their lives (Dahling & Thompson, 2012; Schwartz et al., 2002). Maximization has also been linked to depression (Schwartz et al., 2002), regret (Moyano-Díaz, Martínez-Molina & Ponce, 2014; Schwartz et al., 2002), ruminating on past events (Paivandy, Bullock, Reardon & Kelly, 2008) and other maladaptive traits and behaviors (see Cheek & Schwartz, 2016, for a more extensive list).

The results mentioned above were collected using the 13-item unidimensional Maximization Scale (MS; Schwartz et al., 2002). MS is the first and most widely used maximization measure, but it has received significant criticism. An item response analysis conducted by Rim, Turner, Betz and Nygren (2011), together with previous classical test theory analyses (e.g., Diab, Gillespie & Highhouse, 2008; Giacopelli, Simpson, Dalal, Randolph & Holland, 2013; Nenkov et al., 2008), found MS to have poor psychometric properties. The main points of criticism towards MS are its composite scoring (although analyses indicate it possesses a three-factor structure of Alternative Search, Decision Difficulty, and High Standards); weak internal consistency (Cronbach’s alpha at the lower bound of acceptability for use in research); and the presence of items that are either conceptually too distant from the construct of maximizing (e.g., “I often fantasize about living in ways that are quite different from my actual life”) or focus on overly specific behaviors (e.g., “Renting videos is really difficult. I’m always struggling to pick the best one”). In addition, Rim et al. (2011) note that satisficing in MS is measured only indirectly, through a lack of maximizing, and they argue that a direct measure of satisficing could be a useful contribution.

Since the publication of MS (Schwartz et al., 2002), the scale has been shortened (Nenkov et al., 2008) and modified (Lai, 2010; Weinhardt, Morse, Chimeli & Fisher, 2012), and new scales to measure maximization have been developed (Diab et al., 2008; Misuraca, Faraci, Gangemi, Carmeci & Miceli, 2015; Turner et al., 2012). See Cheek and Schwartz (2016) for a list and discussion of the existing maximization scales. Some authors, notably Diab, Gillespie, and Highhouse (2008) and Giacopelli, Simpson, Dalal, Randolph, and Holland (2013), note that various measures of maximizing yield different correlations with indices of well-being, indicating that the scale selection is likely to influence the results observed in a study.

Following Rim et al.’s (2011) analyses, Turner et al. (2012) developed the 34-item Maximization Inventory (MI). This relatively new scale has been used by a number of researchers since its publication (e.g., Djulbegovic et al., 2014; Miller, 2014; Moyano-Díaz et al., 2014; Patalano, Weizenbaum, Lolli & Anderson, 2015; Rim, 2017; Rogge, 2016; Sharif & Spiller, 2014).

MI is the first scale to measure satisficing directly, as a separate subscale, instead of indirectly through low maximizing scores. Weinhardt et al. (2012) highlight the presence of the Satisficing scale as an important advancement, as “the data do not support the assumption that maximizing and satisficing are on opposite ends of a continuum and therefore developing a satisficing measure is extremely important” (p. 655). Cheek and Schwartz (2016) acknowledge the possible benefits of measuring satisficing directly, but challenge the content validity and face validity of MI’s Satisficing subscale. They suggest that, although the subscale shows internal consistency, some of its items appear to relate to constructs other than satisficing. The two other subscales of MI are Decision Difficulty and Alternative Search.

As reported by Turner et al. (2012), Decision Difficulty was correlated with negative indices of well-being, while Alternative Search was unrelated to them. Meanwhile, Satisficing was associated with adaptive decision making and good mental health indices (Turner et al., 2012). The psychometric properties of MI were shown by its authors to be superior to those of MS, using both classical test theory and item response analysis. Weinhardt et al. (2012) note the use of general statements in MI as a significant advantage over MS, which uses specific statements.

Another maximization scale, the Maximization Tendency Scale (MTS; Diab et al., 2008), consists mostly of High Standards items. As Weinhardt et al. (2012) propose, MI should be seen as a measure of maximization behavior, while MTS is a measure of the maximization goal (Cheek & Schwartz, 2016).

A High Standards subscale, which is a standard component of other maximization-related scales, is not present in MI. High Standards (HS) items were present in the original pool of items, and an HS subscale was considered for MI. However, both exploratory and confirmatory factor analysis, together with IRT, failed to provide support for High Standards as a separate factor (Turner et al., 2012). In their recent review of maximization measures (published after our analysis was conducted), Cheek and Schwartz (2016) point out that MI does not contain a High Standards dimension (p. 132). However, later in their review, they argue that “it is not actually having high standards that defines the goal of maximization” (p. 135), as Satisficers can also have high standards.Footnote 1 Having high standards is essential to maximizing, but is not exclusive to it. Rather than having high standards, Cheek and Schwartz define maximizing through the desire to choose the best option, the “maximum”. We acknowledge that MI (and consequently SMI) lacks a measure of this maximization goal, yet we see MI (especially its Alternative Search subscale) as a useful measure of behavior relevant to the goal of maximizing.

Cheek and Schwartz (2016) propose a two-component model of maximization, distinguishing between the maximization goal (choosing the best) and the maximization strategy (extensive alternative search).Footnote 2 To measure the maximization goal, they recommend Dalal, Diab, Zhu and Hwang’s (2015) 7-Item Maximization Tendency Scale, as it has good psychometric properties and focuses on the goal of choosing the best. To measure the maximization strategy, Cheek and Schwartz tentatively recommend the use of MI’s Alternative Search subscale. However, they encourage further refinement of this measure by future researchers. In this paper, we contribute to such refinement.

Turner et al. (2012) report satisfactory psychometric properties of the overall MI model with three subscales (Cronbach’s alphas ≥ 0.73; RMSEA = 0.063). Upon closer inspection, however, some MI items display low factor loadings: Turner et al. (2012) report λ < 0.3 for items 5, 7 and 9 of the Satisficing scale, and λ ≤ 0.4 for 13 of the 34 items in total. Applying classical test theory criteria to MI using the data reported by Turner et al. (2012) is a challenging task, as some important statistics are absent (e.g., CFA chi-squared and CFI/TLI statistics). Item response theory (IRT) analysis can provide more insight into individual item performance, and Turner et al. (2012) present some IRT analysis results in their report. The item discrimination parameter for item 24 is 0.59 (according to Baker, 2001, discriminability lower than 0.65 is considered low). For items 5, 7, 9, 15, 17 and 21, the item discrimination parameters are lower than 0.9. In total, Turner et al. (2012) report item discrimination parameters lower than 1.0 for 12 items of MI. Items low in this parameter have flatter item information curves and, relative to items high in this parameter, contribute poorly to the total test information. They do still add to the total test information and thus lower the errors of latent trait estimates. At the same time, however, these items also influence (usually increase) the variance of estimated latent traits and can thus decrease the test reliability.Footnote 3 Additionally, Moyano-Díaz et al. (2014), who used (a Spanish translation of) MI in their research, reported poor performance of the Satisficing subscale. The internal consistency of the subscale was low (Cronbach’s alpha = 0.64) and the authors suggested a two-factor solution for this subscale. They also noted that the meanings of some Satisficing items overlap with other dimensions of MI.

The three subscales of MI contain a total of 34 items. While a scale of this length is perfectly acceptable for studies in which maximizing is the focal construct, its size might discourage researchers from using MI as a supplementary measure. When researchers compose a battery of scales to measure several different constructs, they face a trade-off between brevity and better psychometric properties. We believe that one of the reasons for the Maximization Scale’s (Schwartz et al., 2002) popularity is its conciseness and ease of use.

Based on these indices, we conjecture that an appropriate shortening of the Maximization Inventory might produce a scale that is concise, creates a much smaller burden on participants and provides results which are as reliable and valid as those from the original scale. Furthermore, developing a short version of MI is an opportunity to flag and remove problematic items, should any be identified, resulting in higher-quality measurement per item.

Turner et al. (2012) conducted multiple studies on MI, but all of them used samples consisting of undergraduate students enrolled in a psychology course. Such samples differ from the general population in terms of age distribution, intelligence, and academic achievement. Moreover, some items may display lower discriminability because of the lower response variability in a homogeneous sample. Examining MI’s psychometric properties with a different and more heterogeneous sample is thus desirable. This paper contributes by administering MI to a diverse sample of subjects (aged 18 to 83, with education levels ranging from elementary to postgraduate).

In addition, by recruiting subjects from the Czech Republic, this paper expands maximization research to a new cultural environment. So far, maximization has been studied in the U.S. (e.g., Rim et al., 2011; Schwartz et al., 2002; Turner et al., 2012), Italy (Misuraca et al., 2015), Norway (Lai, 2010), the Netherlands, Belgium, China (Roets, Schwartz & Guan, 2012) and Chile (Moyano-Díaz et al., 2014). Roets et al. (2012) found in their cross-cultural study that maximizers in the U.S. and Western Europe report lower well-being than satisficers. In China, a collectivist culture with a strong long-term orientation (Hofstede, 2016) where choice is not as abundant as in the U.S. and Western Europe, the relationship between maximization and well-being was not significant. Compared to the U.S. (Hofstede, 2016), Czech culture is higher in uncertainty avoidance and long-term orientation and is lower in individualism. These differences, together with the fact that the Czech nation faced limited (both consumer and political) choice opportunities under the communist regime, might be reflected in Czechs’ decision-making and well-being correlates. Following previous research on maximizing, we use the well-being indices of Happiness (Lyubomirsky & Lepper, 1999), Optimism (Scheier, Carver & Bridges, 1994), Self-Efficacy (Schwarzer & Jerusalem, 1995) and Regret (Schwartz et al., 2002). Although it is not our main motivation, this research provides the opportunity to investigate whether maximizing has the same correlates and factor structure in the Czech sample as in the U.S. sample. The primary focus of the correlation analysis is to provide evidence for the construct validity of the Short Maximization Inventory.

In this paper, we first analyze the Maximization Inventory as administered to the Czech sample. We replicate the vast majority of the psychometric properties and well-being correlations of MI reported by Turner et al. (2012). We also report more complete statistics for the individual items of MI. We replicate the three-factor structure proposed in Turner et al. (2012); however, using classical test theory and item response theory, we find multiple items with sub-standard properties. We proceed to develop a short version of MI, following the goal of creating a concise scale with solid psychometric properties. Our main criteria were the overall fit of the model and the exclusion of items that did not substantially contribute towards the model’s good properties. Using a different set of participants, we then demonstrate the favorable psychometric properties of the new scale.

By removing poorly performing items and refining both the Alternative Search and Satisficing subscales, we partially address the suggestions offered by Cheek and Schwartz (2016) and Weinhardt et al. (2012). The resulting Short Maximization Inventory is a compact yet powerful measurement tool that might benefit the whole field, as it facilitates further research on maximizing.

2 Part 1: Development of the Short Maximization Inventory

The purpose of this study was to assess the psychometric properties of the Maximization Inventory (Turner et al., 2012) and to develop a shortened version of MI.

2.1 Method

2.1.1 Scale translation

With the permission of one of its authors, the Maximization Inventory was first translated into Czech following guidelines proposed by Beaton, Bombardier, Guillemin and Ferraz (2000). The process included three independent translations, back-translations and an expert committee assessment. As an additional step, two think-aloud cognitive interviews and two concurrent verbal probing cognitive interviews (Willis, 1999) were conducted to ensure that the items were clear and easy to comprehend. Finally, 12 Masaryk University students participated in the online pilot testing of the translated scale and reported no difficulties understanding and responding to the items. The translation and adaptation of the scale into Czech is described in Ďuriník (2016).

2.1.2 Participants

A total of 902 adult individuals participated in this study. Originally, 913 responses were collected. After screening the raw data for suspicious answer patterns (e.g., 1-2-3-4-5-1-2-3-4-5), too-short response times (less than one second per item) and invalid responses (e.g., a reported age of 11,000 years), the responses from 11 participants were removed.
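For readers who want to implement a comparable screening step, a minimal R sketch is given below. The column names (item responses i1–i34, total completion time rt_total in seconds, and age) and the specific cut-offs are illustrative assumptions, not the authors' actual code.

```r
# Hypothetical raw data frame 'raw' with item columns i1-i34, total
# completion time in seconds ('rt_total') and self-reported 'age';
# these names and cut-offs are assumptions for illustration only.
items <- paste0("i", 1:34)

too_fast <- raw$rt_total / length(items) < 1   # under one second per item
bad_age  <- raw$age < 18 | raw$age > 110       # implausible age reports
flat     <- apply(raw[, items], 1, sd) == 0    # same answer to every item
                                               # (crude; repeating patterns such as
                                               # 1-2-3-4-5 need a separate check)

clean <- raw[!(too_fast | bad_age | flat), ]
nrow(clean)                                    # respondents retained after screening
```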

A total of 77 Masaryk University students completed the scale after being invited to do so via e-mail. A further 835 members of the general public were recruited via www.vyplnto.cz, an online platform for survey participant recruitment. Of the total sample, 29% were male and 71% were female. The mean age was 35.4 (SD = 13.62). Each respondent participated voluntarily, and no reward was promised or given for participation.

We randomly assigned approximately two-thirds of the respondents (see the online supplement for the code) to Data Set 1 (n = 603; 66.9 %) for exploratory purposes; the rest of the respondents formed Data Set 2 (n = 299; 33.1%).
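A minimal illustration of such a random split is sketched below; the authors' actual code is available in the online supplement, and the seed and object names here are arbitrary.

```r
# Minimal illustration of the random two-thirds / one-third assignment
# (the authors' actual code is in the online supplement; seed is arbitrary).
set.seed(1)
n   <- nrow(clean)                             # 902 respondents
idx <- sample(seq_len(n), size = round(2 / 3 * n))

data_set_1 <- clean[idx, ]                     # exploratory sample (roughly two thirds)
data_set_2 <- clean[-idx, ]                    # cross-validation sample (remaining third)
```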

2.1.3 Procedures

Participants rated their degree of agreement with 34 items of the Maximization Inventory on a standard 5-point scale with anchors (1 = Strongly Disagree; 5 = Strongly Agree). Next, four other scales were administered (see Part 2 of this paper). With Data Set 1, we assessed the performance of the 34-item Maximization Inventory and developed the shortened version. With Data Set 2, we verified the factor structure of the shortened scale.

2.1.4 Data Analysis

All the analyses were carried out using the R environment (R Core Team, 2017). We worked under the Item Response Theory parametrization: as the measurement model, we used the confirmatory multidimensional Graded Response Model fitted using the mirt package (Chalmers, 2012). Model fit was evaluated using the M2* statistic (Maydeu-Olivares & Joe, 2006) with collapsing over response categories (Cai & Hansen, 2013). This allowed us to see whether the proposed model (three dimensions with each item loading on just one factor) describes the observed data sufficiently well.Footnote 4 We inspected the standardized residual matrices and p-values for local dependencies using the LDG2 statistic (Chen & Thissen, 1997). LDG2 is based on a bivariate table of predicted and observed item response frequencies. A significant p-value (e.g., below 0.05) associated with the LDG2 statistic suggests a local dependence of two items that is not predicted by the IRT model. As the LDG2 statistic is chi-squared distributed, the effect size of residual relations can be expressed using Cramer’s V, as in other chi-squared tests.
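A rough sketch of this model in the mirt package is shown below. The item grouping follows the subscale numbering given in Footnote 6, while the object names and estimation defaults are our own assumptions rather than the authors' exact settings.

```r
library(mirt)

# Confirmatory three-factor graded response model: each item loads on one
# factor only; the three factors are allowed to correlate.
mi_model <- mirt.model("
  S  = 1-10
  DD = 11-22
  AS = 23-34
  COV = S*DD, S*AS, DD*AS")

fit_mi <- mirt(data_set_1[, items], mi_model, itemtype = "graded")

# Limited-information overall fit (M2 family): returns RMSEA, TLI/CFI, SRMSR.
M2(fit_mi)

# Pairwise local dependence (LD-G2 of Chen & Thissen, 1997); large values
# flag item pairs whose association is not explained by the three factors.
residuals(fit_mi, type = "LDG2")
```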

The signed chi-squared test (S-X2; Orlando & Thissen, 2000), which is also based on the difference between observed and predicted response frequencies, was used as an item fit statistic. Significant values of S-X2 suggest that the observed responses to a particular item do not comply with the IRT model. Reliability was estimated using latent trait estimates and their associated standard errors; this is the reliability of the latent trait estimates under the IRT model. We also used Cronbach’s alpha for item sums under classical test theory, as in the original study (Turner et al., 2012).
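Continuing the sketch above, the item fit and the two kinds of reliability estimates could be obtained roughly as follows; this is an illustration under the same assumed object names, not the authors' code (the psych package is assumed for alpha).

```r
# Item-level fit: signed chi-squared statistic S-X2 (Orlando & Thissen, 2000).
itemfit(fit_mi, fit_stats = "S_X2")

# IRT reliability of the latent trait estimates: EAP scores with standard
# errors, then rxx = VAR(EAP) / (VAR(EAP) + MSE) per factor.
theta <- fscores(fit_mi, method = "EAP", full.scores.SE = TRUE)
empirical_rxx(theta)

# Classical-test-theory alpha for comparability with Turner et al. (2012),
# here for the 10 Satisficing items.
psych::alpha(data_set_1[, items[1:10]])
```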

2.2 Results

First, we used Data Set 1 to fit the multidimensional graded response model. The model fitted the data well, M2* = 1303.5, df = 422, p < 0.001; RMSEA = 0.059 with 95% CI [0.055, 0.063], TLI = 0.932, SRMSR = 0.083.Footnote 5 Item discrimination parameters and item fit are shown in Table 1.

Table 1: Maximization Inventory items – results of Item Response Theory analysis. N=603

Note: Factor correlations: S–DD r = −0.312, p < .001; S–AS r = 0.119, p < .01; DD–AS r = 0.223, p < .001.

rxx is the IRT latent trait estimation reliability (Kim & Feldt, 2010).

Reliability estimates using Cronbach’s alpha were similar to those of the original study by Turner et al. (2012; reported as α original in Table 1). The IRT reliability estimates were higher for all three subscales, as reported in Table 1.

Although the model had a good fit, there was a substantial number of locally dependent items. The LDG2 test was significant at p < 0.01 for 239 item pairs (43%), of which 146 pairs (26%) were more dependent than one could expect based on the model. This suggests that the responses to many pairs of items are driven not only by the three measured dimensions, but to a small extent also by another hidden factor, such as item wording or other unmeasured traits.

Items 2, 6, 8 and 10Footnote 6 had high skewness and kurtosis and high mean raw scores (above 4 on a scale ranging from 1 to 5), which led to very high thresholds (especially the d4 threshold between responses 4 and 5). All of these items are general statements about the nature of life and decision makingFootnote 7 and do not refer to specific decision-making situations in life. Judging by their content, it is easy to understand why most respondents chose extreme values of 4 or 5 when responding to these items. We flagged these as potentially problematic; with these items, respondents tend to select the highest values available, and the items thus have low discrimination ability or low item information.

Items 13, 28 and 31 did not fit the IRT model at p < 0.05; however, the actual discrepancies were small. Items 7–10 and 21–24 had small discrimination parameters (below 1.0). The discrimination parameter of item 10 was not significantly different from 0 (95% CI = [−0.055, 0.313]). This means that this item does not significantly discriminate between people with higher and lower levels of the satisficing trait.

A residual matrix inspection revealed the tendency of item 10 (Satisficing subscale) to have a high residual correlation with items from the Decision Difficulty factor (Cramer’s V > 0.12, Md = 0.14), as well as the high residual correlation of item 5 (Satisficing subscale) with items from the Alternative Search factor (Cramer’s V > 0.10, Md = 0.14). As the first part of item 5 is essentially a definition of alternative search,Footnote 8 this was not surprising. We also found high correlated residuals between items 25 and 26 (Cramer’s V = 0.26) and between items 23 and 5Footnote 9 (Cramer’s V = 0.24). These pairs of items are essentially re-wordings of each other and artificially inflate the measured model fit. Regardless of the calculated psychometric properties, we consider it redundant to include two items that ask the same question. Other major inter-item correlations not explained by the factor were between items 29 and 30 (V = 0.20), 23 and 31 (V = 0.19), 16 and 18 (V = 0.15), 7 and 8 (V = 0.16), and 15 and 16 (V = 0.15).

The results presented in this section provide strong support for our original conjecture: Maximization Inventory could benefit from having its poorly performing items removed. The newly developed Short Maximization Inventory has the potential to display psychometric properties at least as good as those of the original MI, with the added benefit of greater conciseness.

We removed the problematic items and kept the best items in terms of discrimination ability, factor loadings, and correlated residuals. Based on the criterion of very low discrimination ability, we removed items 7–8, 10, 21 and 23. Item 2 was excluded based on its low difficulty and thus small item information. Other items were excluded based on dual loadings, sometimes combined with small discrimination parameters.

This led to a final solution with three factors of five items each. This shortened scale fit the data from Data Set 1 very well, M2* = 85.6, df = 42, p < 0.001; RMSEA = 0.042 with 95% CI [0.029, 0.054], TLI = 0.979, SRMSR = 0.047. We cross-validated this model on Data Set 2, where the fit was excellent as well, M2* = 69.5, df = 42, p = 0.005; RMSEA = 0.047 with 95% CI [0.026, 0.067], TLI = 0.971, SRMSR = 0.061.

We then performed a series of multigroup IRT analyses to test scale invariance. The results in Table 2 indicate that there are no significant differences between Data Set 1 and Data Set 2: constraining the parameters to be equal across the two data sets did not significantly worsen the model. Furthermore, the more constrained model had better fit statistics (BICFootnote 10, TLI, RMSEA) than the less constrained models. Therefore, we used all the data from Data Sets 1 and 2 for subsequent analyses. We refer to this scale as the Short Maximization Inventory (SMI). A list of all 15 items is presented in Table 3.
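A sketch of such a multigroup invariance comparison with mirt::multipleGroup is given below; smi_items is a hypothetical vector naming the 15 retained items, assumed to be ordered as 5 Satisficing, 5 Decision Difficulty, and 5 Alternative Search items.

```r
# Multigroup invariance sketch: a freely estimated model vs. a model with
# item parameters constrained equal across the two data sets.
# 'smi_items' is a hypothetical vector naming the 15 retained items.
grp <- factor(c(rep("set1", nrow(data_set_1)), rep("set2", nrow(data_set_2))))
smi <- rbind(data_set_1[, smi_items], data_set_2[, smi_items])

smi_model <- mirt.model("
  S  = 1-5
  DD = 6-10
  AS = 11-15
  COV = S*DD, S*AS, DD*AS")

fit_free  <- multipleGroup(smi, smi_model, group = grp, itemtype = "graded")
fit_equal <- multipleGroup(smi, smi_model, group = grp, itemtype = "graded",
                           invariance = c("slopes", "intercepts",
                                          "free_means", "free_var"))

anova(fit_free, fit_equal)   # likelihood-ratio test plus AIC/BIC comparison
```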

Table 2: The Short Maximization Inventory model fit statistics

Table 3: Short Maximization Inventory items

All SMI items, except for item 11, have discrimination parameters greater than 1. The IRT model parameters of the Short Maximization Inventory are presented in Table 4. Three items (2, 6 and 10) differ from the Graded Response Model significantly at p < 0.05; however, the total model fit is very good, as one can see in Table 2. Furthermore, the shortened version of the scale no longer displays a significant correlation between the Alternative Search and Satisficing subscales. The construct validity is the same as for the full inventory. Correlations of latent trait estimates for the whole sample (merged Data Sets 1 and 2) between the Maximization Inventory and the Short Maximization Inventory are quite high (Satisficing r = 0.944, Decision Difficulty r = 0.937 and Alternative Search r = 0.950, all p < 0.001).

Table 4: Short Maximization Inventory parameters of the multidimensional IRT model: discrimination parameters, thresholds, item fit, and reliabilities. N=902

Note: Factor correlations: S–DD r = −0.397, p < .001; S–AS r = 0.059, n.s.; DD–AS r = 0.249, p < .001. rxx = IRT latent trait estimation reliability; ω = Raykov’s omega; α = Cronbach’s alpha.

We also estimated the reliability for these three scales using IRT reliability based on latent trait estimates and their associated errors of estimation, and using conventional Cronbach’s alpha to ensure comparability with previous research. Furthermore, we also used Raykov’s omega from an ordinal confirmatory factor analysis,Footnote 11 which provided results similar to the multidimensional IRT. Raykov’s omega can be understood as the squared correlation between the sum of items and the latent trait. Researchers who wish to use IRT latent trait scores should use the rxx estimates from Table 4. Researchers who wish to work with raw scores (e.g., sums or means of items) should use Raykov’s omegas (ω coefficients in Table 4), since Cronbach’s alpha slightly underestimates the true reliability because it assumes tau-equivalence and interval-scaled items.
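As an illustration of how omega-type reliability can be obtained from an ordinal CFA (see also Footnote 11), a lavaan/semTools sketch follows; smi_df is a hypothetical data frame holding only the 15 SMI items, and the item names are placeholders.

```r
library(lavaan)
library(semTools)

# Ordinal CFA of the three five-item SMI subscales; 'smi_df' and the item
# names are hypothetical placeholders for the 15 retained items.
smi_cfa <- "
  S  =~ s1 + s2 + s3 + s4 + s5
  DD =~ d1 + d2 + d3 + d4 + d5
  AS =~ a1 + a2 + a3 + a4 + a5
"
fit_cfa <- cfa(smi_cfa, data = smi_df, ordered = colnames(smi_df),
               estimator = "WLSMV")

# Omega-type and alpha reliability per factor from the ordinal CFA
# (newer semTools versions offer compRelSEM() as a replacement).
reliability(fit_cfa)
```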

3 Part 2: Correlation study

The purpose of these analyses was to provide evidence about the construct validity of the Short Maximization Inventory (SMI). We correlated the SMI scales with measures of constructs that should be, according to the theory, related to maximization dimensions. We also correlated the SMI scales with full MI scales to show that the short scales provide results similar to those of the original scales.

3.1 Method

3.1.1 Measures, Participants, Procedures

Maximization

The Maximization Inventory (Turner et al., 2012) consists of three subscales (number of items): Satisficing (10), Decision Difficulty (12) and Alternative Search (12). The Short Maximization Inventory, presented in Table 3, consists of three subscales (number of items): Satisficing (5), Decision Difficulty (5) and Alternative Search (5). SMI responses were obtained by extracting responses to the respective items of the MI.

Self-Efficacy

To assess self-efficacy, we used the General Self-Efficacy Scale (Schwarzer & Jerusalem, 1995) translated and validated by Křivohlavý, Schwarzer and Jerusalem (1993). This 10-item self-report scale is intended to measure a general sense of perceived self-efficacy. Responses are indicated on a 4-point scale ranging from “not true at all” to “exactly true”. Schwarzer and Jerusalem report Cronbach’s alphas ranging from 0.76 to 0.90 over samples from 23 nations. In our sample, Cronbach’s alpha was 0.90.

Happiness

To measure subjective happiness, we used the Subjective Happiness Scale (Lyubomirsky & Lepper, 1999) in a translation developed by Kresanová (2015). The scale consists of 4 items, with the fourth item reverse-scored. Responses are obtained on 7-point scales with item-specific anchors. Lyubomirsky and Lepper report Cronbach’s alphas ranging from 0.79 to 0.94 across 14 samples. In our sample, Cronbach’s alpha was 0.83.

Optimism

To measure optimism, we used the Life Orientation Test – Revised (Scheier et al., 1994) as translated by Bek (2007). This ten-item scale contains four filler items that are not scored and six scored items, of which three are reverse-scored. Responses are indicated on a five-point scale ranging from “Strongly agree” to “Strongly disagree”. Scheier et al. report a Cronbach’s alpha of 0.78; in our sample, Cronbach’s alpha was 0.86.

Regret

To measure regret, we used the Regret Scale (Schwartz et al., 2002). This scale consists of five items, one of which is reverse-scored. Participants respond to items on a 7-point scale (1 = completely disagree, 7 = completely agree). We developed our own translation of the scale via independent translations, back-translation, and expert committee discussion. Schwartz et al. report a Cronbach’s alpha of 0.67; in our sample, it was 0.77.

A total of 902 participants were recruited online. The sample is the same sample used in Study 1. After taking the Maximization Inventory, participants were administered the Life Orientation Test — Revised, General Self-Efficacy Scale, Subjective Happiness Scale and Regret Scale.

3.1.2 Data analysis

First, we analyzed raw scores, defined as the sums of items. Then, we performed ordinal confirmatory factor analysis (CFA).Footnote 12 We performed unidimensional CFA for each scale of the Maximization Inventory to check the structure of each scale. Then we performed a multidimensional CFA for all these scales and for the full and the shortened version of MI. Reliabilities were estimated using Revelle’s omega. This measure outperforms Cronbach’s alpha as it does not assume tau-equivalent items (the same factor loadings for all items).

We used standard fit statistics with conventional cut-off values.
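A sketch of one such unidimensional ordinal CFA, using the Subjective Happiness Scale as the example, is given below; the data frame survey_df and the item names shs1–shs4 are hypothetical placeholders, not the authors' variable names.

```r
# Unidimensional ordinal CFA for one validity scale, here the Subjective
# Happiness Scale; 'survey_df' and item names 'shs1'-'shs4' are hypothetical.
shs_model <- "happiness =~ shs1 + shs2 + shs3 + shs4"
fit_shs <- cfa(shs_model, data = survey_df,
               ordered = c("shs1", "shs2", "shs3", "shs4"),
               estimator = "WLSMV")

fitMeasures(fit_shs, c("chisq.scaled", "df.scaled", "pvalue.scaled",
                       "tli.scaled", "rmsea.scaled", "srmr"))
```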

3.2 Results

3.2.1 Raw scores

We investigated the relationship of the Short Maximization Inventory’s subscales to the original full-length subscales as well as to other measures. To do so, we summed the responses for each (sub)scale for each participant and then used Pearson correlations to assess the strength of the relationships. The descriptive statistics are reported in Table 5.
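A minimal sketch of this raw-score procedure follows; the item-name vectors are hypothetical placeholders, and reverse-scored items are assumed to have been recoded beforehand.

```r
# Raw-score sketch: sum each (sub)scale and correlate the short and full
# versions; the item-name vectors are hypothetical, and reverse-scored
# items are assumed to be recoded already.
sat_full  <- rowSums(survey_df[, mi_satisficing_items])    # 10 MI items
sat_short <- rowSums(survey_df[, smi_satisficing_items])   #  5 SMI items
happiness <- rowSums(survey_df[, shs_items])               #  4 SHS items

cor.test(sat_full, sat_short)    # full vs. short Satisficing
cor.test(sat_short, happiness)   # Satisficing vs. happiness
```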

Table 5: Descriptive statistics for scales used in Study 2

First, we examined the correlations of the original Maximization Inventory subscales with their corresponding shortened versions. In all three cases, the correlations are strong: Satisficing (r = 0.87, p < 0.01), Decision Difficulty (r = 0.93, p < 0.01) and Alternative Search (r = 0.94, p < 0.01). This indicates that the shortened MI scales measure the same constructs as the full scales.

The Satisficing scale of SMI was positively correlated with the indices of well-being: happiness (r = 0.53, p < 0.01), optimism (r = 0.56, p < 0.01) and self-efficacy (r = 0.61, p < 0.01). These findings are in line with the relationships reported by Turner et al. (2012), as well as with Schwartz’s (2007) and Schwartz et al.’s (2002) proposed relation of satisficing to individual well-being. Additionally, satisficing was moderately negatively related to regret (r = −0.26, p < 0.01).

The Decision Difficulty scale was negatively related to all three well-being indices: happiness (r = −0.45, p < 0.01), optimism (r = −0.44, p < 0.01) and self-efficacy (r = −0.51, p < 0.01). Decision Difficulty was positively related to regret (r = 0.57, p < 0.01). Turner et al. (2012) and Rim et al. (2011) report no significant relationship between decision difficulty and happiness. This inconsistency cannot be explained by the shortening of the scale, as the full-sized Decision Difficulty subscale administered to our sample was also negatively correlated with happiness (r = −0.44, p < 0.01). We argue that the different correlations are related to the nature of the samples used (U.S. vs. Czech sample). This argument is developed further in the Discussion section of this paper.

We found the Alternative Search scale of SMI to be weakly negatively related to happiness (r = −0.11, p < 0.01) and optimism (r = −0.14, p < 0.01), unrelated to self-efficacy (r = 0.03, p > 0.05) and weakly positively related to regret (r = 0.21, p < 0.01).

Table 6 provides the correlations of the SMI scales with each other, as well as with the well-being measures. The correlations for the full-sized 34-item MI we administered are provided in brackets. It is evident that the shortened scale correlates with measures of well-being similarly to the full scale. The correlations found are similar to those reported by Turner et al. (2012), with the exception of the relationship between Decision Difficulty and happiness, mentioned above. We did not compare the correlations with the Regret scale, as Turner et al. (2012) used the Decision-Making Style Inventory (Nygren & White, 2002) for regret assessment, whereas we used the Regret Scale (Schwartz et al., 2002).

Table 6: Correlations of SMI and MI with measures of well-being

3.2.2 Construct validity (latent traits)

The model fit for the Maximization Inventory was presented in Study 1 (there we presented the results of the IRT model; the fit of the ordinal CFA was similar). For all the other scales, the model fit the data well.

SHS:

χ2(2) = 20.97, p < 0.001, TLI = 0.987, RMSEA = 0.103 with 95% CI [0.066, 0.144], SRMR = 0.015. Although the RMSEA is very high, for CFAs with small degrees of freedom it is not a reliable indicator of fit (Kenny, Kaniskan & McCoach, 2015). Reliability was good, ω = 0.832.

LOT-R:

χ2(9) = 255.41, p < 0.001, TLI = 0.944, RMSEA = 0.174 (95% CI = [0.156, 0.193]), SRMR = 0.054. The same RMSEA issue applies here as for the SHS; reliability was good, ω = 0.872.

GSES:

χ2(35) = 513.55, p < 0.001, TLI = 0.939, RMSEA = 0.123 (95% CI = [0.114, 0.133]), SRMR = 0.055, with good reliability, ω = 0.901.

Regret:

χ2(5) = 247.7, p < 0.001, TLI = 0.854, RMSEA = 0.232 (95% CI = [0.208, 0.257]), SRMR = 0.068. Because the fit was not good, we inspected the residual correlation matrix and discovered a high residual correlation between items 4 and 5 (r = 0.159). We therefore allowed a residual covariance between these items, which improved the fit, Δχ2(1) = 115.9, p < 0.001. The final model fit the data very well (except for the RMSEA; see above), χ2(4) = 67.84, p < 0.001, TLI = 0.952, RMSEA = 0.133 (95% CI = [0.106, 0.162]), SRMR = 0.034. Reliability was acceptable,Footnote 13 ω = 0.732.
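A sketch of this re-specification in lavaan follows; the item names reg1–reg5 and the data frame survey_df are hypothetical placeholders.

```r
# Re-specification of the Regret scale: add a residual covariance between
# items 4 and 5 suggested by the residual correlation matrix; the item
# names 'reg1'-'reg5' are hypothetical.
regret_model <- "
  regret =~ reg1 + reg2 + reg3 + reg4 + reg5
  reg4 ~~ reg5    # residual covariance
"
fit_regret <- cfa(regret_model, data = survey_df,
                  ordered = paste0("reg", 1:5), estimator = "WLSMV")

resid(fit_regret, type = "cor")   # re-inspect the residual correlations
```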

For both MI and SMI, the correlations with the latent traits (Table 7) are quite high. We performed two CFAs over all the scales, one for each version of MI. The fit of the full model with the Short Maximization Inventory was acceptable, χ2(718) = 3153.27, p < 0.001, TLI = 0.915, RMSEA = 0.061 (95% CI = [0.059, 0.064]), SRMR = 0.058. The fit with the full Maximization Inventory was poorer, χ2(1630) = 6526.63, p < 0.001, TLI = 0.883, RMSEA = 0.058 (95% CI = [0.056, 0.059]), SRMR = 0.072; the TLI was particularly low.

Table 7: Construct validity for Maximization Inventory and Short Maximization Inventory. Correlations of latent traits. N = 902

Correlations above .145 in absolute value are significant at α = .001, and all correlations above .117 in absolute value are significant at α = .01.

Below the diagonal are the results for the model with the Short Maximization Inventory; above the diagonal are the results for the original Maximization Inventory.

3.3 Conclusions

Strong correlations between the original subscales and their short versions indicate that the Short Maximization Inventory is a compact measurement tool that is equivalent to the original Maximization Inventory. Concerning correlations with well-being indices, the results we found for SMI are similar to what we, in line with Turner et al. (2012), found for the full MI: decision difficulty and alternative search are negatively related to the indices of well-being and positively related to regret. Satisficing is positively related to the indices of well-being and negatively related to regret, suggesting that satisficing is related to positive adaptation. The validity of SMI is thus supported by two pillars: the correlations found for SMI are in line with our theoretical predictions, and they replicate the correlations found for the full MI.

4 General Discussion

In this paper we pointed out several problems with the original Maximization Inventory (Turner et al., 2012). By eliminating problematic items from the MI, we developed the 15-item Short Maximization Inventory (SMI). The newly developed SMI performs well in measuring the individual dimensions related to maximization. It also displays psychometric properties that are comparable to or better than those of the original MI. Finally, thanks to its brevity, SMI is less taxing on respondents.

After administering the MI to 902 participants, we found that several of its items display a ceiling effect. These items were mostly general statements that are easy to relate to and agree with (e.g., MI item 8: “All decisions have pros and cons”). Items with heavily skewed responses have low discriminatory power, as most subjects selected “Strongly Agree”.

Highly correlated residuals indicated item overlap. Overlapping items are, in effect, merely paraphrases of each other, and their presence does not improve scale performance. We found and excluded several such cases (e.g., item 25, “I will continue shopping for an item until it reaches all of my criteria,” and item 26, “I usually continue to search for an item until it reaches my expectations”).

Some items of the MI tend to load onto more than one factor (e.g., item 5: “I try to gain plenty of information before I make a decision, but then I go ahead and make it” is connected with both Satisficing and Alternative Search). We developed the Short Maximization Inventory by excluding problematic items from the original MI while retaining the items with satisfactory item discrimination, high factor loading, and low correlated residuals. SMI consists of three subscales of five items each.

In general, a scale with more items allows for finer discrimination among respondents and potentially captures very high and very low levels of the trait better. On the other hand, presenting subjects with long questionnaires may result in fewer responses and lower response quality (Galesic & Bosnjak, 2009). Therefore, MI’s size (34 items) might be discouraging for researchers who intend to use it as a supplementary method in their research alongside other scales. In developing the Short Maximization Inventory, we removed from MI the items with the lowest discrimination and the lowest factor loadings. This minimizes the loss of favorable properties associated with scale shortening. Furthermore, our analysis shows that SMI measures the same constructs as MI and discriminates well between respondents. SMI allows researchers to use a measure of maximization that has good psychometric properties yet is compact and convenient to administer.

The Short Maximization Inventory model showed a good fit with the data we used to develop it, as well as with an independent sample of subjects. The scales of SMI correlate very strongly with the scales of the full MI, indicating they are measures of the same constructs.

Turner et al. (2012) provided evidence for the construct validity of MI’s three scales by correlating them with measures of well-being: happiness, optimism, and self-efficacy. With SMI, we found the same relationships between maximization dimensions and well-being that Turner et al. (2012) found with the full MI. The only exception was that we found a significant negative correlation between Decision Difficulty and happiness, while Turner et al. (2012) reported no significant relationship. However, this difference cannot be attributed to the scale reduction as, in our sample, the full 12-item Decision Difficulty scale also correlated negatively with happiness. The difference between our result and Turner et al.’s (2012) may be caused by cultural differences between the U.S. sample used in the earlier study and the Czech sample we used. According to Hofstede (2016), Czechs are significantly higher than Americans in uncertainty avoidance. High uncertainty avoidance corresponds to more negative feelings related to uncertainty and ambiguity. Therefore, Czech people who perceive their decisions to be difficult are likely to experience more negative feelings and lower happiness levels than Americans.

Based on Parker, Bruine de Bruin and Fischhoff (2007), Rim et al. (2011), Schwartz et al. (2002), and Turner et al. (2012), we expected high Satisficing scores to be associated with the positive indices of well-being, and high Decision Difficulty and Alternative Search scores to be associated with the negative indices of well-being. Our correlation analysis provides evidence for SMI’s construct validity: Satisficing displays significant positive correlations with the indices of well-being, while Alternative Search and Decision Difficulty show negative correlations with well-being. In line with Schwartz et al.’s (2002) reasoning, we find regret to be negatively related to Satisficing and positively related to Alternative Search and Decision Difficulty. That said, we do not consider SMI’s (or MI’s) Satisficing subscale to be perfect, as we reflect in the Limitations section of the discussion.

As reviewed by Cheek and Schwartz (2016), there are 11 maximization-related scales available at this time. The reason we have chosen to introduce yet another one is that we recognize the Maximization Inventory’s (Turner et al., 2012) solid psychometric properties relative to other scales, and our short version further improves on this quality. The Short Maximization Inventory displays excellent properties, as judged from both the Classical Test Theory and the Item Response Theory viewpoint. Although Cheek and Schwartz (2016) offer some criticism of the Maximization Inventory, they tentatively recommend the use of its Alternative Search subscale in research. Moreover, they encourage researchers to further refine the measurement, which we have done by formulating SMI.

4.1 Limitations of the study

The primary purpose of the Confirmatory Factor Analysis is the confirmation of an already existing model, not the development of a new one. Although our use of CFA in shortening the scale can be identified as a limitation of the study, our intention was not to develop a new model but to simplify one that already existed. We thus adopted an approach similar to that used by Nenkov et al. (2008), who shortened the original Maximization Scale. Once the short scale was developed, we used CFA again with an independent data set to verify our new model in a pure, confirmation-only setting.

SMI, just like the original Maximization Inventory, does not contain a High Standards subscale. Although items relating to having high standards were originally considered when developing MI, Turner et al. (2012) did not include these items. Consequently, a measure of high standards or the desire to choose the best is absent from SMI too. Cheek and Schwartz (2016), however, present strong arguments that the goal of choosing the best, together with the strategy of alternative search, is an essential component of maximizing. We recognize this and, following Cheek and Schwartz (2016), recommend using the 7-Item Maximizing Tendency Scale (MTS-7) developed by Dalal et al. (2015) to measure the maximizing goal of choosing the best. The MTS-7 together with SMI may provide a comprehensive measurement of the maximization construct. However, future research should focus on the incremental validity of MTS-7 over the SMI (or MI) subscales and on the existence of a single high standards factor within the maximization model.

A novel feature of MI (and consequently SMI) is the presence of the Satisficing subscale. Turner et al. (2012) argue that satisficing is not simply the lack of maximizing, but an adaptive trait of its own. Although the Satisficing subscale of both MI and SMI shows good psychometric properties, concerns have been raised about its content validity (Cheek & Schwartz, 2016), incremental validity (Moyano-Díaz et al., 2014) and reliability (Dewberry, Juanchich & Narendran, 2013). We acknowledge these concerns. Some of the Satisficing subscale items are difficult to interpret. Consider, for example, item 1: “I usually try to find a couple of good options and then choose between them”. Agreement with this item signifies satisficing, but what does disagreeing with it mean? Maybe the respondent considers many options in an effort to pick the best one, or maybe he accepts the first alternative he comes across. The Satisficing subscale of MI (and SMI) is internally consistent and correlates with the indices of well-being as predicted by the theory. On the other hand, its face validity is dubious (Cheek and Schwartz, 2016, relate some of its items to uncertainty tolerance and to a “make the best of the situation” approach, rather than to satisficing). That said, satisficing conceptualized as a construct distinct from maximizing may be worth studying in the future, if the concept of satisficing as anything other than “not maximizing” can itself be clarified.

SMI was not administered to participants as a separate scale. Instead, we administered the full MI and then extracted the items that compose SMI. This approach is identical to that of Nenkov et al.’s (2008) Analysis 3. Although it is not likely, item responses may have been influenced by the context of the other items presented (Knowles & Condon, 2000). Related to this issue, Smith, McCarthy and Anderson (2000) note that this approach is likely to result in overestimated correlations between the short form and the full form of the scale. We acknowledge this, but we still consider our results valid; we not only report high correlations between the full MI and SMI but also find correlations with the indices of well-being similar to those reported by Turner et al. (2012) for a different dataset. Accordingly, a suggested direction for future research is to conduct a study using the Short Maximization Inventory as a separate scale.

Examining the test-retest reliability of SMI would provide useful information on the stability of results obtained with this scale over time. We also encourage researchers to contrast SMI results with behavioral measures associated with maximizing and satisficing to shed more light on the topic.

Data for our convergent validity investigation were collected from all subjects, for all constructs, using the same self-report method. This poses the risk of inflated correlations due to common-method bias (Podsakoff, MacKenzie, Lee & Podsakoff, 2003). On the other hand, a similar approach was used for assessing the convergent validity of the original MI (Turner et al., 2012), and the authors reported no issues related to common-method bias. Our aim was to demonstrate that the SMI produces correlations with the indices of well-being similar to those of other scales; we did not aim to investigate in depth the relationships between maximization and other constructs.

Administering the scales in Czech translation poses the threat of shifts in the meanings that the items convey. We exercised great care to mitigate this risk by following (and exceeding) Beaton et al.’s (2000) guidelines on the cross-cultural adaptation of scales. We obtained three independent translations and back-translations of the items and commissioned an expert committee to assess the translations and select the most appropriate ones. We also conducted two types of cognitive interviews and pilot-tested the translated scales. Compared to other studies using non-English measures of maximizing, we dedicated more effort to ensuring that the translation was correct, with no loss or distortion of the meaning of the items (compare, e.g., with Roets et al., 2012, who had one person translate the scale and “double-checked the final translation with other colleagues”, or Lai, 2010, who used only iterated translation and back-translation). Our correlation study results, similar to those reported by Turner et al. (2012), indicate that the translation process was successful and that our study does not suffer from significant cultural differences.

The aim of this paper was to verify the psychometric properties of MI and to provide researchers with its shorter yet well-performing version. We believe this has been accomplished. We consider the results to be robust, given our sample size of 902 (comparable to N=828 in Turner et al., 2012). To achieve a balance between our model’s fit with the data and its predictive power, we split responses randomly into two data sets. We demonstrate very good fit with both data sets.

The main contribution of this paper is the development of the Short Maximization Inventory (SMI). SMI contains the 15 (5+5+5) best-performing items of the Maximization Inventory (Turner et al., 2012), which has 34 (10+12+12) items. As demonstrated in this paper, SMI is an effective yet concise tool for assessing maximization as an individual trait. We expect that it, or at least its Decision Difficulty and Alternative Search subscales (given the need for further conceptual clarification of satisficing itself), will be well received by researchers who wish to investigate maximization as a supplementary measure in their research projects. This compact yet powerful tool for measuring maximization will allow researchers to expand their research scope without dramatically inflating the number of items presented to subjects. To measure the two-component construct of maximization, as presented by Cheek and Schwartz (2016), the Alternative Search subscale of SMI together with MTS-7 (Dalal et al., 2015) appears the most appropriate.

Footnotes

We wish to thank the editor and the reviewers for the insightful and valuable comments they provided. This paper is part of the Masaryk University Specific Research Project MUNI/A/1021/2015. While finishing this paper, Michal Ďuriník was a holder of a Macquarie University Research Excellence Scholarship.

1 Consider two people who both have high standards: one is a maximizer, the other one is a satisficer. The maximizer tries to find and evaluate all options available to make sure he selects the best one. The satisficer stops the search upon finding the first option that meets his (high) standards.

2 As noted by the editor, there are specific scenarios in which an active search is not possible, yet the goal of maximization may still be relevant. When selecting from job candidates, one usually does not search actively, but simply waits for applications to arrive. A maximizer will wait until he is reasonably sure that no better candidate will apply. A satisficer will accept the first candidate that meets the criteria.

3 Test reliability in Item Response Theory is usually estimated using the equation r = VAR(EAP)/[VAR(EAP) + MSE], where VAR(EAP) is the variance of expected a-posteriori latent trait estimates and MSE is the mean of error variance of these estimates. The resulting reliability thus depends on the mean error variance (negatively) and the variance of estimated latent traits (positively).

4 The M2* procedure provides asymptotic chi-squared statistics of model fit, which can be used directly or to compute the RMSEA (root mean square error of approximation), interpreted in the same way as in confirmatory factor analysis: the RMSEA of a well-fitting model approaches 0. If the M2* statistic is also computed for the null model (in which all item discrimination parameters are fixed to 0), incremental fit indices such as the CFI or TLI can be computed as well; values close to 1 (e.g., above 0.90) are usually considered good. The last fit statistic we used is the SRMSR (standardized root mean squared residual), which can be interpreted as the square root of the mean squared difference between model-predicted and observed item correlations (similar to the SRMR statistic in factor analysis). The SRMSR approaches zero in a well-fitting model.
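In the mirt package, these limited-information fit statistics can be obtained with the M2() function; a minimal sketch, assuming a fitted model object `mod` as in the sketch in the previous footnote (not our exact analysis code):

```r
# Sketch: limited-information fit of a fitted mirt model `mod`.
# M2() reports the chi-squared statistic and df, and (with the default
# null-model comparison) RMSEA, SRMSR, TLI and CFI.
library(mirt)

M2(mod)
```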

5 Note: M2* is the value of the chi-squared model fit test, reported with the corresponding number of degrees of freedom.

6 In this paper, we use Turner et al.’s (2012; Table 3) numbering of items. The Satisficing subscale consists of items 1-10, the Decision Difficulty subscale consists of items 11-22, and the Alternative Search subscale consists of items 23-34.

7 Item 2: “At some point you need to make a decision about things.”

Item 6: “Good things can happen even when things don’t go right at first.”

Item 8: “All decisions have pros and cons.”

Item 10: “I accept that life often has uncertainty.”

8 Item 5: “I try to gain plenty of information before I make a decision, but then I go ahead and make it.”

9 Consider, for example, the similarity of item 25 (“I will continue shopping for an item until it reaches all of my criteria”) with item 26 (“I usually continue to search for an item until it reaches my expectations”).

10 The Bayesian information criterion (BIC) is based on the likelihood function and can be used to compare nested models (models with different levels of measurement invariance are nested). A lower value indicates a better-fitting model.
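As an illustration only (not our exact analysis code), nested multiple-group IRT models with different invariance constraints can be compared on BIC in the mirt package; the item response matrix `responses` and the grouping factor `g` below are hypothetical:

```r
# Sketch: comparing differently invariant multiple-group IRT models by BIC.
# `responses` (item response matrix) and `g` (grouping factor) are hypothetical.
library(mirt)

configural <- multipleGroup(responses, model = 1, group = g)
scalar     <- multipleGroup(responses, model = 1, group = g,
                            invariance = c("slopes", "intercepts",
                                           "free_means", "free_var"))

anova(configural, scalar)   # the model with the lower BIC is preferred
```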

11 We used the WLSMV estimator in the lavaan package (Rosseel, 2012): diagonally weighted least squares is used to estimate the model parameters from the polychoric correlation matrix, and the full weight matrix is used to compute robust standard errors.

12 CFA was performed in the lavaan package (Rosseel, 2012) with WLSMV estimation based on polychoric correlation matrices.
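A minimal sketch of this setup (the data frame `d` and the item names are hypothetical placeholders, not our actual variable names):

```r
# Sketch: three-factor CFA of ordinal items with the WLSMV estimator in lavaan.
# `d` and the item names (sat1–as5) are hypothetical placeholders.
library(lavaan)

model <- '
  Satisficing        =~ sat1 + sat2 + sat3 + sat4 + sat5
  DecisionDifficulty =~ dd1  + dd2  + dd3  + dd4  + dd5
  AlternativeSearch  =~ as1  + as2  + as3  + as4  + as5
'

fit <- cfa(model, data = d, estimator = "WLSMV", ordered = names(d))
fitMeasures(fit, c("chisq.scaled", "df", "cfi.scaled", "tli.scaled",
                   "rmsea.scaled", "srmr"))
```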

13 Note that Revelle’s omega accounts properly for residual correlations, which therefore do not bias the reliability estimate.
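One way to obtain Revelle’s omega is the omega() function in the psych package; the sketch below, using a hypothetical matrix of the 15 SMI item responses (`smi_items`), is an illustration under these assumptions, not necessarily the exact procedure we used:

```r
# Sketch: Revelle's omega based on polychoric correlations for a hypothetical
# matrix of the 15 SMI item responses (`smi_items`), extracting 3 factors.
library(psych)

om <- omega(smi_items, nfactors = 3, poly = TRUE)
om   # printed output includes omega total (as well as omega hierarchical, alpha, G6)
```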

References

Baker, F. B. (2001). The basics of item response theory. Washington, DC: ERIC.
Beaton, D., Bombardier, C., Guillemin, F., & Ferraz, M. (2000). Guidelines for the process of cross-cultural adaptation of self-report measures. Spine, 25(24), 3186–3191.
Bek, V. (2007). Optimistický postoj k životu jako kognitivní styl [Optimistic attitude to life as a cognitive style] (Master’s thesis). Masaryk University, Brno.
Cai, L., & Hansen, M. (2013). Limited-information goodness-of-fit testing of hierarchical item factor models. British Journal of Mathematical and Statistical Psychology, 66(2), 245–276.
Chalmers, R. P. (2012). mirt: A multidimensional item response theory package for the R environment. Journal of Statistical Software, 48(6), 1–29.
Cheek, N., & Schwartz, B. (2016). On the meaning and measurement of maximization. Judgment and Decision Making, 11(2), 126–146.
Chen, W. H., & Thissen, D. (1997). Local dependence indexes for item pairs using item response theory. Journal of Educational and Behavioral Statistics, 22(3), 265–289.
Dahling, J., & Thompson, M. (2012). Detrimental relations of maximization with academic and career attitudes. Journal of Career Assessment, 21(2), 278–294.
Dalal, D. K., Diab, D. L., Zhu, X. S., & Hwang, T. (2015). Understanding the construct of maximizing tendency: A theoretical and empirical evaluation. Journal of Behavioral Decision Making, 28(5), 437–450.
Dewberry, C., Juanchich, M., & Narendran, S. (2013). Decision-making competence in everyday life: The roles of general cognitive styles, decision-making styles and personality. Personality and Individual Differences, 55(7), 783–788.
Diab, D., Gillespie, M., & Highhouse, S. (2008). Are maximizers really unhappy? The measurement of maximizing tendency. Judgment and Decision Making, 3(5), 364–370.
Djulbegovic, B., Beckstead, J. W., Elqayam, S., Reljic, T., Hozo, I., Kumar, A., … Paidas, C. (2014). Evaluation of physicians’ cognitive styles. Medical Decision Making, 34(5), 627–637.
Ďuriník, M. (2016). Translating Maximization Inventory into Czech language. In Š. Majtán et al. (Eds.), Aktuálne problémy podnikovej sféry 2016 Conference Proceedings (pp. 195–200). Bratislava: Ekonom.
Galesic, M., & Bosnjak, M. (2009). Effects of questionnaire length on participation and indicators of response quality in a web survey. Public Opinion Quarterly, 73(2), 349–360.
Giacopelli, N. M., Simpson, K. M., Dalal, R. S., Randolph, K. L., & Holland, S. J. (2013). Maximizing as a predictor of job satisfaction and performance: A tale of three scales. Judgment and Decision Making, 8(4), 448–469.
Hofstede, G. (2016). Country comparison. Retrieved from https://geert-hofstede.com/countries.html on March 3rd, 2017.
Kenny, D. A., Kaniskan, B., & McCoach, D. B. (2015). The performance of RMSEA in models with small degrees of freedom. Sociological Methods & Research, 44(3), 486–507.
Kim, S., & Feldt, L. S. (2010). The estimation of the IRT reliability coefficient and its lower and upper bounds, with comparisons to CTT reliability statistics. Asia Pacific Education Review, 11(2), 179–188.
Knowles, E. S., & Condon, C. A. (2000). Does the rose still smell as sweet? Item variability across test forms and revisions. Psychological Assessment, 12(3), 245–252.
Kresanová, J. (2015). Štěstí: metody měření a agregace [Happiness: Methods of measurement and aggregation] (Bachelor’s thesis). Masaryk University, Brno.
Křivohlavý, J., Schwarzer, R., & Jerusalem, M. (1993). Czech adaptation of the General Self-Efficacy Scale. Retrieved from http://userpage.fu-berlin.de/~health/czec.htm on July 3rd, 2015.
Lai, L. (2010). Maximizing without difficulty: A modified maximizing scale and its correlates. Judgment and Decision Making, 5(3), 164–175.
Larsen, J. T., & McKibban, A. R. (2008). Is happiness having what you want, wanting what you have, or both? Psychological Science, 19(4), 371–377.
Lyubomirsky, S., & Lepper, H. S. (1999). A measure of subjective happiness: Preliminary reliability and construct validation. Social Indicators Research, 46(2), 137–155.
Maydeu-Olivares, A., & Joe, H. (2006). Limited information goodness-of-fit testing in multidimensional contingency tables. Psychometrika, 71(4), 713–732.
Miller, S. A. (2014). Assessing the sensitivity, composition, and effects of information distortion (Dissertation). The Ohio State University.
Misuraca, R., Faraci, P., Gangemi, A., Carmeci, F. A., & Miceli, S. (2015). The Decision Making Tendency Inventory: A new measure to assess maximizing, satisficing, and minimizing. Personality and Individual Differences, 85, 111–116.
Moyano-Díaz, E., Martínez-Molina, A., & Ponce, F. P. (2014). The price of gaining: Maximization in decision-making, regret and life satisfaction. Judgment and Decision Making, 9(5), 500–509.
Nenkov, G. Y., Morrin, M., Ward, A., Hulland, J., & Schwartz, B. (2008). A short form of the Maximization Scale: Factor structure, reliability and validity studies. Judgment and Decision Making, 3(5), 371–388.
Nygren, T. E., & White, R. J. (2002). Assessing individual differences in decision making styles: Analytical vs. intuitive. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 46(12), 953–957.
Orlando, M., & Thissen, D. (2000). Likelihood-based item-fit indices for dichotomous item response theory models. Applied Psychological Measurement, 24(1), 50–64.
Paivandy, S., Bullock, E. E., Reardon, R. C., & Kelly, F. D. (2008). The effects of decision-making style and cognitive thought patterns on negative career thoughts. Journal of Career Assessment, 16(4), 474–488.
Parker, A. M., Bruine de Bruin, W., & Fischhoff, B. (2007). Maximizers versus satisficers: Decision-making styles, competence, and outcomes. Judgment and Decision Making, 2, 342–350.
Patalano, A. L., Weizenbaum, E. L., Lolli, S. L., & Anderson, A. (2015). Maximization and search for alternatives in decision situations with and without loss of options. Journal of Behavioral Decision Making, 28(5), 411–423.
Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. The Journal of Applied Psychology, 88(5), 879–903.
Polman, E. (2010). Why are maximizers less happy than satisficers? Because they maximize positive and negative outcomes. Journal of Behavioral Decision Making, 23(2), 179–190.
R Core Team. (2017). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.
Rim, H. Bin. (2017). Impacts of maximizing tendencies on experience-based decisions. Psychological Reports, 120(3), 460–474.
Rim, H. Bin, Turner, B. M., Betz, N. E., & Nygren, T. E. (2011). Studies of the dimensionality, correlates, and meaning of measures of the maximizing tendency. Judgment and Decision Making, 6(6), 565–579.
Roets, A., Schwartz, B., & Guan, Y. (2012). The tyranny of choice: A cross-cultural investigation of maximizing-satisficing effects on well-being. Judgment and Decision Making, 7(6), 689–704.
Rogge, N. (2016). Love is blind: How our love for more choice costs time. Psychology & Marketing, 33(5), 358–371.
Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36.
Scheier, M. F., Carver, C. S., & Bridges, M. W. (1994). Distinguishing optimism from neuroticism (and trait anxiety, self-mastery, and self-esteem): A reevaluation of the Life Orientation Test. Journal of Personality and Social Psychology, 67(6), 1063–1078.
Schwartz, B. (2004). The Paradox of Choice: Why More is Less. New York: Harper Perennial.
Schwartz, B., Ward, A., Monterosso, J., Lyubomirsky, S., White, K., & Lehman, D. R. (2002). Maximizing versus satisficing: Happiness is a matter of choice. Journal of Personality and Social Psychology, 83(5), 1178–1197.
Schwarzer, R., & Jerusalem, M. (1995). General Self-Efficacy Scale. Measures in Health Psychology: A User’s Portfolio. Causal and Control Beliefs, 35–37.
Sharif, M. A., & Spiller, S. A. (2014). Indecisive consumers and opportunity cost consideration. In J. Cotte & S. Wood (Eds.), NA – Advances in Consumer Research, Volume 42 (pp. 210–214). Duluth, MN: Association for Consumer Research.
Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99–118.
Smith, G. T., McCarthy, D. M., & Anderson, K. G. (2000). On the sins of short-form development. Psychological Assessment, 12(1), 102–111.
Turner, B. M., Rim, H. B., Betz, N. E., & Nygren, T. E. (2012). The Maximization Inventory. Judgment and Decision Making, 7(1), 48–60.
Weinhardt, J. M., Morse, B. J., Chimeli, J., & Fisher, J. (2012). An item response theory and factor analytic examination of two prominent maximizing tendency scales. Judgment and Decision Making, 7(5), 644–658.
Willis, G. B. (1999). Cognitive interviewing: A “how to” guide. Evaluation, 1(1), 137.
Tables

Table 1: Maximization Inventory items – results of Item Response Theory analysis. N = 603
Table 2: The Short Maximization Inventory model fit statistics
Table 3: Short Maximization Inventory items
Table 4: Short Maximization Inventory parameters of the multidimensional IRT model: discrimination parameters, thresholds, item fit, and reliabilities. N = 902
Table 5: Descriptive statistics for scales used in Study 2
Table 6: Correlations of SMI and MI with measures of well-being
Table 7: Construct validity for Maximization Inventory and Short Maximization Inventory. Correlations of latent traits. N = 902

Supplementary material

Ďuriník et al. supplementary material 1 (file, 110.3 KB)
Ďuriník et al. supplementary material 2 (file, 262 bytes)
Ďuriník et al. supplementary material 3 (file, 11.1 KB)