
Can Digitally Transformed Work Be Virtuous?

Published online by Cambridge University Press: 19 January 2024

Alejo José G. Sison*
Affiliation:
Universidad de Navarra, Spain

Abstract

This essay inquires whether digitally transformed work can be virtuous and under what conditions. It eschews technological determinism in both its utopian and dystopian versions, opting for the premise of free human agency. This work is distinctive in adopting an actor-centric and explicitly ethical analysis based on neo-Aristotelian, Catholic social teaching (CST), and MacIntyrean accounts of the virtues. Beginning with an analysis of digital disruption, it identifies the most salient human advantages vis-à-vis technology in digitally transformed work and provides philosophical anthropological explanations for each. It also looks into external, organizational characteristics of digitally transformed work on both the macro and the micro levels, underscoring their ambivalence (efficiency and profits vs. exclusion and exploitation, flexibility and freedom vs. standardization and dependency) and the need to mitigate their polarizing effects for the sake of shared flourishing. The article presents standards for virtuous work according to neo-Aristotelian, CST, and MacIntyrean frames and applies them to digitally transformed work, giving rise to five fundamental principles. These basic guidelines indicate, on one hand, actions to be avoided and, on the other, actions to be pursued, together with their rationales.

Type
2023 Society for Business Ethics Presidential Address
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of the Society for Business Ethics

From a virtue ethics perspective, technology is of interest not so much in itself as in how it affects human activity, particularly work, on the understanding that virtue or excellence in work directly impacts the achievement of flourishing as the final end. Faced with the fundamental changes digital technology brings to the world of work, a basic question virtue ethicists ask is whether digitally transformed work can be virtuous and, if it can, how.

There are basically two camps on this issue. First are the techno-pessimists or dystopians, for whom the conditions for virtue in digitally transformed work do not obtain. What we have, above all, is an algocracy (the rule of algorithms) and workplace precarity (no job security, dismal wages, devalued work) (Cherry 2016), representing almost insurmountable stumbling blocks to virtuous human agency. Some dystopians fear that there won’t be enough meaningful work (Bankins and Formosa 2023). Others are preparing for a postwork future, imagining institutions that will distribute income beyond labor markets and current welfare systems (Korinek and Juelfs 2022). Second are the techno-optimists or utopians, for whom digitally transformed work favors virtue, underscoring the power of individual talent and autonomy (Rodríguez-Lluesma et al. 2021). On the basis of the “work from home” experience during the COVID-19 pandemic, certain authors argue that people have learned and adjusted, perceiving digitally transformed labor as a secure source of income and job satisfaction, with greater autonomy and flexibility (Nagel 2020). Still others think that the risks of digital technological dominance may be managed through constructivist learning techniques, a better understanding of the psychological processes behind expertise development, and improvements in human–machine collaboration through heightened philosophical-ethical discourse (Sutton et al. 2018).

The future of digital work is not so much a fact as the result of conflicting narratives dependent on diverse economic, social, and political interests (Dries et al. 2023). Nonetheless, we seek to position ourselves regarding the aptitude of digitally transformed work for virtue as follows. First, we adopt an actor-centric view of digital transformation, as opposed to the more common technology-centered one (Nadkarni and Prügl 2021; den Hond and Moser 2023). Next, we carry out an explicitly ethical analysis based on the contemporary virtue ethics tradition (Vallor 2016; Hagendorff 2022), acknowledging principles mainly from neo-Aristotelianism (Snow 2017), Catholic social teaching (CST) (Congregation for the Doctrine of the Faith 1986, 72), and the works of Alasdair MacIntyre ([1981] 2007, 1988, 1990). Although we use elements from sociology (Rodríguez-Lluesma et al. 2021), social psychology, organizational behavior (Trenerry et al. 2021), economics (Frey and Osborne 2017; Bowles 2014; Mokyr et al. 2015; Arntz et al. 2016, 2017; Nedelkoska and Quintini 2018; Acemoglu and Restrepo 2018, 2020), and information and communications technology (ICT) systems (Verhoef et al. 2021), these do not constitute our main vantage point. Our purpose is not simply to bring about a more efficient digital transformation (Trenerry et al. 2021), and neither do we focus on the ethically liminal perspective of meaningful work (Kim and Scheller-Wolf 2019; Mejia 2023).

Rather, and as a preview of our argument, we make the case that digitally transformed work can indeed be virtuous, albeit mindful that we are considering a “best-case scenario,” because for many, particularly those low on digital skills, noncognitive or social skills, imagination, and creativity, the chances of achieving virtue-promoting conditions are slim. Digital transformation polarizes not only job availability and income but work conditions and quality as well. An important motivation, therefore, besides discovering the features of virtuous, digitally transformed work, is to extend these virtue-promoting conditions to those on the losing end. Thus we hope to contribute to “making the future of work” virtuous, instead of just waiting for it to happen (Dries et al. 2023).

After this brief introduction and expression of motivations, the rest of the article proceeds as follows. The next section investigates the salient features of digitally transformed work arising from the impact of technology and organization. Section 2 continues with an account of virtuous work drawn from neo-Aristotelian, CST, and MacIntyrean theories, and section 3 explains how the criteria for virtuous work apply to digitally transformed work. A final section presents the limitations of our study and avenues for future research.

1. The Features and Organization of Digitally Transformed Work

Discussions of the digital transformation of work often consider two main outcomes: first, the net effect of job creation and destruction, and second, how current jobs are “disrupted” by technology. We shall dwell mainly on the second (“disruption”), referring to the first (net jobs) only to the extent that it helps shape the new forms of work. The features of digitally transformed work constitute our primary interest, as they condition the exercise of virtuous work. Besides technology (Kane et al. 2015), we shall also study the consequences of organization, a factor seldom given sufficient attention.

Building on previous research on information technology–enabled changes (Besson and Rowe 2012; Wessel et al. 2020), Trenerry et al. (2021, 2) define digital transformation as “a process of deep, structural change that occurs through the integration of multiple technologies and [that] fundamentally redefines organizational value and identity.” It involves a cluster of “general purpose technologies” (Eloundou et al. 2023) with the potential to advance a “Fourth Industrial Revolution” (Marsh 2012; Schwab 2017; Knieps 2021), unprecedented in scale, speed, and scope (Matt et al. 2015). One part of the definition emphasizes the integration of “information, computing, communication, and connectivity technologies” (Vial 2019, 121; see also Agarwal et al. 2010; Berghaus and Back 2016). The other centers on the human agents who decide shifts in organization, strategy, structures, products, processes, and the overall business model (Hess et al. 2016), impacting workers and employees (Venkatesh 2006; Venkatesh and Bala 2008; Kaasinen et al. 2012). Despite increasing aggregate wealth, there are worries that digital transformation will widen the gap between rich and poor nations (Alonso et al. 2020) as well as between rich and poor populations within countries (Eubanks 2018).

Verhoef et al. (2021, 891–92) distinguish three stages in digital transformation. The first, digitization, consists in transforming analog information into digital formats computers can store, process, and transmit, such as filing income tax returns online; this hardly creates new value per se. Next comes digitalization, when ICTs alter and optimize business processes, for instance, when customers connect with firms through digital platforms, permitting enhanced customer experiences and cost savings; this process can be extended to logistics, distribution, or relationship management. Last is digital transformation proper, when ICT triggers pervasive changes in a company’s business model; think of advanced telemedicine and surgery (Medeiros 2023). Although there is no imperative to progress through all three stages, each successive stage makes greater demands on digital resources and requires modifications in firm structure and strategy. The stages represent different business goals as well as different ways and conditions of working.

Despite data from previous technological transformations in support of net job creation and rises in efficiency, anxiety over mass unemployment refuses to disappear (Federal Ministry for Economic Affairs and Climate Action [FMEACA] 2022). The perception of job loss is not the same as the reality (Dahlin 2022). Nonetheless, it is a very human reaction born of uncertainty. The adoption of new technology is considered a “stressor,” reducing short-term worker well-being and satisfaction (Gavin and Mason 2004); and although technology is an external, environmental factor, there is a limit to the skill challenges workers can bear (Warr 2007). Some authors fear that digital technologies like artificial intelligence (AI), automation, robotics, cloud computing, and the Internet of Things will displace workers altogether or substantially and quickly reduce the need for them (Acemoglu and Autor 2011; Frey and Osborne 2017; Brynjolfsson and McAfee 2014), whereas others optimistically predict that there will be enough replacement jobs (Arntz et al. 2017). Think of social media content creators, search engine optimization experts, professional e-sports players, and data science and predictive analytics specialists, occupations that scarcely existed a decade ago (Roose 2021). Most are engaged in invisible, background work (Gray and Suri 2019; Nardi and Engeström 1999). Either way, it will be an accelerated process, even exponentially so (Paolillo et al. 2022). The net effect on the number of jobs does not depend on technology alone because it varies according to task (Arntz et al. 2016, 2017), occupation (Nedelkoska and Quintini 2018), country (Lordan and Neumark 2018), and region. Regarding the use of large language models (LLMs) such as ChatGPT, Felten et al. (2023) find telemarketers and postsecondary English and literature teachers to be the most exposed occupations, while legal services and investments in securities and commodities are the most threatened sectors. More generally, Eloundou et al. (2023) estimate that 80 percent of US workers will have at least 10 percent of their tasks impacted by LLMs, while almost 20 percent will see at least 50 percent of their tasks affected; about 15 percent of tasks can be completed more quickly without loss of quality.

Among the nontechnological or human drivers of job maintenance and creation are worker adaptation, occupational composition, workplace organization, and innovations in business models and occupations. For instance, although accounting has a 98 percent probability of automation (Frey and Osborne 2017), clerks can adapt by engaging in problem solving or consultancy (Arntz et al. 2016; Nedelkoska and Quintini 2018). Manufacturing jobs have a higher substitution potential than services (Muro et al. 2019). Yet even within the same sector, such as textiles and leather, less than 50 percent of nonmanagerial, technical jobs are at risk in France, whereas in Poland close to 70 percent are, a difference owing to prior technological adoption (Eurofound 2019). In Germany, thanks to works councils, companies favor the retention of firm-specific human capital, which translates into skills upgrading and staff retraining (FMEACA 2022). Also, the status of some German automotive firms as global leaders allows them to increase employment and efficiency while adopting advanced robotics. Digital transformation by itself does not spell job destiny.

What, then, are the features rendering jobs and tasks automation- or robot-proof? The best jobs create technologically enhanced long-term value, not only for workers, but also for firms and society. We shall look primarily at jobs in advanced stages of digital transformation, identifying common characteristics in worker requirements.

Autor et al.’s (2003) initial study established that nonroutine, cognitive work would be safer from automation than routine, manual work. However, as Frey and Osborne (2017) argue, thanks to Big Data and machine learning techniques, AI systems now tackle nonroutine cognitive tasks, such as language translation, and, with the help of robotics, nonroutine manual tasks, such as driving. AI is increasingly engaged in high-level mental functions such as planning, prediction, and process optimization in accounting and finance, law, and medicine (Roose 2021). Tasks that are nonroutine for humans are converted into routines by chunking them into simpler, algorithm-governed sequences, making “tacit knowledge” explicit. Although machines do things differently than humans, the results often turn out comparable or even superior to human outputs. This, in turn, diminishes the need for workers with special, context-dependent skills obtained through practice and experience (deskilling) (Goldin and Katz 1998, 2009). Converting nonroutine into routine tasks becomes reiterative as better data sets, algorithms, and robots become available. Whereas previous waves of industrialization resulted in tasks being distributed among more, albeit less skilled, workers, nowadays both skilled and unskilled workers are replaced by AI-empowered machines (Jaimovich and Siu 2019, 2020). Beginning in the 1990s, technological job displacements outnumbered reinstatements, signaling that workers could not cope with the demands of high-skilled jobs (Acemoglu and Restrepo 2019).
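To make this routinization dynamic concrete, here is a minimal sketch of our own (not drawn from the studies cited above) of how a task calling for a clerk’s tacit judgment, fielding customer complaints, can be chunked into a fixed, algorithm-governed sequence; every category, keyword, and template in it is hypothetical.

```python
# Illustrative sketch: "routinizing" a nonroutine task by chunking it
# into simple, rule-governed steps. All categories, keywords, and
# templates below are hypothetical.

def classify(complaint: str) -> str:
    # Step 1: open-ended human judgment is replaced by a lookup over
    # predefined categories -- tacit knowledge made explicit.
    text = complaint.lower()
    if "late" in text or "delay" in text:
        return "delivery"
    if "broken" in text or "damaged" in text:
        return "quality"
    return "other"

TEMPLATES = {
    "delivery": "We are sorry your order was delayed. A voucher is on its way.",
    "quality": "We apologize for the damaged item. A replacement is being arranged.",
    "other": "Thank you for your message. An agent will contact you shortly.",
}

def respond(complaint: str) -> str:
    # Step 2: the experienced clerk's craft is reduced to rule following.
    return TEMPLATES[classify(complaint)]

print(respond("My parcel arrived three days late!"))
# -> "We are sorry your order was delayed. A voucher is on its way."
```

Once the sequence is fixed, better data sets and models can refine each step without ever restoring the need for the clerk’s experience.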

Frey and Osborne (2017, 24–28) identify three engineering “bottlenecks” limiting labor substitution: perception and manipulation tasks, creative intelligence tasks, and social intelligence tasks. Let us examine them closely from an anthropological perspective.

1.1 Perception and Manipulation

Unlike AI-empowered robots, humans are great at identifying objects and perceiving essence and individuality, thanks to external (sight, hearing, smell, taste, and touch) and internal (memory and imagination) senses, bodily integrated with reason, volition, and motor appendages (Sison and Redín 2023). These abilities have evolved as humans looked for food, grew, worked, played, and reproduced for biological, social, and symbolic purposes in ever-changing environments.

Advantages in perception and manipulation derive from our condition as embodied animals, equipped with hands that are “the tool of all tools” (Aristotle 1936, 4, 212a14–30). Thus even the best facial recognition systems are no match for babies in recognizing their mothers, and similarly, no robots beat the versatility of human hands. The human body has 244 planes of motion, whereas the typical industrial robot has only 6 (Economist 2023).

AI robotic systems cannot identify essences (“appleness”); they only detect certain material, quantifiable, and sensible characteristics (round, red, fifty grams) in this particular object, through sensors and manipulators. Although designed to perform fine and complicated movements, they lack versatility; specificity in perception and motion is inversely proportional to scope. Unlike living beings, robots do not automatically integrate different sensorial and nonsensorial information. Systems trained to sort apples are useless at sorting bananas; they are designed to function in concrete, well-defined domains. And rectifying system errors (for example, when robots pack plastic instead of real apples in a crate) is difficult and costly.
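A toy example may help fix the point. The sketch below, with invented thresholds standing in for far more sophisticated sensor pipelines, detects only quantifiable features and therefore cannot tell a plastic replica from a real apple, nor do anything at all with a banana.

```python
# Illustrative sketch: a rule-based "apple detector" that sees only
# quantifiable features, never the essence "appleness". All thresholds
# are invented for illustration.

from dataclasses import dataclass

@dataclass
class SensedObject:
    color: str          # reported by a camera sensor
    diameter_cm: float  # reported by a vision system
    weight_g: float     # reported by a scale

def is_apple(obj: SensedObject) -> bool:
    # Hand-tuned rules for one concrete, well-defined domain.
    return (
        obj.color in {"red", "green"}
        and 6.0 <= obj.diameter_cm <= 10.0
        and 70.0 <= obj.weight_g <= 250.0
    )

# A plastic replica with the right measurements passes the test;
# a banana simply falls outside the system's domain.
print(is_apple(SensedObject("red", 8.0, 90.0)))      # True, though it may be plastic
print(is_apple(SensedObject("yellow", 4.0, 120.0)))  # False; no "banana" concept exists
```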

An engineering solution would be to affix barcode stickers, skirting identification problems (Guizzo 2008). But stickers can be badly printed or peel off (Robotics-VO 2013). In perception, machines are no match for humans (Tsang and Almirall 2021). AI and robotic difficulties with perception and manipulation illustrate Moravec’s paradox (Stern 2023): owing to a lack of prior knowledge (“ground truth”), computers excel in tasks humans find hard (math) but fail in those even toddlers find easy. Human perception requires more than just tracking objects or data (video and audio) through space and time (Patraucean et al. 2022); it is experiencing the world.

1.2 Creative Intelligence

Creativity can be defined partly as the ability to produce ideas or artifacts (literature, music, paintings, sculptures, plays) that are novel or original (Boden 2003; Frey and Osborne 2017). This is often done by combining ideas or artifacts in ways not previously tried. AI possesses an advantage here, having access to a huge archive with instant recall power. Statistical formulas could generate unique, random combinations. Thus AI produces novel and original artifacts.

However, Boden (1998, 347–48) describes creativity as “a fundamental challenge for AI.” AI approaches creativity in three ways: by producing novel combinations (“combinational creativity”), exploring conceptual spaces (“exploratory creativity”), and transforming spaces to generate previously impossible ideas (“transformational creativity”). Computer models work best with exploratory creativity, as it is difficult for them to compete with human associative memory (combinational creativity), and they lack the evaluative capacities needed for transformational creativity. Yet even when AI transforms spaces, the results may have no interest or value: think of DALL-E outputs (Stokel-Walker 2022; Leivada et al. 2023) when prompted with “a peanut butter and jelly sandwich in the shape of a Rubik’s cube.” Such ideas may be novel but not “creative,” because they are absurd (Boden 1998). Similar problems occur with automated text generation, which often results in low-quality content (Illia et al. 2023).
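A minimal sketch of our own can illustrate how cheaply combinational “novelty” is had, and why it falls short of creativity; the word lists are invented.

```python
# Illustrative sketch: "combinational creativity" as random recombination.
# The output is statistically novel, yet the program has no way to judge
# whether the combination makes sense or has aesthetic value.

import random

materials = ["peanut butter", "marble", "fog", "copper wire"]
objects = ["sandwich", "cathedral", "spreadsheet", "sonnet"]
forms = ["in the shape of a Rubik's cube", "folded like origami"]

random.seed(7)  # reproducible "novelty"
idea = f"a {random.choice(materials)} {random.choice(objects)} {random.choice(forms)}"
print(idea)  # e.g., "a fog cathedral folded like origami" -- novel, but absurd
```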

Human creativity requires that artifacts “make sense,” especially to experts, and express “aesthetic value.” AI encounters major obstacles in meeting these demands. An extrinsic limitation is the lack of agreement regarding “symbolic meaning” or “aesthetic value,” such that there can be no definitive training set. But a more important, intrinsic limitation is that “symbolic value” and “aesthetic value” are beyond AI’s purview. These problems cannot be “specified”; their “success criteria” cannot be objectively quantified or assessed. As a result, no algorithms can be designed for them (Acemoglu and Autor 2011). In this respect, they are similar to “moral worth” or “moral excellence,” which largely escape AI’s grasp (Sison and Redín 2023).

Because they lack meaningful creativity, even “intelligent” robots do not have the capacity to innovate (Redín et al. 2023; Botica 2017). At most, AI is an “efficient method of imitation” (Mitchell 2019, 146), probabilistically identifying patterns better, faster, and cheaper (Agrawal et al. 2018). But innovation requires contextualization and knowledge of existing needs, and AI cannot do this because, among other reasons, it lacks a social dimension.

At the root of human sense making and creativity are embodied reason and free will. “Symbolic meaning” and “aesthetic value” respond to “why” questions; AI systems only respond to “how” queries. That AI is algorithm-dependent shows that it is incapable of free, unfettered, or autonomous volition: it has no inclinations, desires, or preferences (Sison and Redín 2023). Humans, on the other hand, are creative because they determine their own ends (“why”) and autonomously invent ways to reach them (“how”). There is no point in endowing AI with creative intelligence because, having no mind or body, it has no capacity for aesthetic satisfaction.

As in decision-making, AI is best used as a tool in the creative process (Haase and Hanel 2023). AI can augment worker creativity by freeing the worker from codifiable and repetitive tasks (Tsang and Almirall 2021), allowing the worker to attend to more complex problem solving, as in the case of telemarketers (Jia et al. 2023). Digitalization makes in silico experimentation or simulation easier and cheaper.

1.3 Social Intelligence

Also called “noncognitive skills” (Kautz et al. 2014; Sánchez-Puerta et al. 2016), social intelligence refers to character traits key to management, such as leadership, persuasion, negotiation or bargaining, collaboration, empathy, and care. As AI does more of the “thinking” (“calculating” would be more precise), humans can focus on feeling, empathy, and emotions, the lubricants of interpersonal relationships (Rust and Huang 2021). The value of the “feeling economy” is reflected in the pervasiveness of social media in response to people’s desire for connectedness. Increasingly, financial analysts and investment bankers dedicate more time to meeting, encouraging, and reassuring clients, leaving technical matters to AI (Rust and Huang 2021). Inroads have been made in affective computing (Scherer et al. 2010; Picard 2010) and social robotics (Ge 2007; Broekens et al. 2009), such that AI can now mimic some personal responses. But the results have been far from satisfactory, even counterproductive, as described by the “uncanny valley” hypothesis (Mori 2012): beyond a certain degree of computer-generated or robotic resemblance to humans, the likeness elicits negative responses.

AI faces intractable difficulties in reading the complex gamut of human emotions, and it is even more challenging for it to produce appropriate responses. No matter how sensitive, sensors are often unidimensional; they may capture the voice of customers saying “You’re a very efficient worker” but not the irony on their faces. That is why the “whole brain emulation” approach (Sandberg and Bostrom 2008), focusing exclusively on the mental aspects of emotion, is inadequate. Even if AI were to determine the “genus” of the proper emotional response, for instance, being apologetic, it could not figure out the right “intensity” (Rust and Huang 2021). Furthermore, AI developers tend to assume that emotions are universal (Ekman 2021), that they are expressed and interpreted in the same way everywhere. But this is strongly contested, if not disproven (Crawford 2021, 151ff.), which makes the choice of a training set extremely troublesome. Emotions have a constructed, cultural component that makes their interpretation complex and generalization difficult (Feldman Barrett 2017).
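A deliberately crude sketch, with an invented lexicon, shows how a unidimensional sensor goes wrong: it hears the words but not the irony.

```python
# Illustrative sketch: a one-dimensional sentiment scorer. The lexicon
# is invented; real systems are subtler but share the limitation.

POSITIVE = {"efficient", "great", "wonderful", "helpful"}
NEGATIVE = {"slow", "useless", "terrible", "rude"}

def sentiment(utterance: str) -> int:
    words = utterance.lower().replace(".", "").replace("!", "").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Said through gritted teeth after a three-hour wait:
print(sentiment("You're a very efficient worker"))  # +1, scored as praise
# The sarcasm lives in tone, face, and context -- channels that a
# single-modality sensor simply does not have.
```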

From childhood, humans are trained in recognizing emotions; they can internally mirror what their interlocutors are going through. This enables them to decide on the appropriate emotional response and motor reaction. Moreover, embedded in sociocultural and historical communities, humans are familiar with the symbolic meaning of bodily expressions. Humans are by nature social and relational beings, thanks to which they communicate and share emotions. Unlike AI, humans experience emotions themselves, acquiring a knowledge base impossible to replicate. This allows humans to give authentic, intentional responses to stimuli, not canned or robotic ones; hence the value of complementing data science courses with emotional intelligence and collaboration training (Rust and Huang 2021).

So far, we have considered digitally transformed work by focusing on features for which humans hold an advantage. But workers also have to compete with other workers, for which it is essential that they possess superior digital skills (González et al. 2019). Digital competence refers to “information and data literacy, communication and collaboration, media literacy, digital content creation (including programming), safety (including digital well-being and competences related to cybersecurity), intellectual property related questions, problem solving and critical thinking” (González et al. 2019, 29; see also Vuorikari et al. 2016; Carretero et al. 2017).

Digital skills allow workers to get a foot in the door in recruitment and to keep their jobs. However, workers should not grow complacent about purely technological skills. Digital competencies are similar to linguistic competencies or expertise in quantitative methods: they are often merely instrumental. And to the extent that these skills are technological, they are more susceptible to obsolescence and automation.

The enduring value of digital skills lies in helping humans make sense of AI systems. Wilson et al. (2017) distinguish among four such profiles: trainers, who manage data and design algorithms; explainers, who interpret AI outcomes; architects, who organize and adopt AI systems; and ethicists, who set guidelines for accountability. They act as interfaces between AI and society, exemplifying different ways of “tending” to AI, not only “minding” mundane tasks but also “managing” complex ones to amplify human agency (Langlois 2003; Bankins and Formosa 2023).

The organization of digitally transformed work, albeit an external factor, likewise presents specific opportunities and challenges. Organizational culture affects AI deployments just as much as AI deployments affect organizational culture (Ransbotham et al. 2021).

On a macro level, the digitalization of work blurs the boundaries of traditional classifications: employees and the self-employed, full-time and part-time, permanent and temporary workers (González et al. 2019). Flexibility introduces new work forms (Cherry 2016): some employee oriented (employment sharing, job sharing, casual work, interim management), others oriented toward the self-employed (independent contractors) and mediated by virtual platforms (portfolio work, crowd work, collaborative work), and still others a mix of both (voucher-based work or ICT-based mobile work). Digitalization spells freedom for some; for others, insecurity and dependency.

On a micro level, work digitalization has produced the following effects (González et al. 2019, 55; Katz and Krueger 2017; Goldschmidt and Schmieder 2017; Aubert et al. 2006; Bresnahan et al. 2002). First is greater standardization and disintermediation of tasks, which reduces monitoring and supervision (or surveillance) costs, especially for jobs performed virtually. Examples vary from automated call centers to AI systems that manage welfare benefits, health care, loan applications, and criminal justice decisions (Roose 2021). Although standardization optimizes efficiency, it kills variety and creativity, which are aesthetic and cultural values (Thompson 2022). Second is increased competition, requiring flexibility in human resource management and the outsourcing of noncore activities. Third is making online (especially outsourced) work possible through platforms (ridesharing, food delivery). Fourth is increasing worker mobility, enabling work from anywhere, at any time, “on demand.” And fifth is empowering self-employed micro-entrepreneurs through e-commerce platforms (for example, Etsy). Digitalization is Janus-faced, enabling efficiency and profits, on one hand, and exclusion and exploitation, on the other. It is up to humans to make morally balanced choices.

These organizational features of work digitalization affect workers differently, according to education and skill levels, with repercussions on earnings. The low skilled may be assigned by platforms to perform routine, in-person work in constantly changing locations (think “Net-Cleaners”) for low pay, no security, and high personal safety risks. They may even engage in a “race to the bottom,” driving down fees to compete. Behind the resource-saving and philanthropic rhetoric lie vast opportunities for labor commodification and exploitation (Cherry 2016). The moderately skilled can use platforms to offer products to an enlarged and vetted client pool (for instance, web page designers) while saving overhead expenses. And highly skilled creatives (social media star influencers) can choose or even create their own platforms, delivering custom products at prices they themselves set.

We glean from the preceding a sliding scale in job stability, security or benefits, and earning capacity, depending on skill level. Work digitalization can result in unstable jobs, short-term contracts, and ever-shorter temporary assignments. It provides loopholes for denying permanent status or benefits to workers who are involuntarily stuck in temporary, part-time jobs. Although it increases opportunities for self-employment, worker mobility, and flexibility, it also leads to job fragmentation (deskilling, “dumbing down,” alienation, or lack of meaning), greater competition (lowering wages), and more frequent outsourcing (less commitment or loyalty). To the extent that humans suffer depressed skill and knowledge levels (the loss of the ability to store information owing to search engine dependence, or the “Google effect”; Sparrow et al. 2011; Sutton et al. 2018), their ability to learn and innovate is compromised. And although AI can facilitate employee creativity (Jia et al. 2023), higher-skilled employees take better advantage of it than lower-skilled ones.

An outcome of work digitalization is more pronounced polarization, not only in income, skills, and work conditions but also in opportunities for virtue. Technology is always ambivalent, and digitalization is no different, in its capacity to produce opposite results depending on user intention and conditions (Bankins and Formosa 2023). Digitalization has widened the worker divide through relentless and recursive labor substitution. It has created two worker castes: the digitally skilled and the low skilled. Job growth is much faster at these extremes, and when those in the middle are displaced, they adjust downward, taking lower-paying jobs and worse conditions.

Even in advanced stages of digital transformation, humans maintain a competitive advantage over technology in tasks that require perception and manipulation, creative intelligence, and social intelligence (Table 1). In competing against other humans, a premium is given to digital skills and competencies, although these tend to be instrumental and transient. The organization of digitally transformed work gives rise to greater flexibility, with ambivalent, highly polarized effects on job stability or security, benefits, and earnings.

Table 1: Human Advantages vis-à-vis Technology in Digitally Transformed Work

Let us now examine the notion of virtuous work.

2. Virtuous Work from a Neo-Aristotelian, Catholic Social Teaching, and MacIntyrean Perspective

For an ethical assessment of virtuous work, we draw on neo-Aristotelian, CST, and MacIntyrean sources. A common trait is that they view work beyond its purely economic significance as a disutility tolerated in exchange for income; instead, they consider work a value-laden activity, a direct expression of human agency, and a necessary component of flourishing (Gevorkyan and Clark 2020).

In neo-Aristotelianism, work is any “productive activity” that brings forth (genesis) something new. Work (poiesis, “making”), inasmuch as it is “heterotelic” (with an end or purpose outside itself), is often contrasted with praxis (“doing”), an activity whose end is its own performance, such as knowing (for its own sake) (Aristotle 1985, 1140b). We work to produce what we need or desire, such as food, clothes, or furniture, and these activities—farming, weaving and tailoring, carpentry—are guided by codifiable rules (algorithms), arts, or techniques (technai). Autotelic (self-contained) praxis, on the other hand, is governed by virtues (aretai) irreducible to rules. Virtues are good moral habits of character. Poiesis and praxis, along with their corresponding perfections, technique and virtue, respectively, have socioeconomic dimensions as well. Originally, for Aristotle, work (poiesis) was proper to the productive classes (artisans, slaves), whereas theoretical knowledge (praxis), like philosophy, was proper to the ruling class.

CST diverges from neo-Aristotelianism in affirming work as opus humanum or actus humanus, a productive activity proper to humans regardless of socioeconomic class. It is a personal act involving the whole human being and expressive of dignity or intrinsic worth; through work, humans imitate their Creator (Gevorkyan and Clark 2020). CST distinguishes two dimensions in work: a subjective aspect, “the activity of the human person as a dynamic being capable of performing a variety of actions that are part of the work process and that correspond to this personal vocation” (Pontifical Council for Justice and Peace [PCJP] 2004, 270), and an objective aspect, “the sum of activities, resources, instruments, and technologies used by men and women to produce things” (PCJP 2004, 270; John Paul II 1981). All work gives rise to two outcomes: one subjective, corresponding to the knowledge, skills, attitudes, virtues, and meanings people develop, and the other objective, the external objects they manufacture. Technology participates in this double dimension, entailing skills and know-how (subjective dimension) and a set of instruments or tools (objective dimension). Technology is neither intrinsically good nor intrinsically evil; rather than something that obliterates human agency, it is a potential ally, if properly used, for flourishing (Gevorkyan and Clark 2020). CST ordains that the subjective aspect, inseparable from workers themselves, take normative precedence over the objective aspect (Gaburro and Cressotti 1998).

For CST, it is not enough that work comply with justice; it should satisfy charity as well, according to the logic of gift. Hence “decent work” “expresses the essential dignity of every man and woman in the context of their particular society: work that is freely chosen, effectively associating workers, both men and women, with the development of their community; work that enables the worker to be respected and free from any form of discrimination; work that makes it possible for families to meet their needs and provide schooling for their children, without the children themselves being forced into labor; work that permits the workers to organize themselves freely, and to make their voices heard; work that leaves enough room for rediscovering one’s roots at a personal, familial and spiritual level; work that guarantees those who have retired a decent standard of living” (Benedict XVI 2009, 63). This is, perhaps, the fullest account of “virtuous work,” bringing together the principles of human dignity, the common good, the universal destination of goods, private property, the preferential option for the poor, subsidiarity, participation, and solidarity (Sison et al. 2016).

MacIntyre’s ([1981] 2007, 273) approach to the virtues, including virtuous work, proceeds through three stages. First, he defines “practices” (socially complex activities with internal goods whose excellence is known only to practitioners and whose goods are objects of cooperation) in contrast to “institutions” (activities or organizations pursuing external goods, such as wealth, status, or power, whose effectiveness can be judged by anyone and whose goods are objects of competition or rivalry). Examples of practices are the healing and caring professions, whose good is the restoration of health; knowledge of excellence in these tasks requires firsthand involvement, as opposed to the opinions of independent observers, and the good sought is an object of cooperation or a shared good, such that the whole community benefits when each member enjoys good health. Because they involve practices, virtues are inescapably “personal,” addressing both individual and relational aspects of agents (Hagendorff 2022). The caring professions, however, need to be “housed” in corresponding “institutions,” such as hospitals, which provide the external resources necessary for healing and care. “Institutions” are essential for “practices” to thrive. Yet “practices” and “institutions” are in constant tension. Often, institutional goods take priority, while “practices” are relegated to the back seat. Think of hospitals more concerned with rankings than with patient care. Virtuous work for hospital administrators then consists in not seeking institutional goods for their own sake and in ensuring that they are placed at the service of practices.

Next, MacIntyre examines how individual practitioners reconcile various roles as they live out their biographies or life narratives. People cannot be engaged in practices all the time; human beings are also called to participate in a variety of roles (familial, professional, religious, athletic, civic), which often come into conflict. Many doctors are parents who respond to domestic, apart from hospital, emergencies. Which should they attend to first? There are also tensions among actors: critical care doctors may have different priorities than other specialists. Virtue requires establishing the right priority among conflicting roles, practices, and goods.

Third, the fulfillment of practices ought to contribute to the advancement of communities. Good doctors not only take good care of patients while being good parents or spouses; they are also innovative and creative, moving the practice of medicine forward with colleagues through pathbreaking research. Virtuous work pushes practices, individuals navigating multiple roles, and entire communities toward traditions of shared excellence. Virtues are key for individuals embedded in communities to jointly reach flourishing.

Aristotle identified flourishing (eudaimonia) as the common good of the political community, on which all other human goods are premised. CST puts forward the common good principle as “the social and community dimension of the moral good” (John Paul II 2004; PCJP 2004, 164) and “the good of all people and of the whole person” (John Paul II 2004; PCJP 2004, 165). MacIntyre (2016) asserts that properly recognizing, engaging in, and developing practices is also a common good, first, because it is something we reasonably desire and that perfects us, and second, because it can only be achieved collaboratively.

We rely on others even in procuring our own individual goods. No one could become an “independent practical reasoner” without help from committed caregivers. Superior common goods—“the goods of family, of political society, of workplace, of sports teams, orchestras, and theater companies” (MacIntyre 2016, 51–52)—can be achieved only as members of groups. As moral agents, we cannot but act as social and political agents, influencing a common destiny. Businesses and workplaces are no different, for “the common goods of those at work together are achieved in producing goods and services that contribute to the life of the community and in becoming excellent at producing them” (MacIntyre 2016, 170).

From the preceding, we derive the following conditions for virtuous work (Table 2). First, it ought to be a productive human activity, meant to satisfy human needs and wants, with a view to flourishing. To qualify as work, activities have to involve the whole human being, an individual rational and social animal, an embodied self-directing agent. CST calls work a “human” or “personal act,” voluntary and rational: it cannot be forced from the outside. Its origin is the actor’s free will, and it is performed through bodily powers, physical and mental. Although work ought to be “productive,” bringing about something new, its purpose is not mere productivity but to contribute to the common good of flourishing. Activities that are unproductive (artistic contemplation) or that produce useless goods and services ought not be called work, strictly speaking. More importantly, not productivity itself but how flourishing can be achieved through productivity should be work’s goal. Otherwise, what would we want productivity for?

Table 2: Conceptual Foundations for Virtuous Work from Neo-Aristotelian, CST, and MacIntyrean Perspectives

Second, work is viewed as both a right and a duty; everyone who can work should, socioeconomic class notwithstanding (John Paul II 1981). Work is necessary for self-development in its many different personal facets (physical, mental, social, and moral), and it channels each individual’s contribution to the political common good (service) (Guitián 2015). Only grave and exceptional reasons arising from health or age exempt people from this duty. Instead of thinking of digitalization as capital substituting for labor, perhaps it is better to frame it as capitalized labor making future work better.

Third, virtuous work demands that we prioritize the subjective over the objective dimension, giving greater value to knowledge, skills, attitudes, meanings, and moral habits than external, objective goods, including profits. People matter more than objects; the dignity of work derives from the dignity of workers. MacIntyre reminds us of the primacy of the internal goods of practices, which well-governed institutions ought to support. Otherwise, institutions lose orientation and are corrupted when efficiency or profit is sought for its own sake.

And fourth, every effort must be made to achieve the normative ideal of “decent work.” This concept goes beyond what justice demands and enters into the realm of charity (Baviera et al. 2016). Not only does “decent work” acknowledge unique, individual talents; it also acknowledges the complex web of relations individuals inhabit. Individuals ought to be respected in their free work choices and decisions, not subjected to unjust discrimination; afforded a chance to contribute and participate in workplace governance; and given opportunities for adequate rest, recreation, and worship. Society should recognize that workers have families to feed and educate and that firms are communities as well. “Decent work” connects very closely with properly navigating conflicts and with the duty to help advance community traditions. As it ensures that no one is left behind, “decent work” aligns with the common good principle.

3. How Digitally Transformed Work Can Be Virtuous

We now examine how digitally transformed work can meet these criteria (Table 3).

Table 3: Principles for Virtuous, Digitally Transformed Work, Dos and Don’ts

Note. DTW = digitally transformed work.

Digital technology, like all technology, is morally neutral. It is a mistake to think of digitalization as a natural force, like gravity, against which humans are powerless (Roose 2021). Humans create digital technology, deciding the laws and norms governing its use. Technology is never irreversible: humans can agree to strictly limit its use (nuclear weapons) or stop it altogether (land mines or asbestos insulation). The same is true with digital technology: we can decide to continue developing autonomous weapon systems, facial recognition, and social robots, or not. Technology does not have its own desires or preferences because it is not a living being; objectives are supplied from outside by the humans who use it (heterotelic poiesis, or “making”). Technological determinism (Grint and Woolgar 1997), whether utopian or dystopian, is an error (Crawford 2021, 213–14). It is not that algorithms know better than humans, in which case we would have no choice but to embrace AI as a technological savior and hope it does not wipe us out (Gigerenzer 2022). The value and power of human freedom must be reinstated.

From the statement “technology is morally neutral,” several corollaries can be derived. First, the moral valence of technology depends on intentions and on how the technology is used (Brundage et al. 2018): “The machine’s danger to society is not from the machine itself but from what man makes of it” (Norbert Wiener, as cited in Roose 2021, 3). Of course, technology embodies the interests and entrenches the power of its designers (den Hond and Moser 2023), yet this imbalance does not per se merit moral censure: think of the technological aids to child-rearing and parenting. Above all, it is a question of the moral character of the agents who employ technology.

Second, like all tools and machines, digital technology is a means to make work, physical and mental, easier. Paradoxically, technology does not “work”; it only facilitates work, enabling humans to do new or better things. And third, digital technology raises productivity by multiplying human potential.

Competing against digital technology may prove entertaining, but it may just be a waste of time. Often, technology replicates more clumsily what humans do. Why would we want inferior duplicates? It is through collaboration, not competition, that digital technology reaches its full potential. Even the literature on AI in team collaboration, or “machines as teammates” (Seeber et al. 2020), has to be taken with a grain of salt for at least two reasons: first, because such a technology does not yet exist—a machine capable of defining a problem, identifying root causes, proposing and evaluating remedies, and conducting after-action reviews is still just hypothetical—and second, because even such an imaginary machine is not meant to engage in deliberate goal setting; in truth, it would not go beyond support or assistance. Humans would still control machine artifact design, collaboration design, and institution design.

Humans should leave to technology whatever it can do better, with greater consistency, precision, and speed, and concentrate on tasks requiring flexibility, creativity, and emotional intelligence.

Digitally transformed work can be virtuous, so long as it remains a human act that is productive and contributes to flourishing. In general, it makes economic and even moral sense to automate whenever possible in repetitive manufacturing (dull) or hazardous (dangerous and dirty) activities, so as to augment and enhance human agency, saving time, physical effort, and health and liberating creative energies.

To be virtuous, digitally transformed work needs to be rational and voluntary, that is, deliberate. This requires fending off “automation bias” or “machine drift” (Roose 2021), that is, outsourcing decisions to recommender systems out of convenience or frictionless design (one-click), deference (“machines know better”), or laziness. Decision-making ought to remain exclusively a human task, because AI cannot take responsibility or have a moral sense (Roose 2021, 152). To do otherwise, besides falling into anthropomorphization, is morally censurable, relinquishing decisions to tools. AI operates with input data, whereas humans form moral judgments adaptively (Moser et al. 2022). The quantified data on which AI relies act as a moral anesthetic, reducing human problems to technical issues (Heaven 2022). AI can never be truly “responsible” or “mature.” It can fail because of “limitations in algorithmic models, poorly scoped system goals, or problems with integration with other systems” (Renieris et al. 2022). And it can diminish agency and increase passivity and emotional distance (Friedland 2019).

Humans ought to retain control, instead of passively accepting whatever digital technology dispenses (Lindebaum et al. 2020). We should be wary of the habitual reinforcements of ceding decision-making to AI (Gigerenzer 2022). There is no moral imperative to use AI simply because one can, and certainly not to substitute it for human judgment (Moser et al. 2022). The efficiency of automation cannot be the sole concern; humans should not be automated (Zuboff 2019) or junked when they fail to keep pace (Crawford 2021). There is also a danger of “overautomating,” giving machines tasks they are unequipped to handle, such as hiring and firing decisions (Roose 2021, 148) or, worse, decisions over life and death, as with Sarco, the “do-it-yourself” euthanasia machine (Heaven 2022). An egregious example of “overautomating” is when every worker’s movement is tracked, measured, and surveilled to be optimized by AI. Those who meet goals are rewarded; those who do not are punished. This is an affront to human dignity. Unsurprisingly, such jobs are characterized by high levels of stress, injury, and illness (Crawford 2021). Algorithmic management is especially unjust because its subjects may not even be employees but independent contractors (Mateescu and Nguyen 2019a, 2019b). Smart machines should be working for people, not people for smart machines (Zuboff 2019).

Digitally transformed work cannot be virtuous when it is inhuman, exploitative, and alienating. Exploitation occurs when work is so chopped up and distributed through a network that low-skilled contractors receive only a pittance for microtasks, without job security or bargaining power (Crawford 2021). A digital version of alienation takes place through “fauxtomation” (Crawford 2021) or “so-so automation” (Acemoglu and Restrepo 2019), for instance, self-checkout grocery counters that reduce low-skilled labor by transferring it to customers. Through exploitation and alienation, workers become disconnected from the value and fruits of their labor as well as from one another (Marx). The objective dimension of their work is separated from its subjective dimension (CST); their existence is withdrawn from their essence (Hegel); they cease to be what they could be, what they ought to be (Gevorkyan and Clark 2020). They sink into anomie (Durkheim) as deskilled and commoditized resources.

Digitally transformed work should support the duty and right of individuals to work, regardless of socioeconomic status and other considerations. Not only public authorities but business owners as well share the responsibility to create jobs. AI need not be a job destroyer; it can change work and create more jobs, taking over some tasks while freeing people to perform other, more important and challenging functions (Davenport and Miller 2022). Digital technology can be a potent force for inclusion in the workplace, as in the case of people with disabilities. Systems that transform text to audio (audiobooks and audio navigation systems) are helpful not only for the blind but also for the sighted.

Managers ought to ensure that technology empowers rather than dehumanizes (Roose Reference Roose2021). They ought to avoid designing jobs in which humans are mere “gap fillers” for tasks soon to be automated (like filling orders in Amazon warehouses, which could very well be done by robots). Although the division of labor increases productivity, it also makes tasks monotonous, destroying creativity and ingenuity, the motors of innovation and progress (Gevorkyan and Clark Reference Gevorkyan and Clark2020). To have opportunities for virtuous work, humans have to be able to participate in decision-making and give feedback.

Automation, substituting labor with cheaper AI, tends to be zero sum, with workers losing jobs and business owners reaping all the productivity benefits (what Brynjolfsson [Reference Brynjolfsson2022] calls the “Turing Trap,” when all gains accrue to capital). Augmentation, by contrast, when AI complements human potential, grows the economic pie, with everyone winning. Amazon’s success, for instance, lies not in replacing workers with robots but in humans interacting with AI innovatively, assisting customers with the widest range of products and delivering them swiftly (Brynjolfsson Reference Brynjolfsson2022).

Augmentation implies going beyond understanding machine autonomy and human control as contrary forces. In his human–computer interaction (HCI) model, Shneiderman (Reference Shneiderman2022) proposes that high-level machine autonomy can be compatible with high-level human control. Think of electronic cameras: the software guides photographers so that they can simply point and shoot, yet it also lets them improve digital pictures with filters. This is a sociotechnical paradigm whereby humans maintain meaningful control through natural reflexes, retaining responsibility (Holford Reference Holford2020; Mejia Reference Mejia2023).

Digitally transformed work should highlight the value of the subjective dimension (perception and manipulation skills, creative intelligence/intellectual virtues, social intelligence/moral virtues, digital skills) over the objective dimension. Roose (Reference Roose2021) summarizes the most valuable human characteristics as being “surprising, social, and scarce.” “Surprising” means the ability to deal with poorly defined rules, incomplete information, or messy scenarios, like preschool teachers. “Social” refers to the capacity to establish meaningful connections (Mejia Reference Mejia2023), to create positive emotional experiences. “Scarce” signifies developing a unique and diverse skill set, having the versatility of a neurosurgeon, pianist, and mixologist, for example. This translates into leaving “handprints,” like an artist’s signature, on one’s work.

Digitally transformed work ought to include MacIntyrean practices to be virtuous. It must leave room to develop distinctly human knowledge, skills, or attitudes that cannot be achieved outside of such activities and whose excellence can be perceived only by those experienced in the matter.

Take the case of content moderators, reviewers of posts deemed toxic because they are false and harmful to health, hateful, violent, illegal, or indecent. In 2020, Accenture and its overseas contractors, with almost six thousand workers, billed Facebook approximately $500 million for these services (Satariano and Isaac Reference Satariano and Isaac2021). Although 90 percent of noxious posts are filtered by AI, humans still have to deal with the remaining 10 percent in round-the-clock eight-hour shifts all over the world. Besides mental health risks (depression, anxiety, paranoia), reviewers have a very small margin for error: a 5 percent rate of “false calls” could lead to dismissal. This is extremely challenging given shifting standards and the twenty-four-hour limit for removing flagged posts. Hiring practices are not very selective, and the jobs come with low pay and inadequate training and support. Given these legal, ethical, and reputational risks, some service providers, such as Cognizant, decided to walk away from contracts worth hundreds of millions—but not Accenture.

As troubling and disgusting as this work may seem, reviewing social media posts always involves human judgment; technology can only be an enabler. That is why people protested when the picture of a nude girl, a victim of napalm bombing in Vietnam, was censored on Facebook (Levin, Wong, and Harding Reference Levin, Wong and Harding2016): the intent was not pornographic but to show the horrors of war. Although this may not be anyone’s dream job, it has become necessary due to the ubiquity of social media. Through engagement with the toxic material, moderators may be deemed victims as well. Despite low pay and little training, however, we can imagine that some will soldier on, becoming frontline “educators,” “legislators,” and “judges” of what is tolerable or obscene. Their job is not only morally fraught but also organizationally complex (in-house, to maintain control, or outsourced, due to lower costs, lighter regulation, or language and cultural expertise?). A sense of community and social support among fellow workers is necessary to survive (similarly with Mumbai ragpickers, as described by Shepherd et al. [Reference Shepherd, Maitlis, Parida, Wincent and Lawrence2022]). Among those who began exclusively for the income, perhaps some will remain, having developed a sense of purpose and mission: to protect those even more vulnerable and to set a moral tone for society. It is not far-fetched that these workers will in due course rise to supervisory levels and train recruits. They would have achieved “internal goods,” developing decency standards, acquiring moral education, and creating community.

Digitally transformed work should be organized to help especially those whose jobs are unstable, insecure, or poorly paid. Paolillo et al. (Reference Paolillo, Colella, Nosengo, Schiano, Stewart, Zambrano, Chappuis, Lalive and Floreano2022) have devised a “resilience index” (RI) to help the technologically displaced move to better-quality jobs with lower automation risk. For each pair of jobs, the RI measures how feasible the switch from one to the other is, in terms of retraining, and how convenient it is, in terms of the reduction in the Automation Risk Index. This is useful for managers seeking to reposition workers better during technology-induced restructuring.
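To make the logic of such an index concrete, here is a minimal sketch in Python. The skill sets, the automation-risk scores, and the multiplicative combination rule are hypothetical illustrations introduced for exposition only; they are not the actual data or formula of Paolillo et al. (Reference Paolillo, Colella, Nosengo, Schiano, Stewart, Zambrano, Chappuis, Lalive and Floreano2022):

```python
# Illustrative sketch of a job-transition resilience calculation.
# All job data and the exact combination rule are hypothetical,
# meant only to show how feasibility (retraining effort) and
# convenience (automation-risk reduction) can be combined.

def skill_overlap(skills_from: set, skills_to: set) -> float:
    """Feasibility proxy: share of the target job's skills already held."""
    if not skills_to:
        return 1.0
    return len(skills_from & skills_to) / len(skills_to)

def resilience(job_from: dict, job_to: dict) -> float:
    """Combine retraining feasibility with automation-risk reduction."""
    feasibility = skill_overlap(job_from["skills"], job_to["skills"])
    risk_reduction = job_from["automation_risk"] - job_to["automation_risk"]
    # A switch counts only if it actually lowers automation risk.
    return feasibility * max(risk_reduction, 0.0)

jobs = {
    "data labeler": {"skills": {"attention", "basic IT"}, "automation_risk": 0.8},
    "QA analyst": {"skills": {"attention", "basic IT", "reporting"}, "automation_risk": 0.4},
}

# Roughly 0.27: two of three target skills are already held, and risk falls by 0.4.
print(resilience(jobs["data labeler"], jobs["QA analyst"]))
```

On this toy reading, a manager would rank candidate transitions for each displaced worker by their resilience score, preferring switches that are both easy to retrain for and meaningfully less exposed to automation.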

Labor organizations can play a huge role (Roose Reference Roose2021) if they seek the good of members rather than the interests of labor leaders and politicians. In Germany, labor participation in governance through work councils accounts for greater employee retention with skills upgrading and retraining, without loss of efficiency or global industry leadership (FMEACA 2022). By contrast, US crowd workers are subject to ruthless algocracy and near-complete precarity (Cherry Reference Cherry2016).

CST acknowledges the right of workers to organize (PCJP 2004, 305) (freedom of association) and to contribute to institutional governance. Labor organizations constitute an intermediate layer between individuals and families, on one side, and governments and civil society, on the other. They play an indispensable role in accordance with the principle of subsidiarity (Pius XI Reference Pius1931): lower-level bodies should be allowed as much self-governance as possible, because they are better aware of local circumstances and stand to benefit or suffer the most. Conversely, higher-level bodies should offer assistance only when lower-level bodies are unable to cope, without smothering initiative or monopolizing power. Well-functioning labor organizations can act as a defense against abusive governments while uniting and reinforcing legitimate member claims.

An illustration of how even low-end work like data labeling and content moderation need not be exploitative is Sama, a platform-based service provider reaching more than sixty thousand employees and dependents in India, Kenya, and Uganda (Klym Reference Klym2022; Perrigo Reference Perrigo2023; Njanja Reference Njanja2023; Kantrowitz Reference Kantrowitz2023). (Linebaugh and Knutson [Reference Linebaugh and Knutson2023], however, find Sama labor practices exploitative. Ironically, Meaker [Reference Meaker2023] writes about prisoners in Finland doing click work for $1.67 an hour, just like the Kenyans, yet no one seems to mind, let alone protest.) In 2020, Sama was the first AI firm to obtain B Corp certification for observing high environmental, social, and governance standards, and in 2021, it was ranked among “B Corp’s Best for the World” for workforce commitment. Sama offers long-term job stability, living wages (several times the legal minimum, with fresh recruits often seeing a 360 percent jump), and generous benefits (health care, pension plans, subsidized meals, parental leaves). Although wages of less than $2 an hour or a net income of $300 a month may not suffice for a life of luxury, they are enough to support an individual or even a young family in Kenya (Otieno Reference Otieno2023). To foster workplace participation, Sama holds floor meetings and offers more discreet channels when anonymity is desired. Workers receive training in core values, including gender equity: female Sama workers earn 60 percent more on average than women do in other companies. To mitigate health and mental risks, employees have access to wellness programs. They also have opportunities for professional advancement through training in basic digital literacy and the use of tools like Slack and the Google suite. Some entry-level Sama employees have progressed to leadership roles in projects for Fortune 100 firms. Not only does Sama seem to validate and professionalize data-labeling jobs, but it also creates opportunities for virtuous “decent work.”

Last, governments also play a key part in promoting benefits and mitigating harms of digitally transformed work through regulation, tax policy, subsidies, labor law, and education (Gevorkyan and Clark Reference Gevorkyan and Clark2020). Digital connectivity could be harnessed to create jobs, developing a global labor market while installing safeguards such as certification schemes, digital labor organizing, regulation, and democratic control of online labor platforms (Graham et al. Reference Graham, Hjorth, Lehdonvirta and Graham2019).

4. Conclusion

This essay inquires whether digitally transformed work can be virtuous and under what conditions. It eschews technological determinism in both utopian and dystopian versions, opting for the premise of free human agency. This work is distinctive in adopting an actor-centric and explicitly ethical analysis based on neo-Aristotelian, CST, and MacIntyrean teachings on the virtues.

Beginning with an analysis of digital disruption, it identifies the most salient human advantages vis-à-vis technology in digitally transformed work and provides philosophical anthropological explanations for each. It also looks into external, organizational characteristics on both the macro and the micro levels of digitally transformed work, underscoring their ambivalence (efficiency and profits vs. exclusion and exploitation, flexibility and freedom vs. standardization and dependency) and the need to mitigate polarizing effects for the sake of shared flourishing.

The article presents standards for virtuous work according to neo-Aristotelian, CST, and MacIntyrean frames and applies them to digitally transformed work, giving rise to five fundamental principles. These basic guidelines indicate, on one hand, actions to be avoided and, on the other, actions to be pursued, together with their rationales.

At least two avenues for further research may be proposed. First, given the mid-level abstraction of the conclusions reached here, descriptive, ethnographic case studies could be carried out of businesses that strive to cultivate the different virtues (courage, moderation, justice, practical wisdom, etc.) in their operations in a variety of environments (Holford Reference Holford2020). Second, investigators may wish to examine how the “human-centered AI” mind-set, together with HCI standards, could effectively promote the development of virtues in the digital workplace (Hagendorff Reference Hagendorff2022; Gorichanaz Reference Gorichanaz2022). Understandably, both will be set against the background idea of the intrinsic human, social, cultural, and moral value of digitally transformed work.

Alejo José G. Sison is professor at the School of Economics and Business of the University of Navarra. His research deals with the issues at the juncture of ethics with economics and politics, with a focus on the virtues and the common good. His recent projects extend this perspective to the challenges AI presents to business. He was president of the European Business Ethics Network from 2009 to 2012 and of the Society for Business Ethics in 2022–23.

References

Acemoglu, Daron, and Autor, David. 2011. “Skills, Tasks and Technologies: Implications for Employment and Earnings.” In Handbook of Labor Economics, vol. 4, edited by Ashenfelter, Orley, 1043–171. Amsterdam: Elsevier.
Acemoglu, Daron, and Restrepo, Pascual. 2018. “The Race between Man and Machine: Implications of Technology for Growth, Factor Shares, and Employment.” American Economic Review 108 (6): 1488–542.
Acemoglu, Daron, and Restrepo, Pascual. 2019. “Automation and New Tasks: How Technology Displaces and Reinstates Labor.” Journal of Economic Perspectives 33 (2): 3–30.
Acemoglu, Daron, and Restrepo, Pascual. 2020. “Robots and Jobs: Evidence from US Labor Markets.” Journal of Political Economy 128 (6): 2188–244.
Agarwal, Ritu, Gao, Guodong (Gordon), DesRoches, Catherine, and Jha, Ashish K. 2010. “Research Commentary: The Digital Transformation of Healthcare: Current Status and the Road Ahead.” Information Systems Research 21 (4): 796–809.
Agrawal, Ajay, Gans, Joshua, and Goldfarb, Avi. 2018. Prediction Machines: The Simple Economics of Artificial Intelligence. Cambridge, MA: Harvard Business Press.
Alonso, Cristian, Kothari, Siddharth, and Rehman, Sidra. 2020. “How Artificial Intelligence Could Widen the Gap between Rich and Poor Nations.” IMF Blog, December 2. https://www.imf.org/en/Blogs/Articles/2020/12/02/blog-how-artificial-intelligence-could-widen-the-gap-between-rich-and-poor-nations.
Aristotle. 1936. Physics. Translated, with introduction and commentary, by Ross, William David. Oxford: Clarendon Press.
Aristotle. 1985. Nicomachean Ethics. Translated by Irwin, Terence. Indianapolis, IN: Hackett.
Arntz, Melanie, Gregory, Terry, and Zierahn, Ulrich. 2016. “The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis.” Social, Employment and Migration Working Paper 189, Organisation for Economic Co-operation and Development, Paris.
Arntz, Melanie, Gregory, Terry, and Zierahn, Ulrich. 2017. “Revisiting the Risk of Automation.” Economics Letters 159: 157–60.
Aubert, Patrick, Caroli, Eve, and Roger, Muriel. 2006. “New Technologies, Organisation, and Age: Firm-Level Evidence.” Economic Journal 116 (509): F73–F93.
Autor, David H., Levy, Frank, and Murnane, Richard J. 2003. “The Skill Content of Recent Technological Change: An Empirical Exploration.” Quarterly Journal of Economics 118 (4): 1279–333.
Bankins, Sarah, and Formosa, Paul. 2023. “The Ethical Implications of Artificial Intelligence (AI) for Meaningful Work.” Journal of Business Ethics 185: 725–40.
Baviera, Tomas, English, William, and Guillén, Manuel. 2016. “The ‘Logic of Gift’: Inspiring Behavior in Organizations beyond the Limits of Duty and Exchange.” Business Ethics Quarterly 26 (2): 159–80.
Benedict XVI. 2009. Caritas in Veritate. Vatican City: Libreria Editrice Vaticana.
Berghaus, Sabine, and Back, Andreas. 2016. “Stages in Digital Business Transformation: Results of an Empirical Maturity Study.” In Mediterranean Conference on Information Systems Proceedings, 22. https://aisel.aisnet.org/mcis2016/22.
Besson, Patrick, and Rowe, Frantz. 2012. “Strategizing Information Systems–Enabled Organizational Transformation: A Transdisciplinary Review and New Directions.” Journal of Strategic Information Systems 21 (2): 103–24.
Boden, Margaret A. 1998. “Creativity and Artificial Intelligence.” Artificial Intelligence 103 (1–2): 347–56.
Boden, Margaret A. 2003. The Creative Mind: Myths and Mechanisms. Abingdon, UK: Routledge.
Botica, Dan Aurelian. 2017. “Artificial Intelligence and the Concept of ‘Human Thinking.’” In Business Ethics and Leadership from an Eastern European, Transdisciplinary Context, edited by Vladu, Sorin, Fotea, Ioan, and Thomas, Adrian, 87–94. Cham, Switzerland: Springer.
Bowles, Jeffrey. 2014. “The Computerisation of European Jobs.” Bruegel, July 24. https://www.bruegel.org/blog-post/computerisation-european-jobs.
Bresnahan, Timothy F., Brynjolfsson, Erik, and Hitt, Lorin M. 2002. “Information Technology, Workplace Organization, and the Demand for Skilled Labor: Firm-Level Evidence.” Quarterly Journal of Economics 117 (1): 339–76.
Broekens, Joost, Heerink, Marcel, and Rosendal, Henk. 2009. “Assistive Social Robots in Elderly Care: A Review.” Gerontechnology 8 (2): 94–103.
Brundage, Miles, Avin, Shahar, Clark, Jack, Toner, Helen, Eckersley, Peter, Garfinkel, Ben, Dafoe, Allan, et al. 2018. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. https://maliciousaireport.com/.
Brynjolfsson, Erik. 2022. “The Turing Trap: The Promise and Peril of Human-Like Artificial Intelligence.” Daedalus 151 (2): 272–87.
Brynjolfsson, Erik, and McAfee, Andrew. 2014. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W. W. Norton.
Carretero, Stephanie, Vuorikari, Riina, and Punie, Yves. 2017. DigComp 2.1: The Digital Competence Framework for Citizens with Eight Proficiency Levels and Examples of Use. Luxembourg: Publications Office of the European Union.
Cherry, Miriam A. 2016. “Beyond Misclassification: The Digital Transformation of Work.” Comparative Labor Law and Policy Journal 37 (3): 577–602.
Congregation for the Doctrine of the Faith. 1986. Instruction on Christian Freedom and Liberation Libertatis Conscientia. Vatican City: Libreria Editrice Vaticana.
Crawford, Kate. 2021. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale University Press.
Dahlin, Eric. 2022. “Are Robots Really Stealing Our Jobs? Perception versus Experience.” Socius: Sociological Research for a Dynamic World 8: 12.
Davenport, Thomas H., and Miller, Steven M. 2022. Working with AI: Real Stories of Human–Machine Collaboration. Cambridge, MA: MIT Press.
den Hond, Frank, and Moser, Christine. 2023. “Useful Servant or Dangerous Master? Technology in Business and Society Debates.” Business and Society 62 (1): 87–116.
Dries, Nicky, Luyckx, Joost, and Rogiers, Philip. 2023. “Imagining the (Distant) Future of Work.” Academy of Management Discoveries. DOI: 10.5465/amd.2022.0130.
Economist. 2023. “Machines and Jobs. Where Are All the Robots?” March 11. https://www.economist.com/business/2023/03/06/dont-fear-an-ai-induced-jobs-apocalypse-just-yet.
Ekman, Paul. 2021. “Universal Emotions.” https://www.paulekman.com/universal-emotions/.
Eloundou, Tyna, Manning, Sam, Mishkin, Pamela, and Rock, Daniel. 2023. “GPTs Are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.” ArXiv. DOI: 10.48550/arXiv.2303.10130.
Eubanks, Virginia. 2018. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press.
Eurofound. 2019. Technology Scenario: Employment Implications of Radical Automation. Luxembourg: Publications Office of the European Union.
Federal Ministry for Economic Affairs and Climate Action. 2022. The Future of Work in the Digital Transformation. Berlin: Federal Ministry for Economic Affairs and Climate Action. https://www.bmwk.de/Redaktion/EN/Publikationen/bmwk-ga-the-future-of-work-in-the-digital-transformation.html.
Feldman Barrett, Lisa. 2017. How Emotions Are Made: The Secret Life of the Brain. New York: Houghton Mifflin Harcourt.
Felten, Edward W., Raj, Manav, and Seamans, Robert. 2023. “How Will Language Modelers Like ChatGPT Affect Occupations and Industries?” DOI: 10.2139/ssrn.4375268.
Frey, Carl Benedikt, and Osborne, Michael A. 2017. “The Future of Employment: How Susceptible Are Jobs to Computerization?” Technological Forecasting and Social Change 114: 254–80.
Friedland, Julian. 2019. “Activating Moral Agency by Design: A Model for Ethical AI Development.” MIT Sloan Management Review 60 (4): 114.
Gaburro, Giuseppe, and Cressotti, Giancarlo. 1998. “Work as Such: The Social Teaching of the Church on Human Work.” International Journal of Social Economics 25 (11/12): 1618–39.
Gavin, Joanne H., and Mason, Richard O. 2004. “The Virtuous Organization: The Value of Happiness in the Workplace.” Organizational Dynamics 33 (4): 379–92.
Ge, Shuzhi Sam. 2007. “Social Robotics: Integrating Advances in Engineering and Computer Science.” In Proceedings of the Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology International Conference, 9–12. New York: IEEE.
Gevorkyan, Aleksander V., and Clark, Charles. 2020. “Artificial Intelligence and Human Flourishing.” American Journal of Economics and Sociology 79 (4): 1307–44.
Gigerenzer, Gerd. 2022. How to Stay Smart in a Smart World. Cambridge, MA: MIT Press.
Goldin, Claudia, and Katz, Lawrence F. 1998. “The Origins of Technology–Skill Complementarity.” Quarterly Journal of Economics 113 (3): 693–732.
Goldin, Claudia, and Katz, Lawrence F. 2009. The Race between Education and Technology. Cambridge, MA: Harvard University Press.
Goldschmidt, Deborah, and Schmieder, Johannes F. 2017. “The Rise of Domestic Outsourcing and the Evolution of the German Wage Structure.” Quarterly Journal of Economics 132 (3): 1165–217.
González Vázquez, Ignacio, Milasi, Santo, Carretero Gomez, Stephanie, Napierala, Joanna, Robledo Böttcher, Nicolas, Jonkers, Koen, Goenaga Beldarrain, Xabier, et al., eds. 2019. The Changing Nature of Work and Skills in the Digital Age. Luxembourg: Publications Office of the European Union.
Gorichanaz, Tim. 2022. “Designing a Future Worth Wanting: Applying Virtue Ethics to HCI.” In Proceedings of the ACM Conference (Conference ’17). New York: Association for Computing Machinery.
Graham, Mark, Hjorth, Isis, and Lehdonvirta, Vili. 2019. “Digital Labor and Development: Impacts of Global Digital Labor Platforms and the Gig Economy on Worker Livelihoods.” In Digital Economies at Global Margins, edited by Graham, Mark, 269–94. Cambridge, MA: MIT Press.
Gray, Mary L., and Suri, Siddharth. 2019. Ghost Work: How to Stop Silicon Valley from Building a New Underclass. Boston: Houghton Mifflin Harcourt.
Grint, Keith, and Woolgar, Steve. 1997. The Machine at Work. Cambridge: Polity Press.
Guitián, Gregorio. 2015. “Service as a Bridge between Ethical Principles and Business Practice: A Catholic Social Teaching Perspective.” Journal of Business Ethics 128 (1): 59–72.
Guizzo, Erico. 2008. “The Rise of the Machines.” IEEE Spectrum 45 (12): 88.
Haase, Jennifer, and Hanel, Paul H. P. 2023. “Artificial Muses: Generative Artificial Intelligence Chatbots Have Risen to Human-Level Creativity.” Journal of Creativity 33 (3): 100066.
Hagendorff, Thilo. 2022. “A Virtue-Based Framework to Support Putting AI Ethics into Practice.” Philosophy and Technology 35: Article 55.
Heaven, Will Douglas. 2022. “The Messy Morality of Letting AI Make Life-and-Death Decisions.” MIT Technology Review, October 13. https://www.technologyreview.com/2022/10/13/1060945/artificial-intelligence-life-death-decisions-hard-choices/.
Hess, Thomas, Matt, Christian, Benlian, Alexander, and Wiesböck, Florian. 2016. “Options for Formulating a Digital Transformation Strategy.” MIS Quarterly Executive 15 (2): 123–39.
Holford, W. David. 2020. “An Ethical Inquiry of the Effect of Cockpit Automation on the Responsibilities of Airline Pilots: Dissonance or Meaningful Control?” Journal of Business Ethics 176: 141–57.
Illia, Laura, Colleoni, Elanor, and Zyglidopoulos, Stelios. 2023. “Ethical Implications of Text Generation in the Age of Artificial Intelligence.” Business Ethics, the Environment, and Responsibility 32 (1): 201–10.
Jaimovich, Nir, and Siu, Henry E. 2019. “How Automation and Other Forms of IT Affect the Middle Class: Assessing the Estimates.” Paper prepared for “Automation and the Middle Class,” Brookings Institution, Future of the Middle Class Initiative. https://www.brookings.edu/wp-content/uploads/2019/11/Siu-Jaimovich_Automation-and-the-middle-class.pdf.
Jaimovich, Nir, and Siu, Henry E. 2020. “Job Polarization and Jobless Recoveries.” Review of Economics and Statistics 102 (1): 129–47.
Jia, Nan, Luo, Xueming, Fang, Zheng, and Liao, Chengcheng. 2023. “When and How Artificial Intelligence Augments Employee Creativity.” Academy of Management Journal. DOI: 10.5465/amj.2022.0426.
John Paul II. 1981. Laborem Exercens. Vatican City: Libreria Editrice Vaticana.
John Paul II. 2004. Compendium of the Social Doctrine of the Church. Vatican City: Libreria Editrice Vaticana.
Kaasinen, Eija, Kymäläinen, Tiina, Niemelä, Marketta, Olsson, Thomas, Kanerva, Minni, and Ikonen, Veikko. 2012. “A User-centric View of Intelligent Environments: User Expectations, User Experience and User Role in Building Intelligent Environments.” Computers 2 (1): 1–33.
Kane, Gerald C., Palmer, Doug, Phillips, Anh Nguyen, Kiron, David, and Buckley, Natasha. 2015. “Strategy, Not Technology, Drives Digital Transformation.” MIT Sloan Management Review, July 14. https://sloanreview.mit.edu/projects/strategy-drives-digital-transformation/.
Kantrowitz, Alex. 2023. “The Horrific Content a Kenyan Worker Had to See While Training ChatGPT.” Slate, May 21. https://slate.com/technology/2023/05/openai-chatgpt-training-kenya-traumatic.html.
Katz, Lawrence F., and Krueger, Alan B. 2017. “The Role of Unemployment in the Rise in Alternative Work Arrangements.” American Economic Review 107 (5): 388–92.
Kautz, Tim, Heckman, James J., Diris, Ron, ter Weel, Bas, and Borghans, Lex. 2014. “Fostering and Measuring Skills: Improving Cognitive and Non-cognitive Skills to Promote Lifetime Success.” Education Working Paper 110, Organisation for Economic Co-operation and Development, Paris.
Kim, Tae Wan, and Scheller-Wolf, Alan. 2019. “Technological Unemployment, Meaning in Life, Purpose of Business, and the Future of Stakeholders.” Journal of Business Ethics 160 (2): 319–37.
Klym, Natalie. 2022. “Responsible Sourcing and the Professionalization of Data Work.” Montreal AI Ethics Institute, October 23. https://montrealethics.ai/responsible-sourcing-and-the-professionalization-of-data-work/.
Knieps, Günter. 2021. “Digitalization Technologies: The Evolution of Smart Networks.” In A Modern Guide to the Digitization of Infrastructure, edited by Montero, Juan and Finger, Matthias, 43–58. Cheltenham, UK: Edward Elgar.
Korinek, Anton, and Juelfs, Megan. 2022. “Preparing for the (Non-existent?) Future of Work.” Working Paper 30172, National Bureau of Economic Research, Cambridge, MA.
Langlois, Richard. 2003. “Cognitive Comparative Advantage and the Organization of Work: Lessons from Herbert Simon’s Vision of the Future.” Journal of Economic Psychology 24 (2): 167–87.
Leivada, Evelina, Murphy, Elliot, and Marcus, Gary. 2023. “DALL-E 2 Fails to Reliably Capture Common Syntactic Processes.” Social Sciences and Humanities Open 8 (1): 100648.
Levin, Sam, Wong, Julia Carrie, and Harding, Luke. 2016. “Facebook Backs Down from ‘Napalm Girl’ Censorship and Reinstates Photo.” The Guardian, September 9. https://www.theguardian.com/technology/2016/sep/09/facebook-reinstates-napalm-girl-photo.
Lindebaum, Dirk, Vesa, Mikko, and den Hond, Frank. 2020. “Insights from ‘The Machine Stops’ to Better Understand Rational Assumptions in Algorithmic Decision Making and Its Implications for Organizations.” Academy of Management Review 45 (1): 247–63.
Linebaugh, Kate, and Knutson, Ryan. 2023. “The Hidden Workforce That Helped Filter Violence and Abuse Out of ChatGPT.” Wall Street Journal, July 11. https://www.wsj.com/podcasts/the-journal/the-hidden-workforce-that-helped-filter-violence-and-abuse-out-of-chatgpt/ffc2427f-bdd8-47b7-9a4b-27e7267cf413.
Lordan, Grace, and Neumark, David. 2018. “People versus Machines: The Impact of Minimum Wages on Automatable Jobs.” Labour Economics 52: 40–53.
MacIntyre, Alasdair. (1981) 2007. After Virtue. 3rd ed. London: Duckworth.
MacIntyre, Alasdair. 1988. Whose Justice? Which Rationality? Notre Dame, IN: University of Notre Dame Press.
MacIntyre, Alasdair. 1990. Three Rival Versions of Moral Enquiry: Encyclopaedia, Genealogy, and Tradition. Notre Dame, IN: University of Notre Dame Press.
MacIntyre, Alasdair. 2016. Ethics in the Conflicts of Modernity. Cambridge: Cambridge University Press.
Marsh, Peter. 2012. The New Industrial Revolution: Consumers, Globalization and the End of Mass Production. New Haven, CT: Yale University Press.
Mateescu, Alexandra, and Nguyen, Aiha. 2019a. “Explainer: Algorithmic Management in the Workplace.” Data and Society, February 6. https://datasociety.net/library/explainer-algorithmic-management-in-the-workplace.
Mateescu, Alexandra, and Nguyen, Aiha. 2019b. “Explainer: Workplace Monitoring and Surveillance.” Data and Society, February 6. https://datasociety.net/library/explainer-workplace-monitoring-surveillance.
Matt, Christian, Hess, Thomas, and Benlian, Alexander. 2015. “Digital Transformation Strategies.” Business and Information Systems Engineering 57: 339–43.
Meaker, Morgan. 2023. “These Prisoners Are Training AI.” WIRED, September 11. https://www.wired.com/story/prisoners-training-ai-finland/.
Medeiros, Joao. 2023. “The Daring Robot Surgery That Saved a Man’s Life.” WIRED, May 18. https://www.wired.co.uk/article/proximie-remote-surgery-nhs.
Mejia, Santiago. 2023. “The Normative and Cultural Dimension of Work: Technological Unemployment as a Cultural Threat to a Meaningful Life.” Journal of Business Ethics 185: 847–64.
Mitchell, Matthew. 2019. “Comment on ‘The Impact of Artificial Intelligence on Innovation: An Exploratory Analysis.’” In The Economics of Artificial Intelligence: An Agenda, edited by Agrawal, Ajay, Gans, Joshua, and Goldfarb, Avi, 146–48. Cambridge, MA: National Bureau of Economic Research.
Mokyr, Joel, Vickers, Chris, and Ziebarth, Nicholas L. 2015. “The History of Technological Anxiety and the Future of Economic Growth: Is This Time Different?” Journal of Economic Perspectives 29 (3): 31–50.
Mori, Masahiro, MacDorman, Karl F., and Kageki, Norri. 2012. “The Uncanny Valley.” IEEE Robotics and Automation 19 (2): 98–100.
Moser, Christine, den Hond, Frank, and Lindebaum, Dirk. 2022. “Morality in the Age of Artificially Intelligent Algorithms.” Academy of Management Learning and Education 21 (1).
Muro, Mark, Maxim, Robert, and Whiton, Jacob. 2019. Automation and Artificial Intelligence: How Machines Are Affecting People and Places. Washington, DC: Metropolitan Policy Program at Brookings. https://www.brookings.edu/wp-content/uploads/2019/01/2019.01_BrookingsMetro_Automation-AI_Report_Muro-Maxim-Whiton-FINAL-version.pdf.
Nadkarni, Swen, and Prügl, Reinhard. 2021. “Digital Transformation: A Review, Synthesis, and Opportunities for Future Research.” Management Review Quarterly 71 (2): 233–341.
Nagel, Lisa. 2020. “The Influence of the COVID-19 Pandemic on the Digital Transformation of Work.” International Journal of Sociology and Social Policy 40 (9/10): 861–75.
Nardi, Bonnie A., and Engeström, Yrjö. 1999. “A Web on the Wind: The Structure of Invisible Work.” Computer Supported Cooperative Work 8 (1–2): 1–8.
Nedelkoska, Ljubica, and Quintini, Glenda. 2018. “Automation, Skills Use, and Training.” Social, Employment and Migration Working Paper 202, Organisation for Economic Co-operation and Development, Paris.
Njanja, Annie. 2023. “Meta Faces Third Lawsuit in Kenya as Moderators Claim Illegal Sacking, Blacklisting.” TechCrunch, March 20. https://techcrunch.com/2023/03/20/meta-faces-third-lawsuit-in-kenya-as-moderators-claim-illegal-sacking-blacklisting/.
Otieno, Matthew. 2023. “Is It Wrong to Pay Kenyans US$2 an Hour to Take Out ChatGPT’s Garbage?” Mercatornet, February 8. https://www.mercatornet.com/is-it-wrong-to-pay-kenyans-us2-an-hour-to-take-out-chatgpts-garbage.
Paolillo, Antonio, Colella, Fabrizio, Nosengo, Nicola, Schiano, Fabrizio, Stewart, William, Zambrano, Davide, Chappuis, Isabelle, Lalive, Rafael, and Floreano, Dario. 2022. “How to Compete with Robots by Assessing Job Automation Risks and Resilient Alternatives.” Science Robotics 7 (65).
Patraucean, Viorica, Smaira, Lucas, Gupta, Ankush, Recasens, Adria, Yang, Yi, Malinowski, Mateusz, Doersch, Carl, et al. 2022. “Measuring Perception in AI Models.” Google Deepmind (blog), October 12. https://www.deepmind.com/blog/measuring-perception-in-ai-models.
Perrigo, Billy. 2023. “Exclusive: OpenAI Used Kenyan Workers on Less Than $2 per Hour to Make ChatGPT Less Toxic.” Time, January 18. https://time.com/6247678/openai-chatgpt-kenya-workers/.
Picard, Rosalind W. 2010. “Affective Computing: From Laughter to IEEE.” IEEE Transactions on Affective Computing 1 (1): 11–17.
Pius XI. 1931. Quadragesimo Anno. Vatican City: Libreria Editrice Vaticana.
Pontifical Council for Justice and Peace. 2004. Compendium of the Social Doctrine of the Church. Vatican City: Libreria Editrice Vaticana.
Ransbotham, Sam, Candelon, François, Kiron, David, LaFountain, Burt, and Khodabandeh, Shervin. 2021. “The Cultural Benefits of Artificial Intelligence in the Enterprise.” MIT Sloan Management Review, November 2. https://sloanreview.mit.edu/projects/the-cultural-benefits-of-artificial-intelligence-in-the-enterprise/.
Redín, Dulce M., Cabaleiro-Cerviño, Goretti, Rodriguez-Carreño, Ignacio, and Scalzo, German. 2023. “Innovation as a Practice: Why Automation Will Not Kill Innovation.” Frontiers in Psychology 13: 1045508.
Renieris, Elizabeth, Kiron, David, and Mills, Steven. 2022. “Mature RAI Programs Can Help Minimize AI System Failures.” MIT Sloan Management Review, October 4. https://sloanreview.mit.edu/article/mature-rai-programs-can-help-minimize-ai-system-failures.
Robotics-VO. 2013. A Roadmap for US Robotics: From Internet to Robotics. 2013 ed. Atlanta: Georgia Institute of Technology. http://archive2.cra.org/ccc/files/docs/2013-Robotics-Roadmap.
Rodríguez-Lluesma, Carlos, García-Ruiz, Pablo, and Pinto-Garay, Javier. 2021. “The Digital Transformation of Work: A Relational View.” Business Ethics, the Environment, and Responsibility 30 (1): 157–67.
Roose, Kevin. 2021. “The Robots Are Coming for Phil in Accounting.” New York Times, March 6. https://www.nytimes.com/2021/03/06/business/the-robots-are-coming-for-phil-in-accounting.html.
Rust, Roland T., and Huang, Ming-Hui. 2021. The Feeling Economy: How Artificial Intelligence Is Creating the Era of Empathy. Cham, Switzerland: Palgrave Macmillan.
Sánchez-Puerta, María Laura, Valerio, Alexandria, and Gutiérrez Bernal, Marcela. 2016. Taking Stock of Programs to Develop Socioemotional Skills: A Systematic Review of Program Evidence. Washington, DC: World Bank.
Sandberg, Anders, and Bostrom, Nick. 2008. Whole Brain Emulation: A Roadmap. Technical Report 2008-3, Future of Humanity Institute, Oxford University, Oxford. https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf.
Satariano, Adam, and Isaac, Mike. 2021. “The Silent Partner Cleaning Up Facebook for $500 Million a Year.” New York Times, August 31. https://www.nytimes.com/2021/08/31/technology/facebook-accenture-content-moderation.html.
Scherer, Klaus R., Bänziger, Tanja, and Roesch, Etienne. 2010. A Blueprint for Affective Computing: A Sourcebook and Manual. Oxford: Oxford University Press.
Schwab, Klaus. 2017. The Fourth Industrial Revolution. Sydney: Currency.
Seeber, Isabella, Bittner, Eva, Briggs, Robert Owen, De Vreede, Triparna, De Vreede, Gert Jan, Elkins, Aaron, Maier, Ronald, et al. 2020. “Machines as Teammates: A Research Agenda on AI in Team Collaboration.” Information and Management 57 (2): 103174.
Shepherd, Dean A., Maitlis, Sally, Parida, Vinit, Wincent, Joakim, and Lawrence, Thomas B. 2022. “Intersectionality in Intractable Dirty Work: How Mumbai Ragpickers Make Meaning of Their Work and Lives.” Academy of Management Journal 65 (5): 1680–708.
Shneiderman, Ben. 2022. Human-Centered AI. Oxford: Oxford University Press.
Sison, Alejo Jose G., Ferrero, Ignacio, and Guitián, Gregorio. 2016. “Human Dignity and the Dignity of Work: Insights from Catholic Social Teaching.” Business Ethics Quarterly 26 (4): 503–28.
Sison, Alejo Jose G., and Redín, Dulce María. 2023. “A Neo-Aristotelian Perspective on the Need for Artificial Moral Agents (AMAs).” AI and Society 38: 47–65.
Snow, Nancy E. 2017. “Neo-Aristotelian Virtue Ethics.” In The Oxford Handbook of Virtue, edited by Snow, Nancy E., 321–42. Oxford: Oxford University Press.
Sparrow, Betsy, Liu, Jenny, and Wegner, Daniel M. 2011. “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips.” Science 333 (6043): 776–78.
Stern, Jacob. 2023. “AI Is Running Circles around Robotics.” The Atlantic, April 4. https://www.theatlantic.com/technology/archive/2023/04/ai-robotics-research-engineering/673608/.
Stokel-Walker, Chris. 2022. “Meet the Twitter Curators Highlighting DALL-E’s Weirdest AI Art.” Input, June 16. https://www.inverse.com/input/culture/dall-e-mini-ai-weird-twitter-images-viral-art.
Sutton, Steve G., Arnold, Vicky, and Holt, Matthew. 2018. “How Much Automation Is Too Much? Keeping the Human Relevant in Knowledge Work.” Journal of Emerging Technologies in Accounting 15 (2): 15–25.
Thompson, Derek. 2022. “What Moneyball-for-Everything Has Done to American Culture.” The Atlantic, October 30. https://www.theatlantic.com/newsletters/archive/2022/10/sabermetrics-analytics-ruined-baseball-sports-music-film/671924/.
Trenerry, Brigid, Chng, Samuel, Wang, Yang, Suhaila, Zainal Shah, Lim, Sun Sun, Lu, Han Yu, and Oh, Peng Ho. 2021. “Preparing Workplaces for Digital Transformation: An Integrative Review and Framework of Multi-level Factors.” Frontiers in Psychology 12: 620766.
Tschang, Feichin Ted, and Almirall, Esteve. 2021. “Artificial Intelligence as Augmenting Automation: Implications for Employment.” Academy of Management Perspectives 35 (4): 642–59.
Vallor, Shannon. 2016. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford: Oxford University Press.
Venkatesh, Viswanath. 2006. “Where to Go from Here? Thoughts on Future Directions for Research on Individual-Level Technology Adoption with a Focus on Decision Making.” Decision Sciences 37 (4): 497–518.
Venkatesh, Viswanath, and Bala, Hillol. 2008. “Technology Acceptance Model 3 and a Research Agenda on Interventions.” Decision Sciences 39 (2): 273–315.
Verhoef, Peter C., Broekhuizen, Thijs, Bart, Yakov, Bhattacharya, Abhi, Dong, John Qi, Fabian, Nicolai, and Haenlein, Michael. 2021. “Digital Transformation: A Multidisciplinary Reflection and Research Agenda.” Journal of Business Research 122: 889–901.
Vial, Gregory. 2019. “Understanding Digital Transformation: A Review and a Research Agenda.” Journal of Strategic Information Systems 28 (2): 118–44.
Vuorikari, Riina, Punie, Yves, Carretero, Stephanie, and Van den Brande, Godelieve. 2016. DigComp 2.0: The Digital Competence Framework for Citizens. Update Phase 1: The Conceptual Reference Model. Luxembourg: Publications Office of the European Union.
Warr, Peter. 2007. Work, Happiness, and Unhappiness. Hillsdale, NJ: Erlbaum.
Wessel, Lauri, Baiyere, Abayomi, Ologeanu-Taddei, Roxana, Cha, Jonghyuk, and Blegind-Jensen, Tina. 2020. “Unpacking the Difference between Digital Transformation and IT-Enabled Organizational Transformation.” Journal of the Association for Information Systems 22 (1).
Wilson, H. James, Daugherty, Paul R., and Morini-Bianzino, Nicola. 2017. “The Jobs That Artificial Intelligence Will Create.” MIT Sloan Management Review, March 23. https://sloanreview.mit.edu/article/will-ai-create-as-many-jobs-as-it-eliminates/.
Zuboff, Shoshana. 2019. “Surveillance Capitalism and the Challenge of Collective Action.” New Labor Forum 28 (1): 10–29.
Table 1: Human Advantages vis-à-vis Technology in Digitally Transformed Work
Table 2: Conceptual Foundations for Virtuous Work from Neo-Aristotelian, CST, and MacIntyrean Perspectives
Table 3: Principles for Virtuous, Digitally Transformed Work, Dos and Don’ts