
Part III - Bringing Information Subjects into Commons Governance

Published online by Cambridge University Press:  29 March 2021

Madelyn Rose Sanfilippo
Affiliation:
University of Illinois, Urbana-Champaign
Brett M. Frischmann
Affiliation:
Villanova University School of Law
Katherine J. Strandburg
Affiliation:
New York University School of Law

Publisher: Cambridge University Press
Print publication year: 2021
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

8 Governing the Internet of Everything

Scott J. Shackelford

Since the term was first coined in the late 1990s during a presentation about the benefits of radio-frequency identification (RFID) tags in the retail sector, the “Internet of Things” (IoT) has promised a smart, interconnected global digital ecosystem enabling your toaster to text you when your breakfast is ready, and your sweatshirt to give you status updates during your workout. This rise of “smart products” such as internet-enabled refrigerators and self-driving cars holds the promise to revolutionize business and society. But the smart wave will not stop with stuff, owing to related trends such as the “Internet of Bodies” now coming into vogue (Atlantic Council, 2017). It seems that, if anything, humanity is headed toward an “Internet of Everything,” a term that Cisco helped to pioneer (Evans, 2012).

The Internet of Everything (IoE) takes the notion of IoT a step further by including not only the physical infrastructure of smart devices but also its impacts on people, business, and society. Thus, the IoE may be understood as “the intelligent connection of people, process, data and things[,]” whereas IoT is limited to “the network of physical objects accessed through the Internet” (Banafa, 2016). This broader lens is vital for considering the myriad security and privacy implications of smart devices becoming pervasive throughout society and our lives. Other ways to conceptualize the problem abound, such as Bruce Schneier’s notion of Internet+, or Eric Schmidt’s contention that “the Internet will disappear” given the proliferation of smart devices (Giles, 2018). Regardless, the salient point is that our world is getting more connected, if not smarter, but to date governance regimes have struggled to keep pace with this dynamic rate of innovation.

Yet it is an open question whether security and privacy protections can or will scale within this dynamic and complex global digital ecosystem, and whether law and policy can keep up with these developments. As Schneier has argued:

The point is that innovation in the Internet+ world can kill you. We chill innovation in things like drug development, aircraft design, and nuclear power plants because the cost of getting it wrong is too great. We’re past the point where we need to discuss regulation versus no-regulation for connected things; we have to discuss smart regulation versus stupid regulation.

The natural question, then, is whether our approach to governing the IoE is, well, smart. This chapter explores what lessons the Institutional Analysis and Development (IAD) and Governing Knowledge Commons (GKC) frameworks hold for promoting security and privacy in an IoE, with special attention to the promise and peril of blockchain technology as a means of building trust in such a massively distributed network. Particular attention is paid to governance gaps in this evolving ecosystem, and to the state, federal, and international policies needed to better address security and privacy failings.

The chapter is structured as follows. It begins by offering an introduction to the IoE for the uninitiated, and continues by applying the IAD and GKC frameworks, emphasizing their application to the IoE. It then explores the utility of blockchain technology for building trust in distributed systems, before summarizing implications for managers and policymakers, focusing on the intersection between polycentric governance and cyber peace.

8.1 Welcome to the Internet of Everything

As ever more stuff – not just computers and smartphones, but thermostats and baby monitors, wristwatches, lightbulbs, doorbells, and even devices implanted in our own bodies – is interconnected, the looming cyber threat can easily get lost in the excitement of lower costs and smarter tech. Indeed, smart devices, purchased for their convenience, are increasingly being used by domestic abusers as a means to harass, monitor, and control their victims (Bowles, 2018). Yet, for all the press that the IoT has received, it remains a topic little understood or appreciated by the public. One 2014 survey, for example, found that fully 87% of respondents had never even heard of the “Internet of Things” (Merriman, 2014). Nevertheless, managing the growth of the IoE implicates a diverse set of interests: US national and international security; the competitiveness of firms; global sustainable development; trust in democratic processes; and safeguarding civil rights and liberties in the Information Age.

The potential of IoT tech has only begun to be realized since 2010, arguably as the result of the confluence of at least three factors: (1) the widespread availability of always-on high-speed Internet connectivity in many parts of the world; (2) faster computational capabilities permitting the real-time analysis of Big Data; and (3) economies of scale lowering the cost of sensors and chips to manufacturers (Shackelford, 2017). However, the rapid rollout of IoT technologies has not been accompanied by any mitigation of the array of technical vulnerabilities across these devices, highlighting a range of governance gaps that may be addressed by reference to the Ostrom Design Principles along with the IAD and GKC frameworks.

8.2 Applying the IAD and GKC Frameworks to the Internet of Everything

The animating rationale behind the IAD framework was, quite simply, a lack of shared vocabulary to discuss common governance challenges across a wide range of resource domains and issue areas (Cole, 2014). “Scholars adopting … [the IAD] framework essentially commit to ‘a common set of linguistic elements that can be used to analyze a wide diversity of problems,’” including, potentially, cybersecurity and Internet governance. Without such a framework, according to Professor Dan Cole, confusion is common, such as in defining “resource systems,” which in the intellectual property context can include “information, data, or knowledge” alongside natural resources (Cole, 2014, 51). In the Internet governance context, similar confusion surrounds core terms such as “cyberspace,” “information security,” and “cybersecurity” (Shackelford, 2014). There are also other, more specialized issues to consider, such as defining what constitutes “critical infrastructure,” and what, if any, “due diligence” obligations operators have to protect it from cyber attackers. Similarly, the data underlying these systems is subject to a range of sometimes vying legal protections. As Professor Cole argues, “[t]rade names, trade secrets, fiduciary and other privileged communications, evidence submitted under oath, computer code, and many other types of information and flows are all dealt with in various ways in the legal system” (Cole, 2014, 52).

Although created for a different context, the IAD framework can nevertheless improve our understanding of data governance, help identify and better understand problems in various institutional arrangements, and aid in prediction under alternative institutional scenarios (Cole, 2014). Indeed, Professor Ostrom believed that the IAD framework had wide application, a belief that has been borne out given that it is among the most popular institutional frameworks used in a variety of studies, particularly those focused on natural commons. The IAD framework is unpacked in Figure 8.1, and its application to IoE governance is analyzed in turn, after which some areas of convergence and divergence with the GKC framework are highlighted.

Figure 8.1 The Institutional Analysis and Development (IAD) framework

It can be difficult to exclude users from networks, especially networks holding valuable trade secrets, given the extent to which they present enticing targets for both external actors and insider threats. With these distinctions in mind, Professors Brett Frischmann, Michael Madison, and Katherine Strandburg have suggested a revised IAD framework for the knowledge commons, reproduced in Figure 8.2.

Figure 8.2 The Governing Knowledge Commons (GKC) framework

Space constraints prohibit an in-depth analysis of the myriad ways in which the GKC framework might be useful in conceptualizing an array of security and privacy challenges in the IoE, but a brief survey is nevertheless attempted later. In brief, the distinctions of this approach, as compared with the traditional IAD framework, include (1) greater interactions on the left side of the chart, underscoring the complex interrelationships in play; (2) the fact that the action arena can similarly influence the resource characteristics and community attributes; and (3) that the interaction of rules and outcomes in knowledge commons is often inseparable (Frischmann, Madison, and Strandburg, 2014, 19). These insights also resonate in the IoE context, given the tremendous number of interactions between stakeholders, including IoT device manufacturers, standards-setting bodies, regulators (both national and international), and consumers. Similarly, these interactions are dynamic, given that security compromises in one part of the IoE ecosystem can play out in a very different context, as seen in the Mirai botnet, in which compromised smart light bulbs and other IoE devices were networked to crash critical Internet services (Botezatu, 2016).

The following subsections dive into various elements of the GKC framework in order to better understand its utility in conceptualizing IoE governance challenges.

8.2.1 Resource Characteristics and Classifying Goods in Cyberspace

Digging into the GKC framework, beginning on the left side of Figure 8.2, there is an array of characteristics to consider, including “facilities through which information is accessed” such as the Internet itself, as well as “artifacts … including … computer files” and the “ideas themselves” (Cole, 2014, 10). The “artifacts” category is especially relevant in cybersecurity discussions, given that it includes trade secret protections, which are closer to a pure private good than a public good and are also the currency of global cybercrime (Shackelford et al., 2015). Internet governance institutions (or “facilities” in this vernacular) can also control the rate at which ideas are diffused, such as through censorship taking subtle (e.g., YouTube’s decision to take down Nazi-themed hate speech videos) or extreme (e.g., China’s Great Firewall) forms (Beech, 2016).

There is also a related issue to consider: what type of “good” is at issue in the cybersecurity context? In general, goods are placed into four categories, depending on where they fall on the spectra of exclusion and subtractability (Buck, 1998). Exclusion refers to the relative ease with which goods may be protected. Subtractability evokes the extent to which one’s use of a good decreases another’s enjoyment of it. If it is easy to exclude others from the use of a good, coupled with a high degree of subtractability, then the good is likely to be characterized as a “private good” that is defined by property law and best regulated by the market (Hiller and Shackelford, 2018). Examples in the IoT context are plentiful, from smart speakers to refrigerators. Legal rights, including property rights, to these goods include the right of exclusion discussed above. At the opposite end of the spectrum, where exclusion is difficult and subtractability is low, goods are more likely characterized as “public goods” that might be best managed by governments (Ostrom and Ostrom, 2015). An example is national defense, including, some argue, cybersecurity (Ostrom, 2009). This is an area of some debate, though, given the extensive private sector ownership of critical infrastructure, which makes drawing a clear line between matters of corporate governance and national security difficult.
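To make this typology concrete, the following is a minimal sketch (ours, not the chapter’s; written in Python, with illustrative examples) of how the exclusion/subtractability spectra map onto the four standard categories of goods discussed here and in the passage that follows:

```python
def classify_good(exclusion: str, subtractability: str) -> str:
    """Classify a good using the standard exclusion/subtractability matrix.

    exclusion: "easy" or "difficult" -- how readily others can be excluded.
    subtractability: "high" or "low" -- how much one use diminishes another's.
    """
    matrix = {
        ("easy", "high"): "private good",               # e.g., a smart speaker
        ("easy", "low"): "club (toll) good",            # e.g., a moderated online community
        ("difficult", "high"): "common-pool resource",  # e.g., congestible network capacity
        ("difficult", "low"): "public good",            # e.g., national defense
    }
    return matrix[(exclusion, subtractability)]

print(classify_good("easy", "high"))       # -> private good
print(classify_good("difficult", "low"))   # -> public good
```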

In its totality, the IoE includes all forms of goods, including private devices and municipal broadband networks, catalyzing a range of positive and negative externalities from network effects to cyberattacks. For example, the IoE includes digital communities as a form of club good, with these communities able to set their own rights of access; a contemporary example is the effort of Reddit moderators to stop trolls, limit hate speech, and promote a more civil dialogue among users (Roose, 2017). Such communal property rights may either be recognized by the state, or be based on a form of “benign neglect” (Buck, 1998, 5). Indeed, as of this writing, there is an active debate underway in the United States and Europe about regulating social-media platforms to limit the spread of terrorist propaganda, junk news, sex trafficking, and hate speech. Such mixed types of goods are more the norm than the exception. As Cole has argued:

[S]ince the industrial revolution it has become clear that the atmosphere, like waters, forests, and other natural resources, is at best an impure, subtractable, or congestible public good. As such, these resources fall somewhere on the spectrum between public goods, as technically defined, and club or toll goods. It is such impure public goods to which Ostrom assigned the label “common-pool resources”.

Naturally, the next question is whether cyberspace may in fact be comparable to the atmosphere as an impure public good, since pure public goods do not present the same sort of governance challenges, such as the well-studied “tragedy of the commons” scenario, which predicts the gradual overexploitation of common-pool resources (Feeny et al., 1990). Though cyberspace is unique in that it can, in fact, expand, such as through the addition of new networks (Jordan, 1990), increased use also multiplies threat vectors (Deibert, 2012).

Solutions to the tragedy of the commons typically “involve the replacement of open access with restricted access and use via private property, common property, or public property/regulatory regimes” (Frischmann, Madison, and Strandburg, 2014, 54). However, as Elinor Ostrom and numerous others have shown, self-organization is in fact possible in practice, as is discussed later (Frischmann, 2018). The growth of the IoE could hasten such tragedies if the vulnerabilities replete in this ecosystem are allowed to go unaddressed.

8.2.2 Community Attributes

The next element on the left side of the GKC framework, titled “Attributes of the Community,” refers to the network of users making use of the given resource (Smith, 2017). In the natural commons context, communities can be macro (at the global scale, when considering the impacts of global climate change) or micro, such as with shared access to a forest or lake. Similarly, in the cyber context, communities come in every conceivable scale and format, from private pages on Facebook to peer-to-peer communities to the global community of more than four billion Internet users as of October 2018, not to mention the billions of devices comprising the IoE. Even such a broad conceptualization omits impacted non-user stakeholders and infrastructure, as may be seen in the push to utilize 5G connectivity, AI, and analytics to power a “safe city” revolution, albeit one built on Huawei architecture. The scale of the multifaceted cyber threat facing the public and private sectors parallels in complexity the battle to combat the worst effects of global climate change (Cole, 2014; Shackelford, 2016). Such a vast scale stretches the utility of the GKC framework, which is why most efforts have considered subparts, or clubs, within this digital ecosystem.

An array of polycentric theorists, including Professor Ostrom, have extolled the benefits of small, self-organized communities in the context of managing common-pool resources (Ostrom, 1999). Anthropological evidence has confirmed the benefits of small-scale governance. However, micro-communities can ignore other interests, as well as the wider impact of their actions, online and offline (Murray, 2007). A polycentric model favoring bottom-up governance, but with a role for common standards and baseline rules to protect against free riders, may be the best-case scenario for IoE governance, as is explored further below. Such self-regulation has greater flexibility to adapt to dynamic technologies faster than top-down regulations, which, even if enacted, can result in unintended consequences, as seen in the debates surrounding California’s 2018 IoT law. As of January 2020, this law requires “any manufacturer of a device that connects ‘directly or indirectly’ to the Internet … [to] equip it with ‘reasonable’ security features, designed to prevent unauthorized access, modification, or information disclosure” (Robertson, 2018). Yet it is not a panacea, as we will see, and there is plentiful evidence that simple rule sets – especially when they are generated in consultation with engaged and empowered communities – can produce better governance outcomes.

8.2.3 Rules-in-Use

This component of the GKC framework comprises both community norms and formal legal rules. One of the driving questions in this area is identifying the appropriate governance level at which to formalize norms into rules – for example, whether that is at the constitutional level, the collective-choice level, etc. (Cole, 2014, 56). That is easier said than done in the cybersecurity context, given the wide range of industry norms, standards – such as the National Institute of Standards and Technology Cybersecurity Framework (NIST CSF) – state-level laws, sector-specific federal laws, and international laws regulating everything from banking transactions to prosecuting cybercriminals. Efforts have been made to build a more comprehensive understanding of the various norms and laws in place, such as through the International Telecommunication Union’s (ITU) Global Cybersecurity Index and the Carnegie Endowment’s International Cybersecurity Norms Project, but such efforts remain at an early stage of development. A variety of rules may be considered to help address governance gaps, such as position and choice rules that define the rights and responsibilities of actors, such as IoT manufacturers and Internet Service Providers (ISPs), as is shown in Table 8.1 (Ostrom and Crawford, 2005). Given the degree to which core critical infrastructure – such as smart grids and Internet-connected medical devices – is also subsumed within IoT debates, there is a great deal of overlap between potential rule sets, from incentivizing the use of cybersecurity standards and frameworks, as is happening in Ohio, to hardening supply chains.

Table 8.1 Types of rules

Aggregation rules: Determine whether a decision by a single actor or multiple actors is needed prior to acting at a decision point in a process.

Boundary rules: Define:
  1. who is eligible to take a position;
  2. the process for choosing who is eligible to take a position;
  3. how actors can leave positions;
  4. whether anyone can hold multiple positions simultaneously; and
  5. succession to vacant positions.

Choice rules: Define what actors in positions must, must not, or may do in their position and in particular circumstances.

Information rules: Specify channels of communication among actors, as well as the kinds of information that can be transmitted between positions.

Payoff rules: Assign external rewards or sanctions for particular actions or outcomes.

Position rules: Define positions that actors hold, including as owners of property rights and duties.

Many of these rules have cyber analogues, which emphasize cybersecurity information sharing through public–private partnerships to address common cyber threats, penalize firms and even nations for lax cybersecurity due diligence, and define the duties – including liability – of actors such as Facebook and Google (Reardon, 2018).
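Purely as an illustration of the taxonomy in Table 8.1 (the examples are hypothetical and not drawn from the chapter), these rule types can be modeled as tagged statements applied to IoE governance:

```python
from dataclasses import dataclass

@dataclass
class InstitutionalRule:
    """A single rule, tagged with its rule type from Table 8.1."""
    rule_type: str   # aggregation, boundary, choice, information, payoff, or position
    statement: str   # the rule itself, in plain language

# Hypothetical IoE-governance examples, one per selected rule type
iot_rules = [
    InstitutionalRule("position", "ISPs act as network operators with duties to their customers."),
    InstitutionalRule("boundary", "Only certified manufacturers may sell connected medical devices."),
    InstitutionalRule("choice", "Device makers must ship devices with unique default passwords."),
    InstitutionalRule("information", "Firms must notify regulators of breaches within a set period."),
    InstitutionalRule("payoff", "Regulators may fine firms for lax cybersecurity due diligence."),
]

for rule in iot_rules:
    print(f"{rule.rule_type:>12}: {rule.statement}")
```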

The question of what governance level is most appropriate for setting the rules for IoT devices is an urgent one, with an array of jurisdictions, including California, pressing ahead. For example, aside from its IoT-specific efforts, California’s 2018 Consumer Privacy Act is helping to set a new transparency-based standard for US privacy protections. Although not comparable to the EU’s General Data Protection Regulation (GDPR), discussed later, it does include provisions that allow consumers to sue over data breaches, including in the IoT context, and to decide when, and how, their data is gathered and used by companies (Adler, 2018). Whether such state-level action, even in a state with an economic footprint the size of California’s, will help foster enhanced cybersecurity due diligence across the broader IoE ecosystem remains to be seen.

8.2.4 Action Arenas

The action arena is just that: the place where decisions are made, where “collective action succeeds or fails” (Cole, 2014, 59). Such arenas exist at three levels within the GKC framework – constitutional, collective-choice, and operational. Decisions made at each of these governance levels, in turn, impact a range of rules and community attributes, which is an important feature of the framework. Examples of decision-makers in each arena in the cybersecurity context include (1) at the constitutional level, judges deciding the bounds of “reasonable care” and “due diligence” (Shackelford, 2015); (2) at the collective-choice (e.g., policy) level, federal and state policymakers, such as the Federal Trade Commission (FTC) policing unfair and deceptive trade practices; and (3) at the operational level, firms, households, and everyone else.

8.2.5 Evaluation Criteria

The final component, according to Cole, is “the most neglected and underdeveloped” element of these frameworks (Cole, 2014, 62). Elinor Ostrom, for example, offered the following “evaluative criteria” in considering how best to populate it: “(1) economic efficiency; (2) fiscal equivalence; (3) redistributional equity; (4) accountability; (5) conformance to values of local actors; and (6) sustainability” (Cole, 2014, 62). In the GKC context, these criteria might include “(1) increasing scientific knowledge; (2) sustainability and preservation; (3) participation standards; (4) economic efficiency; (5) equity through fiscal equivalence; and (6) redistributional equity” (Hess and Ostrom, 2007, 62). This lack of rigor might simply be due to the fact that, in the natural commons context, the overriding goal has been “long-run resource sustainability” (Cole, 2014, 62). It is related, in some ways, to the “Outcomes” element missing from the GKC framework but present in the IAD framework, which references predictable outcomes of interactions in social situations, which can include consequences for both resource systems and units. Although such considerations are beyond the findings of the IAD framework, in the cybersecurity context an end goal to consider is defining and implementing cyber peace.

“Cyber peace,” which has also been called “digital peace,” is a term that is increasingly used, but around which there remains little consensus. It is clearly more than the “absence of violence” online, which was the starting point for how Professor Johan Galtung described the new field of peace studies he helped create in 1969 (Galtung, 1969). Galtung argued that finding universal definitions for “peace” or “violence” was unrealistic; rather, the goal should be landing on an apt “subjectivistic” definition agreed to by the majority (Galtung, 1969, 168). He undertook this effort in a broad yet dynamic way, recognizing that as society and technology change, so too should our conceptions of peace and violence. That is why he defined violence as “the cause of the difference between the potential and the actual, between what could have been and what is” (Galtung, 1969, 168).

Cyber peace is defined here not as the absence of conflict, which may be called negative cyber peace. Rather, it is the construction of a network of multilevel regimes that promote global, just, and sustainable cybersecurity by clarifying the rules of the road for companies and countries alike, so as to help reduce the threats of cyber conflict, crime, and espionage to levels comparable to other business and national security risks. To achieve this goal, a new approach to cybersecurity is needed that seeks out best practices from the public and private sectors to build robust, secure systems, and that couches cybersecurity within the larger debate on Internet governance. Working together through polycentric partnerships of the kind described later, we can mitigate the risk of cyber war by laying the groundwork for a positive cyber peace that respects human rights, spreads Internet access along with best practices, and strengthens governance mechanisms by fostering multi-stakeholder collaboration (Galtung, 2012). The question of how best to achieve this end is open to interpretation. As Cole argues, “[f]rom a social welfare perspective, some combination of open- and closed-access is overwhelmingly likely to be more socially efficient than complete open or close-access” (Cole, 2014, 61). Such a polycentric approach is also a necessity in the cyber regime complex, given the prevalence of both private and public sector stakeholder controls.

In the cybersecurity context, increasing attention has been paid to identifying lessons from the green movement in order to consider the best-case scenario for a sustainable cyber peace. Indeed, cybersecurity is increasingly integral to discussions of sustainable development – including Internet access – which could inform the evaluative criteria of a sustainable cyber peace in the IoE. Such an approach also accords with the “environmental metaphor for information law and policy” that has been helpful in other efforts (Frischmann, Madison, and Strandburg, 2014, 16).

It is important to recognize the polycentric nature of the IoE in order to appreciate the huge number of stakeholders – including users – that can and should have a say in contributing to legitimate governance. Indeed, such concerns over “legitimate” Internet governance have been present for decades, especially since the creation of the Internet Corporation for Assigned Names and Numbers (ICANN). Given the pushback against that organization as a relatively top-down, artificial construct, as compared to the more bottom-up Internet Engineering Task Force (IETF), legitimacy in the IoE should, to the extent possible, be grounded locally, through independent (and potentially air-gapped) networks, Internet Service Providers (ISPs), and nested state, federal, and international law. To conceptualize such a system, the literature on regime complexes might prove helpful, and it is discussed next in the context of blockchain technology.

8.3 Is Blockchain the Answer to the IoE’s Woes?

Professor Ostrom argued that “[t]rust is the most important resource” (Escotet, 2010). Indeed, the end goal of any governance institution is arguably trust – how to build trust across users to attain a common goal, be it sustainable fishery management or securing the IoE. The GKC framework provides useful insights toward this end. But one technology could also help in this effort, namely blockchain, which, according to Goldman Sachs, could “change ‘everything’” (Lachance, 2016). Regardless of the question being asked, some argue that the answer is a blockchain – for the uninitiated, a cryptographic distributed ledger (Trust Machine, 2015). Its applications are widespread, from recording property deeds to securing medical devices. As such, its potential is being investigated by a huge range of organizations, including the US Defense Advanced Research Projects Agency (DARPA), IBM, Maersk, Disney, and Greece, the latter of which is seeking to leverage blockchain to enhance social capital by helping to build trust around common governance challenges, such as land titling (Casey and Vigna, 2018). Examples similarly abound regarding how firms use blockchains to enhance cybersecurity. The technology could enable the Internet to become more decentralized, pushing back against the type of closed platforms analyzed by Professor Jonathan Zittrain and others (Zittrain, 2008). Already, a number of IoT developers are experimenting with the technology in their devices; indeed, according to one recent survey, blockchain adoption in the IoT industry doubled over the course of 2018 (Zmudzinski, 2019).
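To illustrate the core idea behind a cryptographic distributed ledger – each record commits to the hash of the one before it, so retroactive tampering is detectable – here is a minimal, self-contained Python sketch; it deliberately omits the consensus and distribution machinery that real blockchains add:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents (including the previous block's hash)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: str) -> None:
    """Append a new block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "record": record, "prev_hash": prev})

def verify(chain: list) -> bool:
    """Recompute the hash links; any retroactive edit breaks the chain."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
append_block(ledger, "Parcel 17 titled to owner A")
append_block(ledger, "Parcel 17 transferred to owner B")
print(verify(ledger))                                 # True
ledger[0]["record"] = "Parcel 17 titled to owner C"   # tamper with history
print(verify(ledger))                                 # False -- tampering is detected
```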

Yet formidable hurdles remain before blockchain technology can be effectively leveraged to help promote sustainable development, peace, and security in the IoE. No blockchain, for example, has yet scaled to the extent necessary to search the entire web. There are also concerns over hacking and integrity (such as when a single entity controls more than fifty percent of the processing power), as well as the fact that innovation is happening so quickly that defenders are put in a difficult position as they try to build resilience into their distributed systems (Villasenor, 2018). But the potential for progress demands further research, including into how blockchain could help promote a polycentric cyber peace in the burgeoning IoE.

8.4 Polycentric Implications

As Professor Cole has maintained, “those looking for normative guidance from Ostrom” and the relevant governance frameworks and design principles discussed herein are often left wanting (Cole, 2014, 46). As with the big questions in the field of intellectual property, such as defining the optimal duration of a copyright, it stands to reason that the Ostroms’ work might tell us relatively little about the goal of defining, and pursuing, cyber peace. An exception to the Ostroms’ desire to eschew normative suggestions, though, is polycentric governance, which builds from the notion of subsidiarity, in which governance “is a ‘co-responsibility’ of units at central (or national), regional (subnational), and local levels” (Cole, 2014, 47).

For purposes of this study, the polycentric governance framework may be considered a multi-level, multi-purpose, multi-functional, and multi-sectoral model that has been championed by numerous scholars, including the Ostroms (McGinnis, 2011). It suggests that “a single governmental unit” is usually incapable of managing “global collective action problems” such as cyber attacks (Ostrom, 2009, 35). Instead, a polycentric approach recognizes that diverse organizations working at multiple scales can enhance “flexibility across issues and adaptability over time” (Keohane and Victor, 2011, 15). Such an approach can help foster the emergence of a norm cascade improving the Security of Things (Finnemore and Sikkink, 1998, 895).

Not all polycentric systems are guaranteed to be successful. Disadvantages, for example, can include gridlock and a lack of defined hierarchy (Keohane and Victor, 2011). Yet progress has been made on norm development, including cybersecurity due diligence, discussed later, which will help IoT manufacturers better fend off attacks by foreign nation states. Still, it is important to note that even the Ostroms’ commitment to polycentric governance “was contingent, context-specific, and focused on matching the scale of governance to the scale of operations appropriate for the particular production or provision problem under investigation” (Cole, 2014, 47). During field work in Indianapolis, Indiana, for example, the Ostroms found that medium-sized police departments “outperformed both smaller (neighborhood) and larger (municipal-level) units” (Cole, 2014, 47). In the IoE context, as has been noted, the scale could not be greater, with billions of people and devices interacting across myriad sectors, settings, and societies. The sheer complexity of such a system, along with the history of Internet governance to date, signals that there can be no single solution or governance forum to foster cyber peace in the IoE. Rather, polycentric principles gleaned from the GKC framework should be incorporated into novel efforts designed to identify the best governance practices across a range of devices, networks, and sectors. These should include creating clubs and industry councils of the kind that the GDPR is now encouraging to identify and spread cybersecurity best practices, leveraging new technologies such as blockchain to help build trust in this massively distributed system, and encouraging norm entrepreneurs like Microsoft and the State of California to experiment with new public–private partnerships informed by the sustainable development movement. Success will be difficult to ascertain, since it cannot simply be measured by the end of cyber attacks. Evaluation criteria are largely undefined in the GKC framework, as we have seen, which the community should take as a call to action, as is already happening among members of the Cybersecurity Tech Accord and the Trusted IoT Alliance.

Such efforts may be conceptualized further within the literature on the cyber regime complex. As interests, power, technology, and information diffuse and evolve over time within the IoE, comprehensive regimes are difficult to form; once formed, they can be unstable. As a result, “rarely does a full-fledged international regime with a set of rules and practices come into being at one period of time and persist intact” (Keohane and Victor, 2011, 9). According to Professor Oran Young, international regimes emerge as a result of “codifying informal rights and rules that have evolved over time through a process of converging expectations or tacit bargaining” (Young, 1997, 10). Consequently, regime complexes, as a form of bottom-up institution building, are becoming relatively more popular in both the climate and Internet governance contexts, which may have some benefits, since negotiations for multilateral treaties could divert attention from more practical efforts to create flexible, loosely coupled regimes (Keohane and Victor, 2011). An example of such a cyber regime complex may be found in the work of Professor Joseph S. Nye, Jr., reproduced in Figure 8.3.

Figure 8.3 Cyber regime complex map (Nye, 2014, 8)

But there are also costs of regime complexes to consider. In particular, such networks are susceptible to institutional fragmentation and gridlock. There are also moral considerations. For example, in the context of climate change, these regimes omit nations that are not major emitters, such as the least developed nations that are most at risk from the effects of a changing climate. Similar arguments could play out in the IoE context, with some consumers only being able to access less secure devices, due to jurisdictional differences, in ways that could impinge on their privacy. Consequently, the benefits of regime complexes must be critically analyzed. By identifying design rules for the architecture, interfaces, and integration protocols within the IoE, both governance scholars and policymakers may be able to develop novel research designs and interventions to help promote cyber peace.

8.5 Conclusion

As Cole has argued, “there are no institutional panaceas for resolving complex social dilemmas” (Cole, 2014, 48). Never has this arguably been truer than when considering the emerging global digital ecosystem here called the IoE. Yet we ignore the history of governance investigations at our peril as we look ahead to twenty-first-century global collective action problems such as promoting cyber peace in the IoE. Important questions remain about the utility of the Ostrom Design Principles and the IAD and GKC frameworks in helping us govern the IoE. Even more questions persist about the normative goals of such an enterprise – for example, what cyber peace might look like and how we might get there. That should not put off scholars interested in this endeavor. Rather, it should be seen as a call to action. The stakes could not be higher. Achieving a sustainable level of cybersecurity in the IoE demands novel methodologies, standards, and regimes. The Ostroms’ legacy helps to shine a light on the path toward cyber peace.

9 Contextual Integrity as a Gauge for Governing Knowledge Commons

Yan Shvartzshnaider, Madelyn Rose Sanfilippo, and Noah Apthorpe
9.1 Introduction

This chapter describes our approach to combining the Contextual Integrity (CI) and Governing Knowledge Commons (GKC) frameworks in order to gauge privacy expectations as governance. This GKC-CI approach helps us understand how and why different individuals and communities perceive and respond to information flows in very different ways. Using GKC-CI to understand consumers’ (sometimes incongruent) privacy expectations also provides deeper insights into the driving factors behind privacy norm evolution.

The CI framework (Nissenbaum, 2009) structures reasoning about the privacy implications of information flows. The appropriateness of information flows is defined in context, with respect to established norms in terms of their values and functions. Recent research has operationalized CI to capture users’ expectations in varied contexts (Apthorpe et al., 2018; Shvartzshnaider et al., 2016), as well as to analyze regulation (Selbst, 2013), establish research ethics guidelines (Zimmer, 2018), and conceptualize privacy within commons governance arrangements (Sanfilippo, Frischmann, and Strandburg, 2018).

The GKC framework examines patterns of interactions around knowledge resources within particular settings, labeled action arenas, by identifying background contexts; resources, actors, and objectives as attributes; aspects of governance; and patterns and outcomes (Frischmann, Madison, and Strandburg, 2014). Governance is further analyzed by identifying strategies, norms, and rules-in-use through an institutional grammar (Crawford and Ostrom, 1995). According to GKC, strategies are defined in terms of attributes, aims, and conditions; norms build on strategies through the incorporation of modal language; and rules provide further structure by embedding norms with consequences to sanction non-compliance. For example, a strategy can describe a digital personal assistant that uses audio recordings of users (attributes) in order to provide personalized advertisements (aim) when a user does not pay for an ad-free subscription (condition). If this information flow also included modal language, such as a hedge like “may” or “could,” or a deontic like “will” or “cannot,” it would be a norm. The addition of a consequence, such as a denial of service or financial cost, would make this example a rule. It is also notable that, from this perspective, there are differences between rules-on-the-books, which prescribe, and rules-in-use, which are applied.
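As a minimal illustration (our sketch, not the authors’ instrument), the digital personal assistant example can be encoded with the institutional grammar’s components, showing how a strategy becomes a norm and then a rule as modality and a consequence are added:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InstitutionalStatement:
    """Institutional grammar components (Crawford and Ostrom, 1995)."""
    attributes: str                    # who/what the statement applies to
    aim: str                           # the action or purpose
    condition: str                     # when/where it applies
    modality: Optional[str] = None     # hedge or deontic: "may", "could", "will", "cannot"
    consequence: Optional[str] = None  # sanction for non-compliance

    def kind(self) -> str:
        if self.modality is None:
            return "strategy"
        return "rule" if self.consequence is not None else "norm"

stmt = InstitutionalStatement(
    attributes="digital personal assistant using audio recordings of users",
    aim="provide personalized advertisements",
    condition="when the user does not pay for an ad-free subscription",
)
print(stmt.kind())                    # strategy
stmt.modality = "may"
print(stmt.kind())                    # norm
stmt.consequence = "denial of service"
print(stmt.kind())                    # rule
```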

GKC and CI are complementary frameworks for understanding privacy as governing institutions (Sanfilippo, Frischmann, and Strandburg, 2018) and as appropriate flows of personal information, respectively. Within the GKC framework, as with the broader intellectual tradition of institutional analysis, an institutional grammar can be applied to deconstruct individual institutions (Crawford and Ostrom, 1995). Table 9.1 illustrates the overlap between these frameworks and how each provides parameter specificity to the other. While the CI framework deconstructs information flows, the GKC framework considers governance structures and constraints regarding actors and their interactions with knowledge resources. Consider the digital personal assistant example from the previous paragraph. Under the institutional grammar (Crawford and Ostrom, 1995), the digital personal assistant, the audio recordings, and the users are all considered “attributes.” The CI framework further divides these elements into the sender, information type, and subject parameters, respectively. Conversely, the CI framework uses the “transmission principle” parameter to articulate all constraints on information flows, while the GKC framework distinguishes aims, conditions, modalities, and consequences.
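A compact way to see this overlap (a sketch of our reading of Table 9.1, reusing the example above; the labels are illustrative) is to annotate each coarse slot of one framework with the finer-grained parameters of the other:

```python
# Overlap between CI parameters and institutional grammar components,
# applied to the digital personal assistant example (illustrative only).
overlap = {
    # The grammar's single "attributes" slot is split by CI into three parameters:
    "attributes (grammar)": {
        "sender (CI)": "digital personal assistant",
        "information type (CI)": "audio recordings",
        "subject (CI)": "the user",
    },
    # CI's single "transmission principle" is split by the grammar into four components:
    "transmission principle (CI)": {
        "aim (grammar)": "provide personalized advertisements",
        "condition (grammar)": "user does not pay for an ad-free subscription",
        "modality (grammar)": "may",
        "consequence (grammar)": "denial of service",
    },
}

for coarse_slot, fine_parameters in overlap.items():
    print(coarse_slot)
    for name, value in fine_parameters.items():
        print(f"  {name}: {value}")
```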

Table 9.1 Conceptual overlap between CI and Institutional Grammar (GKC) parameters

In this work, we use the GKC and CI frameworks to understand the key factors behind privacy norm formation and evolution. Specifically, we investigate divergences between privacy expectations and technological reality in the IoT domain. The consumer Internet of Things (IoT) adds Internet connectivity to familiar devices, such as toasters and televisions, resulting in data flows that do not align with existing user expectations about these products. This is further exacerbated by the introduction of new types of devices, such as digital personal assistants, for which relevant norms are only just emerging. We are still figuring out whether the technological practices enabled by these new devices align with or impact our social values. Studying techno-social change in the IoT context involves measuring what people expect of IoT device information flows, as well as how these expectations and underlying social norms emerge and change. We want to design and govern technology in ways that adhere to people’s expectations of privacy and other important ethical considerations. To do so effectively, we need to understand how techno-social changes in the environment (context) can lead to subtle shifts in information flows. CI is a useful framework for identifying and evaluating such shifts as a gauge for GKC.

We conducted a multi-part survey to investigate the contextual integrity and governance of IoT devices, combining open-ended and structured questions about norm origins, expectations, and participatory social processes with Likert-scale vignette questions (Apthorpe et al., 2018). We then performed a comparative analysis of the results to explore how variations in GKC-CI parameters affect privacy strategies and expectations and to gauge the landscape of governing norms.

9.2 Research Design

In the first part of the survey, we asked respondents to list the devices they own and how they learn about the privacy properties of these devices (e.g., privacy policies, discussions with legal experts, online forums). We next presented the respondents with scenarios A through D, as described in Table 9.2; each scenario was followed by applied questions based on the GKC framework.

Table 9.2 Survey scenarios with corresponding aspects of the GKC framework

Scenario A: Imagine you’re at home watching TV while using your phone to shop for socks on Amazon. Your TV then displays an ad informing you about a great discount on socks at a Walmart close to your neighborhood.
GKC aspects:
  • Background: normative values
  • Attributes: resources
  • Patterns and Outcomes: benefits

Scenario B: You later hear from your neighbor that a similar thing happened to him. In his case, his wife posted on Facebook about their dream vacation. A few days later he noticed an ad from a local cruise company as he was browsing the web.
GKC aspects:
  • Background: normative values
  • Attributes: resources, community members, goals and objectives
  • Governance: institutions
  • Patterns and Outcomes: benefits

Scenario C: Companies usually detail their information handling practices in their privacy policies and terms of service. Imagine you do read through the privacy policy for your smart TV. You find a statement saying that the TV could, sometimes, send your information to third parties for analysis to offer you all the top features. The privacy policy also states that you may disable third-party sharing; however, this may cause additional subscription charges for some features.
GKC aspects:
  • Governance: context, institutions, actors
  • Patterns and Outcomes: benefits, costs, legitimacy

Scenario D: You have an acquaintance who is a software engineer. They tell you that you shouldn’t be concerned. It’s considered a normal practice for companies to track the habits and activities of their users. This information is then typically sold to third parties. This is how you can get all of these free personalized services!
GKC aspects:
  • Attributes: community members, goals and objectives
  • Governance: institutions, actors
  • Patterns and Outcomes: costs, legitimacy

Each scenario focused on different factors that previous research has identified as affecting users’ expectations and preferences (Apthorpe et al., 2018). Scenario A focused on third-party information sharing practices involving a smart TV that tracks viewing patterns and TV watching habits, which are then sold to an advertiser. Questions assessed the respondents’ specific concerns in this scenario as well as their anticipated reactions. We interpreted these reactions as indicators of respondents’ privacy expectations and beliefs, as well as their understanding of information flows in context.

The remaining scenarios built on Scenario A to explore different factors affecting privacy opinions and reactions. Scenario B introduced an additional, exogenous influence: a parallel, cross-platform tracking incident that happened to someone else the respondent might know. Questions assessed how experiences with cross-device information flows and surrounding factors alter respondents’ expectations and resulting actions. This provides a sense of the communities and contexts surrounding use, in order to support future institutionalization of information flows that better align with users’ values.

Scenario C focused on privacy policies and whether they mitigate privacy concerns. Specifically, we asked how often respondents read privacy policies and what they learn from them. We also queried whether the practice of sharing information with third parties potentially changes respondents’ behavior, whether or not the data are anonymized. Finally, we asked whether the respondents would be willing to employ a workaround or disable information sharing for an additional charge – examples of rules-in-use contrasting sharply with rules-on-the-books that otherwise support information flows respondents may deem inappropriate.

Scenario D assessed how exogenous decision-makers influence privacy perceptions and subsequent behavior by providing respondents with an example of expert advice. Questions about this scenario addressed differences in perceptions between stakeholder groups as well as the legitimacy of expert actors in governance. While Scenario D specifically included a software engineer as the exemplar expert role, a parallel study has assessed perceptions of many additional expert actors (Shvartzshnaider, Sanfilippo, and Apthorpe, under review).

The second section of the survey tested how variations in CI and GKC parameters affect the perceived appropriateness of information flows. We designed this section by combining GKC parameters with an existing CI-based survey method for measuring privacy norms (Apthorpe et al., 2018).

We first selected GKC-CI parameters relevant to smart home device information collection. These parameters are listed in Table 9.3 and include a variety of timely privacy issues and real device practices.

Table 9.3 Smart home GKC-CI parameters selected for information flow survey questions

Sender:
  • Google Home
  • Amazon Echo (Alexa)
  • Apple HomePod (Siri)
  • Smart watch
  • Garmin watch

Modality:
  • can
  • might
  • will

Aim:
  • if the information is used for advertising
  • if the information is used for academic research
  • if the information is used for developing new device features

Subject & Type:
  • Your personal information
  • Your location
  • Recorded audio

Condition:
  • if you have given consent
  • if you are notified

Recipient:
  • Its manufacturer
  • A third party

Consequence:
  • if the information is used to generate summary statistics
  • if the information is necessary for the device to function properly
  • if the information is used to personalize content
The questions in this section followed a parallel structure. Respondents were first presented with an information flow description containing a randomly selected combination of sender, subject, information type, recipient, and modal parameters (Figure 9.1). Respondents rated the appropriateness of this flow on a 6-point Likert scale from “very inappropriate” to “very appropriate.”
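A minimal sketch of how such baseline flow descriptions could be assembled from the Table 9.3 parameters (our illustration; the sentence template is assumed, and the actual survey was built in Qualtrics):

```python
import random

# Parameter options drawn from Table 9.3
senders = ["Google Home", "Amazon Echo (Alexa)", "Apple HomePod (Siri)", "smart watch", "Garmin watch"]
subjects_and_types = ["your personal information", "your location", "recorded audio"]
recipients = ["its manufacturer", "a third party"]
modalities = ["can", "might", "will"]

def baseline_flow() -> str:
    """Compose one baseline information flow description (hypothetical template)."""
    sender = random.choice(senders)
    info = random.choice(subjects_and_types)
    recipient = random.choice(recipients)
    modality = random.choice(modalities)
    return f"Your {sender} {modality} send {info} to {recipient}."

random.seed(0)       # reproducible example
for _ in range(3):   # each respondent rated three baseline flows
    print(baseline_flow())
```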

Figure 9.1 Example baseline information flow question

This baseline question was followed by a series of matrix-style multiple choice questions with one row for each condition, aim, and consequence parameter (Figure 9.2). Respondents were asked to indicate how each of these parameters would affect the appropriateness of the original information flow on a 5-point Likert scale from “much more appropriate” to “much less appropriate.”

Figure 9.2 Example question with varying condition parameters

This process was repeated three times for each survey participant. Each participant rated three sets of baseline flows with different subject/type/recipient/modal parameters and corresponding matrices for condition/aim/consequence parameters. Null parameters were included as controls for each category.

The survey concluded with a series of standard demographics questions, querying respondents’ age, gender, state of residence, education level, and English proficiency. Each of these questions had a “prefer not to disclose” option in case respondents were uncomfortable divulging this information.

We created the survey using Qualtrics. We conducted “cognitive interviews” to test the survey before deployment via UserBob, an online usability testing platform. Five UserBob workers were asked to take the survey while recording their screens and providing audio feedback on their thought processes. These workers were paid $1 per minute, and all completed the survey in less than 10 minutes. While the UserBob responses were not included in the results analysis, they confirmed the expected survey length of less than 10 minutes and that the survey did not contain any issues that would inhibit respondents’ understanding.

We deployed the survey as a Human Intelligence Task (HIT) on Amazon Mechanical Turk (AMT). The HIT was limited to AMT workers in the United States with a 90–100 percent HIT approval rating. We recruited 300 respondents and paid each $1 for completing the survey.

We began with 300 responses. We then removed 14 responses from individuals who provided incomprehensible answers or non-answers to the free-response questions. We also removed 2 responses from individuals who answered all matrix questions in the same column. This resulted in 284 total responses for analysis.
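A sketch of this filtering step, assuming the raw responses sit in a pandas DataFrame with hypothetical flag columns (the column names are ours, not from the study):

```python
import pandas as pd

# Hypothetical columns: "bad_free_response" and "straightlined_matrix" are
# derived boolean flags marking the two exclusion criteria described above.
responses = pd.read_csv("amt_responses.csv")          # 300 raw AMT responses
assert len(responses) == 300

clean = responses[~responses["bad_free_response"]]    # drop incomprehensible/non-answers (14)
clean = clean[~clean["straightlined_matrix"]]         # drop all-same-column matrix answers (2)
print(len(clean))                                     # expected: 284 responses retained
```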

9.3 Governing Appropriate Flows of Personal Information from Consumer IoT

We analyze our survey results from the combined GKC-CI perspective. We use the GKC framework to identify the background environment (specific context) of consumer IoT, the attributes involved in the action arena of IoT information flows (including goals and objectives), governance rules within consumer IoT contexts, and various patterns and outcomes, including the perceived costs and benefits of IoT information flows. We also use the CI framework, with the institutional grammar parameters (aims, conditions, consequences, modalities) as transmission principles, to understand which specific aspects of governance have the most significant impact on respondent perceptions.

9.3.1 Background Environment

Smart devices are pervasive in Americans’ lives and homes. We interact with a wide range of these supposedly smart systems all the time, whether we recognize and consent to them or not, from Automated License Plate Reader (ALPR) technologies tracking drivers’ locations (Joh, 2016) to Disney World’s MagicBand system (Borkowski et al., 2016) to Alexa in dorm rooms (Manikonda et al., 2018). These devices, which are part of a larger digital networked environment, collect massive amounts of data that surreptitiously capture human behaviors and support overt sociotechnical interactions in public and private spaces.

It is notable that there are very different scales of use and application of smart devices, with many deployed publicly without public input. In contrast, smart devices in individuals’ homes are most often configured by the users themselves, with appropriate use negotiated within households. Notable exceptions include the controversial and well-publicized implementations of smart locks and systems in rental housing (e.g., Geeng and Roesner, 2019) and uses of IoT by perpetrators of domestic violence to surveil victims (Tanczer et al., 2018). These consumer IoT devices have wildly different patterns of interactions and governance. They are operated under complex arrangements of legal obligations, cultural conditions, and social norms, without clear insight into how to apply these formal and informal constraints.

It is thus important to establish applicable norms and evaluate rules-in-use to support good governance of consumer IoT moving forward. Understanding interactions where users have some control of institutional arrangements involving their devices is helpful toward this end. We therefore focus on consumers’ everyday use of smart devices, primarily smartphones, wearables, and in-home smart devices. It is our objective to understand both how users would like information flows associated with these devices to be governed and how their privacy perceptions are formed.

The background context for personal and in-home IoT device use extends beyond individual interactions with smart devices. It includes the aggregation of information flows from devices and interactions between them, discussion about the relevant normative values surrounding device use, and governance of information flows. There are distinct challenges in establishing norms, given that there is no default governance for the data generated as knowledge resources, nor are there predictable patterns of information flow to help form user expectations.

Our survey respondents documented the devices they owned, which aligned with recent consumer surveys of IoT prevalence (e.g., Kumar et al., 2019). About 75 percent of respondents reported owning more than one smart device, with 64 percent owning a smart TV and 55 percent owning a Digital Personal Assistant (such as an Amazon Echo, Google Home, or Apple HomePod). Smartwatches were also very popular. A small percentage of respondents owned smart lightbulbs or other Internet-connected appliances.

As these devices become increasingly popular and interconnected, the contexts in which they are used are increasingly complex and fraught with value tensions, making it important to further study user preferences in order to develop appropriate governance. For example, digital personal assistants don’t clearly replace any previous analogous devices or systems. They therefore lack pre-existing norms or underlying values about appropriateness to guide use. In contrast, smart televisions are obviously analogous to traditional televisions and are thereby used in ways largely guided by existing norms. These existing norms have often been shaped by entrenched values but do not always apply to emerging information flows from and to new smart features. The resulting tensions can be resolved by identifying relevant values and establishing appropriate governing institutions around IoT information flows. To do so, it is first important to understand the relevant factors (attributes) so as to clarify how, when, and for what purpose changes in information flows governance are and are not appropriate.

9.3.2 Attributes
9.3.2.1 Resources

Resources in the IoT context include both (1) the data generated by devices and (2) knowledge about information flows and governance. The latter also includes characteristics of these devices, including necessary supporting technologies and personal information relevant to the IoT action arena.

The modern home includes a range of devices and appliances with Internet connectivity. Some of these devices are Internet-connected versions of existing appliances, for example, refrigerators, TVs, thermostats, and lightbulbs. Other devices, such as digital assistants (e.g., Amazon Echo and Google Home), are new. These devices produce knowledge by generating and consuming information flows. For example, a smart thermostat uses environmental sensors to collect information about home temperature and communicates this information to cloud servers for remote control and monitoring functionality. Similar information flows across devices are causing the IoT ecosystem to evolve beyond established social norms. For example, refrigerators now order food, toasters tweet, and personal health monitors detect sleeping and exercise routines. This rapid change in the extent and content of information flows about in-home activities leads to a mismatch between users’ expectations and the IoT status quo. Furthermore, mismatches extend beyond privacy to features, as some new “smart” functions, such as kitchen appliances connected to social media, are introduced for novelty’s sake rather than in response to consumer preferences.

Our survey respondents’ comments reveal discrepancies between users’ privacy perceptions/preferences and how IoT devices are actually used. This provides further insight into the attributes of data resources within this context by illustrating what is considered to be appropriate. For example, some respondents noted that even though they have smart TVs, they disconnect them from the Internet to limit communication between devices. Generally, our survey results highlight the range of confusion about how smart devices work and what information flows they send.

A few respondents implied that they were only learning about IoT cross-device communications through the scenarios described in our survey, describing their surprise (e.g., “How they already know that. How did it get from my phone to the tv? That seems very fishy”) or in some cases absolute disbelief (“I see no connection between what I’m doing on the phone and a random TV ad”) that such a thing was possible. One respondent specifically summarized this confusion amidst common experiences with new technologies:

At first, you are concerned. The lightning fast speed at which Google hits you in the heads [sic] for an item you were considering buying makes you believe they are spying on you. They aren’t spying, because spying implies watching you without your permission, but in using the service you give them complete permission to use any data you put into search queries, posts, etc, to connect you to items you are shopping for, even if it is just to look around.

Social media consumers do not understand that they are NOT the customer. They are the product. The customer is the numerous businesses that pay the platform (Google, Facebook, etc) various rates to get their product in front of customers most likely to pay. Radio did this long before Cable TV who did this long before Social Media companies. It’s a practice as old as steam.

This quotation highlights perceived deception about information collection practices by popular online platforms and IoT devices. Users of IoT devices are shaping their expectations and practices amidst a lack of transparency about privacy and problematic notions of informed consent (e.g., Okoyomon et al., 2019). This respondent also touches on the inextricable links between the two knowledge resources; when users have poor, confusing, or limited explanations of information flows, they fail to understand that they are a resource and that their data is a product.

As Figure 9.3 illustrates, respondents learn about IoT information flows and privacy from a variety of different sources. Online forums represent the most prevalent source of privacy information, yet only just over 30 percent of respondents turn to online forums of IoT users with privacy questions. Privacy policies and discussions with friends and family were also common sources of privacy information, but even these were only consulted by 28 percent and 25 percent of respondents, respectively. Respondents turned to technical and legal experts for privacy information even less frequently, with only 9 percent and 3 percent of respondents reporting these sources, respectively. Overall, there was no single source of privacy information consulted by a majority of respondents.

Figure 9.3 Where respondents learn about the privacy implications of IoT devices

9.3.2.2 Community Members

Community members, through the lens of the GKC framework, include those who participate and have roles within the action arena, often as users, contributors, participants, and decision-makers. The action arena also includes a variety of additional actors who shape these participants’ and users’ expectations and preferences, including lawyers and privacy scholars; technologists, including engineers and developers; corporate social media campaigns; anonymous discussants in online forums; and friends and family, which we examine in a related study (Shvartzshnaider, Sanfilippo, and Apthorpe, under review). It is important to consider who is impacted, who has a say in governance, and how the general public is impacted. In this context, community members include IoT device owners, developers, and users, as well as users’ family, friends, and neighbors in an increasingly connected world.

While the respondents who depend on online communities and forums for privacy information are a small subset, those communities represent an important source of IoT governance in use. User-generated workarounds and privacy discussions are meaningful for understanding and establishing appropriate information flows. Users are thus the community-of-interest in this context, and those who responded to our survey reflect the diversity of users. The respondents were 62 percent male and 37 percent female, with an average age of 34.5 years. Fifty-three percent of the respondents had a Bachelor’s degree or higher. Thirty-eight percent of respondents self-reported annual incomes below $40,000, 43 percent between $40,000 and $80,000, 8 percent between $80,000 and $100,000, and 10 percent above $100,000. We have not established clear demographic indicators for the overall community of IoT users, in this sense, beyond device ownership and a skew toward a younger population. However, it is also possible that tech-savviness is overrepresented among users.

9.3.2.3 Goals and Objectives

Goals and objectives, associated with particular stakeholders, are grounded in history, context, and values. It is important to identify the specific obstacles and challenges that governance seeks to overcome, as well as the underlying values it seeks to institutionalize.

In our studies, the respondents identified multiple governance objectives and dilemmas associated with information flows to and from IoT devices, including control over data collection and use, third parties, and autonomy in decision-making. Interests among respondents were split between those who valued cross-device information flows and those who felt increased interoperability and/or communication between devices was problematic. Additionally, a few respondents agreed with some of the perceived interests of device manufacturers that value monetization of user data; these respondents appreciated their ability to use “free services” in exchange for behavioral data collection. Furthermore, there are additional tensions between the objectives of manufacturers and developers and the interests of users, as evidenced by the split in respondents’ trust in a technical expert’s judgment of the appropriateness of information flows. These results show fragmentation in perceptions of both governance and acceptance of the status quo for information flows around IoT devices.

9.3.3 Governance

Through the lens of the GKC framework, including the institutional grammar, we gain insight into different aspects of governance. We can capture how the main decision-making actors, individual institutions, and the norms governing individual information flows emerge and change over time, as well as how these norms might be enforced. Results also indicate that privacy, as appropriate flows of personal information, governs interactions with and uses of IoT devices. For example, we see evidence that anonymization, as a condition modifying the information type and its association with a specific subject within an information flow, does not serve as meaningful governance from the perspective of respondents. Fifty-five percent of respondents stated that they would not change their behavior, or support cross-device communication, just because data was anonymized. It is not immediately clear, from responses to that question alone, what leads to divergence on this interpretation of anonymization or any other perceptions about specific information flows. However, it echoes theorization about CI that incomplete transmission principles are not helpful in understanding information flows (e.g., Bhatia and Breaux, 2018), extending this idea to governance; the condition of anonymity is not a stand-alone transmission principle.

This aligns with our approach combining the GKC and CI frameworks to gauge the explicit and implicit norms that govern information flows within a given context. The CI framework captures norms using five essential parameters of information flows. Four of the parameters capture the actors and information type involved in an information flow. The fifth parameter, transmission principle, constrains information flows. The transmission principle serves as a bridging link between the CI and GKC frameworks. Figure 9.4 shows the average score for perceived appropriateness for an information flow without qualifying it with the transmission principle. We remind the reader that the respondents were first presented with information flow descriptions using sender, subject, information type, recipient, and modal parameters. They rated the appropriateness of these flows on a 6-point Likert scale from “very inappropriate” (-2) to “very appropriate” (+2).

Figure 9.4 Average perceptions of information flows by parameter

This figure illustrates the average participant opinion of information flows controlled to specific examples of information type and subject, modalities, recipients, and senders.

Figure 9.5 The impact of specific parameters in changing respondent perceptions of information flows.

This figure indicates the average change in perceptions in response to specific examples for each parameter. It does not indicate initial perceptions, in contrast to Figure 9.4.

For the GKC framework questions in the first part of the survey, 73 percent of respondents reported that they would change their behaviors in response to third-party sharing. Specific actions they would take are illustrated in Figure 9.6. Figure 9.4 shows that respondents view a “manufacturer” recipient less negatively than a generic third party. Additionally, not stating a recipient altogether has a lesser negative effect on information flow acceptability than a generic “third party” recipient. We can speculate that when the recipient is omitted, respondents mentally substitute a recipient that fits their internal privacy model, as shown in previous research (Martin and Nissenbaum, 2016).

Figure 9.6 User actions in response to third-party sharing scenarios

We further gauge the effect on user perceptions of aims, conditions, modalities, and consequences as components of transmission principles. Figure 9.5 illustrates changes in average perceptions based on the addition of specific aims, conditions, and consequences to the description of an information flow. We see that stating a condition (such as asking for consent, providing notification, or keeping the data anonymous) has a positive effect on the perception of appropriateness. Conversely, not stating an aim correlates with positive perception; while respondents seemed on average neutral toward the “for developing new features” and “for academic research” aims, they showed a negative attitude toward the “for advertising purposes” aim. When it comes to consequences, the results show that respondents view not stating a consequence as equal, on average, to stating that the information “is necessary for the device to function properly.” However, respondents viewed information flows with the consequence “to personalize content” slightly positively, while information flows with the consequence of “[generating] summary statistics” correlated with slightly negative perceptions.
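
A sketch of one way the averages behind Figures 9.4 and 9.5 could be computed, assuming a long-format table of ratings with hypothetical columns parameter (e.g., “aim”), value (the specific example, or “unstated”), and rating (-2 to +2); this illustrates the delta computation rather than reproducing our actual analysis scripts.

import pandas as pd

ratings = pd.read_csv("flow_ratings.csv")   # columns: respondent_id, parameter, value, rating

# Figure 9.4-style baseline: mean appropriateness when a parameter is left unstated.
baseline = (ratings[ratings["value"] == "unstated"]
            .groupby("parameter")["rating"].mean())

# Mean appropriateness when a specific aim/condition/consequence is stated.
stated = (ratings[ratings["value"] != "unstated"]
          .groupby(["parameter", "value"])["rating"].mean())

# Figure 9.5-style deltas: change in mean perception attributable to each example.
deltas = {key: mean - baseline[key[0]] for key, mean in stated.items()}
for (parameter, value), delta in sorted(deltas.items(), key=lambda kv: kv[1]):
    print(f"{parameter:12s} {value:35s} {delta:+.2f}")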

Respondents also identified a number of additional approaches that they might take in order to better control flows of their personal information and details of their behaviors between devices. In addition to browsing anonymously and disconnecting their smart TV from the Internet, various respondents suggested:

  • “Use a VPN”

  • “Wouldn’t buy the TV in the first place”

  • “It’s just getting worse and worse. I’ll almost certainly return it.”

  • “Research and see if there is a way to not have my info collected.”

  • “Be much more careful about my browsing/viewing habits.”

  • “Circumvent the tracking”

  • “Try to find a way to circumvent it without paying”

  • “Sell it and get a plain TV”

  • “Block access to my information”

  • “Delete cookies”

  • “Disable features”

When they perceived information flows to be inappropriate, many respondents favored rules-in-use that would circumvent inadequate exogenous governance. While many respondents favored opportunities to opt out of inappropriate flows, a small sub-population developed their own approaches to enact their privacy preferences as additional layers of governance in use. Often these work-arounds subverted or obfuscated default information flows.

9.3.3.1 Rules-in-Use and Privacy Policies

Few respondents found the rules-on-the-books described in privacy policies to be useful for understanding information flows associated with IoT devices. Many respondents described privacy policies as lengthy and confusing. For example, when asked what they learn from reading privacy policies, one respondent explained:

That they [sic] hard to read! Seriously though, they are tough to interpret. I know they try and protect some of my information, but also share a bunch. If I want to use their services, I have to live that [sic].

One of the 62 respondents who reported that they read privacy policies “always” or “most of the time” further elaborated:

I’ve learned from privacy policies that a lot of times these company [sic] are taking possession of the data they collect from our habits. They have the rights to use the information as they pleased, assuming the service we’re using from them is advertised as ‘free’ I’ve learned that sometimes they reserve the right to call it their property now because we had agreed to use their service in exchange for the various data they collect.

The information users learn from reading a privacy policy can undermine their trust in the governance imposed by IoT device manufacturers. The above comment also touches on issues of data ownership and rights to impact or control information flows. Privacy policies define rules-on-the-books about these issues, which some respondents perceive to be imposed governance. However, as noted by another respondent, the policies do not apply consistently to all devices or device manufacturers:

That companies can be pretty loose with data; that some sell data; that others don’t go into specifics about how your data is protected; and there are some that genuinely seem to care about privacy.

This comment emphasizes an important point. For some respondents, practices prescribed in privacy policies affect how they perceive each respective company. In cases where privacy policy governance of information flows aligns with social norms, companies are perceived to care about privacy. Respondents also identify these companies as more trustworthy. In contrast, privacy policies that are vague about information flows or describe information flows that respondents perceive to be egregious or excessive, such as selling user data to many third parties, indicate to respondents that associated companies do not care about user privacy.

Relative to these inappropriate and non-user-centered information flows and policies, respondents also described rules-in-use and work-arounds that emerged to compensate for undesirable rules-on-the-books. Over 80 percent of respondents indicated that they would pursue work-arounds, with many pursuing alternate strategies even if configuration took an hour (31 percent) or regardless of difficulty (32 percent).

A few respondents recognized that privacy policies sometimes offer ways to minimize or evade tracking, such as outlining opportunities to opt out, as well as defining the consequences of those choices. When asked “What do you learn from privacy policies?,” one respondent elaborated:

Occasionally, there are ways to minimize tracking. Some of the ways the data is used. What things are needed for an app or device.

In this sense, privacy policies disclose and justify information flows, often discouraging users from opting out through institutionalized mechanisms, such as options to disable recommendations or location services, by highlighting the features those flows enable or the consequences of being left out. However, despite institutionalized mechanisms to evade tracking, opt-out options are sometimes insufficient to protect privacy (Martin, 2012). Furthermore, many respondents don’t actually read privacy policies and therefore may not be aware of these mechanisms. Thus, individuals also develop their own approaches and share them informally among friends and online communities, as shown in Figure 9.1.

Through the lens of the GKC framework, privacy policies serve as a source for rules-on-the-books. These rules govern the flow of information into and out of IoT companies. From respondents’ comments, we see that privacy policies play an important role in shaping their expectations, for better or worse. On one side, respondents turn to privacy policies because they want to learn “what [companies] do and how they may use information they receive.” On the other side, respondents echoed the general public’s frustration at not being able “to learn anything because [privacy policies] are purposefully wordy and difficult to understand.” Companies that outline clear information governance policies help inform users’ expectations about their practices, while companies that offer ambiguous, lengthy, hard-to-understand policies force users to rely on their existing (mostly negative) perceptions of company practices and/or turn to other sources (family, experts) for information.

Finally, the respondents discuss several options for dealing with the gap between rules-on-the-books and their expectations. First, they could adjust their expectations (“these smart devices know too much about me,” “be more careful about what I do searches on”). They could also find legal ways to disable practices that do not align with their expectations, such as paying to remove ads or changing settings (“I trust them but I still don’t like it and want to disable”). In addition, they could opt out from the service completely (“Sell it and get a plain TV”).

9.3.4 Patterns and Outcomes

Our survey reveals a significant fragmentation within the community of IoT users relative to current governance practices, indicating irresolution in the action arena. As we piece together data on who IoT users are and how they are shaping appropriate flows of personal information from and between their smart devices, certain patterns and outcomes become evident. Table 9.4 illustrates how respondents’ preferences about third party sharing, professed concerns about privacy, and device ownership shape their average perceptions of governance outcomes around IoT. We assessed the extent to which respondents embraced technology based on the number of devices they own.

Table 9.4 divides the respondents of our survey into subcommunities based on their opinions of various IoT practices elicited in the first part of the survey. Some respondents have largely embraced IoT technologyFootnote 4 and are not concerned about privacy issues.Footnote 5 Others, while embracing the technology, are concerned about privacy issues. Concerns about third party sharing or a lack of embrace of smart technology yield very different opinions, on average. We cluster these subcommunities into three groups in order to gauge their perceptions.

When gauging the respondents’ perceptions, we note that those who are unconcerned about the privacy implications of cross-platform sharing, regardless of other values associated with information governance, have on average favorable views of information flows. Additionally, those respondents who express general concern about privacy implications, but are not concerned about third party sharing, have similar perceptions on average. These subpopulations of our respondents are the most likely to belong to group 1, who perceive current governance of IoT information flows to be positive, on average. In contrast, those who are concerned about privacy and either don’t embrace smart technologies or express concerns about third party sharing are most likely to belong to group 3, who are slightly dissatisfied with current governance outcomes on average. Finally, group 2 is generally concerned about privacy but embraces smart devices, with average perceptions slightly above neutral.

Table 9.4 Average perceptions of information flow appropriateness gauged by respondent subpopulations. For each subcommunity we calculate the number of respondents and the average perception score across information flows including consequence, condition, and aim.

                  Embrace tech          Don't embrace     Concerned about         Not concerned about
                  (own >2 devices)      tech              third party sharing     third party sharing
Unconcerned       0.53 (n=48)           0.5 (n=35)        0.5 (n=52)              0.6 (n=31)
Concerned         0.06 (n=94)           0.05 (n=92)       0.05 (n=171)            0.7 (n=15)
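
The subgroup averages in Table 9.4 can be reproduced mechanically; a minimal sketch, assuming a per-respondent summary table with hypothetical boolean columns and a precomputed mean appropriateness score per respondent:

import pandas as pd

# Hypothetical per-respondent summary; column names are illustrative.
respondents = pd.read_csv("respondent_summary.csv")
# columns: privacy_concerned (bool), owns_over_2_devices (bool),
#          concerned_third_party (bool), mean_appropriateness (float)

rows = {"Unconcerned": ~respondents["privacy_concerned"],
        "Concerned": respondents["privacy_concerned"]}
cols = {"Embrace tech (>2 devices)": respondents["owns_over_2_devices"],
        "Don't embrace tech": ~respondents["owns_over_2_devices"],
        "Concerned re: third party sharing": respondents["concerned_third_party"],
        "Not concerned re: third party sharing": ~respondents["concerned_third_party"]}

# Mean appropriateness score for each row/column intersection, as in Table 9.4.
table = pd.DataFrame({
    col_name: {row_name: respondents.loc[row_mask & col_mask, "mean_appropriateness"].mean()
               for row_name, row_mask in rows.items()}
    for col_name, col_mask in cols.items()
})
print(table.round(2))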

We now highlight open-ended comments from respondents belonging to each group that put their opinions in context, in an effort to better understand fragmentation and the underlying beliefs and preferences that lead to divergent normative patterns. While individual comments are not representative, they illuminate the rationales underlying the perceptions associated with each group.Footnote 6

9.3.4.1 Group 1: Positive Perceptions

This group includes respondents who perceive information sharing practices positively and who, on average, tend to emphasize both user consent and preferences for personalization. As one user specified:

Because I knew what I was getting myself into when using these types of products. How else should companies market to me? We could go back to the old days when there was no personalization at all, when ads were completely dumb and never actually spoke to your needs. But, how is that better? People worry about privacy, but they should only really be concerned with security, which is not the same thing. Keep my financial info secure, keep certain embarrassing stuff under wraps to the public, sure, but we share so much of our lives openly that it seems silly to scoff at ad personalization. I do, however, get annoyed when it doesn’t seem personalized ENOUGH, because then it’s akin to the uncanny valley for CGI … in those moments, I’m frustrated that the personalization isn’t stronger, such as when I continually see ads for stuff I’ve already bought.

Some participants in this group also expressed a firm belief that linking devices that share data would have to be deliberate on the part of users. These users would implicitly consent to information flows, in contrast to respondents with neutral and negative perceptions. In this sense, discoverability, or the ability of smart devices to recognize and communicate with one another, was not acknowledged as a smart feature. For example:

For the devices to work like that I must have linked them in some way. That deliberate action would have been my consent to allow these devices to exchange data.

9.3.4.2 Group 2: Neutral Perceptions

Respondents in this group have a relatively neutral perception of information flows on average. While participants in this group seem to recognize the issues related to discoverability between devices, they don’t view them as a privacy violation. As one participant explained their thought process:

I feel like at this point everything is somehow connected. There have been many times where I browse the internet and then on my Facebook profile I see adds for the very thing that I was looking for. I know that it is an effort to target me and things that I might like, I don’t think my privacy is being compromised.

They accept data flows between devices, relative to their user behavior, as standard practice and seem to perceive personalized advertising as independent of their privacy values. However, other members of this group raised concerns about the risks of specific information recipients:

I trust them because I think they just want to advertise to me better, I’d only be concerned if the information was being sold to criminals or hackers.

In this sense, those with neutral perceptions of IoT information flows view credible commercial entities to be legitimate recipients. Sales and advertising are valid objectives, which various individuals within this moderate group saw as compatible with their privacy interests. In contrast, “criminals or hackers” were not seen to be acceptable recipients; future research should assess the differences in perceptions between these recipients and others.

In addition to concerns about lesser-known third-party recipients, some respondents also considered the past history of particular major manufacturers and commercial actors that have been careless or whose security has been compromised. Some respondents firmly believed that recent history with respect to breaches was unlikely to repeat, consistent with a recent study (Zou et al., 2018). One respondent explained their trust that violations of privacy would not recur:

because it seems that a lot of companies have gotten into trouble over the years and hopefully they’re taking extra precautions these days.

In other words, there is a belief that the companies would learn from past events and govern data in a way that was acceptable to them. This group was largely defined by acceptance of major manufacturers as trustworthy enough, without particular enthusiasm. Some of these users appeared to consider these flows in primarily the context of economic transactions.

9.3.4.3 Group 3: Negative Perceptions

Finally, those with negative perceptions of information flows and governance did not share the overall trust in companies to govern user data in accordance with social expectations. In particular, this group held negative perceptions of information flows between devices. Many of these respondents described these cross-platform flows as invasive:

It seems invasive and annoying. I also worry that my devices are exchanging information which each other that I didn’t specifically agree to divulge. And who knows where else this information is going! All for what? To try and sell me garbage that I don’t need and won’t actually buy.

The underlying problem was often with information being used out of context:

If it was just on the browser that I was using to search for socks, it wouldn’t be as creepy. It’s the fact that multiple platforms are being used in conjunction to analyze what I am doing for targeted advertising that I find invasive.

This sizeable community perceives current information flow practice and governance relative to IoT as violating their expectations.

Some respondents explained how IoT information flows also undermine their trust in other contexts because governance is non-transparent:

This seems like an invasion of privacy and makes me wonder what kinds of information it is collecting, storing, or otherwise utilizing for purposes not formally disclosed. Additionally, some devices are shared among families and friends when they visit. I find it to be a violation of my right to privacy to have data related to my phone searches and activities show up across multiple devices that are not used by only one person.

This is only exacerbated by the industry’s continued downplaying of the significance of data sharing.

This group of users was most unified and verbose in explaining their frustration with current governance and information flows in practice. They more often distrusted the technology industry and practitioners, such as in the software engineer scenario on our survey. In addition to not valuing personalization, some emphasized the problematic lack of control and uncertainty about data destinations beyond initial third-party recipients:

… who knows what happens to this data in the end? Will these third parties sell my info to other third parties? Of course they will. Is all this “free” stuff worth it? There’s always a price, you know.

Some respondents emphasized that current outcomes are egregious and that companies and regulators are falling short in governing user data:

I don’t believe that it’s something people should roll over about. When do we consider it out of hand? It’s better to nip these kind of things in the bud. As a computer science major, having one persons opinion on the matter is not going to sway my opinion entirely. I wouldn’t just get one opinion from a single doctor of my life was on the line would I?

These respondents, in particular, emphasize that they want to play a more active role in governing their personal information flows.

Our results demonstrate the tensions that users experience when thinking of privacy in the IoT context. Through the scenarios addressing GKC concepts in the survey, we can observe divergence in the interests and concerns of various respondents. Some welcome the new innovations and believe companies have their interests at heart. Others are more concerned, however, and often admit that they feel there is little they can do to protect their information. This reflects technological acceptance models in the larger population (e.g., Valdez and Ziefle, 2019). By gauging their perceived appropriateness of specific information flows, we can examine additional dimensions of governance using the language of the institutional grammar.

9.4 Implications
9.4.1 Conceptual and Methodological

As home environments evolve with the introduction of new technologies, norms of governance and information flow evolve as well. The growing tradition of GKC analysis of a cooperative governance schema offers a way to uncover the contributing elements related to a shift in privacy expectations.

Our approach relies on the GKC framework to identify emerging communities in a given context and then uses the CI framework to pose questions about what information flows they consider appropriate. Our methodology bridges the two frameworks by quantifying how each factor affects the perceived appropriateness of information flows in a given context, and thereby its effect on collective norms. This allows researchers to gauge the effect of various factors on the formation of norms and could be employed to structure future case studies in other contexts to understand norm formation. Our study shows that omitting a condition has an effect on appropriateness and that different condition values vary the levels of appropriateness; we observed a similar effect for aims and consequences. In this sense, beyond the specific methodological contributions this gauging introduces, the design also offers a path toward overarching conceptual questions regarding norm formation. Through meta-analysis of cases structured through this approach, it would be possible to better understand privacy norm formation across contexts.

9.4.2 Practical

The GKC-CI method is useful in emerging contexts, such as IoT, which often lack established norms. We first identify the various exogenous variables that act as proxies for understanding respondents’ dispositions toward privacy. For example, certain respondents tend to be concerned about privacy and actively pursue ways to improve it for themselves. They read privacy policies, disable third party sharing, and find ways to circumvent the system whenever possible. Our CI analysis of the flows they deem acceptable confirms this: on average they tend to disallow flows, with notable exceptions when specific conditions, aims, and consequences align with social expectations. Another community perceives things in the polar opposite way. They rarely read privacy policies, embrace third party sharing, and don’t disable tracking functionalities – all in the name of convenience and efficiency.

Furthermore, many respondents across interest groups perceive “anonymity” to be ineffective governance of information flows. “Anonymity” thus further fragments the overarching community of IoT users. In contrast to “consent,” “anonymity” modifies information, rather than flow, impacting the association between information type and subject. Results indicate that adding “anonymity” as governance does not meaningfully impact perceptions of acceptability or behaviors.

Our results illustrate that governance of IoT should specify all parameters of the CI framework in structuring information flows, with clear identification of aims and conditions in the transmission principles. Practically, this means that when a new technology is introduced, our methodology can be used to gauge which factors affect the acceptability of the newly generated flows, as sketched below.
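
For instance, a fully specified flow description of the kind we argue for might be generated along these lines (a sketch; the template wording is illustrative and not the survey’s actual phrasing):

def describe_flow(sender, recipient, subject, information_type,
                  modality="may", aim=None, condition=None, consequence=None):
    """Render a flow description with every CI parameter stated and any
    institutional grammar components appended as the transmission principle."""
    description = f"{sender} {modality} send {subject}'s {information_type} to {recipient}"
    for component in (aim, condition, consequence):
        if component:
            description += f", {component}"
    return description + "."

print(describe_flow("Your smart TV", "its manufacturer", "the owner", "viewing history",
                    aim="for developing new features",
                    condition="if the owner is notified first"))
# -> "Your smart TV may send the owner's viewing history to its manufacturer,
#     for developing new features, if the owner is notified first."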

Furthermore, our results confirm previous findings that respondents (n=159) look to privacy policies to understand privacy implications (e.g., Martin and Nissenbaum, 2016); however, some indicated in their comments that privacy policies are difficult to comprehend. Online forums and discussions with family were the other leading responses.

This result has practical implications for how privacy-related information could be structured and communicated so that users understand it more intuitively. We propose that IoT manufacturers clearly define all parameters according to CI and include institutional components within the transmission principle when prescribing information transfers. This could also support more informative and constructive discussions on forums, with all parameters stated explicitly.

9.5 Conclusion

We live in an age of great innovation! In the blink of an eye, information packets traverse the world; with the click of a button, information reaches millions of people. Things evolve at great speed, and we, as a society, are looking for ways to keep pace. This forces us to adapt to the new reality and reconsider established concepts, such as the notion of privacy.

The GKC-CI method builds on the strength of two privacy theories. We use GKC to describe rules specific to a given context (rules-on-the-books and rules-in-use) and to understand users’ strategies and norms. We use CI to gauge the appropriateness of information flows resulting from existing practices (rules-in-use) and/or prescribed by policy (rules-on-the-books).

Our results show diversity in respondents’ privacy understanding and expectations around IoT devices. By gauging the information flows resulting from various practices employed by the Internet-connected systems, we can further see the importance of contextual elements to gain deeper insights into their appropriateness. Specifically, we use the expressive language of GKC to further describe CI transmission principles. Results from survey questions that addressed CI and institutional aspects illustrate how more detailed conceptualizations of transmission principles, deconstructed using the attributes within the institutional grammar, highlight what aspects yield differences in respondents’ opinions of information flows. This in turn helps to illuminate how particular aspects of institutional governance improve perceptions of these information flows to engender trust in governance.

Acknowledgments

This project was supported by National Security Agency (NSA) Award (#H98230-18-D-006). Sincere thanks to Helen Nissenbaum and Jessie Garrett Taft for their help in finalizing, funding, and deploying the survey.

10 Designing for the Privacy Commons

Darakhshan J. Mir
10.1 Introduction

This chapter frames privacy enforcement processes through the lens of governance and situated design of sociotechnical systems. It considers the challenges in formulating and designing privacy as commons (as per the Governing Knowledge Commons framework (Sanfilippo, Frischmann, and Strandburg 2018)) when privacy ultimately gets enacted (or not) in complex sociotechnical systems.

Privacy has traditionally (in computing, legal, economic, and other scholarly communities) been conceptualized in an individualistic framing, often as a private good that is traded off against other goods. In this framing, meaningful decision-making processes about one’s data are available to oneself, and any resulting decisions are assumed to impact only one’s own self. While social scientists have articulated and studied social conceptualizations of privacy (Petronio and Altman 2002; Altman 1975), the dominant public and scholarly discourse on privacy has been that of individualized control, with characterizations such as informed consent, and “notice and choice” being particularly prominent.

An important conceptualization of the social nature of privacy that has found expression in policy and technical practices is due to Helen Nissenbaum, whose articulation of privacy as Contextual Integrity (Nissenbaum 2009) rests on the notion of information flows between social actors within a specific social context. The Contextual Integrity (CI) framework states that privacy is preserved when any arising information flows comply with contextual informational norms and, conversely, privacy is violated when contextual norms are breached. In other words, flows are appropriate when they comply with (privacy) norms and (prima facie) inappropriate when these norms are disrupted. While CI is a powerful framework that foregrounds social conceptualizations of privacy, the contextual norms themselves are exogenous to it. Yet, the fundamentally political question of who has the power and authority to decide what is appropriate is inextricably linked to high-level moral and political values of a society, and the contextual functions, purposes, and values that practices, as per CI, must serve. In order to directly engage with these questions, the Governing Knowledge Commons (GKC) framework considers privacy as the governance of these informational norms (Sanfilippo, Frischmann, and Strandburg 2018). It draws attention to the political and procedural aspects of governing these rules (or norms) of appropriateness.

Scholarly commitments to the characterization of privacy as governance and constitution of appropriate informational norms raise several theoretical, conceptual, empirical, and technical questions. This chapter explores questions that such orientations generate in the conceptualization, design, implementation, and production of technical artifacts and surrounding sociotechnical systems that enable these information flows. If attention to considerations of governance of informational norms is important, then it must find an expression in the design and conceptualization of sociotechnical systems, where information flows occur. These emergent questions reside at a rich interface between different disciplines such as communication theory, sociology, law, and computer science – including the sub-discipline of human–computer interaction (HCI).

As a computer scientist, my objective in mapping these research directions is twofold: first, to frame richer, more politically and normatively grounded questions for computer scientists to engage with. Even as CI has found expression in privacy scholarship within the discipline of computer science, including HCI and software engineering, existing literature reviews show (Benthall, Gürses, and Nissenbaum 2017; Badillo-Urquiola, Page, and Wisniewski 2018) that computer scientists have largely not engaged with the normative aspects of CI. Benthall et al. (2017) and Badillo-Urquiola et al. (2018), with the latter focused on HCI researchers, call upon computer scientists to engage with the normative elements of CI. In this chapter, I reinforce this call by highlighting the normative valence of the governance of informational norms, and outline a set of research directions that such orientations open up for privacy researchers who locate themselves in computer science. Second, by examining conceptualizations and practices in computer science, the GKC framework has an opportunity to make connections to existing literature in computer science, particularly literature that conceptually aligns with the philosophy of the commons approach yet might not have a similar theoretical and conceptual articulation. This is especially pertinent as the commons approach seeks to “systematize descriptive empirical case studies of real-world contexts.” Finding points of injection into the design and architecture of sociotechnical systems both expands the purview of the GKC approach and provides opportunities to construct additional empirical case studies.

Consequently, I identify six distinct research directions pertinent to the governance and formulation of privacy norms, spanning an examination of how tools of design could be used to develop design strategies and approaches to formulate, design, and sustain a privacy commons, and how specific technical formulations and approaches to privacy can serve the governance of such a privacy commons. First, I examine whether the tools and methodologies of design can be used to explore questions of governance and procedural legitimacy, both to assess the appropriateness of entrenched norms or rules-in-use and to handle previously unresolved, hidden, un-surfaced ethical disagreements. Second, I examine what opportunities one of these design methodologies, Participatory Design (Muller 2009), with its political and ideological commitments to democratic decision-making, presents in the formulation and governance of privacy norms by communities in specific contexts. This direction lays out participatory decision-making about privacy as a normative goal to achieve. Third, I explore questions that arise from the relationship between privacy literacy, civic learning, and models of participatory governance. Relatedly, fourth, I propose the empirical study of relationships between privacy norms and individuals’ privacy expectations and preferences, and how participation and effective modes of community engagement can shape the latter. Fifth, I identify questions related to the capacities of computational techniques to automatically extract informational norms from the natural language sentences that make up privacy policies formulated through a participatory process. Sixth, I examine how a technical conceptualization of privacy, differential privacy (Dwork 2006), which provides a mathematical guarantee of plausible deniability to an individual, can operate within the larger normative framing of governance.
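
For reference, the guarantee invoked in the sixth direction is the standard definition of ε-differential privacy from Dwork (2006), which the chapter itself does not restate: a randomized mechanism M is ε-differentially private if, for every pair of datasets D and D′ that differ in a single individual’s record, and for every set S of possible outputs,

\[
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S].
\]

Smaller values of ε mean the output distribution is nearly unchanged by any one person’s data, which is the formal sense in which the mechanism offers plausible deniability.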

The rest of the chapter is organized as follows. The next section discusses social conceptualizations of privacy. Following this, I outline existing literature on the operationalization of social notions of privacy in the design and implementation of technical systems, finally leading to a section that elaborates on the six research directions identified previously.

10.2 Social Conceptualizations of Privacy

As noted in the introduction, the dominant public and scholarly discourse on privacy has been that of individualized control, with characterizations such as informed consent and “notice and choice” being particularly prominent. Two conceptual underpinnings of this individualistic framing, namely, access to meaningful decision-making and the largely localized impact of sharing one’s data, are insufficient when considering the larger social contexts in which privacy is or is not enacted. Meaningful decisions to share (or not share) one’s data are contingent upon the availability of informative disclosures about how such data will be shared and processed. In reality, we have little to no control over, or understanding of, what information about ourselves we exude, where it travels, who has access to it, the processes through which other parties or individuals share this information, the ways in which it is made actionable, and how we should respond to these situations on an individual level besides opting out of services and becoming a “digital recluse.” Furthermore, even if informative disclosures are made, and understood as such by the affected population, any resulting decisions people make are largely superfluous since access to services is typically only available in exchange for information that individuals must provide about themselves.

Additionally, individuals’ lives, and, therefore, data are interlinked with each other in underlying social contexts animated by the social, communal, professional, civic, and commercial links they have with other individuals, entities, and institutions. Consequently, our privacy (or the lack thereof) is inherently linked. This becomes amply clear when privacy is considered within the context of predictive analytic power of data, including their correlational analyses – inferences about aspects of individuals’ lives from data on other individuals are precisely possible because of the underlying networked nature of our personal information. Locating its origin in the networked nature of our social relationships, Marwick and boyd capture aspects of this inherently social nature of privacy using the concept of “Networked Privacy” (Marwick and boyd 2014).

One of the earlier and more comprehensive articulations of the social dimensions of privacy is due to Regan (1986, 2000). She outlines three dimensions of the social nature of privacy: that privacy is a common value, with all individuals having an appreciation of privacy to some extent, and with cultures and communities having a shared perception of privacy; that privacy is a public value in that it is crucial in supporting democratic political processes, and in “the forming of a body politic or public” (P. M. Regan 2015); and that privacy is a collective value in that one person is unlikely to have privacy unless all people have a similar level of privacy, echoing the conceptualization of “networked privacy” by Marwick and boyd (2014). Other scholars have recognized the need to deemphasize the individualized narrative of privacy by arguing that privacy is a “public good” (Fairfield and Engel 2017; P. M. Regan 2015, 2016) – something that requires public coordination for its protection – and that legal and regulatory tools should be “redesigned to focus less on individual knowledge and empowerment and more on facilitating groups’ collective protection of their privacy” (Fairfield and Engel 2017). In another powerful departure from individualistic framings, Cohen argues that “protecting privacy effectively requires willingness to depart more definitively from subject-centered frameworks in favor of condition-centered frameworks” (Cohen 2019).

In a seemingly orthogonal recognition (from the approaches summarized above) of the social nature of privacy, Nissenbaum’s articulation of privacy as Contextual Integrity (Nissenbaum 2009) rests on the notion of information flows between social actors within a specific social context. As discussed in the previous section, CI rests on the notion of appropriate information flows that are regulated by contextual informational norms. A norm is conceptualized to be “well-formed” if it is composed of five parameters: sender, recipient, information subject, attribute (information type), and a transmission principle. For example, in the healthcare context, senders, recipients, and subjects are social actors within this sphere, such as physicians, nurses, patients, therapists, etc., and attributes could consist of elements such as diagnoses, prescriptions, and test results. Transmission principles are expressed as a condition under which the information flow can occur, such as with permission of the subject, under confidentiality, etc. According to CI, when information flows comply with entrenched informational norms, privacy is respected, and when flows violate norms, privacy is violated.

While it might seem on the surface that informational norms (whether in policy or in technical practice) merely act as tools that regulate the appropriateness of the flow of information concerning an individual, key to the CI framework is the recognition that “legitimate” contextual informational norms are not determined individually (even though the flows themselves might involve information about specific individuals); rather, these are socially constructed by our shared understanding, as members of a society, of contextual goals, values, and ends. Information flows do not occur in a vacuum but purportedly to achieve specific contextual goals and outcomes in distinct social contexts. Privacy as CI rests on this notion of socially constructed informational norms that have achieved “settled accommodation” (Nissenbaum 2019) among a group, network, or community. It also provides a normative yardstick to evaluate the appropriateness of novel information flows that could reflect evolving societal norms, against high-level moral and political values, and the extent to which these novel or evolving information flows align with the values, ends, and goals of the social context they occur in.

In all of the characterizations of privacy seen above, the social versus individual dimensions of privacy (or where each characterization lies on the social versus individual spectrum) are actuated by the underlying values inherent in these characterizations and the origins of these values. As we shall see later in this chapter, the GKC framework aims to understand the sources of and conflicts in values, in addition to locating shared values.

Among social conceptualizations of privacy, Nissenbaum’s CI framework is particularly prominent because of its descriptive and evaluative power, and because, by virtue of finding expression in the logics of software system design, it is actionable in the design of technical systems. See, for example, Barth et al.’s (2006) work on expressing information flows and their appropriateness using first-order temporal logic.
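
To give a flavor of that formalization (a schematic sketch; the predicates are paraphrased from Barth et al.’s framework rather than quoted from it), a positive norm for the healthcare example above might be written roughly as

\[
\mathit{send}(p_1, p_2, m) \wedge \mathit{contains}(m, q, \textit{test-result}) \rightarrow \mathit{inrole}(p_1, \textit{physician}) \wedge \mathit{inrole}(p_2, \textit{patient}) \wedge (p_2 = q),
\]

read as: whenever an agent p1 sends p2 a message containing q’s test result, p1 must hold the physician role and p2 must be the patient the result concerns. Temporal operators can then express principles that refer to past or future events, such as consent having previously been obtained.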

The GKC framework draws attention to the political and procedural aspects of governing these rules (or norms) of appropriateness. By foregrounding the perspective of governance, the norms of information flow can no longer be deemed to be exogenous to a specific context, but demand an engagement with aspects of procedural legitimacy of these norms – how are the norms of appropriateness in specific contexts constituted, who has a say in the process, who is excluded, how are these norms governed, and if, how, and by whom is compliance with these norms enforced? The GKC approach positions actors as members of a community rather than individuals acting within a broad social sphere subject to norms and rules that are largely deemed to be exogenous to the context. Sanfilippo et al. state that the most important difference between the knowledge commons framework and the CI framework is that the latter “envisions actors as individual participants in a broadly defined social context, such as education, healthcare, or the commercial market, while the knowledge commons framework envisions actors as members of a ‘community’ involved in producing or managing a set of resources, and in producing (or at least co producing) the applicable rules-in-use within a broader context ordinarily accounted for as part of the background environment.” Sanfilippo et al. argue that:

this shifts the focus from questions of consistency with externally defined norms and rules to questions of community governance involving not only what background norms and rules are in force in a specific action arena but also how and by whom those rules are determined.

The GKC framework fortifies CI by further directing attention away from individuals’ perceptions or experiences about privacy to the consideration of these perceptions and experiences in the context of governance, placing privacy squarely in the political and normative realm. While individuals feel the impacts of information flows, the networked nature of these impacts, and their enactment in, often, contested social contexts, necessitates an approach that returns their consideration to the normative and political sphere.

10.3 Engaging with Underlying Technical Processes

In this section, I review literature on the motivations and means for building privacy-preserving capacities into technical systems, particularly those that embrace social conceptualizations of privacy.

In his book “Code: And Other Laws of Cyberspace,” Lawrence Lessig (2000) argues that, in addition to the law, social norms, and the market, the underlying architecture that enables digital environments, namely “code,” regulates cyberspace. He makes the case that citizens should demand that the resulting technology reflect the values they would like to see upheld in a democratic society:

But underlying everything in this book is a single normative plea: that all of us must learn at least enough to see that technology is plastic. It can be remade to do things differently. And that if there is a mistake that we who know too little about technology should make, it is the mistake of imagining technology to be too plastic, rather than not plastic enough. We should expect – and demand – that it can be made to reflect any set of values that we think important. The burden should be on the technologists to show us why that demand can’t be met.

Gürses and van Hoboken (2018) argue that public attention to privacy concerns is focused mainly on the step at which digital artifacts reach consumers, and that as a result the strategies addressing these concerns are conceptualized for this interface of technology consumption. They propose exploring ways in which interventions can be injected before any potential consumption – at the stage of production of such technologies. Shining a spotlight on the stages of production of software – the backbone of any technical artifact – can help scholars “better engage with new configurations of power” that “have implications for fundamental rights and freedoms, including privacy.” They articulate privacy governance as the “combination of technical, organizational and regulatory approaches” for the governance of privacy. They use the term “end-users” to underline the limited agency that users of software services typically have in shaping the privacy and other affordances of such systems, arguing that, in addition to paying more attention to the production stages of software, privacy scholarship should also focus on the functionality that the software offers and how it impacts end-users’ activities.

The recognition of the importance of integrating and operationalizing conceptualizations of privacy in the design of technical products led to the development of the Privacy by Design (PBD) framework (Cavoukian et al. 2009; Gürses, Troncoso, and Diaz 2011). PBD takes a proactive approach to privacy by ensuring that privacy-preserving capacities are upheld, and privacy-harming ones attenuated, during the design of a technical artifact. It relies on the design of a product as a means of complying with privacy policies – which may be articulated through regulations or law – rather than on a reactive system such as one that imposes penalties. The PBD paradigm foregrounds the technical design process to create an artifact that is protective of privacy from the ground up.

Gürses, Troncoso, and Diaz (2011) point out that while a commitment to the principles of PBD is finding growing traction in regulatory settings, there is little common, concrete understanding of how these principles translate into technical and design practice. They argue that an interpretation of these principles “requires specific engineering expertise, contextual analysis, and a balancing of multilateral security and privacy interests.” Systematically locating these principles and their translation in the practice of engineering sociotechnical systems has led to the expression of PBD in the emerging field of privacy engineering (Gürses and del Alamo 2016).

However, the operationalization of social conceptualizations of privacy in the privacy engineering process remains an underexplored area. Gürses and del Alamo (2016) assert that an important future direction for privacy engineering would be to conduct empirical studies that are cognizant of different contextual challenges when the tools, techniques, and methodologies of privacy engineering are used. In 2015, the Computing Community Consortium undertook a PBD initiative to identify appropriate conceptualizations of privacy and to operationalize these conceptualizations effectively in the engineering process, with contextual integrity emerging as a prominent concept.

Even as CI has been used by computer scientists (in contexts within and outside privacy engineering), a recent literature review finds that they have largely not engaged with the normative elements of CI (Benthall, Gürses, and Nissenbaum 2017). This finding holds true even for HCI researchers (Badillo-Urquiola, Page, and Wisniewski 2018). Even as HCI engages more deeply with questions of technology embedded in social and cultural contexts, Badillo-Urquiola et al. find that HCI researchers have not engaged deeply with the critical and normative aspects of CI, and argue that they must do so to “inform their research design, design new sociotechnical systems, and evaluate whether CI can be used as an actionable framework for translating users’ privacy norms into usable systems.” Many of the research directions identified in this chapter directly speak to these recommendations.

10.4 Research Directions

In this section, I map six research directions pertinent to the design of sociotechnical systems when considering the GKC framework. First, I examine whether the tools and methodologies of design can be used to explore questions of governance and procedural legitimacy, both to assess the appropriateness of entrenched norms or rules-in-use and to handle previously unresolved, hidden, or un-surfaced ethical disagreements. Second, I examine the opportunities that one of these design methodologies, Participatory Design, with its political and ideological commitments to democratic decision-making, presents for the formulation and governance of privacy norms by a community in a specific context. This direction lays out participatory decision-making about privacy as a normative goal to achieve. Third, I explore questions that arise from the relationship between privacy literacy, civic learning, and models of participatory governance. Fourth, and relatedly, I propose the empirical study of relationships between privacy norms and individuals’ privacy expectations and preferences, and of how participation and effective modes of community engagement can shape the latter. Fifth, I identify questions related to the capacity of computational techniques to automatically extract informational norms from natural-language privacy policies formulated through a participatory process. Sixth, I examine how a technical conceptualization of privacy, differential privacy, which provides a mathematical guarantee of plausible deniability to an individual, can operate within the larger normative framing of governance. In the following subsections, I expand on these six research directions.

10.4.1 Design Paradigms to Examine the Legitimacy of Privacy Rules-in-Use

As discussed in the previous section, the alignment of PBD with privacy engineering could make the former an important enactor of the privacy-preserving capabilities of a sociotechnical system. Wong and Mulligan (2019) outline the important place PBD has come to occupy in the privacy policy sphere, owing to its inclusion in the EU’s General Data Protection Regulation, in the United States Federal Trade Commission’s privacy policy recommendations, and in the guidance of other privacy advisory and regulatory institutions. They argue that PBD is currently dominated largely by engineering approaches that assume privacy is pre-defined and exogenous to the design process, whereas HCI has a rich collection of design methodologies and tools capable of identifying relevant conceptualizations of privacy and related values within the design process. Such approaches, they further argue, are largely absent from the policy-making and practice of PBD. Furthermore, even within HCI, they find that most PBD approaches use design and associated principles “to solve a privacy problem” or “to support or inform privacy decision making”, and that “design to explore people and situations and to critique, speculate, or present critical alternatives” – design approaches available from the field of HCI – are largely absent from both the policy-making and the practice dimensions of PBD. They argue that the latter are particularly pertinent when the “conception of privacy that ought to guide design is unknown or contested” (Wong and Mulligan 2019). This resonates with the GKC framework:

The commons governance perspective encourages us to look behind the curtain to investigate the origins and dynamic characters of both nominal rules and rules-in-use and to interrogate the potentially contested legitimacy of the formal and informal processes that produce them. We believe that issues of procedural legitimacy and distinctions between nominal rules and rules-in-use are central both to descriptive understanding of privacy and to normative evaluation and policy making. Governance and legitimacy may be particularly important for the most perplexing privacy issues, which often involve overlapping ethical contexts or contested values.

Both approaches emphasize the contested nature of privacy and the procedural aspects of exploring and uncovering these contestations. An important question raised by a synthesis of this shared emphasis is: what kinds of design paradigms in computer science generally, and in HCI and adjoining disciplines specifically, provide a way for questions of governance and procedural legitimacy to enter into the design and implementation of technology that mediates or enables information flows? How can the tools and methodologies of design be employed to explore questions of governance and procedural legitimacy, both to assess the appropriateness of entrenched norms or rules-in-use and to handle previously unresolved, hidden, un-surfaced ethical disagreements?

Gürses and van Hoboken argue that contextual integrity, while not tied to particular concepts of time and location, requires “looking back in time” to identify the entrenched social norms that govern “appropriate” information flows, in order to enable informed and reflective design of novel sociotechnical systems. Utilizing such a lens on norms, and considering the GKC framework, what can the tools and methodologies of design reveal about the procedural legitimacy of entrenched privacy norms and values?

One way forward in exploring this question is contained in the approaches outlined by Wong and Mulligan (2019), who map out the purposes for which design is employed in relation to privacy in the existing HCI literature. On examining 64 scholarly publications in HCI venues that use design in relation to privacy, they find that 56 percent use design “to solve a privacy problem,” where “privacy is a problem that has already been well-defined outside of the design process,” and 52 percent use design “to inform and support decision-making,” which foregrounds an individualized framing of privacy by focusing on providing information to users so that they can make privacy-preserving decisions, or on creating tools and processes so that designers can incorporate privacy more easily into their practice. Only 22 percent use design “to explore people and situations,” where design and other methodologies are used to explore which conceptualizations of privacy are “at play” in varying social and cultural contexts – an approach that has “implications for design”. Finally, only 11 percent use design “to critique, speculate or present critical alternatives,” where questions such as “what should be considered as privacy?,” “privacy for whom?,” and “how does privacy emerge from technical, social, and legal entanglements” are considered. The latter two orientations are particularly well suited to surfacing privacy conceptualizations in relation to surrounding social, cultural, and political factors, yet are under-explored in the literature. These design approaches have the potential to provide tools that bring procedural legitimacy “into play in assessing whether the rules-in-use for personal information are normatively appropriate” (Sanfilippo, Frischmann, and Strandburg 2018). Furthermore, these approaches directly relate to the three distinct ways identified by Sanfilippo et al. in which procedural legitimacy is in play in the GKC framework: first, whether the procedures that construct the rules-in-use are deemed legitimate by diverse community members and aid them in achieving their objectives; second, whether the governance practices account for the interests and needs of “impacted outsiders”; and third, whether the “exogenous rules and norms” to which a community is subject are responsive to member needs and interests.

In particular, three design methodologies are well positioned to explore these orientations: (a) speculative design, where design is undertaken to present critical alternatives (Wong and Khovanskaya 2018; Auger 2013; DiSalvo, Jenkins, and Lodato 2016); (b) value-centered design, where design is used to achieve certain human values (Friedman 1997; Shilton 2018); and (c) participatory design (Muller 2009), where design is undertaken not only for, but also by, impacted stakeholders.

Here, I outline one possible direction that directly opens up points of engagement between privacy as governance of privacy rules and speculative design methodologies. DiSalvo et al. (2016) use speculative design in the context of “civic tech” as “a way to explore potential, alternative, and future conditions by articulating their existence in generative forms, with a particular focus on the complications of governance and politics disposed by computational technologies.” The tools of speculative design can thus speak directly to the aspects of governance that the commons approach focuses on.

To summarize, design paradigms in HCI provide potent tools for exploring questions of the procedural legitimacy of rules-in-use in the commons governance framework. In addition to achieving what Wong and Mulligan (2019) consider important, namely broadening the notion of design in PBD, these orientations could build important bridges between the PBD framework and the GKC framework.

10.4.2 Formulation and Governance of Privacy Norms via Participatory Design

In this subsection, I explore the framework of Participatory Design (PD) in detail to consider the opportunities it presents for the democratic governance of privacy norms. PD as a design methodology has historically had clear political commitments to democratic ideals. Pilemalm (2018) notes that PD developed in the late 1960s and early 1970s (as cooperative design) with the intention of involving citizens in urban areas in Scandinavia in the planning and design of their living environments. Soon, PD entered workplaces in Scandinavia with the intention of making workplaces more democratic and empowering workers to participate in and influence their working conditions and workplace technology through collaborative design processes between workers and designers (Bjerknes et al. 1987; Ehn 1988; Simonsen and Robertson 2012). Often, this occurred by assisting workplace unions in devising technological “control activities and policies” (Asaro 2000). Subsequent “generations” of PD, particularly its variants in the United Kingdom and North America, were more focused on involving users and other stakeholders in the process of designing technologies to create better systems, an adoption that largely found resonance in HCI (Muller 2009). Several studies since then have argued for actively reintroducing the political and ideological dimensions of PD, highlighting the importance of democracy as a core political ideal of PD (Beck 2002; Kanstrup 2003).

Regan’s argument (Regan 1986; 2015) that privacy is both a collective and a democratic value lends credence to the idea of using democratic processes to determine which norms or rules regarding privacy should be in use, how they should be governed, how the appropriateness of specific privacy rules should be evaluated, and by whom. As Sanfilippo et al. articulate:

Like substantive appropriateness, procedural legitimacy is contextual. Legitimacy, as consensus about social good or appropriateness as reached through participatory decision-making of all potentially impacted, is itself a normative goal that may be addressed through commons institutions.

Scholarly and political commitments to democratic decision-making in the governance of privacy take us down the route of exploring connections to PD, and to its democratic and political ideals in particular. Some preliminary attempts in this direction are due to Mir et al. (2018) and Shilton et al. (2008). Yet, at the time of writing this chapter, there is almost no work on operationalizing PD to conceptualize privacy. There is much important work to be done in this direction, such as determining which privacy rules-in-use in specific contexts are normatively appropriate, what characteristics of the community determine these rules-in-use, and how communities and other stakeholders, particularly dynamic ones, can negotiate around conflicting values such as privacy. In this subsection, I examine the affordances of PD to speak to such concerns.

While PD processes have largely been absent both from the shaping of privacy policy and from the exploration of contested aspects of privacy, privacy scholarship can learn and adapt from the vast body of literature that does envision using participatory, democratic processes in shaping and determining aspects of public policy. Such adaptations are especially pertinent where technology (including potentially privacy-invasive technology) is employed within contexts that are democratic by their very nature, such as the many decision-making processes employed by states, cities, municipalities, and public services, a context often dubbed “civic tech.” In such contexts, participants’ relationship to the technology in question is more appropriately framed as that of a citizen rather than a consumer. For example, Pilemalm (2018) studies the role of PD in public sector contexts, including civic engagement and “we-government” initiatives. He presents case studies showing that, once the challenges and practical difficulties of involving civil citizens are addressed, PD can be employed in the design of technologies in the public sector and can empower the citizens involved, both by including them in designing the products that impact them and by enhancing their understanding and skills.

In particular, the democratic framing of PD, harking back to its historical roots, has led several PD researchers and practitioners to view PD as a process that interrogates issues of power and politics with the ultimate aim of enhancing democratic ideals, mutual learning, and the empowerment of participants (Ehn 1988). While PD flourished as a practice and value-based design system (Shilton 2018) in the context of unionized workers in the Scandinavian workplace, the changing nature of work organizations and the spread of PD outside Scandinavia led to its adoption beyond the workplace. Notably, Teli et al. (2018) remark that the adoption of PD in the early 2000s extended beyond the “renewed workplace” – workplaces they describe as arising out of “transformations in the mode of production toward post-Fordism” – to domains considered to constitute the “public realm” (Huybrechts, Benesch, and Geib 2017). This expression continues in what DiSalvo et al. (2012) call community-based PD, where the participants are not workers but rather citizens interested in community-related issues, and the context involves negotiations among multiple parties with heterogeneous and often conflicting values (Grönvall, Malmborg, and Messeter 2016). As Grönvall and coauthors remark, in such settings:

Infrastructure is not viewed as a substrate that other actions are based upon, but rather as an on-going appropriation between different contexts with many different stakeholders and practices with negotiation of potentially conflicting agendas and motivations for participation. In community-based PD settings, contrasting and conflicting values are unavoidable and do not only need to be explicitly addressed in the PD process, but can act as drivers for PD negotiation processes.

Grönvall et al. present three case studies to demonstrate how design interventions enable participants to become aware of other participants’ attitudes toward the collaboration at hand as well as their values. The case studies illustrate how, even as PD as a process can enable consensus and understanding, the dynamic nature of the participant population leads to a continuously changing landscape of values, as each participant brings their own roles, stances, and values into these collaborations. They remark that:

the driving force in design is rarely a shared vision among stakeholders of a future made possible through design activities. Rather the driving force in our cases has been the plurality of dynamic values, and a continuous negotiation of values in agonistic spaces; not to reconcile value differences, but to reshape and achieve a productive co-existence between them, allowing new practices among project participants to form.

Lodato and DiSalvo (2018) consider PD in the context of institutions operating in the public realm, examining the constraints produced by employing PD in working with or through these institutions – what they call “institutional constraints” – and are ultimately interested in understanding such institutions through the lens of PD.

PD, when employed in the so-called public realm, raises questions about who the participants are, who is considered to be part of the community, how those boundaries are drawn, and who is left out of the “participation.” For example, Lodato and DiSalvo claim that:

A central concern of PD is the distribution of power – authority, control, decision-making, etc. – to underrepresented bodies, populations, and people in the design, use, and deployment of products, services, and systems in work and public life.

Since PD aims to enhance democratic decision-making, mutual learning between designers and participants, and the empowerment of participants, Bossen et al. (2016) consider the question of evaluating whether PD processes indeed achieve these goals. They present a framework for systematically evaluating PD projects against these goals, paying attention to the purpose of the evaluation, who conducts and leads it, who participates, the methods used, and the audience for the evaluation. These criteria help in understanding questions of participation, legitimacy, and empowerment in PD.

There is some literature on the commonalities between commons design and Participatory Design; here I briefly review that literature to explore ideas pertinent to the design of a privacy commons. Marttila et al. (2014) examine the connections between the literature on commons (for example, using Ostrom’s framework (Ostrom 1990)) and PD, with the aim of developing design strategies and approaches for designing the commons. They argue that both the PD and the commons literatures “build upon stakeholders and communities’ capabilities and right to act and decide upon their future.” They point out that while Ostrom’s “design principles” (Ostrom 1990) for long-enduring commons were not intended to provide a framework for designing a commons, they can nevertheless be integrated into the PD process “to develop a nuanced understanding of design agency and its interplay with multiple mechanisms of collective action” (Marttila, Botero, and Saad-Sulonen 2014).

Such orientations are also available (and, arguably, direly needed) for conceptualizations and implementations of privacy. However, such engagements open up questions about the efficiency of processes and the scalability of solutions, two framings that technologists are particularly attuned to.

In his book titled “The Smart Enough City” (Green 2019), Ben Green presents an example that instead works with an alternative concept: “meaningful inefficiencies,” which he borrows from civic media scholars (Gordon and Walter 2016). Green cites the work of Gordon and coauthors (Gordon and Baldwin-Philippi 2014) in creating Community PlanIt (CPI), an online, multiplayer game to promote engagement, deliberation, and decision-making within communities. The game is focused not on making the process of deliberation and engagement efficient, but rather on recognizing that these are necessarily inefficient processes and on designing such platforms for “meaningful inefficiencies” that highlight aspects of community member engagement, coordination, and reflection:

Instead of being gamified with a rigid structure that funnels users to predetermined ends, CPI embraces play to enable exploration and deliberation. Every user is tasked with responding to open-ended prompts, and in order to see the responses of others, one must first submit one’s own answer. Such game mechanics lead to positive and reflective deliberation that one participant called “the back and forth that you don’t get in a town hall meeting.” Players also noted that the game encouraged them to reflect on their own opinions and appreciate alternative viewpoints. “I think it forced you to really think about what you wanted to say in order to see other people’s opinions,” said one participant. “Whenever I found out that I was like the minority … it just made me think of why do people think the other idea is better,” added another. “I put my comment and someone disagreed with it,” remarked another player, before adding, “I don’t really know who’s right, but I feel like it made me really think about what I thought prior.” Through these interactions, players developed their capacities to reflect on their positions and emerged with deeper trust in the community.

Could community engagement platforms such as Community PlanIt, designed to enhance civic engagement and embedded appropriately in the civic, social, and cultural contexts of communities, be deployed to develop models of participatory governance of information norms? This question is inextricably linked to the larger goals of PD: enhancing democratic ideals, mutual learning, and the empowerment of participants. The next subsection delves into some of the literature on “civic learning” and reflective decision-making that enables participants to negotiate around and make collective decisions about issues impacting them.

10.4.3 Privacy Literacy, Civic Learning, and Participatory Governance

Questions of participation in mechanisms of governance lead to underlying questions about people’s understanding of the information flow landscape, their perception of their roles in it, and what kinds of coordination and deliberation mechanisms enable people to engage meaningfully in such participatory frameworks. In relation to the GKC framework, “adequate” privacy literacy may be viewed as one of the “attributes of the community members” (Strandburg, Frischmann, and Madison 2017). Community members can effectively govern the privacy commons only when they understand the underlying information flows and the consequences of appropriate and inappropriate flows.

An important question that such considerations raise is: What kinds of (pedagogical) tools can be used to enhance people’s understanding of the data ecosystem and its implications? As Regan outlines, “the goal here would be to make visible the privacy implications which to date have effectively remained invisible to those affected” (P. Regan 2016). Here, Kumar (2018) offers some preliminary research directions by outlining the possibility of using CI as an educational tool. This stems from an earlier study Kumar conducted with her co-authors (Kumar et al. 2017), in which CI was used as an analytical tool to understand how children used digital devices and how they both understood and navigated privacy concerns online. The study provided evidence that children (especially those over ten) largely understand how the parameters of CI affect norms of information flow; in particular, they had an understanding of actors and attributes, even if they did not use the same terminology. Based on this, Kumar suggests exploring CI as a tool for privacy education (Kumar 2018). In related studies, Martin and Nissenbaum (2015) use survey-based methods to show that people typically understand the parameters of an informational norm and frame their privacy expectations in view of the context in which the information flow occurs, as well as how the information is transmitted and used and who the senders and receivers of this information are (Martin 2012).

While Kumar is largely interested in privacy literacy for children, with the objective of equipping children to make better decisions about their privacy, an additional question worth examining is whether and how CI can be used as an educational tool to equip adults (and, potentially, children) to better understand information flows within a larger governance context.

Much work in the privacy literacy space has focused on the understanding and empowerment of individual actors with respect to their privacy – another place where individualistic, subject-centered notions of privacy have gained traction. As Park notes:

In the digital era, the idea encompasses critical understanding of data flow and its implicit rules for users to be able to act. Literacy may serve as a principle to support, encourage, and empower users to undertake informed control of their digital identities. In short, to exercise appropriate measures of resistance against the potential abuse of personal data, it may be that users should be able to understand data flow in cyberspace and its acceptable limits of exposure.

However, as Cohen (2019) argues, to consider effective responses to the erosion of privacy, scholarship and practice need to shift from “subject-centered” to “condition-centered” frameworks. In this vein, literacy can also be broadly conceptualized as the building of an individual’s capacity to act in a deliberative democratic system, a direction that remains under-explored in studies of privacy literacy. Gordon and Baldwin-Philippi (2014) call this “civic learning”. They present two case studies in which the online game Community PlanIt (CPI) was deployed in a community to enhance civic engagement with support from local community organizations: one as part of a district-wide planning process in the Boston Public Schools, and the second as part of a master planning process in Detroit, Michigan. On assessing the impact of CPI in both case studies, they concluded that the gaming platform allowed what they term “civic learning” to occur. This has important implications for privacy governance and privacy literacy: what kinds of tools and systems can help build individuals’ capacities as engaged, informed, and empowered citizens in the governance of privacy rules?

10.4.4 Empirical Studies of Privacy Norms and Their Relation to Individuals’ Expectations and Preferences

A focus on the procedural legitimacy of informational norms raises another important, related question: how can community members’ expectations and preferences regarding privacy be used to assess the legitimacy of contextual informational norms?

This calls for ways of empirically studying such expectations and preferences, not merely at an individual level but at a group level. In prior work (Shvartzshnaider et al. 2016), survey-based methods were used to measure users’ privacy expectations and preferences in order to determine whether or not specific information flows are appropriate. However, as Benthall et al. outline:

In CI, appropriateness is a function of social norms, and these norms do codify social expectations and values. Certainly, in some cases user expectations will track social expectations. But though they are related, we caution researchers against conflating social norms with user expectations and preferences. This is because individual users are more prone to becoming unreflectively habituated to a new technology than society as a whole. Also, individual user preferences may at times be opposed to the interests of society. We have identified elaborating on the relationship between individual preferences and social norms as a way to improve CI.

Since the GKC approach seeks to further direct attention away from the individual, an important research direction is to explore how individuals’ understanding, expectations, and preferences regarding privacy change in a group setting, and how such changes bear on larger governance procedures, particularly when those procedures are democratic and participatory in nature.

In her articulation of privacy as a Common Good (P. M. Regan 2002; 2015), Regan raises an important and nuanced point in differentiating between “groups” and “individuals in a group” as units of analysis. She also poses the question of how individuals in groups differ, with regard to privacy, from individuals acting on their own, highlighting that focusing on individuals who act, and are aware of their actions and experiences, as members of a group rather than merely as individuals acting in isolated capacities will aid our understanding of privacy behaviors and consequent “privacy actions and inactions.” A consequent key problem Regan identifies is creating avenues to help individuals realize that they are not merely individuals but members of a group, both impacted by the actions of others in the privacy dimension and affecting other people’s privacy. This has close connections to the idea of civic learning explored in the previous section. She recommends drawing on the work of sociologists, social psychologists, and communication scholars who study individual behavior in groups. This line of investigation is also open and available to computer science researchers, particularly those in HCI.

10.4.5 Calibrating Norm Evaluation and Enforcement Engines for Dynamic Sources of Norms

Technical systems that implement CI usually express informational norms in formal systems and operationalize those norms over information flows involving specific data exchanges between actors in a particular context. Such systems typically rely on norm evaluation and enforcement engines that check whether the information flows are consistent with the supplied norms (Barth et al. 2006; Chowdhury et al. 2013). An important research consideration raised by the governance perspective concerns the design and architecture of CI norm evaluation and enforcement engines (along with accompanying human–computer interfaces) that are better suited to dynamic, deliberative sources of norms than to static sources such as laws and policies, which have been the focus of prior work (Barth et al. 2006).
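To make concrete what such an engine does, the following is a minimal, hypothetical sketch in Python; the Flow and Norm fields, the wildcard matching, and the example norms are illustrative assumptions made for brevity and do not reproduce the formalisms of Barth et al. or Chowdhury et al. The checker simply compares each flow, described by CI’s five parameters, against a list of community-supplied norms:

    from dataclasses import dataclass
    from typing import List

    @dataclass(frozen=True)
    class Flow:
        # The five CI parameters describing a single information flow.
        sender: str       # e.g. "physician"
        recipient: str    # e.g. "insurer"
        subject: str      # e.g. "patient"
        attribute: str    # e.g. "diagnosis"
        principle: str    # transmission principle, e.g. "with consent"

    @dataclass(frozen=True)
    class Norm:
        sender: str
        recipient: str
        subject: str
        attribute: str
        principle: str
        allowed: bool     # True = permissive norm, False = prohibition

    FIELDS = ("sender", "recipient", "subject", "attribute", "principle")

    def matches(norm: Norm, flow: Flow) -> bool:
        """A norm matches a flow if every field is equal or a '*' wildcard."""
        return all(getattr(norm, f) in ("*", getattr(flow, f)) for f in FIELDS)

    def appropriate(flow: Flow, norms: List[Norm]) -> bool:
        """A flow is appropriate if at least one norm matches it and no matching norm prohibits it."""
        matched = [n for n in norms if matches(n, flow)]
        return bool(matched) and all(n.allowed for n in matched)

    # Hypothetical norms, imagined here as the output of community deliberation.
    norms = [
        Norm("physician", "patient", "patient", "diagnosis", "*", allowed=True),
        Norm("physician", "advertiser", "patient", "*", "*", allowed=False),
    ]
    print(appropriate(Flow("physician", "patient", "patient", "diagnosis", "on request"), norms))    # True
    print(appropriate(Flow("physician", "advertiser", "patient", "diagnosis", "for payment"), norms))  # False

In a deliberative setting, the list of norms would not be hard-coded but would be produced and revised by the community’s governance process, which is precisely where the interface and architecture questions raised above arise.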

Shvartzshnaider et al. (2018) provide important directions here: they use natural language processing techniques such as dependency parsing to automatically extract the parameters of CI from individual sentences. Their approach extracts the CI norm parameters from the syntactic structure of a single sentence and uses an accompanying reading comprehension model to bring a semantic understanding of the larger scope into the extracted parameters. They apply their techniques to a corpus that contains website privacy policies in natural text alongside annotations by law students. By supplementing this process with crowdsourcing, they demonstrate that information flows can be automatically extracted from natural text and made more precise through appropriate crowdsourcing techniques. While they use a corpus of website privacy policies for this purpose, an open direction is to use natural language processing to infer the parameters of privacy norms from privacy policies generated in a more participatory setting.
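As a toy illustration of the general idea (far simpler than Shvartzshnaider et al.’s pipeline, which combines dependency parsing with a reading comprehension model and crowdsourced verification), one could heuristically map syntactic roles in a policy sentence onto CI parameters using an off-the-shelf dependency parser; the spaCy model name and the label-to-parameter mapping below are assumptions made for illustration only:

    import spacy

    # Requires the small English pipeline: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    def extract_ci_parameters(sentence: str) -> dict:
        """Heuristic sketch: nominal subject -> sender, direct object -> attribute,
        object of 'to'/'with' -> recipient. Real extraction pipelines are far richer."""
        doc = nlp(sentence)
        params = {"sender": None, "attribute": None, "recipient": None}
        for token in doc:
            if token.dep_ == "nsubj":
                params["sender"] = token.text
            elif token.dep_ == "dobj":
                # Keep modifiers such as "email" in "email address".
                params["attribute"] = doc[token.left_edge.i : token.i + 1].text
            elif token.dep_ == "pobj" and token.head.text.lower() in ("to", "with"):
                params["recipient"] = token.text
        return params

    print(extract_ci_parameters("We share your email address with advertisers."))
    # Possible output: {'sender': 'We', 'attribute': 'your email address', 'recipient': 'advertisers'}

Extending such extraction to transcripts or policy drafts produced in a participatory process, where sentences are less standardized than website privacy policies, is exactly the open direction noted above.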

10.4.6 Normative Considerations in Differential Privacy

Contextual integrity could provide a normative framework within which to embed technical notions such as differential privacy (Dwork 2006). To the best of my knowledge, there is no existing work that considers the appropriateness (or not) of releasing specific functions of a database from the perspective of CI. The GKC framework could further engage with these questions of appropriateness by considering aspects of the governance of these rules of appropriateness.

Differential privacy (DP) is primarily suitable for settings where there is interest in releasing an aggregate function of a dataset consisting of data from individuals. This could include simple functions such as averages or more complex machine learning predictors. As Dwork and Roth state:

“Differential privacy” describes a promise, made by a data holder, or curator, to a data subject: “You will not be affected, adversely or otherwise, by allowing your data to be used in any study or analysis, no matter what other studies, data sets, or information sources, are available.”

This is a more intuitive explanation of an underlying mathematical guarantee of plausible deniability, modulated by a privacy parameter conventionally called epsilon in the literature (Dwork 2006; Dwork and Roth 2013). For a detailed non-technical discussion of differential privacy, consult Wood et al.’s (2018) primer.
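To make the role of epsilon concrete, the following minimal sketch applies the standard Laplace mechanism to a counting query; the function name, dataset, and parameter values are hypothetical and chosen purely for illustration:

    import numpy as np

    def dp_count(records, predicate, epsilon: float) -> float:
        """Release a differentially private count of records satisfying `predicate`.
        A counting query has sensitivity 1 (adding or removing one person's record
        changes the true count by at most 1), so Laplace noise with scale 1/epsilon
        suffices; smaller epsilon means more noise and stronger plausible deniability."""
        true_count = sum(1 for r in records if predicate(r))
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # Hypothetical community dataset: members' ages.
    ages = [34, 51, 29, 63, 47, 58, 41]
    print(dp_count(ages, lambda a: a >= 50, epsilon=0.5))  # noisy value near the true count of 3

The choice of epsilon, and of which aggregate queries to answer at all, is exactly the kind of decision that the governance perspective suggests should be made deliberatively rather than left to the data holder alone.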

Even though the DP guarantee targets individuals, the functions that might be publicly released or shared are computed over a dataset consisting of many individuals’ data. Such a guarantee might, therefore, be meaningful to examine within the context of community governance and deliberation about sharing data, or functions of data, more widely. For example, access to information that furthers the understanding of medical ailments has a different normative valence than aggregation and prediction for commercial purposes such as online advertising and applications that might intentionally or unintentionally enact discrimination. Communities are likely to evaluate the appropriateness of sharing aggregate functions for these two purposes in different ways. Indeed, many polls indicate that the public views sharing personal health data with researchers differently from sharing such data with more commercial applications, indicating the need for context-specific attention to such details. On surveying users of personally controlled health records (PCHRs), Weitzman et al. found that 91 percent were willing to share medical information for health research, with such willingness “conditioned by anonymity, research use, engagement with a trusted intermediary, transparency around PCHR access and use, and payment” (Weitzman, Kaci, and Mandl 2010). In survey-based research conducted at the Pew Research Center, Madden and Rainie (2015) found that 76 percent of respondents say they are “not too confident” or “not at all confident” that data on their online activity held by the online advertisers who place ads on the websites they visit will remain private and secure.

If sharing data at an aggregate level for, say, medical research purposes is deemed appropriate, DP can be employed within a governance framework to provide the guarantee of plausible deniability for individual community members and to inform questions about which aggregate functions may appropriately be shared with people outside the community. By paying attention to the larger normative elements of the use, purpose, and politics of aggregation, DP can be a powerful and effective tool to disrupt what Cohen terms “semantic continuity” (Cohen 2019).

Several other research directions open up when we consider embedding DP within the larger normative elements of the commons framework: what kinds of interfaces will enable citizens without a deep mathematical background to understand the guarantees of DP and make good governance decisions? Bullek et al.’s (2017) preliminary work on making the core guarantees of DP understandable and accessible to the larger public provides one step in this direction. Further research is needed that examines groups, rather than only individuals, as units of analysis, and that considers the contextual dimensions of the settings in which communities might want to share aggregate data.

10.5 Conclusion

To conclude, attention to aspects of governance, particularly its participatory orientations, opens a host of research directions that are ripe to be explored by computer scientists. Designing sociotechnical systems for the privacy commons is important scholarly work that demands interdisciplinary engagement as well as the orientation of computer scientists toward such considerations. It is my hope that this chapter will be helpful in charting some of these research directions.

Conclusion: Privacy as Knowledge Commons Governance – An Appraisal

Madelyn Sanfilippo, Katherine J. Strandburg and Brett M. Frischmann

The chapters in this book have explored how privacy commons, understood as the sharing and use of personal information, are governed, as well as how information subjects are sometimes excluded from governance. Our previous two books, Governing Medical Knowledge Commons (2017) and Governing Knowledge Commons (2014), collected case studies of commons governance aimed at promoting and sustaining innovation and creativity by sharing and pooling knowledge. While personal information is often shared and pooled for similar purposes, it is distinctive in several important respects. First and foremost, personal information is inherently about someone, who arguably has a particularized stake in the way that information is shared, pooled and used. This relationship means that privacy commons governance may be ineffective, illegitimate or both if it does not appropriately account for the interests of information subjects. Second, personal information is often shared unintentionally or involuntarily as a side effect of activities aimed at other goals, possibly creating a schism between those seeking to pool and use personal information and the individuals most intimately tied to it. Third, in our current technological era, personal information often flows via commercial communication infrastructure. This infrastructure is owned and designed by actors whose interests may be misaligned or in conflict with the interests of information subjects or of communities seeking to pool, use and manage personal information for common ends. Finally, governing the flow of personal information can be instrumental and often essential to building trust among members of a community, which can be especially important when the community is interested in producing and sharing knowledge.

As the chapters in this volume illustrate, the distinctive characteristics of personal information have important implications for the observed features of commons governance and, ultimately, for legitimacy. Taken together, the studies in this volume thus deepen our understanding of privacy commons governance, identify newly salient issues related to the distinctive characteristics of personal information, and confirm many recurring themes identified in previous GKC studies.

Voice-shaped, Exit-shaped and Imposed Patterns in Commons Governance of Personal Information

To organize some of the lessons that emerge from the GKC analysis of privacy, we harken back to the patterns of governance that we identified in our privacy-focused meta-analysis of earlier knowledge commons studies (Sanfilippo, Frischmann and Strandburg, 2018). Though those earlier case studies were neither selected nor conducted using a privacy lens, the meta-analysis identified three patterns of commons governance: member-driven, public-driven and imposed. We observe similar patterns in the privacy-focused case studies gathered here. Reflecting on these new cases allows us to refine our understanding of these governance patterns in three respects, which inform the analyses in sub-sections 1.1, 1.2 and 1.3, where we illustrate and systematize some of the important patterns that we observe.

First, we hone our understanding of these patterns by drawing on A. O. Hirschman’s useful conceptions of ‘voice’ and ‘exit’ as distinctive governance mechanisms. What we previously termed ‘member-driven’ commons governance is characterized by the meaningful exercise of participant ‘voice’ in governing the rules-in-use (Gorham et al. 2020). Even when participants do not have a direct voice in governance, however, they may exert indirect influence by ‘voting with their feet’ as long as they have meaningful options to ‘exit’ if they are dissatisfied. The governance pattern that we previously characterized as ‘public-driven’ is associated with just such opt-out capacity, driving those with direct authority to take participants’ governance preferences into account – it is in this sense ‘exit-shaped’. Commons governance is ‘imposed’ when participants have neither a direct ‘voice’ in shaping rules-in-use nor a meaningful opportunity to ‘exit’ when those rules are not to their liking.

Second, as discussed in the Introduction to this volume, personal information can play two different sorts of roles in knowledge commons governance. Most obviously, as reflected in the cases studied in Chapters 2 through 5, personal information is one type of knowledge resource that can be pooled and shared. For example, personal health information from patients may be an important knowledge resource for a medical research consortium. In these cases, privacy is often an important objective for information subjects, actors who may or may not be adequately represented in commons governance. But even when personal information is not pooled as a knowledge resource, the rules-in-use governing how personal information flows within and outside of the relevant community can have important implications for sustaining participation in a knowledge commons and for the legitimacy of its governance. Chapters 5 through 7 analyse this sort of situation. Either sort of privacy commons can be governed according to any of the three patterns we previously identified. Moreover, and independently, privacy commons governance can also be distinguished according to the role played by information subjects, because personal information about one individual can be contributed, disclosed or collected by someone else. Thus, members who have a voice in commons governance might use personal information about unrepresented non-members to create a knowledge resource. Similarly, participants who opt to contribute to a knowledge commons might contribute information about non-participants who have neither a voice in the governance of their personal information nor any ability to opt out of contributing it. And, of course, imposed commons governance might be designed to force participants to contribute personal information ‘without representation’.

Third, we note that even the more nuanced taxonomy presented here papers over many grey areas and complexities that are important in real-world cases. Governance patterns reside on a continuum in, for example, the extent to which governance institutions empower particular individuals and groups. Moreover, most shared knowledge resources are governed by overlapping and nested institutions that may follow different patterns. The often polycentric nature of resource governance, involving overlapping centres of decision-making associated with different actors, often with different objectives and values, is well-recognized in studies of natural resource commons (e.g. McGinnis, 1999; Ostrom, 1990). Polycentricity is equally important in knowledge commons governance. Thus, the rules-in-use that emerge in any given case may have complex origins involving interactions and contestation between different groups of commons participants and between commons governance and exogenous background institutions. Different aspects of a case may exhibit different governance patterns. Moreover, some participants may have a voice in shaping certain rules-in-use, while others encounter those same rules on a take-it-or-leave-it basis. This polycentricity means that some cases appear in multiple categories in the analysis below.

We also emphasize that our categorization of voice-shaped, exit-shaped and imposed commons governance is descriptive. The normative valence of any commons activity depends on its overall social impact. Thus, different governance patterns may be normatively preferable for different knowledge commons or even for different aspects of the same knowledge commons. In particular, as we explain below, any of the three governance patterns can be implemented in a way that accounts adequately or inadequately for the interests and concerns of personal information subjects. For example, while imposed commons governance associated with commercial infrastructure is notoriously unresponsive to information subject concerns, government-imposed commons governance often aims to bring the interests of information subjects into the picture.

Voice-shaped Commons Governance

In the voice-shaped governance pattern, those creating and using knowledge resources are also responsible for their governance. The success of voice-shaped commons arrangements depends on governance that encourages reciprocal contribution for a mutually beneficial outcome. Chapters 2, 4, 5, 6 and 7 in this book describe cases characterized at least in significant part by voice-shaped governance of personal information. In Chapters 2, 4 and 6 this voice-shaped governance is mostly informal, while Chapters 3 and 7 describe more formal governance structures. Cases exhibiting voice-shaped commons can be further characterized as illustrated in Table 11.1, which employs the distinctions based on source and use of personal information described above to categorize cases from this volume and from our earlier meta-analysis.

Table 11.1 Voice-shaped commons breakdown

(Case studies in this volume are in bold)

PI = Resource; Information Subjects = Members
  • Rare Disease Clinical Research Network
  • LINK Indigenous Knowledge Commons
  • Patient Innovation Project
  • MIDATA (Ch. 2)
  • Facebook Activist Groups (Ch. 5)

PI = Resource; Information Subjects = Not Members
  • Biobanks
  • Sentinel Initiative
  • Open Neuroscience Movement
  • Oncofertility Consortium
  • University Institutional Research (Ch. 4)
  • Facebook Activist Groups (Ch. 5)

PI = Collateral Flow; Information Subjects = Members
  • Online Creation Communities (some)
  • Aviation Clubs
  • Nineteenth-Century Newspaper Editors
  • Congress
  • Patient Innovation Project
  • Republic of Letters (Ch. 6)
  • Chatham House (Ch. 7)
  • Gordon Conferences (Ch. 7)
  • Broadband ITAG (Ch. 7)
  • Facebook Activist Groups (Ch. 5)

PI = Collateral Flow; Information Subjects = Not Members
  (empty)

As illustrated in the top row of Table 11.1, voice-shaped commons governance is sometimes applied to create and manage a pool of personal information as a resource. In the cases listed in the upper left quadrant, members participate in governance of knowledge resources created by pooling their own personal information. That quadrant includes medical commons in which patients or their representatives have a direct voice in commons governance, such as the MIDATA case explored in Chapter 2 and the earlier-studied RDCRN cases; the previously studied LINK Indigenous Knowledge Commons, in which representatives of indigenous groups participate in governing information resources that they view as intimately related to their communities; and some aspects of the Facebook activist groups explored in Chapter 5.

In the cases listed in the upper right quadrant, members govern knowledge resources they create by contributing other people’s personal information. In the previously studied medical cases in that quadrant, for example, patient information is governed by consortia of physicians and medical researchers without direct patient involvement. Similarly, in Chapter 4 of this volume, Jones and McCoy describe institutional research by university administrators using a pool of student personal information. Governance of the sharing and use of student information is largely voice-shaped, in that many of the rules-in-use are determined by university personnel who contribute and use the information. Crucially, however, the student information subjects are not members of this governing community.

The distinction is normatively significant. While members may have altruistic concerns for non-member information subjects or be bound, as in the medical and education contexts, by background legal or professional obligations to them, voice-shaped governance is no guarantee that the concerns of non-members will be adequately addressed. Indeed, the NIH made the inclusion of patient representatives as governing members of the Rare Disease Clinical Research Network a condition of government funding, following complaints that patient interests had not been sufficiently represented in earlier consortia made up entirely of physicians and researchers.

That said, governance without the direct participation of information subjects does not necessarily give members free rein to share and use other people’s personal information however they please. Personal health and education information, for example, is governed by applicable background privacy legislation, ethical rules and professional norms. Moreover, in some contexts commons members may be required to obtain the consent of information subjects before contributing their personal information to the pool. Consent, however, is not the same as participation in governance, a point we explore further below and in related work (Gorham et al. 2020).

As illustrated in the bottom left quadrant of Table 11.1, voice-shaped commons governance may also be applied to collateral flows of members’ personal information that occur in conjunction with or as a by-product of creating some other sort of shared knowledge resource. Appropriate governance of such collateral flows of personal information can be important for encouraging participation, improving the quality of other sorts of knowledge resources the group creates and otherwise furthering the goals and objectives of voice-shaped commons governance. The cases in Chapter 7 by Frischmann et al. illustrate how constraints on the flow of members’ personal information to outsiders can incentivize diverse and open participation in creating other sorts of knowledge resources and improve their quality. Whether it is the Chatham House Rule’s incredibly simple prohibition on revealing the identity or affiliation of speakers or the more elaborate confidentiality rules adopted by the Broadband Internet Technical Advisory Group (BITAG), privacy governance fosters knowledge production and sharing by members. Madison’s Chapter 6 illustrates how informal norms against disclosing personal information in exchanges with other members created a venue for building a knowledge base through rational, scientific argument. The previously studied Patient Innovation Project similarly aims to create a pool of generalizable knowledge about medical innovations made by patients and caregivers, but personal information flows are an inevitable by-product of the sharing of innovations so intimately bound up with patients’ medical conditions. Though the Patient Innovation Project governs these collateral flows of personal information in part by platform design, as discussed in the next sub-section, sub-communities have also developed more tailored, voice-shaped information sharing norms. The bottom right quadrant of Table 11.1 is empty, perhaps because collateral flow of non-member personal information that is not being pooled into a shared resource is rare.

The Facebook activist groups studied in Chapter 5 are included in three of the four quadrants in Table 11.1 because of the variety of personal information-based resources involved and the various ways in which intentional and collateral personal information flows affected participation in these groups. We can describe the governance of these pooled personal information resources and collateral flows as voice-shaped to the extent that contributors either participated actively in creating the mostly informal rules-in-use that emerged or viewed themselves as adequately represented by the groups’ more actively involved leaders and organizers. Voice-shaped governance was only part of the story for these Facebook activist groups, however, as discussed in the sections on exit-shaped and imposed commons later.

In these cases, personal information was contributed directly to shared knowledge resources by those who posted personal narratives to the public Facebook pages, contributed photos, joined Facebook groups or signed up for events or email lists. These pooled knowledge resources were used to further the group’s objectives by informing and persuading the public, facilitating communication of information to members and so forth. While much of this personal information pertained to the contributors, these cases are included in both left and right quadrants of the top row because it was also possible to contribute personal information pertaining to someone else. Indeed, this sort of behaviour occurred often enough that groups developed mechanisms for protecting potentially vulnerable non-participants from such disclosures through rules-in-use. These cases thus illustrate not only the potential for information subjects to be left out of voice-shaped governance, but also the fact that voice-shaped governance may nonetheless incorporate protections for non-members.

The Facebook activist groups of Chapter 5 are also represented in the bottom left quadrant of Table 11.1 because they adopted rules-in-use governing collateral personal information flow arising, for example, from the metadata identifying those who posted to the Facebook pages and the interactions between organizers behind the scenes. In some ways, the various interactions between personal information and participation parallel patterns observed within the Patient Innovation Project, a previous case study. With respect to Patient Innovation, however, personal information as a resource or as collateral flows always pertained to members, rather than non-member information subjects.

Exit-shaped Commons Governance

Exit-shaped commons governance, as we identified in Chapter 1, occurs when an individual or group creates an infrastructure for voluntary public participation in creating a shared knowledge resource. It thus differs from voice-shaped governance in that contributors to the knowledge resource do not participate directly in its governance. The key characteristic that distinguishes exit-shaped commons governance from imposed governance is that contributions are meaningfully voluntary. As a result, whoever governs the shared knowledge resource must do so in a way that will attract participants.

The characteristics of personal information surface distinctions among cases of exit-shaped commons governance similar to those we observed for voice-shaped governance, as illustrated in Table 11.2.

Table 11.2 Exit-shaped commons breakdown

(Case studies in this volume are in bold)

PI = Resource
  Information Subjects = Public participants:
  • Mental Health Chatbots (Ch. 3)
  • Facebook Activist Groups (Ch. 5)
  • IoT (Ch. 9)
  Information Subjects = Others:
  • Facebook Activist Groups (Ch. 5)
  • IoT (Ch. 9)

PI = Collateral Flow
  Information Subjects = Public participants:
  • Online creation communities
  • Galaxy Zoo
  • Patient Innovation Project
  • Facebook Activist Groups (Ch. 5)
  Information Subjects = Others:
  (none)

Before delving into the distinctions between cases in the different quadrants in Table 11.2, we focus on common features of exit-shaped commons governance. Most importantly, given that participation is meaningfully voluntary, designers of exit-shaped commons governance must ensure that potential participants will find it worth their while to contribute. As a result, even though contributors do not participate directly in governance, designers of exit-shaped commons cannot stray too far out of alignment with their interests. Trust is important. So, setting aside personal information for the moment, the need to attract participants means that the mental health chatbot must offer mental health assistance that, all things considered, is at least as attractive as alternatives. Galaxy Zoo and many online creation communities have adopted rules favouring non-commercial use of their (non-personal) knowledge resources, presumably because potential contributors find those policies attractive. More limited forms of democratic participation adopted by some online communities may have served similar purposes.

Turning more specifically to the exit-shaped commons governance of personal information, Table 11.2, like Table 11.1, lists cases aiming to create a pool of personal information in the top row and cases involving only personal information flow collateral to other sorts of activities in the bottom row.

The Woebot mental health chatbot described by Mattioli in Chapter 3 appears in the top left quadrant because it creates a pool of personal information contributed by patients as they use the app. By using a therapy chatbot, patients receive mental health assistance, while simultaneously contributing their personal health information to a knowledge pool that can be used by the app's creators to improve its performance. Based on the analysis in Chapter 3, we categorize the governance of the personal information collected by the Woebot chatbot as exit-shaped. Governance of these personal information resources is not voice-shaped because it is physicians, not patients, who control the design of the app and the use of the associated personal information. Use of these chatbots, and the associated information pooling, does, however, currently appear to be meaningfully voluntary. Patients seem to have many viable alternative treatment options. Moreover, the chatbot's physician designers appear to have transparently committed to using the resulting knowledge pool only for research purposes and to improve the app's operation. It thus seems plausible that patients using the chatbot understand the ramifications of the chatbot's collection of their personal information, because the rules-in-use operationalize this commitment in ways that align with patient expectations.

We categorize the Facebook activist groups discussed in Chapter 5 under exit-shaped governance, as well as voice-shaped governance. Informal governance by trusted leaders is a recurring theme in knowledge commons governance. Nonetheless, participation in these movements was so broad that it is virtually inevitable that some participants – especially those who joined at a later stage – experienced the rules-in-use and governance as essentially ‘take it or leave it’. Like the more involved members discussed earlier, such participants could have posted personal information pertaining to themselves or to others. These groups were extremely successful in attracting large numbers of participants who contributed various sorts of personal information. While this success presumably reflects some satisfaction with the rules-in-use for personal information, later joining participants may not have viewed their choice to participate in these particular groups as entirely voluntary. As these groups became foci for expressing certain political views, their value undoubtedly rose relative to alternative protest avenues. This rich-get-richer phenomenon thus may have tipped the balance toward imposed governance, as discussed in the next sub-section.

The rules-in-use for collecting and employing personal information about users of Internet of Things (IoT) devices are largely determined by the commercial suppliers of 'smart' devices. The survey study by Shvartzshnaider et al., reported in Chapter 9, suggests that some device users have a sufficient understanding of the way that their personal information is collected and used by IoT companies that their decisions to opt in by purchasing and using a given device, or to opt out by not doing so, are meaningfully voluntary. For this subset of users, the governance of IoT personal information resources may be categorized as exit-shaped and entered into the top left quadrant of Table 11.2. Notably, however, those users' choices to opt in may also result in the collection of personal information from bystanders, guests and others who have made no such choice. We thus also categorize the IoT in the top right quadrant of Table 11.2. Much as for mental health chatbots, diminishing opportunities for meaningful exit amid pervasive surveillance environments oriented around IoT may disempower users, tipping governance from exit-shaped to imposed, as we will discuss in the next sub-section. On the other hand, one very interesting observation of the Shvartzshnaider et al. study is that online IoT forums allow users to pool their experiences and expertise to create knowledge resources about personal information collection by smart devices and strategies to mitigate it (at least to some degree). Those forums may thus empower consumers and expand the extent to which the governance of personal information resources collected through the IoT is exit-shaped.

The cases in the bottom row of Table 11.2 involve exit-shaped governance of collateral flows of personal information associated with the creation of other sorts of knowledge resources. Galaxy Zoo and the online creation community cases identified in our earlier meta-analysis both fall into this category. We observed there that those systems governed the collateral flow of personal information, at least in part, by allowing anonymous or pseudonymous participation. Nonetheless, though anonymity was the norm, participants were not discouraged from strategically revealing personal information on occasion in order to establish credibility or expertise. This set of rules presumably encouraged public participation by protecting participants from potentially negative effects of exposing their personal information publicly online while still allowing them to deploy it strategically in ways that benefitted them and may have improved the quality of the knowledge resource. The Patient Innovation Project similarly involves collateral flows of personal information intertwined with information about medical innovations developed by patients and caregivers, though its rules-in-use are different. Though sub-community governance is partially voice-shaped, as discussed above, much of the governance of personal information flows depends on platform design and is thus categorized as exit-shaped.

As noted in the previous section, the Facebook activist groups discussed in Chapter 5 also developed rules-in-use to govern collateral flows of personal information associated with the creation of other sorts of knowledge resources. To the extent those rules-in-use applied to contributors who were not adequately represented in governance, they also constitute exit-shaped commons governance.

Notably, all of the previously studied cases in Table 11.2 appear in the bottom row and involved the creation of general knowledge resources not comprised of personal information. These previously studied knowledge commons were also designed to make the knowledge resources they created openly available. For these earlier cases, the designation ‘public-driven’ may have been ambiguous, conflating openness to all willing contributors with public accessibility of the pooled information or public-generated data sets. The studies categorized in the top row of Table 11.2 clarify that there is a distinction. When we speak of exit-shaped commons governance, we mean openness regarding contributors.

We thus emphasize again the importance of meaningful voluntariness as the key characteristic of exit-shaped commons governance. If participation is not meaningfully voluntary, commons governance becomes imposed, rather than exit-shaped – a very different situation, which we discuss in the next section. Meaningful voluntariness means that potential contributors have meaningful alternatives as well as a sufficient grasp of the ramifications of contributing to the knowledge pool. Exit-shaped commons governance must therefore be designed to attract contributors in order to succeed. The need to attract contributors forces governance to attend sufficiently to participants’ interests. We do not, therefore, expect rules-in-use of open accessibility to emerge from exit-shaped commons governance of personal information pools because open availability would be likely to deter, rather than attract, potential contributors. In exit-shaped commons governance, rules-in-use regarding access to pooled resources are tools that designers can shape to attract participation. We would thus expect access rules to vary depending on the sorts of personal information involved and the goals and objectives of both potential participants and designers.

Of course, while meaningful voluntariness is the key to categorizing governance as exit-shaped, it is no guarantee of success. For example, one could imagine a version of the mental health chatbot that was completely transparent in its intentions to sell mental health information to advertisers or post it on the dark web. That sort of governance would be sufficiently voluntary to be classified as exit-shaped, but highly unlikely to attract enough participants to succeed.

Finally, it is important to note that while exit-shaped commons governance gives contributors some indirect influence over the rules-in-use, it does nothing to empower individuals whose personal information is contributed by others. Thus, cases in the upper right quadrant of Table 11.2 raise the same sorts of privacy concerns as cases in the upper right quadrant of Table 11.1. Just as voice-shaped governance may fail to attend to the interests of non-member information subjects, designers of exit-shaped governance may fail to attend to the interests of individuals whose personal information can be obtained without their participation.

Imposed Commons Governance

Imposed commons governance is similar to exit-shaped commons governance in that knowledge contributors do not have a voice in the rules-in-use that emerge, but differs significantly because contributors also do not opt for imposed governance in any meaningfully voluntary way. In other words, to the extent commons governance is imposed, contributors and information subjects alike lack both voice and the option to exit. While there is no bright line between voluntarily accepted and mandatory governance, one practical result is that imposed commons governance, unlike exit-shaped governance, need not be designed to attract participation. Thus, though designers might choose to take the interests and preferences of contributors into account, they need not do so.

Those with decision-making power over rules and governance are not always or necessarily the information subjects. Communities can include different membership groups and subgroups, and can rely on different existing infrastructures and technologies for collecting, processing and managing data. Governance associated with these infrastructures and external platforms is determined by design, by commercial interests and through regulation, and thus varies accordingly. Externally imposing commons governance requires power of some sort that effectively precludes contributors from opting out of participation. Such power may arise from various sources and can reside in either government or private hands.

One important source of power to impose commons governance over personal information is control and design of important infrastructure or other input resources needed to effectively create and manage the desired knowledge resources. This power is often associated with infrastructure because of network and similar effects that reduce the number of viable options. The Facebook activist groups study in Chapter 5 provides a good example of this source of privately imposed commons governance. Organizers repeatedly noted that they were displeased with certain aspects of Facebook’s platform design and treatment of contributors’ personal information. For these reasons, all three activist groups resorted to alternative means of communication for some purposes. Nonetheless, all concluded that they had no reasonable alternative to using Facebook as their central platform for communicating, aggregating and publicizing information. This example illustrates that complete market dominance is not required to empower a party to impose commons governance to some degree.

Another important source of imposed governance is the law, which is part of the background environment for every knowledge commons arrangement. (Of course, in a democracy, citizens ultimately create law, but on the time frame and scale of most knowledge commons goals and objectives, it is reasonable to treat legal requirements as mandatory and externally imposed.) Applicable law can be general or aimed more specifically at structuring and regulating the creation of particular knowledge resources. To create a useful categorization, we treat only the latter sorts of legal requirements as imposed commons governance. Thus, for example, while acknowledging the importance of HIPAA and other health privacy laws, we do not classify every medical knowledge commons as involving imposed commons governance. We do, however, classify the specific government mandates of the previously studied Sentinel program as imposed governance. The power to impose governance through law is, of course, limited to governments. There are parallels, however, in corporate policies that are imposed on employees and teams as strictly enforced rules.

Commons governance may also be imposed through the power of the purse. For example, while medical researchers are not literally forced to accept government conditions on funding, such as those associated with the Rare Disease Clinical Research Network, their acceptance of those conditions is not meaningfully voluntary in the sense that matters for our categorization. While researchers could in principle rely entirely on other funding sources or choose a different occupation, the paucity of realistic alternatives empowers funding agencies to impose commons governance. Indeed, while viable funding alternatives are more common in the private sector, large private funders may have similar power to impose governance in some arenas.

Collecting knowledge resources by surveillance is another way to preclude voluntary exit and thus impose commons governance. Both governments and some sorts of private entities may be able to impose governance in this way. Many 'smart city' activities create knowledge resources through this sort of imposed governance. Private parties exercise this sort of power when they siphon off, for unrelated purposes, information that individuals contribute or generate while using their products or services. Internet giants such as Facebook and Google are notorious for pooling such information for purposes of targeting ads, but a universe of smaller ad-supported businesses also contribute to such pools. More recently, as discussed in Chapters 8 and 9, the IoT provides a similar source of private power to impose commons governance. Governments can accomplish essentially the same sort of thing by mandating disclosure. The earlier case study of Congress provides an interesting example of the way that open government laws create this sort of imposed commons governance.

Commons governance can also be imposed through control or constraint over contributor participation. This source of power can be illustrated by a thought experiment based on the mental health chatbots studied in Chapter 3. Mattioli’s study suggests that patients’ contributions of personal health information by using the current version of Woebot are meaningfully voluntary. If, however, a mental health chatbot’s use were to be mandated or highly rewarded by insurance companies, its governance pattern would shift from exit-shaped to imposed.Footnote 2 A less obvious example of this type of power comes from the Facebook activist group study. While there might initially be several different groups vying to organize a national protest movement, as time goes on potential participants will naturally value being a part of the largest group. At some point, this rich-get-richer effect can implicitly empower the most popular group to impose its rules-in-use on later joiners.

Finally, and in a somewhat different vein, power to impose commons governance can stem from a party's ability to undermine contributor voluntariness by misleading individuals about the implications of contributing to a knowledge pool or using particular products or services. This concern has long been central to privacy discourse, especially in the private realm. Empirical studies have convinced many, if not most, privacy experts that privacy policies and similar forms of 'notice and consent' in the commercial context ordinarily do not suffice to ensure that participants understand the uses to which their personal information will be put. Facebook is only one prominent example of a company that has been repeatedly criticized in this regard. As another illustration, consider how the voluntariness of patients' use of the mental health chatbot would be eroded if its pool of personal information came under the control of private parties who wanted to use it to target advertising or for other purposes unrelated to improving mental health treatment. If the implications of such uses were inadequately understood by patients, the chatbot's governance pattern might well shift from exit-shaped to imposed.

Table 11.3 lists cases that involve significant imposed governance. In most of these cases, imposed governance of some aspects of commons activity is layered with voice-shaped or exit-shaped governance of other aspects. The distinctions in Tables 11.1 and 11.2 based on information subjects' role in governance and on whether pooling personal information is a knowledge commons objective are less salient for categorizing imposed governance, in which both contributors and information subjects have neither voice nor the capacity to exit. Instead, the columns in Table 11.3 distinguish between cases in which governance is imposed by government and cases in which it is imposed by private actors, while the rows differentiate between rules-in-use associated with actors and those associated with knowledge resources, including contribution to, access to and use of personal information resources. Overall, though governments must balance many competing interests and are not immune to capture, one would expect government-imposed governance to be more responsive than privately imposed governance to the concerns of information subjects.

Table 11.3 Imposed commons governance

Governance imposed upon:
Actors
  Imposed by Govt:
  • RDCRN
  • Genome Commons
  • Sentinel Initiative
  Imposed by Private actors:
  (none)

Resources
  Imposed by Govt:
  • Open Neuroscience Movement
  • Genome Commons
  • IoT Cybersecurity (Ch. 8)
  Imposed by Private actors:
  • Galaxy Zoo
  • IoT Cybersecurity (Ch. 8)
  • Facebook (Ch. 5)
  • Personal IoT (Ch. 9)

With respect to imposed governance, it is also important to note that some of these cases highlighted efforts to contest such constraints when they did not align with information subjects' norms and values. Many of the efforts to create more representative rules-in-use or workarounds developed within existing knowledge commons, such as the activist groups on Facebook (Chapter 5). Yet, occasionally, communities of information subjects emerged for the sole purpose of pooling knowledge about exit or obfuscation. For example, sub-communities of IoT users formed through online forums to assert more control over the pools of user data generated through their use of smart devices. These users, rather than pooling personal information, create a knowledge resource aimed at helping other users decide more effectively whether or how to exit, as well as how to obfuscate the collection of personal information. In this sense, these forums allow information subjects, as actors, to cope with governance imposed exogenously by manufacturers, as well as with exit-shaped governance.

Privacy as Knowledge Commons Governance: Newly Emerging Themes

These new studies of privacy's role in commons governance highlight several emerging themes that have not been emphasized in earlier Governing Knowledge Commons (GKC) analyses. In the previous section we reflected on the role of personal information governance in boundary negotiation and socialization; the potential for conflicts between knowledge contributors and information subjects; the potential adversarial role of commercial infrastructure in imposing commons governance; and the role of privacy work-around strategies in responding to those conflicts. Additional newly emerging themes include: the importance of trust; the contestability of commons governance legitimacy; and the co-emergence of contributor communities and knowledge resources.

The Importance of Trust

Focusing on privacy and personal information flows reveals the extent to which the success of voice-shaped or exit-shaped commons governance depends on trust. Perhaps this is most obvious in thinking about cases involving voluntary contributions of personal information to create a knowledge resource. Whether commons governance is voice-shaped or exit-shaped, voluntary contribution must depend on establishing a degree of trust in the governing institutions. Without such trust, information subjects will either opt out or attempt to employ strategies to avoid full participation. Voice-shaped commons governance can create such trust by including information subjects as members. This is the approach taken by the Gordon Research Conferences, the BITAG, the MIDATA case and RDCRN consortia, for example. When a group decides to adopt the Chatham House Rule to govern a meeting, it creates an environment of trust. Exit-shaped commons governance must rely on other mechanisms to create trust. In the medical and education contexts, professional ethics are a potentially meaningful basis for trust. Trust might also be based on shared agendas and circumstances, as was likely the case for the informal governance of the Facebook activist groups. The studies in Chapters 6 and 7 illustrate the perhaps less obvious extent to which trust based on rules-in-use about personal information can be essential to the success of knowledge commons resources that do not incorporate personal information. This effect suggests that mechanisms for creating and reinforcing trust may be of very broad relevance to knowledge commons governance far beyond the obvious purview of personal information-based resources.

The Contestability of Commons Governance Legitimacy

These privacy-focused studies draw attention to the role of privately imposed commons governance, especially through the design of commercial infrastructure. Previous GKC studies that have dealt with imposed commons governance have focused primarily on government mandates, while previous consideration of infrastructure has been mostly confined to the benign contributions of government actors or private commons entrepreneurs whose goals and objectives were mostly in line with those of contributors and affected parties. These cases also highlight the potentially contestable legitimacy of commons governance of all three sorts and call out for more study of where and when commons governance is socially problematic and how communities respond to illegitimate governance. The issue of legitimacy also demands further attention, of the sort reflected in Chapters 8 through 10 of this volume, to policy approaches for improving the situation.

While GKC theory has always acknowledged the possibility that commons governance will fail to serve the goals and values of the larger society, previous studies have focused primarily on the extent to which a given knowledge commons achieved the objectives of its members and participants. Concerns about social impact focused mainly on the extent to which the resources created by a knowledge commons community would be shared for the benefit of the larger society. These privacy commons studies help to clarify the ways in which knowledge commons governance can fail to be legitimate from a social perspective. They underscore the possibility that knowledge commons governance can be illegitimate and socially problematic even if a pooled knowledge resource is successfully created. This sort of governance failure demands solutions that go beyond overcoming barriers to cooperation. Various types of solutions can be explored, including the development of appropriate regime complexes discussed by Shackelford in Chapter 8, the participatory design approach discussed by Mir in Chapter 10, the collaborative development of self-help strategies illustrated by the IoT forums discussed in Chapter 9, the imposition of funding requirements giving information subjects a direct voice in governance, as illustrated by the RDCRN, the development of privacy-protective technologies and infrastructures, and the imposition of more effective government regulation.

Co-emergence of Communities and Knowledge Resources

One of the important differences between the IAD and GKC frameworks is the recognition that knowledge creation communities and knowledge resources may co-emerge, with each affecting the character of the other. The privacy commons studies provide valuable illustrations of this general feature of knowledge commons, particularly in voice-shaped and some exit-shaped situations. In some cases, this co-emergence occurs because at least some participants are subjects of personal information that is pooled to create a knowledge resource. This sort of relationship was identified in earlier RDCRN case studies and is a notable feature of cases discussed in Chapters 2, 3 and 5. Even when the knowledge resource ultimately created by the community does not contain personal information, however, participants' personal perspectives or experiences may be essential inputs that shape the knowledge resources that are ultimately created, as illustrated by the case studies discussed in Chapter 7 and in the earlier study of the Patient Innovation Project. The Gordon Research Conferences and BITAG examples from Chapter 7 are cases in point.

Privacy as Knowledge Commons Governance: Deepening Recurring Themes

The contributions in this volume also confirm and deepen insights into recurring themes identified in previous GKC studies (Frischmann, Madison and Strandburg, 2014; Strandburg, Frischmann and Madison, 2017). These privacy-focused studies lend support to many of those themes, while the distinctive characteristics of personal information add nuance, uncover limitations and highlight new observations that suggest directions for further research. Rather than revisiting all of those earlier observations, this section first summarizes some recurring themes that are distinctively affected by the characteristics of personal information and then identifies some new themes that emerge from privacy commons studies.

Knowledge Commons May Confront Diverse Obstacles or Social Dilemmas, Many of Which Are Not Well Described by or Reducible to the Simple Free Rider Dilemma

When we developed the GKC framework more than ten years ago, our focus was on challenging the simplistic view that the primary obstacle to knowledge creation was the free rider dilemma, which had to be solved by intellectual property or government subsidy. We were directly inspired by Ostrom’s demonstration that private property and government regulation are not the sole solutions to the so-called tragedy of the commons for natural resources. It became immediately clear from our early case studies, however, not only that there were collective solutions to the free rider problem for knowledge production, but that successful commons governance often confronted and overcame many other sorts of social dilemmas. Moreover, these other obstacles and dilemmas were often more important to successful knowledge creation and completely unaddressed by intellectual property regimes. Considering privacy and personal information confirms this observation and adds some new twists.

Among the dilemmas identified in the earlier GKC studies, the privacy-focused studies in this volume call particular attention to:

  • Dilemmas attributable to the nature of the knowledge or information production problem.

As we have already emphasized, personal information flow and collection creates unique dilemmas because of the special connection between the information and its subjects, who may or may not have an adequate role in commons governance.

  • Dilemmas arising from the interdependence among different constituencies of the knowledge commons.

When personal information is involved, these sorts of dilemmas reappear in familiar guises, but also with respect to particular concerns about the role of information subjects in governance.

  • Dilemmas arising from (or mitigated by) the broader systems within which a knowledge commons is nested or embedded.

On the one hand, accounting for personal information highlights the important (though often incomplete) role that background law and professional ethics play in mitigating problems that arise from the lack of representation of information subjects’ interests in commons governance. On the other hand, it draws attention to the ways in which infrastructure design, especially when driven by commercial interests, can create governance dilemmas related to that lack of representation.

  • Dilemmas associated with boundary management.

The studies in this volume identify the important role that privacy governance can play in governing participation and managing access boundaries for knowledge commons, often even when the relevant knowledge resources are not comprised of personal information.

Close Relationships Often Exist between Knowledge Commons and Shared Infrastructure

Earlier GKC case studies noted the important role that the existence or creation of shared infrastructure often played in encouraging knowledge sharing by reducing transaction costs. In those earlier studies, when infrastructure was not created collaboratively, it was often essentially donated by governments or 'commons entrepreneurs' whose goals and objectives were closely aligned with those of the broader commons community. While some studies of privacy commons also identify this sort of 'friendly' infrastructure, their most important contribution is to identify problems that arise when infrastructure owners have interests that conflict with the interests of information subjects. This sort of 'adversarial infrastructure' is often created by commercial entities and closely associated with the emergence of imposed commons governance. Undoubtedly, there are times when society's values are best served by embedding and imposing governance within infrastructure in order to constrain certain knowledge commons from emerging, even in competition with sub-communities' preferences; in these cases, infrastructure operationalizes rules to prevent certain resources from being pooled or disseminated (for example, by white supremacists or terrorists) or to prevent the emergence of rules-in-use that would lead to social harms, such as pornography. There is a special danger, however, that society's values will not be reflected in private infrastructure that takes on the role of imposing commons governance, as many of the privacy commons studies illustrate.

Knowledge Commons Governance Often Did Not Depend on One Strong Type or Source of Individual Motivations for Cooperation

Earlier GKC case studies generally presumed that contributing to a knowledge commons was largely, if not entirely, a voluntary activity and that commons governance had to concern itself with tapping into, or supplying, the varying motivations required to attract the cooperation of a sometimes diverse group of necessary participants. Privacy commons studies turn this theme on its head by emphasizing the role of involuntary, perhaps even coerced, contribution. Thus, a given individual's personal information can sometimes be contributed by others, obtained by surveillance or gleaned from other behaviour that would be difficult or costly to avoid. This possibility raises important questions about the legitimacy of commons governance that were not a central focus of earlier GKC case studies.

The Path Ahead

The studies in this volume move us significantly forward in our understanding of knowledge commons, while opening up important new directions for future research and policy development. We mention just a few such directions in this closing section.

First, while the taxonomy of voice-shaped, exit-shaped and imposed commons governance emerged from studies of personal information governance, it is more broadly applicable. To date, GKC case studies have tended to focus on voice-shaped commons governance. More studies of exit-shaped commons governance would be useful, for knowledge commons aimed at pooling personal information and others. For example, it might be quite interesting to study some of the commercial DNA sequencing companies, such as 23andMe, which create pools of extremely personal genetic information, used at least partly for genetic research. There are currently a number of such companies, which seem to attract a fair amount of business. Without further study, it is unclear whether the behaviour of these entities is sufficiently transparent to contributors to qualify their governance as exit-shaped. Moreover, these companies also collect a considerable amount of information that pertains to information subjects who are not contributors, making them a promising place to study those issues as well.

Second, we learned from these cases that the distinction between exit-shaped and voice-shaped governance is strongly associated with the differences between meaningful exit and voice, respectively. While these mechanisms are important in providing legitimacy (Gorham et al.), individual rules-in-use to establish exit and voice functions vary significantly across contexts. It is not yet clear what makes exit or voice meaningful in a given context. Future case studies should address the institutional structure, differentiating between specific strategies, norms and rules and seeking to associate particular governance arrangements with social attributes and background characteristics in order to understand when exit or voice solutions might work and the contextual nature of successful governance arrangements.

Third, many of these privacy commons case studies emphasized the complexity of governance arrangements, identifying many competing layers of rules-in-use and rules-on-the-books, which reflected the interests of different actors, including information subjects, private sector firms and government actors. These conflicting layers illustrate the polycentric nature of knowledge commons governance, providing an opportunity to reconnect to insights from natural resource commons in future case studies. Further, there is room for considerably more study of how adversarial (or at least conflicting) infrastructure design affects commons governance. Additional inquiries into communities’ relationships with social media platforms would likely provide significant insight, as would case studies in smart city contexts.

While each of these directions should be explored in its own right, they are also reflected in supplementary questions added to the GKC framework, as represented in Table 11.4, and should be considered in future case studies.

Table 11.4 Updated GKC framework (with supplementary questions in bold)

Knowledge Commons Framework and Representative Research Questions
Background Environment
What is the background context (legal, cultural, etc.) of this particular commons?
What normative values are relevant for this community?
What is the ‘default’ status of the resources involved in the commons (patented, copyrighted, open, or other)?
How does this community fit into a larger context? What relevant domains overlap in this context?
Attributes
Resources
What resources are pooled and how are they created or obtained?
What are the characteristics of the resources? Are they rival or nonrival, tangible or intangible? Is there shared infrastructure?
What is personal information relative to resources in this action arena?
What technologies and skills are needed to create, obtain, maintain, and use the resources?
What are considered to be appropriate resource flows? How is appropriateness of resource use structured or protected?
Community Members
Who are the community members and what are their roles, including with respect to resource creation or use, and decision-making?
Are community members also information subjects?
What are the degree and nature of openness with respect to each type of community member and the general public?
Which non-community members are impacted?
Goals and Objectives
What are the goals and objectives of the commons and its members, including obstacles or dilemmas to be overcome?
Who determines goals and objectives?
What values are reflected in goals and objectives?
What are the history and narrative of the commons?
What is the value of knowledge production in this context?
Governance
Context
What are the relevant action arenas and how do they relate to the goals and objectives of the commons and the relationships among various types of participants and with the general public?
Are action arenas perceived to be legitimate?
Institutions
What legal structures (e.g., intellectual property, subsidies, contract, licensing, tax, antitrust) apply?
What other external institutional constraints are imposed? What government, agency, organization, or platform established those institutions and how?
How is institutional compliance evaluated?
What are the governance mechanisms (e.g., membership rules, resource contribution or extraction standards and requirements, conflict resolution mechanisms, sanctions for rule violation)?
What are the institutions and technological infrastructures that structure and govern decision making?
What informal norms govern the commons?
What institutions are perceived to be legitimate? Illegitimate? How are institutional illegitimacies addressed?
Actors
What actors and communities are members of the commons, participants in the commons, users of the commons and/or subjects of the commons?
Who are the decision-makers and how are they selected? Are decision-makers perceived to be legitimate? Do decision-makers have an active stake in the commons?
How do nonmembers interact with the commons? What institutions govern those interactions?
Are there impacted groups that have no say in governance? If so, which groups?
Patterns and Outcomes
What benefits are delivered to members and to others (e.g., innovations and creative output, production, sharing and dissemination to a broader audience and social interactions that emerge from the commons)?
What costs and risks are associated with the commons, including any negative externalities?
Are outcomes perceived to be legitimate by members? By decision-makers? By impacted outsiders?
Do governance patterns regarding participation provide exit and/or voice mechanisms for participants and/or community members?
Which rules-in-use are associated with exit-shaped, voice-shaped or imposed governance? Are there governance patterns that indicate the relative impact of each within the commons overall?

Footnotes

8 Governing the Internet of Everything

I thank my wonderful colleagues at the Ostrom Workshop for their suggestions and leadership on the many important topics covered herein, particularly Dan Cole and Mike McGinnis.

1 Chair, Indiana University-Bloomington Cybersecurity Program; Director, Ostrom Workshop Program on Cybersecurity and Internet Governance; Associate Professor, Indiana University Kelley School of Business. An earlier version of this chapter appeared as Scott J. Shackelford, Governing the Internet of Everything, Cardozo Arts & Entertainment Law Journal (2019).

9 Contextual Integrity as a Gauge for Governing Knowledge Commons

1 Assistant Professor/Faculty Fellow in the Courant Institute of Mathematical Sciences, NYU; Visiting Associate Research Scholar at the Center for Information Technology Policy (CITP), Princeton University.

2 Assistant Professor, School of Information Sciences, University of Illinois at Urbana-Champaign; Affiliate Scholar, The Vincent and Elinor Ostrom Workshop in Political Theory and Policy Analysis, Indiana University, Bloomington. Ph.D., Indiana University, Bloomington; M.I.S., Indiana University, Bloomington; B.S., University of Wisconsin-Madison.

3 Assistant Professor, Department of Computer Science, Colgate University; Ph.D., Department of Computer Science, Princeton University; Graduate Student Fellow, Center for Information Technology Policy, Princeton University.

4 Respondents indicated that they own more than two smart devices.

5 Respondents in Group 1 indicated that they were not concerned with the privacy implications of survey Scenario A.

6 Each group was identified by its average perceptions of appropriateness, rather than by similarity in open-ended responses.

10 Designing for the Privacy Commons

Conclusion: Privacy as Knowledge Commons Governance: An Appraisal

1 Despite the simplicity of the Chatham House Rule, there are variations in how it is applied. As law students learn in the first semester of law school, even simple rules require interpretation. Ambiguities arise and lead to variation in application across communities with respect to questions such as: Who decides whether the Rule governs? To whom does the ban on revealing a speaker's identity or affiliation extend? Can identity be disclosed to someone bound by a duty of confidentiality? Can a speaker waive the Rule, and if so, under what circumstances?

2 'Meaningfully voluntary' is thus doing significant work, and one might question whether this criterion is as useful and clear-cut as it seems. Like the similar concept of informed consent, it may be fundamentally flawed because it is contingent, at least to a substantial degree, on the integrity and stability of individuals' preferences and beliefs. Preferences and beliefs are, of course, malleable (i.e. subject to manipulation, nudging through adjustments in the choice architecture, and other forms of techno-social engineering). See Frischmann and Selinger (2018). This is a significant challenge. Nonetheless, we believe Hirschman's conception of exit and voice remains useful as a baseline for evaluation.


References

Adler, Ben. "California Passes Strict Internet Privacy Law with Implications for the Country." NPR, June 2018.
Ashton, Kevin. "That 'Internet of Things' Thing." RFID Journal, June 2009.
Banafa, Ahmed. "The Internet of Everything (IoE)." Open Mind, August 2016.
Beech, Hannah. "China's Great Firewall Is Harming Innovation, Scholars Say." Time, June 2016.
Botezatu, Bogdan. "Unprotected IoT Devices Killed the US Internet for Hours." Bitdefender, October 2016.
Bowles, Nellie. "Thermostats, Locks and Lights: Digital Tools of Domestic Abuse." New York Times, June 2018.
Buck, Susan J. The Global Commons: An Introduction. New York: Island Press, 1998.
Casey, Michael J. and Vigna, Paul. The Truth Machine: The Blockchain and the Future of Everything. New York: St. Martin's Press, 2018.
Cole, Dan. "Learning from Lin: Lessons and Cautions from the Natural Commons for the Knowledge Commons." In Governing Knowledge Commons, edited by Frischmann, Brett M., Madison, Michael J., and Strandburg, Katherine J., 1st ed. Oxford University Press, 2014.
Cole, Dan. "Cyber Risk Thursday: Internet of Bodies." Atlantic Council, September 2017.
Deibert, Ron. "Distributed Security as Cyber Strategy: Outlining a Comprehensive Approach for Canada in Cyberspace." Canadian Defense and Foreign Affairs Institute, 2012.
Evans, Dave. "The Internet of Everything: How More Relevant and Valuable Connections Will Change the World." Cisco, 2012.
Feeny, David, Berkes, Fikret, McCay, Bonnie J., and Acheson, James M. "The Tragedy of the Commons: Twenty-Two Years Later." Human Ecology, 18 (March 1990): 1–19.
Finnemore, Martha and Sikkink, Kathryn. "International Norm Dynamics and Political Change." International Organization, 52 (1998): 887–917.
Frischmann, Brett. "The Tragedy of the Commons, Revisited." Scientific American, November 2018.
Frischmann, Brett M., Madison, Michael J., and Strandburg, Katherine J., eds. Governing Knowledge Commons. Oxford University Press, 2014.
Galtung, Johan. "Peace, Positive and Negative." In The Encyclopedia of Peace Psychology, edited by Christie, Daniel J. Oxford, UK: Wiley-Blackwell, 2012.
Galtung, Johan. "Violence, Peace, and Peace Research." Journal of Peace Research, 6, 3 (1969): 167–191.
Giles, Martin. "For Safety's Sake, We Must Slow Innovation in Internet-Connected Things." MIT Technology Review, September 2018.
Hiller, Janine and Shackelford, Scott J. "The Firm and Common Pool Resource Theory: Unpacking the Rise of Benefit Corporations." American Business Law Journal, 55, 1 (2018): 5–51.
Interview with Nobel Laureate Elinor Ostrom, Escotet Found., http://escotet.org/2010/11/interview-with-nobel-laureate-elinor-ostrom/ (last visited June 29, 2018).
Keohane, Robert O. and Victor, David G. "The Regime Complex for Climate Change." Perspectives on Politics, 9, 1 (2011): 7–23.
Johnson, Gregory A. "Organizational Structure and Scalar Stress." In Theory and Explanation in Archeology, edited by Renfrew, Colin, Michael Rowlands, and Barbara A. Segraves-Whallon. Academic Press, 1982.
Jordan, Tim. Cyberpower: The Culture and Politics of Cyberspace and the Internet. London: Routledge, 1999.
Lachance, Naomi. "Not Just Bitcoin: Why the Blockchain Is a Seductive Technology to Many Industries." NPR, May 2016.
McGinnis, Michael D. "An Introduction to IAD and the Language of the Ostrom Workshop: A Simple Guide to a Complex Framework." Policy Studies Journal, 39, 1 (2011): 169–183.
Merriman, Chris. "87 Percent of Consumers Haven't Heard of the Internet of Things." Inquirer, August 2014.
Murray, Andrew. The Regulation of Cyberspace: Control in the Online Environment. London: Routledge, 2007.
Nye, Joseph S. "The Regime Complex for Managing Global Cyber Activities." Global Commission on Internet Governance, 2014.
Ostrom, Vincent and Ostrom, Elinor. "Public Goods and Public Choices." In Elinor Ostrom and the Bloomington School of Political Economy Vol. 2, edited by Cole, Daniel H. and McGinnis, Michael. Lexington Books, 2015.
Ostrom, Elinor. "Beyond Markets and States: Polycentric Governance of Complex Economic Systems." Nobel Prize Lecture, 2009.
Ostrom, Elinor. "A Polycentric Approach for Coping with Climate Change." World Bank Policy Research Working Paper No. 5095 (2009).
Ostrom, Elinor and Hess, Charlotte. "A Framework for Analyzing the Knowledge Commons." In Understanding Knowledge as a Commons: From Theory to Practice, edited by Hess, Charlotte and Ostrom, Elinor. Cambridge, MA: MIT Press, 2007.
Ostrom, Elinor and Crawford, Sue. "Classifying Rules." In Understanding Institutional Diversity, edited by Ostrom, Elinor. Princeton, NJ: Princeton University Press, 2005.
Ostrom, Elinor, Burger, Joanna, Field, Christopher B., Norgaard, Richard B., and Policansky, David. "Revisiting the Commons: Local Lessons, Global Challenges." Science, April 1999.
Reardon, Marguerite. "Facebook's FTC Consent Decree Deal: What You Need to Know." CNet, August 2018.
Robertson, Adi. "California Just Became the First State with an Internet of Things Cybersecurity Law." Verge, September 2018.
Roose, Kevin. "Reddit Limits Noxious Content by Giving Trolls Fewer Places to Gather." New York Times, September 2017.
Shackelford, Scott J., Raymond, Anjanette, Charoen, Danuvasin, Balakrishnan, Rakshana, Dixit, Prakhar, Gjonaj, Julianna, and Kavi, Rachith. "When Toasters Attack: Enhancing the 'Security of Things' through Polycentric Governance." University of Illinois Law Review, 2017: 415.
Shackelford, Scott J. "On Climate Change and Cyber Attacks: Leveraging Polycentric Governance to Mitigate Global Collective Action Problems." Vanderbilt Journal of Entertainment and Technology Law, 18 (2016): 653–711.
Shackelford, Scott J., Richards, Eric L., Raymond, Anjanette H., and Craig, Amanda N. "Using BITs to Protect Bytes: Promoting Cyber Peace and Safeguarding Trade Secrets through Bilateral Investment Treaties." American Business Law Journal, 52 (2015): 1–74.
Shackelford, Scott J., Proia, Andrew, Craig, Amanda, and Martell, Brenton. "Toward a Global Standard of Cybersecurity Care? Exploring the Implications of the 2014 NIST Cybersecurity Framework on Shaping Reasonable National and International Cybersecurity Practices." Texas International Law Journal, 50 (2015): 287.
Shackelford, Scott J. Managing Cyber Attacks in International Law, Business, and Relations: In Search of Cyber Peace. Cambridge University Press, 2014.
Smith, Michael. "'The Tragedy of the Commons' in the IoT Ecosystem." Computerworld, August 2017.
"The Promise of the Blockchain: The Trust Machine." Economist, October 2015.
Villasenor, John. "Blockchain Technology: Five Obstacles to Mainstream Adoption." Forbes, June 2018.
Young, Oran R. "Rights, Rules, and Resources in World Affairs." In Global Governance: Drawing Insights from the Environmental Experience, edited by Young, Oran R. Cambridge, MA: MIT Press, 1997.
Zittrain, Jonathan. The Future of the Internet and How to Stop It. New Haven: Yale University Press, 2008.
Zmudzinski, Adrian. "Blockchain Adoption in IoT Industry More Than Doubled in 2018: Survey." Cointelegraph, January 2019.
Waltl, Josef. IP Modularity in Software Products and Software Platform Ecosystems.

References

Apthorpe, Noah, Shvartzshnaider, Yan, Mathur, Arunesh, Reisman, Dillon, and Feamster, Nick. "Discovering Smart Home Internet of Things Privacy Norms Using Contextual Integrity." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2, no. 2 (2018): 59.
Bhatia, Jaspreet, and Breaux, Travis D. "Semantic Incompleteness in Privacy Policy Goals." In 2018 IEEE 26th International Requirements Engineering Conference (RE), 159–169. IEEE, 2018.
Borkowski, Stephen, Sandrick, Carolyn, Wagila, Katie, Goller, Carolin, Chen, Ye, and Zhao, Lin. "Magicbands in the Magic Kingdom: Customer-Centric Information Technology Implementation at Disney." Journal of the International Academy for Case Studies 22, no. 3 (2016): 143.
Crawford, Sue E. S., and Ostrom, Elinor. "A Grammar of Institutions." American Political Science Review 89, no. 3 (1995): 582–600.
Frischmann, Brett M., Madison, Michael J., and Strandburg, Katherine Jo, eds. Governing Knowledge Commons. Oxford University Press, 2014.
Geeng, Christine, and Roesner, Franziska. "Who's in Control? Interactions in Multi-User Smart Homes." In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, paper 268. ACM, 2019.
Gorham, Ashley E., Nissenbaum, Helen, Sanfilippo, Madelyn R., Strandburg, Katherine, and Verstraete, Mark. "Legitimacy in Context." Presented at the Privacy Law Scholars Conference (PLSC), University of California, Berkeley, 2019.
Joh, Elizabeth E. "The New Surveillance Discretion: Automated Suspicion, Big Data, and Policing." Harvard Law & Policy Review 10 (2016): 15.
Kumar, Deepak, Shen, Kelly, Case, Benton, Garg, Deepali, Alperovich, Galina, Kuznetsov, Dmitry, Gupta, Rajarshi, and Durumeric, Zakir. "All Things Considered: An Analysis of IoT Devices on Home Networks." In 28th USENIX Security Symposium (USENIX Security 19), 1169–1185. 2019.
Manikonda, Lydia, Deotale, Aditya, and Kambhampati, Subbarao. "What's Up with Privacy? User Preferences and Privacy Concerns in Intelligent Personal Assistants." In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 229–235. ACM, 2018.
Martin, Kirsten. "Information Technology and Privacy: Conceptual Muddles or Privacy Vacuums?" Ethics and Information Technology 14, no. 4 (2012): 267–284.
Martin, Kirsten, and Nissenbaum, Helen. "Measuring Privacy: An Empirical Test Using Context to Expose Confounding Variables." Columbia Science & Technology Law Review 18 (2016): 176.
Nissenbaum, Helen. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford University Press, 2009.
Okoyomon, E., Samarin, N., Wijesekera, P., Elazari Bar On, A., Vallina-Rodriguez, N., Reyes, I., … and Egelman, S. "On the Ridiculousness of Notice and Consent: Contradictions in App Privacy Policies." Privacy Law Scholars Conference (PLSC 2019), University of California, Berkeley, May 30–31, 2019. https://blues.cs.berkeley.edu/wp-content/uploads/2019/05/conpro19-policies.pdf
Sanfilippo, Madelyn, Frischmann, Brett, and Strandburg, Katherine. "Privacy as Commons: Case Evaluation Through the Governing Knowledge Commons Framework." Journal of Information Policy 8 (2018): 116–166.
Selbst, Andrew D. "Contextual Expectations of Privacy." Cardozo Law Review 35 (2013): 643.
Shvartzshnaider, Yan, Sanfilippo, Madelyn, and Apthorpe, Noah. "Privacy Expectations in the Wild: Integrating Contextual Integrity and Governing Knowledge Commons for Empirical Research." http://ssrn.com/abstract=3503096
Shvartzshnaider, Yan, Tong, Schrasing, Wies, Thomas, Kift, Paula, Nissenbaum, Helen, Subramanian, Lakshminarayanan, and Mittal, Prateek. "Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms." In Fourth AAAI Conference on Human Computation and Crowdsourcing, 2016.
Tanczer, Leonie, Neira, Isabel Lopez, Parkin, Simon, Patel, Trupti, and Danezis, George. "Gender and IoT Research Report." University College London, white paper, 2018. www.ucl.ac.uk/steapp/sites/steapp/files/giot-report.pdf
Valdez, André Calero, and Ziefle, Martina. "The Users' Perspective on the Privacy-Utility Trade-offs in Health Recommender Systems." International Journal of Human-Computer Studies 121 (2019): 108–121.
Zimmer, Michael. "Addressing Conceptual Gaps in Big Data Research Ethics: An Application of Contextual Integrity." Social Media + Society 4, no. 2 (2018). https://doi.org/10.1177/2056305118768300
Zou, Yixin, Mhaidli, Abraham H., McCall, Austin, and Schaub, Florian. "'I've Got Nothing to Lose': Consumers' Risk Perceptions and Protective Actions after the Equifax Data Breach." In Fourteenth Symposium on Usable Privacy and Security (SOUPS 2018), 197–216. 2018.

References

Altman, Irwin. 1975. The Environment and Social Behavior: Privacy, Personal Space, Territory, and Crowding. Monterey, CA: Brooks/Cole Publishing Company.
Auger, James. 2013. "Speculative Design: Crafting the Speculation." Digital Creativity 24 (1): 11–35. https://doi.org/10.1080/14626268.2013.767276.
Badillo-Urquiola, Karla, Page, Xinru, and Wisniewski, Pamela. 2018. "Literature Review: Examining Contextual Integrity within Human-Computer Interaction." Available at SSRN 3309331.
Barth, A., Datta, A., Mitchell, J. C., and Nissenbaum, H. 2006. "Privacy and Contextual Integrity: Framework and Applications." In 2006 IEEE Symposium on Security and Privacy (S&P'06), 184–198. Berkeley/Oakland, CA: IEEE. https://doi.org/10.1109/SP.2006.32.
Beck, Eevi. 2002. "P for Political: Participation Is Not Enough." Scandinavian Journal of Information Systems 14 (1). https://aisel.aisnet.org/sjis/vol14/iss1/1.
Benthall, Sebastian, Gürses, Seda, and Nissenbaum, Helen. 2017. "Contextual Integrity through the Lens of Computer Science." Foundations and Trends in Privacy and Security 2 (1): 1–69.
Bjerknes, Gro, Ehn, Pelle, Kyng, Morten, and Nygaard, Kristen. 1987. Computers and Democracy: A Scandinavian Challenge. Gower Pub Co.
Bossen, Claus, Dindler, Christian, and Iversen, Ole Sejer. 2016. "Evaluation in Participatory Design: A Literature Survey." In Proceedings of the 14th Participatory Design Conference: Full Papers – Volume 1, 151–160. PDC '16. New York: ACM. https://doi.org/10.1145/2940299.2940303.
Bullek, Brooke, Garboski, Stephanie, Mir, Darakhshan J., and Peck, Evan M. 2017. "Towards Understanding Differential Privacy: When Do People Trust Randomized Response Technique?" In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 3833–3837. CHI '17. New York: ACM. https://doi.org/10.1145/3025453.3025698.
Cavoukian, Ann, and others. 2009. "Privacy by Design: The 7 Foundational Principles." Information and Privacy Commissioner of Ontario, Canada 5.
Chowdhury, Omar, Gampe, Andreas, Niu, Jianwei, von Ronne, Jeffery, Bennatt, Jared, Datta, Anupam, Jia, Limin, and Winsborough, William H. 2013. "Privacy Promises That Can Be Kept: A Policy Analysis Method with Application to the HIPAA Privacy Rule." In Proceedings of the 18th ACM Symposium on Access Control Models and Technologies, 3–14. ACM.
Cohen, Julie E. 2019. "Turning Privacy Inside Out." Theoretical Inquiries in Law 20 (1). www7.tau.ac.il/ojs/index.php/til/article/view/1607.
DiSalvo, Carl, Clement, Andrew, and Pipek, Volkmar. 2012. "Communities: Participatory Design for, with and by Communities." In Routledge International Handbook of Participatory Design, 202–230. Routledge.
DiSalvo, Carl, Jenkins, Tom, and Lodato, Thomas. 2016. "Designing Speculative Civics." In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 4979–4990. CHI '16. New York: ACM. https://doi.org/10.1145/2858036.2858505.
Dwork, Cynthia. 2006. "Differential Privacy." In Proceedings of the 33rd International Conference on Automata, Languages and Programming – Volume Part II, 1–12. ICALP'06. Berlin, Heidelberg: Springer-Verlag. https://doi.org/10.1007/11787006_1.
Dwork, Cynthia, and Roth, Aaron. 2013. "The Algorithmic Foundations of Differential Privacy." Foundations and Trends in Theoretical Computer Science 9 (3–4): 211–407. https://doi.org/10.1561/0400000042.
Ehn, Pelle. 1988. "Work-Oriented Design of Computer Artifacts." PhD thesis, Arbetslivscentrum.
Fairfield, Joshua, and Engel, Christoph. 2017. "Privacy as a Public Good." In Privacy and Power, edited by Russell A. Miller, 95–128. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781316658888.004.
Friedman, Batya, ed. 1997. Human Values and the Design of Computer Technology. Stanford, CA: Center for the Study of Language and Information.
Gordon, Eric, and Baldwin-Philippi, Jessica. 2014. "Playful Civic Learning: Enabling Lateral Trust and Reflection in Game-Based Public Participation." International Journal of Communication 8: 759–786.
Gordon, Eric, and Walter, Stephen. 2016. "Meaningful Inefficiencies: Resisting the Logic of Technological Efficiency in the Design of Civic Systems." In The Playful Citizen, 310.
Green, Ben. 2019. The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future. MIT Press.
Grönvall, Erik, Malmborg, Lone, and Messeter, Jörn. 2016. "Negotiation of Values as Driver in Community-Based PD." In Proceedings of the 14th Participatory Design Conference: Full Papers – Volume 1, 41–50. PDC '16. New York: ACM. https://doi.org/10.1145/2940299.2940308.
Gürses, S., and del Alamo, J. M. 2016. "Privacy Engineering: Shaping an Emerging Field of Research and Practice." IEEE Security & Privacy 14 (2): 40–46. https://doi.org/10.1109/MSP.2016.37.
Gürses, Seda, and van Hoboken, Joris. 2018. "Privacy after the Agile Turn." In The Cambridge Handbook of Consumer Privacy. https://doi.org/10.1017/9781316831960.032.
Gürses, Seda, Troncoso, Carmela, and Diaz, Claudia. 2011. "Engineering Privacy by Design." Computers, Privacy & Data Protection 14 (3): 25.
Huybrechts, Liesbeth, Benesch, Henric, and Geib, Jon. 2017. "Institutioning: Participatory Design, Co-Design and the Public Realm." CoDesign 13 (3): 148–159.
Kanstrup, Anne Marie. 2003. "D for Democracy: On Political Ideals in Participatory Design." Scandinavian Journal of Information Systems 15 (1): 81–85.
Kumar, Priya. 2018. "Contextual Integrity as a Conceptual, Analytical, and Educational Tool for Research," 5.
Kumar, Priya, Naik, Shalmali Milind, Devkar, Utkarsha Ramesh, Chetty, Marshini, Clegg, Tamara L., and Vitak, Jessica. 2017. "'No Telling Passcodes Out Because They're Private': Understanding Children's Mental Models of Privacy and Security Online." Proceedings of the ACM on Human-Computer Interaction 1 (CSCW): 64:1–64:21. https://doi.org/10.1145/3134699.
Lessig, Lawrence. 2000. Code and Other Laws of Cyberspace. New York: Basic Books.
Lodato, Thomas, and DiSalvo, Carl. 2018. "Institutional Constraints: The Forms and Limits of Participatory Design in the Public Realm." In Proceedings of the 15th Participatory Design Conference: Full Papers – Volume 1, 5:1–5:12. PDC '18. New York: ACM. https://doi.org/10.1145/3210586.3210595.
Madden, Mary, and Rainie, Lee. 2015. "Americans' Attitudes About Privacy, Security and Surveillance." Pew Research Center. www.pewinternet.org/2015/05/20/americans-attitudes-about-privacy-security-and-surveillance/.
Martin, Kirsten E. 2012. "Diminished or Just Different? A Factorial Vignette Study of Privacy as a Social Contract." Journal of Business Ethics 111 (4): 519–539. https://doi.org/10.1007/s10551-012-1215-8.
Martin, Kirsten E., and Nissenbaum, Helen. 2015. "Measuring Privacy: An Empirical Test Using Context to Expose Confounding Variables." SSRN Scholarly Paper ID 2709584. Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=2709584.
Marttila, Sanna, Botero, Andrea, and Saad-Sulonen, Joanna. 2014. "Towards Commons Design in Participatory Design." In Proceedings of the 13th Participatory Design Conference: Short Papers, Industry Cases, Workshop Descriptions, Doctoral Consortium Papers, and Keynote Abstracts – Volume 2, 9–12. PDC '14. New York: Association for Computing Machinery. https://doi.org/10.1145/2662155.2662187.
Marwick, Alice E., and boyd, danah. 2014. "Networked Privacy: How Teenagers Negotiate Context in Social Media." New Media & Society 16 (7): 1051–1067. https://doi.org/10.1177/1461444814543995.
Mir, Darakhshan J., Shvartzshnaider, Yan, and Latonero, Mark. 2018. "It Takes a Village: A Community Based Participatory Framework for Privacy Design." In 2018 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), 112–115. IEEE.
Muller, Michael J. 2009. "Participatory Design: The Third Space in HCI." Human-Computer Interaction, March 2, 2009. https://doi.org/10.1201/9781420088892-15.
Nissenbaum, Helen. 2009. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford, CA: Stanford University Press.
Nissenbaum, Helen. 2019. "Contextual Integrity Up and Down the Data Food Chain." Theoretical Inquiries in Law 20 (1): 221–256. https://doi.org/10.1515/til-2019-0008.
Ostrom, Elinor. 1990. Governing the Commons: The Evolution of Institutions for Collective Action. The Political Economy of Institutions and Decisions. Cambridge; New York: Cambridge University Press.
Park, Yong Jin. 2013. "Digital Literacy and Privacy Behavior Online." Communication Research 40 (2): 215–236. https://doi.org/10.1177/0093650211418338.
Petronio, Sandra, and Altman, Irwin. 2002. Boundaries of Privacy: Dialectics of Disclosure. Albany, NY: State University of New York Press. http://ebookcentral.proquest.com/lib/bucknell/detail.action?docID=3408055.
Pilemalm, Sofie. 2018. "Participatory Design in Emerging Civic Engagement Initiatives in the New Public Sector: Applying PD Concepts in Resource-Scarce Organizations." ACM Transactions on Computer-Human Interaction 25 (1): 5:1–5:26. https://doi.org/10.1145/3152420.
Regan, Priscilla. 2016. "Response to Privacy as a Public Good." Duke Law Journal Online, February, 51–65.
Regan, Priscilla M. 1986. "Privacy, Government Information, and Technology." Public Administration Review 46 (6): 629–634. https://doi.org/10.2307/976229.
Regan, Priscilla M. 2000. Legislating Privacy: Technology, Social Values, and Public Policy. University of North Carolina Press.
Regan, Priscilla M. 2002. "Privacy as a Common Good in the Digital World." Information, Communication & Society 5 (3): 382–405. https://doi.org/10.1080/13691180210159328.
Regan, Priscilla M. 2015. "Privacy and the Common Good: Revisited." In Social Dimensions of Privacy: Interdisciplinary Perspectives, edited by B. Roessler and D. Mokrosinska, 50–70. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781107280557.004.
Sanfilippo, Madelyn, Frischmann, Brett, and Strandburg, Katherine. 2018. "Privacy as Commons: Case Evaluation Through the Governing Knowledge Commons Framework." Journal of Information Policy 8: 116–166. https://doi.org/10.5325/jinfopoli.8.2018.0116.
Shilton, Katie. 2018. "Values and Ethics in Human-Computer Interaction." Foundations and Trends in Human-Computer Interaction 12 (2): 107–171. https://doi.org/10.1561/1100000073.
Shilton, Katie, Burke, Jeff, Estrin, Deborah, Hansen, Mark, and Srivastava, Mani B. 2008. "Participatory Privacy in Urban Sensing." http://scholarworks.umass.edu/esence; http://escholarship.org/uc/item/90j149pp.pdf.
Shvartzshnaider, Yan, Balashankar, Ananth, Wies, Thomas, and Subramanian, Lakshminarayanan. 2018. "RECIPE: Applying Open Domain Question Answering to Privacy Policies." In Proceedings of the Workshop on Machine Reading for Question Answering, 71–77.
Shvartzshnaider, Yan, Tong, Schrasing, Wies, Thomas, Kift, Paula, Nissenbaum, Helen, Subramanian, Lakshminarayanan, and Mittal, Prateek. 2016. "Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms." In Fourth AAAI Conference on Human Computation and Crowdsourcing.
Simonsen, J., and Robertson, T. 2012. Routledge International Handbook of Participatory Design. Routledge International Handbooks. Taylor & Francis. https://books.google.com/books?id=l29JFCmqFikC.
Strandburg, Katherine J., Frischmann, Brett M., and Madison, Michael J. 2017. "The Knowledge Commons Framework." In Governing Medical Knowledge Commons, edited by Katherine J. Strandburg, Brett M. Frischmann, and Michael J. Madison, 9–18. Cambridge Studies on Governing Knowledge Commons. Cambridge University Press. https://doi.org/10.1017/9781316544587.002.
Teli, Maurizio, Lyle, Peter, and Sciannamblo, Mariacristina. 2018. "Institutioning the Common: The Case of Commonfare." In Proceedings of the 15th Participatory Design Conference: Full Papers – Volume 1, 6:1–6:11. PDC '18. New York: ACM. https://doi.org/10.1145/3210586.3210590.
Weitzman, Elissa R., Kaci, Liljana, and Mandl, Kenneth D. 2010. "Sharing Medical Data for Health Research: The Early Personal Health Record Experience." Journal of Medical Internet Research 12 (2): e14. https://doi.org/10.2196/jmir.1356.
Wong, Richmond Y., and Khovanskaya, Vera. 2018. "Speculative Design in HCI: From Corporate Imaginations to Critical Orientations." In New Directions in Third Wave Human-Computer Interaction: Volume 2 – Methodologies, edited by Michael Filimowicz and Veronika Tzankova, 175–202. Human-Computer Interaction Series. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-73374-6_10.
Wong, Richmond Y., and Mulligan, Deirdre K. 2019. "Bringing Design to the Privacy Table: Broadening 'Design' in 'Privacy by Design' Through the Lens of HCI." In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 262:1–262:17. CHI '19. New York: ACM. https://doi.org/10.1145/3290605.3300492.
Wood, Alexandra, Altman, Micah, Bembenek, Aaron, Bun, Mark, Gaboardi, Marco, Honaker, James, Nissim, Kobbi, O'Brien, David R., Steinke, Thomas, and Vadhan, Salil. 2018. "Differential Privacy: A Primer for a Non-Technical Audience." Vanderbilt Journal of Entertainment & Technology Law 21: 209.

References

Frischmann, Brett M., Madison, Michael J., and Strandburg, Katherine Jo, eds. Governing Knowledge Commons. Oxford University Press, 2014.
Frischmann, Brett, and Selinger, Evan. Re-Engineering Humanity. Cambridge University Press, 2018.
Gorham, Ashley Elizabeth, Nissenbaum, Helen, Sanfilippo, Madelyn R., Strandburg, Katherine J., and Verstraete, Mark. "Social Media Is Not a Context: Protecting Privacy Amid 'Context Collapse'." Proceedings of the American Political Science Association (APSA) 2020 Virtual Annual Meeting & Exhibition, September 10, 2020. http://tinyurl.com/ybfnz4bb.
McGinnis, Michael Dean. Polycentricity and Local Public Economies: Readings from the Workshop in Political Theory and Policy Analysis. University of Michigan Press, 1999.
Ostrom, Elinor. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press, 1990.
Sanfilippo, Madelyn, Frischmann, Brett, and Strandburg, Katherine. "Privacy as Commons: Case Evaluation through the Governing Knowledge Commons Framework." Journal of Information Policy 8 (2018): 116–166.
Strandburg, Katherine J., Frischmann, Brett M., and Madison, Michael J., eds. Governing Medical Knowledge Commons. Cambridge University Press, 2017.
Figure 8.1 The Institutional Analysis and Development (IAD) framework
Figure 8.2 The Governing Knowledge Commons (GKC) framework
Table 8.1 Types of rules
Figure 8.3 Cyber regime complex map (Nye, 2014, 8)
Table 9.1 Conceptual overlap between CI and Institutional Grammar (GKC) parameters
Table 9.2 Survey scenarios with corresponding aspects of the GKC framework
Table 9.3 Smart home GKC-CI parameters selected for information flow survey questions
Figure 9.1 Example baseline information flow question
Figure 9.2 Example question with varying condition parameters
Figure 9.3 Where respondents learn about the privacy implications of IoT devices
Figure 9.4 Average perceptions of information flows by parameter. This figure illustrates the average participant opinion of information flows controlled to specific examples of information type and subject, modalities, recipients, and senders.
Figure 9.5 The impact of specific parameters in changing respondent perceptions of information flows. This figure indicates the average change in perceptions in response to specific examples for each parameter; it does not indicate initial perceptions, in contrast to Figure 9.4.
Figure 9.6 User actions in response to third-party sharing scenarios
Table 9.4 Average perceptions of information flow appropriateness gauged by respondent subpopulations. For each subcommunity we calculate the number of respondents and the average perception score across information flows, including consequence, condition, and aim.
Table 11.1 Voice-shaped commons breakdown (case studies in this volume are in bold)
Table 11.2 Exit-shaped commons breakdown (case studies in this volume are in bold)
Table 11.3 Imposed commons governance
Table 11.4 Updated GKC framework (with supplementary questions in bold)
