Policy Significance Statement
Artificial intelligence is rapidly being integrated into cities, often leaving policymakers unable to keep up. When corporations are given control over the integration of AI into urban centres, they can sidestep existing regulations in favour of their own governance mechanisms. This paper proposes integrating a feminist policy praxis into smart city development. A feminist approach addresses the risks of rapid datafication while embedding an intersectional, impact-based community engagement process in smart city governance and design. This ensures that residents are protected under a rights-based legal structure that understands the multifaceted risks posed by AI development, while also giving citizens the tools to hold corporate actors accountable.
1. Introduction
As urban infrastructure becomes interwoven with algorithmic processes, questions must be asked about who these technologies serve. While corporate partnerships promoting urban AI development present “smart” technology as beneficial for all inhabitants, these technologies reflect histories of social marginalization, exploitation, and surveillance (Benjamin, 2019; Eubanks, 2018; O’Neil, 2016; Scheuerman et al., 2019). This research analyses the case of Sidewalk Labs’ Quayside project as demonstrative of broader issues facing public–private partnerships in urban AI governance. While the Quayside project was officially abandoned due to “financial constraints” (Doctoroff, 2020) resulting from the COVID-19 pandemic, an in-depth analysis demonstrates that the state of algorithmic governance in Toronto, and Canada more broadly, reflects a multifaceted story—one that highlights the importance of citizen participation, a rejection of corporate governance, and an urban environment that reflects the needs of its residents. Most importantly, it epitomizes critical considerations for policymakers, as this analysis raises questions of how justice and power are wielded in the age of smart cities. In proposing a feminist approach to smart city governance, this research demonstrates the need to address the potential risks of data universalism with a policy strategy that accounts for the multifaceted and contextual makeup of the urban environment.
2. Data in the smart city
In Andrejevic and Burdon’s (2015) depiction of the “sensor society”, infrastructures become infused with systems designed to facilitate the mass capture of data. The sensor society departs from the traditional mode of data collection as targeted and discrete, instead favouring continuous, constant data capture processes. Subjects in this case are reduced to mere points within a greater data system, stripping these interactions of their context. Rob Kitchin (2017) further articulates this desire for more data in the city in his depiction of data-driven urbanism, in which the drive for Big Data systems influences and controls urban performance and responses. In this condition, a city’s success is not measured through local and situated analyses, but through the collection, amelioration, and mobilization of data according to a framework that prioritizes pragmatic and efficient solutions. As more data are gathered and evaluated for progress, additional data simultaneously need to be captured to enable that search for progress. Yet, those “producing” the data—often by simply existing within the environment itself—are unable to fully understand their contributions, as the means of control often lie beyond the user. As such, the sensor society is a phenomenon in which the passive, constant collection of data is implicated within a broader system of control.
In the smart community, the sensor society comes to life. Initiatives to build “smart cities” seek to integrate large-scale data collection into urban environments. While fully functional smart communities currently remain theoretical aspirations, these aspirations rely on the simultaneous integration of networked infrastructure, Big Data, machine learning processes, and data analytics to create solutions to urban problems (Kitchin, 2017; Botero Arcila, 2022). Proposals for smart communities may differ in the type of infrastructure they aim to transform, but are united in their desire to integrate ICTs into everyday environments to produce an optimised city experience (Powell, 2021). This idea of the optimised city, however, did not begin with the advent of ICTs. Rather, this practice reflects a larger view of the city as a business venture, made up of services and interests to be purchased, controlled, and sold (Frug, 1999). Following the Second World War, urban scientists began experimenting with the city as a logistical system of control, demonstrating their presence as “clear precursors to contemporary calls for cities to be built from the internet up” (Mattern, 2021, 64). In the 1980s and 1990s, machine learning developments and ICT infrastructure expansion aimed to integrate data into the very foundation of urban life (Kitchin, 2017). With this combination of infrastructure and a decades-long research programme into data collection, it is no surprise that corporate actors embraced calls for a “smart” city through large-scale investment in data-driven urbanism (Kitchin, 2017), as it unites the desire to sell services in the city with a push to find optimised solutions to urban issues.
Among those intrigued by the notion of the smart city as an optimization problem was Daniel Doctoroff, the Chief Executive Officer of Sidewalk Labs. Doctoroff’s work implementing the LinkNYC programme in 2015 earned the corporation international praise, despite numerous concerns surrounding privacy and corporate surveillance. From then on, Sidewalk Labs sought to extend their reach through the implementation of a large-scale “smart” community in Toronto, Ontario: a series of sensors that would create the infrastructure necessary for self-driving cars, environmentally safe and temperature-controlled buildings, smart trash removal, AI-moderated health records, and other digital interventions (Sidewalk Labs, 2019a). Each of these interventions created the ability to gain meaningful (and profitable) data about residents’ whereabouts, habits, and practices. Together, they would produce the “smart” sensor society, where constant asymmetrical and opaque data collection enables predictive technology that can be marketed and sold. The reduction of civilian activity to data within this “smart community” was presented as objectively beneficial—not only for the economy but for citizens themselves (Waterfront Toronto, 2017).
A feminist study of smart urbanism is necessary to challenge these data cultures and assert the need for a localized form of data governance. Smart infrastructures collect data to be used in evaluation schemas that are presented as neutral, yet these schemas are themselves built upon histories of isolation, control, and unequal categorisation. Kate Crawford (2021) highlights how the mugshots used to train facial recognition algorithms within the NIST dataset are promoted as neutral and devoid of context. However, behind these images are consequences, as “neither the people depicted in the photographs nor their families have any say about how these images are used and likely have no idea that they are part of the test beds of AI” (Crawford, 2021, 92). Despite being “publicly available”, these photographs are not neutral but reflect broader histories within the criminal justice system, where communities of colour are often subjected to increased surveillance and police presence (Maynard, 2017; Browne, 2015; Benjamin, 2019). While this is just one example, this depoliticisation of data is part of “a shift from image to infrastructure, where the meaning or care that might be given to the image of a person, or the context behind a scene, is presumed to be erased at the moment it becomes part of an aggregate mass that will drive a broader system” (Crawford, 2021, 94). Within the smart city, the mass capture of data enabled by an always-on surveillance regime results in a collection process where individual context is obscured. A focus on the Quayside project demonstrates the inherent locality of smart cities, a locality that underscores the need to re-contextualize complex urban issues. Only through this process can we develop the context necessary to build “smart-er” solutions that do not rely on narratives of objective and universal data.
Applying a feminist lens to both this case and its future policy implications is an integral step in building these solutions, as it works directly to challenge universalism and expose invisible power relations. Developing stronger algorithmic governance procedures requires this rejection of universalism in order to centre values of meaningful participation and accountability, so that these structures of black-boxed power (Pasquale, 2015) can continue to be exposed and challenged.
3. Sidewalk Labs and the pitfalls of corporate algorithmic governance
In the mid-2000s, Waterfront Toronto, an organisation created in collaboration with the federal, provincial, and municipal governments, purchased a plot of land that would later house the Quayside project for $68 million (Auditor General of Ontario (AG), 2019). This land was purchased to “build affordable housing, provide public access to the water’s edge, enable streetcar track extensions, locate an energy plant and enable other development opportunities” (AG, 2019, 662). Nearly a decade later, Waterfront Toronto published a Request for Proposals (RFP) that aimed to innovate and transform the space. On October 17, 2017, 6 months after the initial RFP, Sidewalk Labs was announced as the successful bidder (Waterfront Toronto, 2017). This announcement came incredibly quickly, as the initial finalists were selected a mere 6 weeks after the RFP was issued (AG, 2019, 44). The discovery of communications between the Chief Planning and Design Officer of Waterfront Toronto and the CEO of Sidewalk Labs was presented as potential evidence of unfair competition, as the two were reportedly engaged in frequent communication between 2016 and the issuance of the RFP. Even before the development of a proposal for smart infrastructure, questions about the ethics of public–private partnerships in Toronto were arising.
Throughout the next two and a half years, Sidewalk Labs would host five public roundtables, establish a Residents Panel, and publish the long-awaited Master Innovation and Development Plan (MIDP) (Sidewalk Labs, 2019a). Yet, with each “public engagement” activity, Sidewalk Labs faced numerous questions surrounding the logistics and monetisation of data. Residents and academics questioned the ambiguous proposal of sensors within the neighbourhood that would enable smart decision-making, asking about control, management of personal information, and anonymity (Wylie, 2018; Public Consultation Question List, n.d.; Zarum, 2019). In answer to these concerns, Sidewalk Labs developed a data protection structure: the data collected in the community would be known as “urban data”—information gathered in the public realm, publicly accessible spaces, and certain private buildings (Sidewalk Labs, 2019a)—and held in a trust. Yet, the category of “urban data” does not exist within Canadian legislation, leaving no precedent for an Urban Data Trust to work from, other than Sidewalk’s own Responsible Data Use Guidelines (Sidewalk Labs, 2019b). While data trusts have been presented as potential solutions to ambiguous data governance (Delacroix and Lawrence, 2019), Sidewalk Labs’ proposal highlights how accountability becomes difficult when corporate actors ignore diverse needs to define their own data governance methods.
3.1. First pitfall: Failure to create informed consent
When implementing any new urban innovation, informed consent is an important factor in guaranteeing the protection of citizens. As a practice, informed consent prioritises the digital rights of citizens, making them an active part of the process. Within the Quayside project, Sidewalk Labs treated citizen feedback as an additive feature, rather than a central aspect of the project’s development. The first pitfall I will examine is Sidewalk Labs’ failure to adhere to a proper informed consent model. This failure resulted in minimal citizen engagement, making it difficult to achieve public trust.
Within Toronto, three pieces of legislation are especially important with regard to public–private partnerships and data collection: the Municipal Freedom of Information and Protection of Privacy Act (MFIPPA, 1990), the Freedom of Information and Protection of Privacy Act (FIPPA, 1990), and the Personal Information Protection and Electronic Documents Act (PIPEDA, 2000). These three acts work in conjunction to provide strict privacy protections regarding the collection of personal information, with PIPEDA operating at the federal level, FIPPA at the provincial level, and MFIPPA at the municipal level. While the three differ in scope—PIPEDA covering private entities and FIPPA/MFIPPA covering public institutions—all three highlight the importance of consent and the ability to “opt out” as essential to privacy protection. PIPEDA creates the conditions in which corporations are deterred from unethical data practices through ten principles safeguarding the collection of personal data: accountability; identifying purposes; consent; limiting collection; limiting use, disclosure, and retention; accuracy; safeguards; openness; individual access; and challenging compliance (PIPEDA Fair Information Principles, 2019). FIPPA and MFIPPA complement this practice, as they apply to public institutions and aim to protect an individual’s right to privacy while ensuring that data are collected for specific purposes such as administration or law enforcement (Beamish, 2019, 2). In proposing “urban data” as a new category, Sidewalk Labs was able to sidestep these regulations, which apply only to personal data, subsequently neglecting the need for consent and participation.
In failing to adhere to the requirements of existing legislation, Sidewalk Labs violated Canadians’ personal and collective privacy rights, as informed consent was impossible to achieve within smart city infrastructure (CCLA, 2019, 4). Smart neighbourhoods present an unacknowledged risk of coercion, as individuals living, working, or passing through these areas often have no alternative to the “smart” services existing within the neighbourhood (Beamish, 2019). As citizens would be unable to meaningfully opt out of data collection within the space, the potential for surveillance and commoditization contradicted the precedent established by federal privacy legislation, in addition to the rights to Freedom of Assembly and Association, Life, Liberty and Security of the Person, and protection against Unreasonable Search or Seizure as outlined in the Canadian Charter of Rights and Freedoms (CCLA, 2019, 9). As Ellen Goodman makes clear in her affidavit regarding informed consent in the Quayside neighbourhood, “consent is insufficiently ‘knowing’ when the user does not understand the technology being agreed to or the practical consequences of agreeing. Consent is insufficiently ‘free’ when a person must choose between consent and the loss of an important function or asset. The project poses both of these dangers to a significant degree” (Goodman, 2020, 13). Sidewalk Labs aimed to impose infrastructure that would have strong impacts on citizens’ daily lives and, as such, should have treated informed consent with care and caution. If people do not understand the implications of mass data capture, they will be unable to hold corporate actors and governments accountable. This becomes especially essential when technology is integrated into urban spaces, as simply telling people not to enter a neighbourhood is not always a possibility.
Yet, these issues are not solely representative of the Quayside project. Rather, the current global structure of data regulation itself relies on an interlocking value system in which the ideas of notice and consent are at the forefront. Canada’s PIPEDA, the UK’s Data Protection Act (2018), and both the EU and UK General Data Protection Regulation (GDPR, 2016; Information Commissioner of the UK, 2021) share a commitment to a regulatory framework in which individuals can both know when their information is being processed and consent to its collection. Similar to PIPEDA’s Fair Information Principles, the GDPR outlines seven key principles according to which personal information is to be governed: lawfulness, fairness, and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality; and accountability (GDPR, 2016, 5.1). These sets of principles outline the values behind these policy decisions, encouraging practices of transparency and consent.
On deeper reflection, these principles are far from effective in promoting safe data practices, as they place an overwhelming onus on the individual to find, understand, and then challenge or accept the collection of their data. Here, the desire for pragmatic solutions to complex problems treats transparency as a direct path to accountability. However, these legislative principles fail to account for the full problem at hand—a lack of digital literacy and a feeling of powerlessness when face-to-face with Big Data (Obar, 2015). Data protection in its current form privileges seeing over understanding, where an increase in transparency is equated with an increase in knowledge, and subsequently an increase in control (Ananny and Crawford, 2018). Yet, being given information does not mean that meaningful change will occur. Nor does it mean that those who are supposed to hold a system accountable will be able to do so. Rather, a sole focus on transparency fails to reach a critical stage: one that recognizes the deployment of smart infrastructure as a source of collective harms and demands meaningful engagement with the construction of community-based systems. The implementation of intelligent decision-making and data collection cannot be regulated with a sole focus on individuality. Instead, data governance needs to develop clear policies aligned with feminist protocols of challenging power, providing residents and citizens with the tools to critically evaluate this information (Kern, 2019). Informed (and participatory) consent within digital policy is an incredibly important practice for ensuring ethical technological intervention. Introducing a new category of data with no clear legislative protections fails to ensure that legitimate consent is possible.
As the Quayside project highlights, developing new standards built solely on the perception of transparency fails to address the need for collective support and engagement.
3.2. Second pitfall: Promotion of corporate governance
To regulate this new category of data, Sidewalk Labs created the Urban Data Trust—an external regulatory body responsible for making decisions surrounding data in Quayside. The roles of the Urban Data Trust involved the “approval and management of data collection devices placed in the public realm, as well as addressing the challenges and opportunities arising from data use, particularly those involving algorithmic decision-making” (Sidewalk Labs, 2019a, p. 420). The data-sharing agreements were to be reminiscent of data licence agreements and enforceable in court (Sidewalk Labs, 2019a). However, Sidewalk Labs also stipulated that this trust was not a legal trust, and failed to identify what these court proceedings would look like or how data breaches could be legally enforced. Further, the cases themselves would be evaluated not according to Canadian legislation, but according to Sidewalk Labs’ own Responsible Data Use Guidelines (Sidewalk Labs, 2019b). The establishment of their own criteria for responsible data use, combined with the creation of an entirely new category of data, effectively made Sidewalk Labs the new expert on the ethical use of data in smart cities. Rather than turning to the government for guidance, a corporate actor sought to create the new rules of the game.
The issue at hand involves the notion of corporate governance as essential to the implementation of the Quayside project. The increasing prevalence of corporate governance can be seen in the Trust’s ability to turn residents’ data into assets. The process of creating an asset-based understanding of data is outlined by Artyushina (2022) as constitutive of five interconnected processes: “introducing new legal definitions for the data, retrofitting city infrastructure with data-tracking devices, creating a self-certification regime for data collectors, accumulating the data collected in the smart city in one physical location, and establishing IP-intensive data sharing agreements” (Artyushina, 2022, 8). In turning residents’ data into assets for evaluation, the data trust aimed to establish data as an infrastructural element of urban life, rather than as part of a transactional relationship, redefining the rules of data governance according to Sidewalk Labs’ own procedure.
Further, there was a distinct lack of government consultation, ultimately resulting in serious consequences and a failure to protect the rights of residents. Contrary to its previous operating practices, Waterfront Toronto did not adequately consult with any level of government regarding the Sidewalk Labs project (AG, 2019, 650). The scope of the project, from self-driving vehicles to data collection, falls under multiple provincial and federal ministries and city departments, yet Waterfront Toronto did not adequately consult with any of them before signing an initial agreement (AG, 2019, 650). In 2002, Waterfront Toronto established an Intergovernmental Steering Committee made up of representatives from the municipal, federal, and provincial governments (Eidelman, 2013). Yet, Sidewalk Labs and Waterfront Toronto failed to adequately use the committee’s expertise and knowledge to their advantage. The steering committee was consistently in the dark regarding decisions surrounding Quayside; the Auditor General of Ontario found that the committee was only made aware of the name of the successful bidder 5 days before the public announcement (AG, 2019, 649). Upon further investigation, the Auditor General found that the steering committee itself was not provided with a framework or guide to support its decision-making process (AG, 2019, 681).
Following the proposal of the Urban Data Trust, Waterfront Toronto’s Digital Strategy Advisory Panel (DSAP) rejected the notion of urban data entirely (DSAP, 2020). Instead, they asserted that data collected through Quayside would be understood within the existing Canadian legal structure. Yet, given the limited control and input the Intergovernmental Steering Committee had over the Quayside project, how these laws would shape governance processes remained vague. These concerns about government participation and the establishment of new trusts and categories of data governance present an important consideration moving forward: a rigorous process is needed to protect the rights of citizens so that it is not corporations who establish the rules of the game.
Public–private partnerships present complexities in governance, especially with regard to data. When viewed through a feminist lens that aims to see the “not-seen” (Smith, 2015), these complexities represent deeper systemic issues regarding mass surveillance and human rights. While these proposals are marketed under the guise of public benefit, enabling corporate governance carries drastic consequences for civilians. As Sidewalk Labs aimed to promote an environment where technology solves all issues, the Quayside project ignored the risks that a hyper-surveilled space poses to certain populations. Creating seemingly innovative solutions without considering their full socio-political consequences risks further marginalizing people who already experience direct harm from these technologies at multiple intersecting levels. Where Sidewalk Labs may promote increased surveillance as an efficiency or safety mechanism, it poses risks to people of colour, women, and gender minorities, who are misrecognised by this technology more frequently than others (Buolamwini and Gebru, 2018; Marshall, 2019). When data is mobilized as a source of power, collecting more of it risks upholding hierarchies of discrimination. Technology does not impact everyone equally, and the assumption that employing algorithms would “solve” the problems of a city vastly ignores the issues that technology cannot fix. In proposing to integrate an entire neighbourhood with surveillance technology, Sidewalk Labs subsequently isolated marginalized residents from safely accessing their own city.
If technology is to be integrated into cities, it needs to occur through a process in which civilians can meaningfully engage with development procedures while knowing that their rights are thoroughly protected under the law. Within Toronto, the proposal to build a city “from the internet up” (Sidewalk Labs, 2019a) came alongside a series of infrastructural changes. Yet, Sidewalk Labs is neither the only corporation, nor the first, to attempt to integrate corporate governance into the inner workings of a city. Rather, the push for corporate governance within the Quayside project reflects a larger practice of corporate capture in the city. As corporate actors sit at the cutting edge of innovation, research, and development, their technology becomes integrated into urban environments through an increasing reliance on public–private partnerships. As ICTs make data collection easier than ever, data are emerging as a key resource that—if not properly protected—can be exploited in search of profit (Sadowski, 2019). The challenge to public–private partnerships in urban “smart” environments highlights that corporate actors have an increased incentive to collect data en masse within the city. Here, the initial investment is not the sole source of profit. Rather, the ability to constantly collect and distribute key informational data about a community promises a stronghold on a valuable resource of the 21st century (Hollands, 2015). As corporate actors retain their position as the experts in the field of smart infrastructure, they can promote their own forms of universalist application and self-regulation without acknowledging the economic benefits this data collection provides them. In favouring universalism over a localized understanding of urban issues, smart cities ignore context and systemic issues in search of increased profit.
4. Combatting corporate governance: establishing feminist protocols in urban technology development
In May 2020, the Quayside project was abandoned. While this is only one case of a public–private partnership created to develop a “smart” neighbourhood, such initiatives exist within the broader scope of corporate tech projects worldwide that aim to present the collection and usage of data as universally beneficial. Yet urban life cannot, and should not, be reduced to mere “objective” data points. Data is not a universal, uncomplicated answer. In promoting an intersectional feminist praxis within data policy, I build from a feminist critique of universalism (Haraway, 1988) to argue for viewing data—and cities themselves—as local and contextualised, representing the multiple interactions that take place alongside the data itself. This approach challenges dominant systems of power while acknowledging that “we must make connections between all of these practices and begin addressing inequities in material…that take seriously the knowledge, experiences, and contributions of people of colour” (Cahaus, 2022, p. 6). To view data as contextualized, policymakers must treat data as reflective of broader sociotechnical contexts of interpretation and production (Loukissas, 2019). This involves a direct rejection of data collection as an autonomous process, recognizing it instead as one with disparate impacts, embedded in the data cultures that govern and shape these sociotechnical relations (Roberge and Seyfert, 2016; Bates, 2017). Even when created from seemingly good intentions, data-driven urbanism, as articulated through a corporate drive for profit and endless collection, has its shortcomings.
The current approach to public–private partnerships centred on data-driven urbanism is predicated on values of efficiency and technological solutionism that are often far removed from urban plans, meaningful accountability measures, and participation initiatives (Lauriault et al., 2018). In exposing data cultures as reflective of systemic biases, I bring to light how the Quayside project’s emphasis on a one-size-fits-all solution failed to address the multifaceted ways people experience daily life in Toronto, and establish the need to move away from self-regulation and towards collective data governance structures.
Despite prominent solutionist rhetoric, there are alternative ways to address urban problems that do not promise universal solutions, but instead highlight the strength of a localised, contextual approach. I argue that, when guided by the community, technology can be an important tool for recognising the multifaceted ways people experience urban life, shifting power from corporations to those living in the city itself. To demonstrate this, I turn to members of the community who are working to change the ways both cities and algorithms are designed. Learning from these interventions, policymakers can understand the strength of an intersectional and localized approach to algorithmic governance within the city. In the following section, I outline a critical approach to the construction of a feminist smart city, in which policy and participation play key roles. Taking a feminist approach to this development involves understanding the relationship between policy and participation as one of mutual influence, in which the experiences of marginalized and at-risk communities—those who are often “not-seen”—are valued. Here I articulate two primary processes through which a feminist smart city can be developed: participation in design and localized policy.
4.1 Participation in design
While slowness is a common complaint against urban bureaucracy, I argue for leaning into it. From a feminist perspective, moving slowly involves a critical analysis of who is “not-seen” and subject to hyper in/visibility in decision-making (Smith, Reference Smith, Dubrofsky and Magnet2015). This process integrates collaboration from the start, working to develop technological solutions in tandem with the communities they are meant to serve, rather than imposing corporate technology as a “universal” benefit. It also involves concrete accountability measures, through which the public can understand what happens to their data. The slowness of urban bureaucracy is often accompanied by confusion and frustration about being “out of the loop” with regard to the process. My articulation here emphasises the need for residents to be viewed as direct collaborators in the process of urban development, rather than as an additive feature later in the process. Taking this approach to designing a smart city means putting the experiences of, and potential consequences for, those impacted front and centre. Drawing from the key principles of an open smart city as articulated by Open North’s open smart cities project (Lauriault et al., Reference Lauriault, Bloom and Landry2018), I emphasise that a feminist perspective sees slowness not as a hindrance, but as an opportunity to mobilize technology and data when warranted, according to ethical, accessible, and transparent standards. This approach therefore builds upon calls to recognise that data and technology cannot solve many of the systemic issues cities face, nor are there always quick fixes (Mattern, Reference Mattern2021; Powell, Reference Powell2021).
Where Sidewalk Labs sought to integrate public opinion into their project as an additive feature, an open smart city would build itself with public interest and contribution as its initial stepping stones. A design justice framework would help organise citizen involvement. In this approach, designers are required to look to the community for guidance. Sasha Costanza-Chock (Reference Costanza-Chock2020) describes designers engaged in this process as practitioners who aim to build upon the ways in which members of the community are already working together to face a challenge. Taking a design justice framework involves not only considering the perspectives of community members but genuinely involving them in the design process itself. This important work is already being done by the groups behind the Critical Engineering Manifesto (Reference Oliver, Savičić and Vasiliev2021), the Feminist Data Manifest-No (Reference Cifor, Garcia, Cowan, Rault, Sutherland, Chan, Rode, Hoffmann, Salehi and Nakamura2019), and the More than Code Report (Reference Costanza-Chock, Wagoner, Taye, Rivas, Schweidler and Bullen2018), each of which highlights the importance of design as a process that involves meaningful engagement with those most impacted by technology. To embed just design practices and open smart city frameworks in a city, there is no need to reinvent the wheel, but there is a need to reflect on these critical interventions in current design theory.
4.2 Localized governance
However, it must be understood that design cannot solve everything. Design as a process goes hand-in-hand with developing clear policy initiatives to uphold the rights of residents. Currently, there is a call to redefine Canadian privacy legislation to reflect the conditions of the digital age. As of early 2024, Bill C27—The Digital Charter Implementation Act—has yet to be passed. The proposals for Bill C27 involve introducing comprehensive AI legislation based on a risk-assessment framework, while also replacing the Personal Information Protection and Electronic Documents Act (PIPEDA) with an act that addresses the need for Canadians to be able to access, erase, correct, and transfer data about themselves (Bill C27, 2022). While this new approach to data governance addresses the need to develop strategies for the digital age, Bill C27 has already received criticism for its inability to address collective rights (Duncan and Wong, Reference Duncan and Wong2022), its lack of focus on shared prosperity and community engagement (Brandusescu and Sieber, Reference Brandusescu and Sieber2023), and its failure to integrate an intersectional, human rights-based framework (Kim and Thomasen, Reference Kim and Thomasen2023). Governance in the age of artificial intelligence requires meaningful citizen engagement to address the multifaceted ways in which technology used in urban spaces has intersectional, far-reaching impacts. A critical policy intervention therefore involves moving away from the self-regulatory precedent and ensuring that the human rights of residents are not sacrificed in the development of these policies.
This articulation builds from the current push within critical AI studies to approach artificial intelligence development from a localized, context-based perspective. It draws from Katz and Nowak’s (Reference Katz and Nowak2017) articulation of new localism, in which “local jurisdictions are increasingly taking it upon themselves to address a broad range of environmental, economic, and social challenges, and the domain of technology is no exception” (Verhulst, Reference Verhulst, Brandusescu and Reia2022, 80). Within artificial intelligence communities, there has been an increase in specifically localized approaches to governance as a way to enable public participation in artificial intelligence development. Recent examples of AI localism include San Francisco’s ban on AI-powered facial recognition technology (Conger et al., Reference Conger, Fausset and Kovaleski2019), New York City’s rules on automated decision-making in hiring (Francis, Reference Francis2023), Helsinki and Amsterdam’s public registries of AI systems used in local government (Team AI Regulation, 2020), Montreal’s City Council motion against facial recognition (Serebin, Reference Serebin2020), and Barcelona’s citizen watch initiatives (March and Ribera-Fumaz, Reference March, Ribera-Fumaz, Karvonen, Cugurullo and Caprotti2018). A key element of this appeal to localized governance is that it works as a decentralised, rather than fragmented, approach. Here, a decentralised approach aims to learn from other local regulatory strategies to build a specific and contextual regulatory response, rather than applying them universally. While there are certain inalienable rights—particularly to privacy—this approach builds upon those rights to encourage a collaborative policy approach, while drawing on the skills that other municipalities have developed to find what fits best with a community’s goals and needs.
Beyond ensuring governance principles in the abstract, the notion of accountability in artificial intelligence policy involves creating a culture of trust. To this end, Brudvig highlights the importance of consent as an ongoing project involving “a) revoking tools or systems that are not democratically agreed upon; b) opening up spaces for local creation of new tools and systems; c) decentralised governance or custodianship of data and knowledge systems; d) inclusion of local leaders in decision-making; and e) interdisciplinary collaboration in spaces of technology creation, policy and decision-making” (Brudvig, Reference Brudvig, Brandusescu and Reia2022, 29). Each of these acts reflects a commitment to creating a transparent and accountable city that prioritises the needs of its residents. Achieving meaningful transparency is itself a process that engages not only the public but also stakeholders, governing bodies, and corporations. Ensuring that transparency is interpretable is therefore necessary to develop this culture of trust, as accountability can only be enacted when civilians can understand how and why a system needs to change (Brauneis and Goodman, Reference Brauneis and Goodman2018, 132). Here I second Robert Brauneis and Ellen Goodman’s call for a shift away from the pull model of transparency in algorithmic governance—where individuals have to pull the information they wish to know—and towards a push model, in which decision-makers make their information publicly accessible and interpretable. Implementing accessible policy therefore goes hand-in-hand with ensuring participatory methods that engage the public at each stage of development, so that all interests are recognised. In prioritising the development of a localized approach to policy, smart cities can be developed with the interests of residents at the forefront.
5. Conclusion: rethinking the current approach to AI governance
The consequences of Sidewalk Labs’ Quayside project are reflective of a broader issue in the governance of smart infrastructure. The project raised deeper concerns about the consequences of public–private partnerships in the development of these programmes, as such partnerships often fail to address the disproportionate risk that algorithmic decision-making poses to marginalized groups. As I have detailed, there is a need to restructure the ways privacy is governed in relation to smart city development. When corporate actors are able to develop and set their own precedent for the storage, collection, and usage of data, citizens have no assurance that their rights will be protected by legislation. Now more than ever, there is a need for comprehensive and strong governance mechanisms to ensure that the multifaceted risks of surveillance technology within smart cities are addressed. While new attempts to govern technological development in Canada have their particular shortcomings, there are clear benefits to developing civilian-centred governance regimes. Throughout this analysis, I have highlighted the need to develop systems of algorithmic governance that meaningfully engage those who will be put in harm’s way through the mass integration of smart technology in urban infrastructure. This approach is best conceived through a critical feminist praxis, which approaches smart city development through the simultaneous promotion of participation in design and localized policymaking. In this way, smart cities can be designed from the outset with an understanding of the ways marginalized populations are put at risk by surveillance technology in urban spaces, while also incorporating a governance programme that upholds the rights of citizens and gives them meaningful and accessible accountability tools.
Enacting both of these approaches concurrently provides the potential for a revitalized approach to navigating governance in the smart city, in which technology is implemented as a method to support communities, rather than as a way to extract data from them.
Data availability statement
None.
Acknowledgments
The author is grateful for the administrative support and feedback provided by Dr. Jonathan Sterne and Dr. Alex Ketchum of McGill University, Montreal, as well as to the anonymous reviewers who provided essential feedback.
Author contribution
Conceptualization—L.M.; Data curation—L.M.; Formal analysis—L.M.; Funding acquisition—L.M.; Investigation—L.M.; Methodology—L.M.; Project administration—L.M.; Writing—original draft—L.M.; Writing—review and editing—L.M.
Provenance
This article is part of the Data for Policy 2024 Proceedings and was accepted in Data & Policy on the strength of the Conference’s review process.
Funding statement
None.
Competing interest
None.