
Artificial Intelligence and Data Harvesting: An Interview with Carissa Véliz

Published online by Cambridge University Press:  28 June 2023

Carissa Véliz*
Affiliation:
Faculty of Philosophy, University of Oxford, Oxford, UK; Institute for Ethics in AI, University of Oxford, Oxford, UK
Stephen Law*
Affiliation:
Department of Continuing Education, University of Oxford, Oxford, UK
*Corresponding authors. Email: [email protected], [email protected]

Abstract

An exploration of the risks and benefits of AI, particularly regarding privacy.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press on behalf of The Royal Institute of Philosophy

Carissa is here interviewed by THINK editor Stephen Law.

Stephen Law: Your recent book, Privacy is Power: How and Why You Should Take Back Control of Your Data, addresses issues in digital privacy and surveillance, and how internet companies are harvesting more and more of our personal data. What data are these companies harvesting, and for what purpose? What ethical issues does their activity raise?

Carissa Véliz: All the data you can possibly imagine: what you search for, what you eat, how fast you drive, who you sleep with, your weight, your car and other possessions, how much you earn, how much you spend, your health record, your location data, and much, much more. They collect so much data to earn money. Sometimes they sell that data to insurance companies, banks, prospective employers, governments, or marketing companies. Sometimes they use that data to sell access to you through personalized ads.

The data economy raises all kinds of ethical issues. Arguably, you are not consenting to that data collection, because much of it happens without your knowing about it, and even when you formally ‘consent’, it is not really informed consent, because you can't possibly know what kinds of inferences will be made from that data or where it might end up. And data collection is not harmless. It can have grave consequences, from being denied a loan, a job, or housing, to social consequences like damage to our democracies, as when data firms like Cambridge Analytica try to sway elections using personalized propaganda. Having so much personal data stored is also a national security risk, as it can be used for intelligence purposes.

SL: What is Artificial Intelligence? Should we be particularly concerned about the application of Artificial Intelligence to the harvesting of personal data? Could you give a concrete example of how AI is being used?

CV: Artificial intelligence (AI), roughly, is when algorithms display behaviour that either is intelligent or mimics intelligence.

One of the reasons to be concerned about AI is how it's being used to make inferences about people. For instance, AI can be used to infer sexual orientation or other sensitive information about people from data that doesn't seem all that sensitive, like music taste. Other concerns about AI using personal data to make decisions have less to do with privacy and more to do with bias, discrimination and unfairness.
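To make the proxy-inference worry concrete, here is a minimal, hypothetical sketch in Python using NumPy and scikit-learn. The dataset, feature names, and correlation are all invented for illustration; the point is only that an off-the-shelf classifier, given nothing but innocuous 'music taste' scores, can recover a sensitive attribute it was never shown directly, simply because the two happen to correlate.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Entirely synthetic data: each row is a user's affinity for five genres.
    n_users = 1000
    music_taste = rng.normal(size=(n_users, 5))

    # Suppose the sensitive attribute happens to correlate with genres 0 and 3.
    # The correlation need not be causal; its mere existence is the privacy risk.
    sensitive = (music_taste[:, 0] + music_taste[:, 3] > 0).astype(int)

    # A plain off-the-shelf classifier recovers the attribute from taste alone.
    model = LogisticRegression().fit(music_taste[:800], sensitive[:800])
    print(f"held-out accuracy: {model.score(music_taste[800:], sensitive[800:]):.0%}")

Nothing in the model's inputs looks sensitive on its face, which is why consent given for 'harmless' data collection can still expose intimate facts.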

SL: What should we, as individuals, do to protect ourselves against invasion of our privacy? And what should governments do?

CV: We can use privacy-friendly devices and apps. Instead of Google Search, use DuckDuckGo; instead of WhatsApp, use Signal; instead of Gmail, use ProtonMail. We can ask companies to delete our data. We can respect other people's privacy to create a respectful culture. Governments should ban the trade in personal data. We don't buy or sell votes, and for many of the same reasons, we shouldn't buy or sell personal data.

SL: Can you illustrate how bias, discrimination and unfairness might result from applying AI to our personal data?

CV: There are many examples. A few years back, Amazon designed an algorithm to hire employees, and the algorithm turned out to be sexist: it was biased against women. What happened was that the algorithm was trained on historical data, and over the previous ten years Amazon had mostly hired men, so anything on a CV that marked it out as a woman's (e.g. having been part of the women's soccer team) signalled to the algorithm that this was not the kind of person who had been a successful Amazon employee.
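The mechanism described here, a model inheriting bias from the decisions it is trained to imitate, can be shown in a few lines. The sketch below is hypothetical (synthetic data, an invented proxy feature, and a deliberately planted historical bias), not a reconstruction of Amazon's actual system: a classifier fitted to biased past hiring outcomes learns a negative weight on a feature that merely correlates with being a woman, even though gender is never an input.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 2000

    # Entirely synthetic CVs: a genuine qualification score, plus a binary
    # proxy feature loosely standing in for e.g. "women's soccer team" on a CV.
    skill = rng.normal(size=n)
    proxy = rng.integers(0, 2, size=n)

    # Historical hiring decisions: mostly tracking skill, but biased against
    # candidates with the proxy feature. Gender itself is never recorded.
    hired = (skill - 1.5 * proxy + rng.normal(scale=0.5, size=n) > 0).astype(int)

    # A model trained to imitate past decisions reproduces the bias: the
    # learned weight on the proxy feature comes out strongly negative.
    model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)
    print("weight on proxy feature:", round(model.coef_[0][1], 2))

Removing the protected attribute from the inputs does not remove the bias; as long as some feature correlates with it, a model trained on biased labels will find and penalize that feature.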

SL: Taking a large step back, what's distinctive about the contribution that you, as a philosopher, bring to the discussion of these issues?

CV: A few things. Philosophers can offer conceptual analyses that can be useful in making ethical decisions about public matters. Philosophical analysis can lead to better decisions, and to better explaining (and justifying) a decision once it's made. Conceptual analyses can sharpen debates, shorten them, and sometimes make them less repetitive and inconclusive.

Conceptual analyses include:

  • Clarifying concepts: Make sure people are talking about the same thing. On occasion, such clarification may lead to problems dissolving (Wittgenstein) – some disagreements amount to misunderstandings.

  • Providing nuance: Like other disciplines, academic ethics has developed a precise technical language that can provide more nuance than ordinary language about morality (e.g. permissible, impermissible, required, supererogatory).

  • Drawing out implications: Some proposals seem like a good idea until we cash out their undesirable theoretical or practical implications.

  • Identifying contradictions and fallacies: Public discourse, from the media to Parliament, is filled with fallacies. Philosophers can identify faulty arguments.

  • Questions of fact vs value: A continuing source of confusion in public debates is whether something is a matter of fact. Consider the example of death. We used to think that whether someone is dead was a medical or biological question. Then bioethicists successfully argued that it is partly a question of value (what do we mean by death? The death of the body? Of the person? Of consciousness?). From the point of view of ethics, the most important question has become: when does someone lose the rights and interests typical of a living person?

On the theoretical side:

  • Ethical theories can be helpful guidelines when thinking about new practical cases. In turn, practical cases sometimes make evident the limits or mistakes of our theories and help us improve them; those improved theories can then be useful for future cases. One result is progress throughout the history of philosophy: consensus is reached on some issues, and even when it is not, the theories that emerge from decades of debate are much more polished than their original versions. Today's consequentialism is much more nuanced than, say, Bentham's.

Philosophers can also be good at identifying moral problems. Before the development of bioethics, many medical practices that today are analysed through the lens of ethics were not thought to be ethically problematic. For example: not informing patients of their diagnosis; randomizing patients to treatment or placebo without informing them that they were involved in research; allowing students to practise invasive examinations on anaesthetized patients without their consent. All these things used to be done by the medical profession without a second thought. The first step towards improving ethical practices is identifying moral problems in the first place.

Philosophers can also inspire moral thought by encouraging public debates on important questions. And philosophy can also offer its experience in matters of ethics, from normative ethics to medical ethics, business ethics, and beyond.

SL: As AI develops further, what would you be most concerned about? What are the most significant moral issues AI raises, beyond digital privacy?

CV: In a nutshell, we have to think about how to design AI so that, in both the short and the long run, we can look back and be happy that we developed it in the first place. And by ‘we’ I mean society. It's not enough for AI to be profitable for a few people; AI has to benefit humankind. Without good governance, we could be worse off having AI than if we'd never invented it at all. It could lead to growing inequality, to unfairness (including racism and sexism), and to the destruction of our natural resources, among other problems. It could even bring down democracy.