I. Introduction
In this essay, I explore ethical considerations that might arise from the use of collaborative filtering algorithms on dating apps. Collaborative filtering algorithms build recommendations for users on online platforms: they learn from the preferences of users who exhibit similar behavior in order to predict the preferences of a target user and recommend content or products that match those predictions. Collaborative filtering systems have been deployed successfully on platforms such as Amazon and Google. While using collaborative filtering to sort through products or movies may seem harmless, using the same systems to power dating apps could raise distinct ethical concerns. These considerations are especially relevant as a majority of new couples meet online: a 2017 survey shows that 39 percent of 3,510 surveyed couples in the United States met online, a higher percentage than for any other method of meeting (27 percent met at a restaurant or bar, and 20 percent met through friends).Footnote 1 Another 2013 survey shows that between 2005 and 2013, close to 35 percent of couples in the United States met their spouse online, with about half of those meetings happening on dating sites.Footnote 2 Through recommender systems, dating apps increasingly influence whose profiles users can see and match with, and so whom they date and potentially marry.
I start by explaining how collaborative filtering algorithms predict preferences to build recommendations for users. I then show that race is a confounding factor in dating app recommendation systems. Finally, I argue that collaborative filtering can affect user behavior. My goal is to establish that filtering algorithms can homogenize user behavior and deepen existing patterns of sexual and romantic bias. Since users have little control over the process, and since race plays a confounding role in how user preference is determined, this process warrants closer ethical scrutiny.
II. Collaborative Filtering
I am concerned with dating apps that use algorithms to recommend potential matches to users. For example, Tinder makes recommendations to users both in its “Top Picks” section (a collection of ten recommended profiles that a user is issued daily) and in the more general pool of user profiles, where recommended profiles are shown first. Another example is Hinge’s “Most Compatible” feature, which pairs two users every day based on the users’ past activity on the app and their interests.Footnote 3 These apps usually show users one profile at a time, giving them two options: if they are sexually or romantically interested, they will “like” the user or “swipe right” on their profile, and if they are not, they will “swipe left.” If two users are interested in one another, they match and can start a conversation. The data from this process is used to make future recommendations and to determine which profile is shown next. The algorithms that power recommended matches are usually inaccessible to the user and to the public, but we have strong reason to believe that those algorithms are similar to other collaborative recommender systems. I will start by looking at how collaborative filtering algorithms predict preferences to build recommendations.
Given the sheer amount of content available online, recommender systems are crucial in helping users choose among the abundance of movies, news articles, or products on a platform. Collaborative filtering algorithms narrow that abundance of choices down to specific recommendations that are predicted to match the user’s preferences. The idea behind collaborative filtering is that if groups of users show similar patterns of preferences, the preferences of one user can be predicted from the past behavior of similar users. In other words, by collecting data on the preferences of users collectively, the algorithm predicts the preferences of an individual user, builds recommendations that match those predictions, and filters the content the user can access on the platform. Recommendations prioritize some options over others, while filtering limits the choices that are available to the user. A simple example: let’s say that most online shoppers who buy chips also buy salsa. By collecting data on user shopping behavior, a filtering algorithm learns the high correlation between buying chips and buying salsa. When target users add chips to their virtual cart, they are grouped with all the previous users who bought chips and salsa. Their future interest is then predicted from the past behavior of those users collectively; this prediction leads to a message that should be familiar to anyone who has shopped online: “You may also like” salsa. Simply put, because most people who buy chips also buy salsa, if a target user were to buy chips, the algorithm would predict that the user may respond favorably to a recommendation to buy salsa.
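To make this concrete, here is a minimal sketch of the co-occurrence logic described above. The toy carts and product names are invented for illustration; production systems learn from vastly larger datasets and use more sophisticated models (for example, matrix factorization), but the underlying intuition is the same:

```python
from collections import defaultdict

# Toy purchase histories; a real system would learn from millions of carts.
carts = [
    {"chips", "salsa"},
    {"chips", "salsa", "soda"},
    {"chips", "guacamole"},
    {"bread", "butter"},
]

def recommend(item, carts, top_n=1):
    """Recommend the items that most often co-occur with `item` in past carts."""
    counts = defaultdict(int)
    for cart in carts:
        if item in cart:
            for other in cart - {item}:
                counts[other] += 1
    # Rank co-purchased items by how often they appeared alongside `item`.
    return sorted(counts, key=counts.get, reverse=True)[:top_n]

print(recommend("chips", carts))  # ['salsa'] -- "You may also like" salsa
```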
The sheer quantity of data available and the relative ease of creating recommender systems that are blind to content make collaborative filtering algorithms both practical and effective. First, an enormous amount of implicit data can be gathered from a simple interaction between the user and the platform. Explicit data about preference can be gathered through rating systems (a star ranking on a product or a comment left on a page), but implicit data about user preference is easier to gather and does not require users to spend time rating content or products. Implicit data includes anything from users’ shopping history to which products they look at, which links they click, and how much time they spend on a given page. Second, neither explicit nor implicit data requires any information about the content of the recommendation (for example, the quality of a product or the genre of a movie) or any knowledge about the user (for example, demographics). Data about content and demographics is extremely hard to gather, so a recommender system that can be effective without it is preferable.
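As a rough illustration of how cheaply implicit data can be collected, the sketch below logs the kinds of behavioral signals just described. The field names and values are hypothetical; each platform defines its own event schema:

```python
from dataclasses import dataclass
import time

@dataclass
class ImplicitEvent:
    """One implicit signal: no rating required, just observed behavior."""
    user_id: str
    item_id: str
    action: str        # e.g., "view", "click", "add_to_cart"
    dwell_seconds: float
    timestamp: float

# A single page visit yields several such events "for free," with no
# knowledge of what the item is or of the user's demographics.
events = [
    ImplicitEvent("u42", "salsa-01", "view", 12.5, time.time()),
    ImplicitEvent("u42", "salsa-01", "add_to_cart", 0.0, time.time()),
]
```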
Since collaborative filtering algorithms work with abundant and reliable data, they have been deployed to filter recommendations on popular platforms. And they work! Recommendations are extremely successful in influencing user behavior. “At Netflix, 2/3 of the movies watched are recommended; at Google, news recommendations improved click-through rate (CTR) by 38%; and for Amazon, 35% of sales come from recommendations.”Footnote 4 Filtering can also be highly effective: Iyengar and Lepper show that when we are given less choice, we act faster, whether that is buying a product, watching a movie, or chatting with a matchFootnote 5: “they ran an experiment where they had two stands of jam on two different days. One stand had 24 varieties of jam while the second had only six. The stand with 24 varieties of jam only converted 3% of the customers to a sale, while the stand with only six varieties converted 30% of the customers. This was an increase in sales of nearly ten-fold!”Footnote 6 On the surface, recommender systems benefit both users and platforms. On the one hand, users can more quickly make a choice they are satisfied with, wasting less time browsing and filtering through results themselves. On the other, by showing recommended items first and filtering out unwanted items, a business enjoys both higher customer satisfaction and better sales, click-through, or viewing rates.
However, those advantages might come at a cost. By simulating a community of users interacting with items on an online platform, Chaney, Stewart, and Engelhardt show that collaborative recommender systems increase homogeneity in users’ behavior without necessarily increasing utility (see Figure 1).Footnote 7 Because users are more likely to make the choice that is recommended to them, and because similar users receive similar recommendations, users tend to converge on the same choices. The collaborative filtering system picks up on those choices and, in turn, prioritizes them in its recommendations. The recommendations are then amplified through a feedback loop: users choose recommended products, and products are recommended because users choose them.
Figure 1. Recommender systems learn users’ preferences from their interactions on the platform, which leads to recommendations that in turn shape users’ future interactions. This results in a feedback loop.Footnote 8
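The dynamic in Figure 1 can be reproduced with a few lines of simulation. What follows is a highly simplified sketch of such a feedback loop, not Chaney, Stewart, and Engelhardt’s actual model; the population sizes and the 80 percent acceptance rate are arbitrary assumptions:

```python
import random

N_ITEMS, N_USERS, ROUNDS = 20, 100, 50
popularity = [1] * N_ITEMS  # interaction counts the recommender learns from

for _ in range(ROUNDS):
    for _ in range(N_USERS):
        # The recommender favors items that other users already chose...
        recommended = random.choices(range(N_ITEMS), weights=popularity)[0]
        # ...and users usually accept what is recommended (here, 80% of the time).
        chosen = recommended if random.random() < 0.8 else random.randrange(N_ITEMS)
        popularity[chosen] += 1  # each choice feeds back into future recommendations

# A handful of items end up dominating everyone's recommendations.
print(sorted(popularity, reverse=True)[:5])
```

Running this repeatedly shows a rich-get-richer effect: early random fluctuations in popularity are amplified until most users are funneled toward the same few items.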
The ethical implications of an increase in the homogeneity of behavior are not necessarily obvious when we consider products on Amazon, music on Spotify, or movies on Netflix. However, other empirical research examines YouTube’s recommender systemFootnote 9 to consider how our beliefs might be affected by filtering algorithms. This research is motivated by anecdotal reports of increased recommendations of conspiracy theories on the platform. The study confirms that some topics, such as “natural foods” or “firearms,” are likely to lead the viewer, through a series of recommendations, to videos that promote conspiracy theories. This is one example of a larger phenomenon that Alfano, Carter, and Cheong call technological seduction: “technologically-mediated cognitive pressures and nudges that subtly but systematically induce acceptance of problematic beliefs.”Footnote 10 Through learned correlations, recommendations can thus turn reasonable searches into recommendations of extreme content.
When we consider the context of dating apps, the user is browsing not through products or news but through a potential dating pool. If filtering algorithms can homogenize behavior and polarize beliefs, can they also affect our romantic and sexual desires? My goal is to offer the reader reasons to believe so. The extensive research on designing collaborative filtering algorithms for dating apps ignores the effect those algorithms might have on users’ sexual and romantic behavior.Footnote 11 I will mainly focus on race since there is established empirical research I can rely on, but I suspect that dating apps can shape many kinds of preferences and behaviors, and my discussion might generalize from race to other issues.
III. Race and Online Dating
The first step toward building a collaborative filtering algorithm is to figure out how to group similar users together. For example, on Google News and on YouTube, people are often grouped together along a political spectrum. This allows conservative users to receive news from conservative sources, and liberal users to receive news from liberal sources. This grouping leads to the creation of epistemic structures known as filter bubbles.Footnote 12 In filter bubbles, relevant voices are excluded by omission: when algorithms impose epistemic filters on us, important views are left out of the information we receive, and the “echoing” testimonies that remain can inflate how confident we are in our beliefs.Footnote 13 On dating apps, there is strong reason to believe that race is an important grouping factor. If dating app users are grouped by race, then mechanisms similar to filter bubbles could segregate the potential dating pool along racial lines, reinforcing existing patterns of preference and homogenizing behavior.
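How might an app group users in practice? One standard approach is to measure the similarity between users’ past choices. The following is a minimal sketch, assuming swipe histories are encoded as binary vectors; the toy data and user labels are invented, and real systems are far more elaborate:

```python
import numpy as np

# Rows: users; columns: candidate profiles; 1 = swiped right, 0 = swiped left.
# Toy data: users A and B like mostly the same profiles; user C likes others.
swipes = np.array([
    [1, 1, 0, 0, 1],  # user A
    [1, 1, 0, 1, 1],  # user B
    [0, 0, 1, 1, 0],  # user C
])

def cosine(u, v):
    """Cosine similarity between two users' swipe vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(swipes[0], swipes[1]))  # ~0.87: A and B get grouped together
print(cosine(swipes[0], swipes[2]))  # 0.0: C receives different recommendations
```

Note that nothing in this computation refers to the content of the profiles: the grouping emerges entirely from patterns of behavior.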
Some dating apps allow users to identify their race and the race that they would prefer in a romantic or sexual partner. If users choose to share this data, the algorithm can easily group people by race and learn their preferences from the explicit data it has access to. But even when users refuse to state any race or racial preference, the collaborative data still allows the algorithm to make predictions and recommendations that might fall along racial lines. One example is the dating app Coffee Meets Bagel. With anecdotal stories of users only receiving recommendations of their own race, even when they had no stated preferences, the app developers explained:
Currently, if you have no preference for ethnicity, our system is looking at it like you don’t care about ethnicity at all (meaning you disregard this quality altogether, even so far as to send you the same every day). Consequently we will send you folks who have a high preference for [users] of your own ethnic identity, we do so because our data shows even though users may say they have no preference, they still (subconsciously or otherwise) prefer folks who match their own ethnicity. It does not compute “no ethnic preference” as wanting a diverse preference. I know that distinction may seem silly, but it’s how the algorithm works currently.Footnote 14
The upshot here is that algorithmic filtering can override individual preference, even when such preference is explicitly stated, because the preferences of users collectively might form better predictions of successful matches. In other words, the algorithm makes predictions based on implicit aggregate data rather than explicit individual data, as if it could predict your preferences better than you yourself can.
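A sketch of what such an override might look like in code follows. This is hypothetical: Coffee Meets Bagel’s actual implementation is not public, and the `Profile` fields are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    ethnicity: str
    stated_preference: set = field(default_factory=set)  # empty = "no preference"

def candidate_pool(user: Profile, candidates: list) -> list:
    if user.stated_preference:
        # An explicit preference is honored directly.
        return [c for c in candidates if c.ethnicity in user.stated_preference]
    # "No preference" is NOT treated as a desire for diversity; the system
    # falls back on the aggregate pattern that users tend to match within
    # their own ethnicity.
    return [c for c in candidates if c.ethnicity == user.ethnicity]
```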
Other dating apps do not ask their users to explicitly state their race or ethnicity. However, as mentioned, filtering algorithms can still pick up on patterns of behavior while being blind to content. Christian Rudder, founder of OkCupid, explains that “racial neutrality is only in theory” since the algorithm can easily guess the race of users based on other characteristics on their profiles. Rudder says that “one of the easiest ways to compare a black person and a white person (or any two people of any race) is to look at their match percentage,” which is OkCupid’s way to determine compatibility.Footnote 15 To return to our simple example, the algorithm does not need to know about the relationship between “chips” and “salsa” in order to learn to recommend salsa to anyone who buys chips. All that is needed is a high correlation between buying one and buying the other. Similarly for dating apps, the algorithm need not know anything about the race of the users, but if people of the same race or ethnicity behave similarly, then the algorithm will be able to group them together without users stating their race on their profile.
Indeed, there is overwhelming evidence that dating platform users segregate themselves along racial lines. Between 2009 and 2014, OkCupid published data about its users that reflects their racial preferences. The data shows that, overall, people show a preference for others of their own race. Men, except for black men, are significantly less likely to rate a black woman’s profile favorably compared to the profiles of women of other races. Asian men and black men are subject to this same bias, except from women of their own race.Footnote 16 Another survey of over six thousand heterosexual internet dating profiles shows that white men and white women are significantly less likely to be excluded from dating and sexual considerations: “Asians, blacks and Latinos are more likely to include whites as possible dates than whites are to include them.”Footnote 17 Black people are ten times more likely to reach out to a white person than white people are to reach out to a black person.Footnote 18 A number of other empirical studies confirm those trends: users on online dating platforms seem to segregate themselves based on race, exclude people of color from consideration (except those of their own race), and generally show a preference for white men and women.Footnote 19 This exclusionary behavior is extremely common on dating apps for gay and queer men. Since users tend to be anonymous, many state their preferences explicitly in their profiles: “no blacks, no Asians,” “white only,” and so on.Footnote 20
Before we move on, let me make two quick points about the data. First, with those numbers in mind, I want to reiterate that the algorithm does not need to classify users by their race or ethnicity to make recommendations that follow racial categories. Take, for example, the profile of a heterosexual black man on an app like Tinder. Asian women will, statistically, rate the profiles of black men lower than those of other men. The algorithm can learn not to recommend his profile to users who exhibit similar patterns of preference (other Asian women), without knowing anything about the race of the users. Second, the racial demographics of dating apps reflect the larger demographics of Internet users in the United States. For example, on OkCupid, about 80 percent of users are white (compared to 78 percent of Internet users).Footnote 21 And so, if we consider the larger dataset that the algorithm is learning from, it will lean toward the racial preferences of white users. Regardless of how users are grouped, race will be a strong confounding factor in their recommendations. For a great example of how such data can affect recommendations, I direct the reader to MonsterMatch.Footnote 22 The website allows users to build a fictional dating app profile and swipe right and left on profiles of monsters and humanoids. It simulates the algorithm that powers dating apps and shows users exactly which profiles were left out of their dating pool and why.
To sum up, the collaborative filtering algorithms that power dating apps learn to classify users by race, since the preferences of racial groups are usually similar enough to warrant such grouping. Racial groups on dating apps tend to segregate themselves, preferring people of their own race. Beyond this same-race preference, users tend to show a preference for white users, men seem to show a bias against black women, and women seem to show a bias against Asian men. Since correlations lead to recommendations through filtering, users on dating apps will be recommended other users of their own race. And if they are grouped with users regardless of race, users will be recommended white users at higher rates, heterosexual men will see fewer black women in their recommendations, and heterosexual women will see fewer Asian men in theirs.
IV. Shaping Our Sexual and Romantic Preferences
The last step in my argument is to establish that the recommendations that result from these filtering algorithms can affect user behavior. On platforms like Google and YouTube, affecting user behavior and preferences is exactly why filtering algorithms are deployed in the first place. We have also seen evidence that such filtering works: recommendations are extremely effective at creating structures such as filter bubbles. It is not surprising that this effect could extend to the dating realm when the same technologies are deployed to filter whom we might find romantically or sexually attractive. I will end this section by raising some potential issues with the influence of algorithmic matchmaking.
First, dating apps exclude users from others’ dating pools as a result of collaborative filtering. The effects of filtering are obvious: if you don’t see someone’s profile, then you cannot match or start a conversation with that person. Second, dating apps actively suggest some users as “good matches,” and recommendations make dating app users more willing to interact with others. A study conducted by OkCupid concludes that “when we tell people they are a good match, they act as if they are [even] when they should be wrong for each other.”Footnote 23 The power of recommendations is especially relevant when considering the literature on implicit bias. The imagery that we are exposed to can greatly influence the implicit biases that we hold toward groups of people.Footnote 24 Through their recommendations, dating apps can influence who users see as a “good match,” affecting who they consider desirable. Mechanisms similar to Alfano, Carter, and Cheong’s technological seductionFootnote 25 could then be at play on dating apps: pressures and nudges that subtly but systematically affect who we match with, talk to, and eventually date. As online dating platforms become increasingly popular, there is little doubt that filtering that happens outside of users’ control affects their romantic and sexual behavior.
Filtering and recommendations can even ignore individual preferences, prioritizing collective patterns of behavior to predict the preferences of individual users. This effectively homogenizes the behavior of those who are grouped together: they will receive the same recommendations and will tend toward matching with the same people. Not only do users have no control over which group they are placed in, but the algorithm is also likely to pick up on racial categories to form those groups, ignoring users whose preferences deviate from the statistical norm. The recommender system can further amplify this process through a feedback loop: if users are repeatedly recommended others of their own race, they will match with people of their own race at higher rates than with others. The algorithm then takes this as further evidence for its existing pattern of recommendations. And so, if dating apps are influencing users’ behavior, they do so by homogenizing this behavior through collaborative recommender systems and by deepening racial biases through feedback loops.
The reader might think that no harm is done in this process. After all, we see no issue with recommender systems on Amazon or Spotify prioritizing some products over others. Even recommender systems that amplify filter bubbles are not obviously reprehensible; they seem problematic only when they lead to epistemically questionable practices and false beliefs. Indeed, one might think that dating apps are all the more appealing now with the power of algorithmic matchmaking, even when the patterns that the algorithm learns and amplifies reflect deep racial biases. After all, sexual desires, and desires in general, resist moral criticism. We think of our preference for a certain body type or hair color as out of our control and deeply personal. It would be strange for someone to praise us for being attracted to someone or blame us for our lack of attraction to someone else. Megan Mitchell and Mark Wells argue that we are morally justified in excluding certain people from our dating pool.Footnote 26 Xiaofei Liu also argues that there is nothing wrong with what he calls “simple looksism”: that certain physical features are “deal breakers” for our sexual or romantic consideration is perfectly okay.Footnote 27 If the algorithm can accurately determine and predict users’ sexual and romantic preferences, then whatever patterns dating apps pick up on and extend should be irrelevant to a moral evaluation of the algorithmic filtering.
Yet this comes in sharp contrast with another intuition some readers might have: to exclude everyone of a certain race from any romantic or sexual consideration seems problematic. After all, as we have seen in Section III, patterns of romantic and sexual attraction in the United States often reflect larger patterns of exclusion. Liu, for example, argues that there is a morally relevant difference between simple looksism and racial looksism: racial looksism is an overgeneralization, assuming that people of a certain race will always look a certain way, when race does not determine the way a person looks.Footnote 28 Additionally, Mitchell and Wells argue that racialized sexual and romantic biases carry morally relevant social meaning grounded in a history of discrimination, including, for example, prohibitions on interracial marriage.Footnote 29 If this is right, then dating apps contribute to this wrong by exacerbating problematic sexual and romantic biases, since they can homogenize and deepen exclusionary preferences.
This leads to a tension in the design choices a dating app might adopt. If we accept an obligation to resist deepening racial bias through filtering, then recommender systems ought to be designed in a way that avoids racially exclusive recommendations. But why should the algorithm resist the preferences of users who do hold such biases? At this point, we are asking dating apps to serve a function beyond the one we started with, which was simply to learn user preferences and build recommendations based on them. An algorithm that resists biased preferences cannot do so without serving the preferences of some users and not others, which is exactly the problem we started with.
Regardless, I believe the issue lies deeper than biased recommendations: users have absolutely no control over the filtering that determines who they see on dating apps. As mentioned, stated preferences are sometimes overridden by algorithmic predictions; using collaborative data in the context of dating apps seems to override extremely personal sexual and romantic desires. One interesting suggestion might address the tension we have encountered: Hutson et al. argue that with randomized recommendations, users can break out of the patterns that the algorithm reinforces.Footnote 30 This does not mean that the project of filtering is scrapped altogether. Rather, random recommendations are mixed into the filtered results to allow users to explore beyond the algorithm’s limits. If dating apps allow their users to branch out from what the algorithm considers a safe match, they could break the patterns that the recommender system amplifies.
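One way such randomization might be implemented is sketched below. This is a minimal illustration of the idea, not Hutson et al.’s proposal in detail; the function name and the number of random slots are assumptions:

```python
import random

def recommend_with_exploration(algorithmic_picks, full_pool, k=10, n_random=2):
    """Fill most of the k slots with the algorithm's picks, but reserve a few
    for profiles drawn at random from outside those picks."""
    picks = list(algorithmic_picks[: k - n_random])
    outside = [p for p in full_pool if p not in picks]
    picks += random.sample(outside, min(n_random, len(outside)))
    random.shuffle(picks)  # avoid signaling which results are the random ones
    return picks
```

Even a couple of random slots per day would let users encounter profiles the filtering would otherwise never surface, without scrapping the recommender altogether.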
V. Conclusion
In this essay, I have discussed concerns raised by the use of collaborative filtering on dating apps. I have argued that collaborative filtering is especially effective at homogenizing behavior and amplifying existing patterns of preference. Dating app users often segregate themselves by race, showing a preference for people of their own race, and other racial biases are at play online when we look at larger patterns of preference and behavior. Deploying collaborative filtering algorithms on dating apps can therefore homogenize the behavior of users of the same race and deepen existing racial biases among online daters.
One goal of this essay has been to show the extent to which recommender systems can influence user behavior. Extensive research shows how effective recommender systems are on shopping platforms and social media. If recommender systems can affect what we buy and what we watch, and if those same systems are deployed on dating apps, then we have strong reason to think they are also influencing whom we date. Another goal has been to bring attention to how collaborative filtering algorithms can learn from and amplify existing patterns of behavior. There has been recent interest in news filtering and the creation of filter bubbles: existing beliefs are echoed through news recommendations, artificially inflating our confidence in those beliefs. Similarly, a dating app user who exhibits certain patterns of sexual or romantic preference will have those patterns exacerbated through a feedback loop. Finally, I hope that this essay is a first step toward bringing together recent work on algorithmic justice with the rich literature on sexual and romantic desire. It is extremely challenging to think about how our desires are shaped and whether they can hold moral value. Looking at dating apps allows us to study these issues in a controlled and artificial environment; I hope, however, that my discussion does not avoid the hard questions by simplifying the reality of dating, but rather sets up a framework to address them. The ethics of dating cannot be divorced from discussions of online dating: as I mentioned, more new couples now meet online than by any other method.