
Digital Detox 2.2: Data and Digital Redlining

[Image: aerial photo of a city street with a red line painted down the middle, flanked by a building and a stand of trees]
[This post was sent as a newsletter to our Digital Detox participants. We will post all Digital Detox newsletters to our blog, so stay tuned!]

Written by Dr. Amy Collier, Associate Provost for Digital Learning

Have you ever heard of Facebook’s “real name” policy? Facebook’s policy requires users to sign up for accounts using their real names, saying that the policy helps to reduce harassment by fake accounts. Seems reasonable, yes? But, who gets to decide what a “real name” is? And how might the policy be wielded to deny access or to exclude/marginalize people?

In 2016, Dr. Tressie McMillan Cottom faced the darker side of the real name policy after she responded to a public Facebook post and showed support for students protesting at her university. Someone reported Dr. Cottom for being in violation of the real name policy (her username was Tressie McPhd) and Facebook locked and shut down her account. Other scholars reported similar experiences, often when other users were trying to silence them.

The real name policy has also affected people whose real names are inappropriately flagged as fake, such as people with Native American tribal names or Tamil names (which do not include surnames). According to the Electronic Frontier Foundation, transgender people and other members of the LGBTQ community are also disproportionately impacted by the policy.

Facebook’s policy is a form of digital redlining: technological policies and practices that discriminate across groups of people, particularly marginalized people. Digital redlining is often used as a verb to highlight the intentional actions of technology companies, internet service providers, and others that discriminate against specific people and groups and reinforce class divisions (Gilliard, 2017).

Digital redlining usually involves both individual and group profiling, drawing on streams of data collected from apps, platforms, electronic transactions, public records, medical records, metadata, and other data mined or purchased, to provide differentiated experiences and opportunities to users or groups of users. This differentiation is sold to us as a good thing (as “personalization”), but it is often laden with discrimination, and the decisions about what we see and have access to are often obscured and inscrutable (what Frank Pasquale calls the “black box society”). To make matters worse, we have little to no control over what data are collected or how these companies use them.

Facebook’s real name policy is just one form of digital redlining we see in today’s digitally-mediated and data-hungry world. Here are other examples:

  • Stark and Diakopoulos (2016) found that Uber’s surge-pricing algorithms, which redistribute drivers to serve higher-demand areas, favor white neighborhoods and that the “association between people of color and wait times holds true even when household income, poverty rates, and population density are accounted for.”
  • Internet service providers, including Google’s Fiber project, have ignored low-income urban and rural areas in their plans to deliver high-speed/broadband internet service (see also this Atlanta Black Star article). In 2018, inadequate access to internet service led thousands of people in Arkansas to lose their Medicaid coverage under new requirements that recipients complete 80 hours of work per month and report those hours via an online tool.
  • An investigation by the Wall Street Journal showed that Staples, Home Depot, and other retailers adjusted online prices based on data about the online shopper; shoppers in higher-income areas were shown better deals and pricing options. The ACLU, reporting on this investigation, noted that brick-and-mortar stores will soon begin tracking shoppers’ phones (and all associated data) when they enter stores and adjusting offers and pricing based on those data.
  • In 2015, Facebook applied for a patent on a tool that analyzes a user’s friend network and provided the following use-case: “When an individual applies for a loan, the lender examines the credit ratings of members of the individual’s social network who are connected to the individual […]. If the average credit rating of these members is at least a minimum credit score, the lender continues to process the loan application. Otherwise, the loan application is rejected.” While it’s unclear whether Facebook has actually allowed this practice, it is presumably possible for a bank to use Facebook data, including data protected by anti-discrimination laws, to make lending decisions (a minimal sketch of this screening logic appears after this list). Banks and other companies “can smuggle proxies for race, sex, indebtedness, and so on into big-data sets and then draw correlations and conclusions that have discriminatory effects” (Taylor and Sadowski, 2015).
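
To make that patent use-case concrete, here is a minimal, hypothetical Python sketch of the screening logic it describes. The function names and the 650-point threshold are illustrative assumptions, not Facebook’s or any lender’s actual code.

```python
# Hypothetical sketch of the loan-screening logic described in Facebook's
# 2015 patent filing. Names and the threshold are illustrative assumptions,
# not any company's actual implementation.

MINIMUM_AVERAGE_CREDIT_SCORE = 650  # assumed cutoff, for illustration only

def average_network_credit_score(friend_credit_scores):
    """Average the credit scores of the applicant's social-network connections."""
    if not friend_credit_scores:
        return 0
    return sum(friend_credit_scores) / len(friend_credit_scores)

def continue_processing_application(friend_credit_scores):
    """Mirror the patent's use-case: proceed only if the friends' average
    credit rating meets the minimum score; otherwise reject."""
    return average_network_credit_score(friend_credit_scores) >= MINIMUM_AVERAGE_CREDIT_SCORE

# The applicant's own finances never enter the decision; who they are
# connected to acts as a proxy, and that is where the redlining comes in.
print(continue_processing_application([720, 680, 700]))  # True: application proceeds
print(continue_processing_application([580, 600, 610]))  # False: application rejected
```

Even in this toy form, the problem is visible: the decision depends entirely on the applicant’s social connections, which correlate with neighborhood, race, and class, rather than on the applicant themselves.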

Take Action

Digital redlining is a complex and difficult problem to solve, especially since platforms like Facebook have business models built on these practices. We believe, however, that there are actions you can take to help protect yourself and others. Here are some ideas:

  • Protect yourself (and your loved ones) by reducing your data footprint
  • Advocate for change
  • Teach about digital redlining

Keep Reading!



Sharing on social media?
Please use #DLINQdigdetox

