ProPublica Reveals Discriminatory Pricing By Computer Algorithms

ROBERT SIEGEL, HOST:

With each interaction we have online, companies collect data about us - what car we own, how big our mortgage is. Companies keep files on us, and they use those files to decide what to sell us and at what price. That's right. The price of a product is not the same for everyone. You may be getting a better deal or a worse deal than your friend or your co-worker.

Well, Julia Angwin, a senior reporter at ProPublica, has spent the year with her team looking at algorithms, including those that companies use to decide what they will charge us for a product. And she joins us now. Welcome to the program.

JULIA ANGWIN: It's great to be here.

SIEGEL: How are companies using our information to set prices?

ANGWIN: So basically every website you visit creates itself the moment you arrive, and you know that because you can see that your ads are customized to you. But in fact the whole page could well be customized, and we have found cases where companies determine the price of a product based on where you live.

The one we found most recently was the Princeton Review, whose online SAT course had different prices in different ZIP codes across the nation, ranging from $6,600 to $8,400.
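
To make the ZIP-code pricing concrete, here is a minimal sketch of how a tiered, location-based price lookup like the one Angwin describes could work. The ZIP prefixes, tier assignments, and individual dollar amounts are illustrative assumptions, not Princeton Review's actual pricing data; only the overall $6,600-$8,400 range comes from the story.

```python
# Minimal sketch of ZIP-code-based tiered pricing, as described in the interview.
# The tier assignments and most dollar amounts are illustrative assumptions only;
# this is NOT Princeton Review's actual pricing table.

# Hypothetical mapping from three-digit ZIP prefixes to price tiers.
ZIP_PREFIX_TIERS = {
    "100": "tier_3",  # assumed high tier (e.g. parts of Manhattan)
    "113": "tier_3",  # assumed high tier (e.g. parts of Queens)
    "606": "tier_2",
    "731": "tier_1",
}

# Hypothetical tier prices spanning the $6,600-$8,400 range cited in the story.
TIER_PRICES = {
    "tier_1": 6600,
    "tier_2": 7200,
    "tier_3": 8400,
}


def quote_price(zip_code: str) -> int:
    """Return the course price for a ZIP code, defaulting to the lowest tier."""
    tier = ZIP_PREFIX_TIERS.get(zip_code[:3], "tier_1")
    return TIER_PRICES[tier]


if __name__ == "__main__":
    for z in ("11355", "73101"):
        print(z, quote_price(z))  # e.g. 11355 -> 8400, 73101 -> 6600
```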

SIEGEL: Yeah, one finding that you report is that, because of that, Asians were nearly twice as likely as non-Asians to be charged a higher price for SAT prep courses by the Princeton Review. Because they're Asian, or because they live in certain ZIP codes?

ANGWIN: As far as we could tell - and the company said as much - it was not intentional discrimination against Asians. They drew the lines for their ZIP code pricing in a way that mostly captured high-income areas but also managed to capture a lot of low-income, heavily Asian areas. Queens, N.Y., for example, is heavily Asian but not a high-income area. And we don't really know what went into their algorithm to make it turn out that way.

SIEGEL: We're talking about algorithms, but should we be talking about the humans who design the algorithms?

ANGWIN: The humans behind these algorithms - I don't have any reason to believe that any of them had any intentional biases. I do think that the real question that we've been raising in our reporting is, is there anyone checking for these unintentional biases?

SIEGEL: If in the case of the Princeton Review you come away thinking there was no intent to discriminate by charging more for the same service in Queens, N.Y., than somewhere else, why are they charging more in that case? And are you sure it wasn't evident to somebody in on the algorithm that, hey, there's money to be made from these Asian immigrant families, who are desperate for their kids to get into a good college?

ANGWIN: I mean, the truth is I'm not sure. It's very hard to prove intent. All I can say as a reporter is that I have no evidence of an intention to discriminate against Asians, but there may well have been. That's why we called our series Black Boxes.

We can't see inside these algorithms - they're black boxes. We can really only look at the outputs and say, well, it looks like this is discriminatory toward Asians, but I don't know what you put into your inputs to get it that way.
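
The "look at the outputs" approach Angwin describes can be sketched as a simple outside-in audit: join observed prices with census demographics by ZIP code and compare how often each group is quoted the higher price. The sample prices and demographic shares below are made up for illustration and are not ProPublica's data or exact method.

```python
# Sketch of an outputs-only audit: combine observed prices with census-derived
# demographics by ZIP code and compare how often each group sees the top price.
# All data below is invented for illustration.

from collections import defaultdict

# (zip_code, quoted_price) pairs observed from outside the black box.
observed_prices = [
    ("11355", 8400), ("11354", 8400), ("73101", 6600), ("60601", 7200),
]

# Hypothetical census-derived share of Asian residents per ZIP code.
asian_share = {"11355": 0.70, "11354": 0.65, "73101": 0.03, "60601": 0.10}

HIGH_PRICE = 8400
MAJORITY_THRESHOLD = 0.5

# Tally how often majority-Asian ZIP codes receive the highest price.
counts = defaultdict(lambda: {"high": 0, "total": 0})
for zip_code, price in observed_prices:
    group = "majority_asian" if asian_share[zip_code] >= MAJORITY_THRESHOLD else "other"
    counts[group]["total"] += 1
    counts[group]["high"] += int(price == HIGH_PRICE)

for group, c in counts.items():
    print(group, f"{c['high'] / c['total']:.0%} quoted the highest price")
```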

SIEGEL: Well, if artificial intelligence is not neutral, what's the solution - to have uniform pricing, or to go back into the algorithms and correct against census data when disparities in prices arise?

ANGWIN: I think the solution is to build some type of accountability into our emerging systems of algorithms, just the way we did when credit scores were introduced. Credit scores were a very innovative algorithm with huge implications for whether people could get a loan or buy a home, and we put in place some rules around them: you're allowed to see the data used to calculate your score, and you're allowed to dispute it.

And yes, people who have gone through that process will say it's not as clean or as great a process as it could be. But it seems to me that for the high-stakes algorithms we're increasingly using in our lives today, we want to put in place similar accountability measures so that people have a way to fight back if an algorithm makes a decision they want to dispute.

SIEGEL: Julia Angwin, thanks for talking with us about it.

ANGWIN: It was great to be here.

SIEGEL: That's Julia Angwin, senior reporter at ProPublica. Transcript provided by NPR, Copyright NPR.