
The FTC can finally regulate online privacy protections


Over the past decade, business models that rely on harvesting and selling huge amounts of consumer data have proliferated despite being deceptive, damaging and downright creepy.

In a practice known as commercial surveillance, companies collect vast amounts of data about people and use that data for purposes only tangentially connected to their own products or services, such as profiting from selling it or targeting ads.

This can lead to restrictions on opportunity and access, increased prices, intrusions on personal privacy, and expanded oversight by law enforcement — and these harms systematically and disproportionately affect Black and brown communities, as outlined by a Public Citizen report.

The bad news is that the US lags woefully behind other countries in providing privacy protections online. The good news is that the Federal Trade Commission is finally on the case.

And it’s about time! A June poll by Morning Consult found that more than 80% of voters in both parties want stronger data privacy protections. Years of polling on this issue have found similar or even greater levels of support.

In August, the FTC issued an Advance Notice of Proposed Rulemaking exploring the possibility of regulating commercial surveillance. The deadline for the public to comment on the proposal is today.

In an ideal world, Congress would pass a national online privacy law to set minimum safety standards. But with two years of divided government on the horizon and bipartisan agreement on these issues difficult to reach, the FTC’s commercial surveillance rulemaking is likely to be the most important battleground for new federal privacy safeguards online.

The scope of the FTC’s rulemaking is wide, because the practice of commercial surveillance has become so pervasive. Here are three of the major issues the agency needs to address.

First, clicking “accept” on excessive fine print is not consent.

Many of today’s data privacy protections are based on long, vague, take-it-or-leave-it terms of service agreements that users must accept. It would take dozens of hours to read all the terms of service a typical person is forced to “agree” to.

It’s not reasonable to expect everyday people to read dozens of pages of fine print. Moreover, clicking an “accept” button is not meaningful consent to the wide range of abusive and intrusive practices that make up the surveillance economy, as legal research by George Washington University shows.

Second, algorithms linked to commercial surveillance harm civil rights.

Data from commercial surveillance can be used to target ads to specific consumers, but under current rules it is not limited to that use. Predictive algorithms — formulas that try to anticipate the likelihood of certain social and behavioral outcomes — worsen racial discrimination and bias, which can lead to serious economic, physical and social harm.

Among other things, algorithms drive up prices, lower credit scores, make it harder for people to obtain loans, reduce educational opportunities, increase criminal penalties, diminish health outcomes, foster employment disparities, and can lead to more intrusive law enforcement surveillance.

And third, commercial surveillance harms competition, product quality and affordability.

Monopolistic companies can collect and process hoards of data on their own, which they then use to give their own products preferential treatment on their platforms. This not only excludes competitors’ products but also stops people from buying better or cheaper products.

In turn, this prevents startups and entrepreneurs who invent better and cheaper products from profiting off their innovations. In some cases, it stops people from innovating entirely.

Shutting out new businesses and new ideas is particularly harmful to communities of color, which are excluded from building wealth as a result of anti-competitive behavior by Big Tech companies.

In addition, monopolistic companies typically pay lower wages, reduce worker leverage, and exacerbate existing social inequality.

Here’s what the FTC can do to solve these problems.

Most crucially, the FTC can limit what data is collected about people in the first place by outlining acceptable uses of data and banning those that are contrary to consumer interests.

Collecting less information would eliminate some of the problems that stem from predictive algorithms and clear the way for newer competitors who don’t have the advantage of a giant data hoard.

In addition, greater transparency, periodic auditing, and testing requirements would help mitigate the damage to consumers and begin to bring civil rights principles to the internet.

After the FTC reviews the comments it receives on its proposal to regulate, the agency will likely develop and put forth a proposed rule, which would also be subject to a public comment period. In addition, the agency will likely hold public hearings on any new rule it proposes.

Today’s comment deadline marks the beginning of what will hopefully be a strong national online privacy standard — and with any luck, one that brings us in line with the rest of the world.

Emily Peterson-Cassin is the digital rights advocate for Public Citizen. She wrote this column for The Dallas Morning News.

We welcome your thoughts in a letter to the editor. See the guidelines and submit your letter here.
