Six Lines

The Internet with a Regulatory Face

Posted by Aaron Massey on 31 May 2014.

Maciej Cegłowski’s talk at Beyond Tellerrand, titled The Internet with a Human Face, has been making the rounds recently. It’s a great writeup of what I imagine was a great talk. It’s worth reading, and I recommend it. In fact, it’s sort of a prerequisite for this post, because my goal here is to splash some cold water on his prescriptions for solving the problems he identifies in his “incredibly dark rant about how the Internet is alienating and inhuman.”

This is your last warning! If you read further, you’re going to see some spoilers to his talk!

Partway through the talk, Maciej proposes regulation as a solution to the alienating, inhuman parts of the Internet, saying “It should be illegal to collect and permanently store most kinds of behavioral data.” A lot of web developers, programmers, geeks, and techies feel this way. Technology policy experts, however, recognize that statements like this¹ are a ridiculous simplification of reality. Note that I’m not saying they are a simplification of “political reality,” but of actual, real-world reality.

Simple statements that seem to be obvious solutions to technology policy problems sometimes fail to actually solve anything for legitimate reasons. To demonstrate this, I would like to walk through each of the eight ideas that Maciej lays out, and give an equally concise, similarly casual counterpoint to the core idea. I don’t want to wordsmith this, though that would almost certainly have to happen for a real policy proposal. Instead, I want to address his thoughts at a conceptual level. Here’s his first proposal:

(1) Limit what kind of behavioral data websites can store. When I say behavioral data, I mean the kinds of things computers notice about you in passing—your search history, what you click on, what cell tower you’re using.

It’s very important that we regulate this at the database, not at the point of collection. People will always find creative ways to collect the data, and we shouldn’t limit people’s ability to do neat things with our data on the fly. But there should be strict limits on what you can save.

What if users actually want their behavioral data stored? Yes, this data can be used for advertising or surveillance, but it can also be used to make life more convenient. Some people want this convenience and are willing to expose their data to get it. Others don’t. Any generic solution, whether a limitation or an “anything goes” policy, will leave people unhappy.
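To make the tension concrete, here is a minimal sketch, in Python, of what honoring per-user choice at the point of storage might look like. Everything in it is hypothetical (the consent store, the event log, the field names); it illustrates the idea, not anyone’s actual system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical stand-ins for a real consent store and event database.
user_consent: dict[str, bool] = {}  # user_id -> opted in to behavioral storage?
event_log: list[dict] = []

@dataclass
class BehavioralEvent:
    user_id: str
    kind: str     # e.g. "search", "click", "cell_tower"
    payload: str

def record_event(event: BehavioralEvent) -> None:
    """Persist behavioral data only for users who opted in.

    Everyone else's events can still be used transiently (say, to
    render the current page) and then discarded.
    """
    if user_consent.get(event.user_id, False):
        event_log.append({
            "user_id": event.user_id,
            "kind": event.kind,
            "payload": event.payload,
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        })
```

The code is trivial. The policy question is what the default in user_consent.get should be, and no single answer to that will satisfy both groups.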

(2) Limit how long they can keep it. Maybe three months, six months, three years. I don’t really care, as long as it’s not fifty years, or forever. Make the time scale for deleting behavioral data similar to the half-life of a typical Internet business.

Attempting to put time limits into regulatory policy hasn’t worked well in the past. The U.S. Constitution says that intellectual property rights should be secured for “limited times.” For copyright, the initial term was 14 years of protection, renewable for another 14 years if the author was still alive. Now the term is the life of the author plus 70 years.² What do you want to bet that these terms get extended again the next time Disney’s copyright on Mickey Mouse comes due? The USA PATRIOT Act has sunset provisions that would have caused portions of the Act to expire at the end of 2005. They have been extended repeatedly since then and are still active. Time limits can work, but politically, changing time limits is much easier than changing “actual” policy. Be careful what you wish for.
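A hypothetical retention job makes the point vividly: the entire “three months, six months, three years” debate collapses into a single constant, which is exactly what makes the limit so cheap to change later. This sketch reuses the toy event_log from above, and the 90-day window is an arbitrary assumption, not anyone’s actual policy.

```python
from datetime import datetime, timedelta, timezone

# The whole policy debate lives in this one constant. Amending 90 to
# 1095 later is a one-line change, politically as well as technically.
RETENTION = timedelta(days=90)

def purge_expired_events(event_log: list[dict]) -> list[dict]:
    """Drop behavioral events older than the retention window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [
        event for event in event_log
        if datetime.fromisoformat(event["recorded_at"]) >= cutoff
    ]
```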

(3) Limit what they can share with third parties. This limit should also apply in the event of bankruptcy, or acquisition. Make people’s data non-transferable without their consent.

Sharing data with third parties raises many of the same problems as Maciej’s first suggestion: some users want it, others don’t. Also, anyone familiar with the Do Not Track debates would agree that the details here are far harder to get right than one might otherwise believe.

(4) Enforce the right to download. If a website collects information about me, I should be allowed to see it. The EU already mandates this to some extent, but it’s not evenly enforced.

This rule is a little sneaky, because it will require backend changes on many sites. Personal data can pile up in all kinds of dark corners in your system if you’re not concerned about protecting it. But it’s a good rule, and easy to explain. You collect data about me? I get to see it.

(5) Enforce the right to delete. I should be able to delete my account and leave no trace in your system, modulo some reasonable allowance for backups.
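Mechanically, both of these reduce to small pieces of backend plumbing. Here is a hypothetical sketch, again continuing the toy event_log from above; the hard part, as the next paragraph argues, is deciding who is allowed to invoke these functions in the first place.

```python
def export_user_data(user_id: str, event_log: list[dict]) -> dict:
    """Right to download: gather everything stored about one user.

    A real system would have to sweep every dark corner (logs,
    analytics tables, caches, backups), not just one tidy event log.
    """
    return {
        "user_id": user_id,
        "events": [e for e in event_log if e["user_id"] == user_id],
    }

def delete_user_data(user_id: str, event_log: list[dict]) -> list[dict]:
    """Right to delete: remove a user's events, modulo backups."""
    return [e for e in event_log if e["user_id"] != user_id]
```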

I’ve grouped these two points together because they share essentially the same problem: both proposals may actually be far worse than the problems they are trying to solve.

Shortly after Maciej’s talk, Google implemented its response to the European requirement that people have a “right to be forgotten.” Essentially, if you want to be “forgotten” by Google, you have to provide some personal information, including a valid photo ID, the articles to be “forgotten,” and the relevant law from the country that has jurisdiction. Julia Powles of the University of Cambridge described Google’s approach as “disappointingly clever.” My response: what’s the alternative? Think about this with a security mindset. Trouble with your business? See if you can get Google to “forget” your competition. Trying to get that girl’s phone number? Request a download of all her information from Google. And that’s before even mentioning the serious First Amendment conflicts with this general approach.

(6) Give privacy policies teeth. Right now, privacy policies and terms of service can change at any time. They have no legal standing. For example, I would like to promise my users that I’ll never run ads on my site and give that promise legal weight. That would be good marketing for me. Let’s create a mechanism that allows this.

Privacy policies do have legal standing in the United States, but it’s a bit complicated. In short, the Federal Trade Commission can hold companies accountable for the statements they make in their privacy policies. Even if those policies change, the promises made may still apply. The FTC warned Facebook when it bought WhatsApp that the original privacy promises WhatsApp had made would have to be respected. You can argue that the regulatory process surrounding this should be more transparent, or that it should be more consistently enforced, but it isn’t non-existent. Google paid a $22.5 million fine to the FTC for violating its own privacy policy. If you want to give your promises to users legal weight, the mechanism that allows this is, well, your privacy policy.

(7) Let users opt-in if a site wants to make exceptions to these rules. If today’s targeted advertising is so great, you should be able to persuade me to sign up for it. Persuade me! Convince me! Seduce me! You’re supposed to be a master advertiser, for Christ’s sake!

The opt-in versus opt-out debate has been going on for years, and it’s a policy question that is irreducibly political, which is something I wanted to avoid in this post. I’m not going to be able to summarize it well, but I will say that it feels like a bit of a red herring. Neither an opt-in nor an opt-out regime will “solve” privacy problems. Studies have shown that the vast majority of people simply stick with the default choice, whether that default is opt-in or opt-out. Neither approach aligns with the preferences people say they have.

I also want to address an assumption Maciej is making here that’s common across technology policy debates: that using the latest and greatest technology created by civilization is a fundamental human right. Sometimes policy wonks take the position that you are “opting in” when you choose to use the technology. Thus, if you choose to use a cell phone, then you’re choosing to be tracked wherever you go. Lots of people think this is ridiculous because cell phones are so convenient that they have become virtually essential for daily life. When did that transition from new-fangled gadget to essential, everyday tool happen?

There’s a tension here: When does a technology become so important to our lives that we consider it a “right” to use it? Louis C.K. expresses some of this tension so well in his Everything’s Amazing and Nobody’s Happy video. Maciej seems to be the guy on the airplane complaining about the lack of WiFi 10 minutes after being told that it was an experimental possibility for his flight. Maciej picks on Google’s YouTube throughout his talk without ever acknowledging that it’s basically a miracle. We can upload videos from anywhere in the world and share them with anyone else in the world over a network connection! When did watching cat videos on YouTube become a fundamental human right? If Google’s chosen business model to support YouTube is targeted advertising, then shouldn’t the fact that this technology is astonishing qualify as the persuasion that Maciej is looking for?

(8) Make the protections apply to everyone, not just people in the same jurisdiction as the regulated site. It shouldn’t matter what country someone is visiting your site from. Keep it a world-wide web.

Unfortunately, this is pure fantasy. Jurisdictions matter. They always have. For the Internet, the starting point was probably the LICRA v. Yahoo! case, but I believe that case was less about the world-wide web being restricted or balkanized into something less-than-world-wide and more about the web growing into something that actually mattered world-wide.

So that’s it. Those are my thoughts, as plain-spoken as I can make them. I don’t want to make light of, or imply there is no validity to, Maciej’s concerns, but it helps to take a step back and get some perspective. It’s startlingly easy to forget that many Western assumptions don’t actually hold worldwide. Look at North Korea at night. In some ways, the suggestion that it’s essential for us to address Maciej’s concerns is an insult to people who would probably kill to have the sort of first-world problems his solutions are intended to address.

The Internet is young and complex, and it presents new challenges that must be addressed. I care deeply about these challenges, and I share many of Maciej’s concerns. I wouldn’t have given up my career as a software engineer to go to grad school and get a PhD studying this stuff if I didn’t care about meaningful approaches to security, privacy, and regulatory compliance in software. We are still at the beginning of the computer revolution. And yes, we still have to figure out what to do about privacy on the Internet. But it’s not an easy problem to solve, and sometimes the obvious solution isn’t actually all that obvious or even all that much of a solution.

  1. Another example statement from C.G.P. Grey: “The Internet is amazing and that’s because of the rules which govern how it works, an important one of which is Net Neutrality: treating all data equally.” Treating all data equally is idiotic, and even people who support net neutrality don’t actually want that. Network management would become dramatically more expensive, and people would begin to wonder why spam, viruses, and other malware were suddenly spreading like wildfire.

  2. For works created by a corporation, the term is 95 years after initial publication or 120 years after creation, whichever expires first.