The Promise and Peril of Big Tech
Posted by Aaron Massey on 06 Aug 2018.
I came across two otherwise unrelated news stories back to back that highlighted the promise and the peril of big tech rather succinctly. The first is a story about Google offering alerts to organizations using their G Suite product if Google suspects that they have been targeted by state-sponsored attacks:
Google is now implementing the new protection feature within the G Suite Admin console; admins will have the opportunity to receive alerts whenever attacks could be attributed to a nation-state actor.
Every time an attack is detected, admins can choose to secure the account hit by the hackers and can also opt to alert the victim.
The alerts don’t necessarily imply that the account has been hacked or that the organization has been compromised in a massive attack.
This is something that simply requires a single, huge, connected platform to do and do well. A small firm with a custom content management system isn’t going to be able to detect, let alone handle, a situation like this. It’s also a great example of the entrenched advantages that big tech firms have over small startups. The press loves to hail small startups in garages as competing on a level playing field against the big tech firms, but the reality is that big tech has real advantages in infrastructure that are difficult to compete with directly. Small firms simply aren’t well-positioned to track or understand sophisticated, subtle, long-term state-sponsored attacks.
The second story is from Rolling Stone about big tech’s response to Alex Jones. His work has been banned or removed from iTunes, YouTube, and Facebook, raising important questions about the responsibilities that big tech platforms have:
As many pointed out last week, the Jones ban was not a legal speech issue – not exactly, anyway. No matter how often Jones yelped about “Hitler levels of censorship,” and no matter how many rambling pages he and his minions typed up in their “emergency report” on the “deep state plan to kill the First Amendment,” it didn’t change the objectively true fact that their ban was not (yet) a First Amendment issue.
The First Amendment, after all, only addresses the government’s power to restrict speech. It doesn’t address what Facebook, Google, YouTube and Twitter can do as private companies, enforcing their terms of service.
These issues are not new, but they are evolving. Perhaps the best example of this is the recent set of changes to Section 230 of the Communications Decency Act of 1996. Public discourse on private platforms can be arbitrarily shaped by removing “objectionable” content or positioning “virtuous” content – both of which the platform controller gets to define. Is this what we want for our society? If not, can we clearly articulate a better alternative? Regardless, this problem doesn’t exist without big tech platforms. If everyone who wanted to participate in this conversation had to set up their own server on their own domain and attract readers based entirely on their own efforts and marketing, then the platforms wouldn’t be in this position.
I’m writing about both of these articles simultaneously because they struck me as quintessential examples of the tradeoffs of big tech. Big tech and society both benefit because the infrastructure of big tech can scale so well, but we all also acquire a responsibility to actively manage our relationship with the content provided by that infrastructure. This has been true for decades, and it’s not going away. It’s just going to flare up from time to time. This is not the first time I’ve written about this issue, and I’m sure it won’t be the last. Understanding technology policy and the implications of regulation will remain critical for computing professionals indefinitely.