The delicate balance between supporting open expression and shutting down abusive content has become a flashpoint for the tech industry.
You don’t need to look any further than Twitter’s recent decision to suspend Rose McGowan’s account after she tweeted a phone number in the wake of the Harvey Weinstein scandal. Decisions are politicized. Transparency is questioned. And tensions are rising.
“Are we like the phone company, where it’s creepy for us to listen in?” asked one prominent tech CEO, Matthew Prince. “Or are we like a newspaper that should make an editorial decision?” (Watch the whole series Divided We Code)
These questions have thrust companies into the crosshairs of an evolving discussion on how to monitor content. Companies like Facebook and Twitter have argued they’re tech platforms — not media companies. In the early days of the internet, the idea of an open web, free from censorship, was key to its success. But that hands-off approach is becoming less defensible, as we see Russian troll farms buying political ads on Facebook, disgruntled exes posting revenge porn and ISIS recruits being radicalized online.
For Prince, that tension culminated one morning this summer. His company, Cloudflare, is a large but mostly invisible web infrastructure company that helps websites run faster and provides protection from attacks. 10% of all internet requests pass through its network, without which sites could be left vulnerable to cyberattacks. But when one of his customers claimed Prince wouldn’t kick him off because Cloudflare’s senior leadership was itself full of white supremacists (it’s not), Prince took decisive action.
He terminated service for neo-Nazi site The Daily Stormer, saying it was a one-time decision not meant to serve as a precedent.
“I woke up in a bad mood and decided someone shouldn’t be allowed on the internet,” Prince wrote in a memo to employees in August. “No one should have that power.” The son of a journalist, Prince told sy88pgw Tech that he takes issues of free speech “very, very seriously,” but he also has the right “not to do business with jerks.”
The Cloudflare CEO sat down with sy88pgw Tech to describe what’s happened since that one controversial decision. Most notably, he said Cloudflare has received requests to terminate more than 3,500 different customers — including from governments, people trying to enforce copyright law, and others who simply find particular content problematic.
“And the amazing thing is that it’s not just neo-Nazis,” Prince said. “It’s far right, far left, middle people, things that people just think are disgusting, things that people might disagree with because they don’t like one point of view or another.”
He said part of the reason Cloudflare has been able to deflect these types of requests in the past is that the company has never taken a political position — it has treated all content equally. Now, he worries it will be much harder to use that defense. He’s concerned, for instance, that because he kicked off one site, he will no longer be able to fend off foreign governments’ requests to censor LGBT organizations in countries where they’re persecuted.
As tech companies grow in their ability to shape culture and communication, the question of who should possess the power to make these weighty decisions becomes even harder to answer. Meanwhile, social networks are starting to accept responsibility for writing algorithms that better detect hate speech and online abuse, according to Andrew McLaughlin, a former policy director at Google and former deputy chief technology officer for President Obama.
“I think the obvious trigger for it is the Trump election and the spread of fake news. And that has caused a lot of these companies to do some soul-searching where they say, ‘Alright, we now have to accept that we can’t be neutral,’” McLaughlin said. “We’re making choices that are incredibly consequential for what speech gets aired and seen by ordinary people.”
Tech companies are also attempting to roll out stopgap measures to combat harassment. Last month, Instagram introduced a tool that allows users to filter comments. Users say it’s a step in the right direction, but there’s still a lot to be done to root out the trolls on social media.
And earlier this month, Twitter outlined policy improvements, including one that addresses how the site plans to treat hateful imagery. The content in question will be blurred, and users will need to manually opt in to view it. But what exactly Twitter defines as a hate symbol wasn’t clearly spelled out.
Some tech executives, including Prince, argue that the responsibility falls on political institutions to set clearer guidelines. While he said he’s not arguing for more regulation, Prince said tech CEOs don’t have the same accountability as elected officials. Others, like McLaughlin, are less trusting of the government becoming the gatekeeper of online speech.
“I’m not a big fan of governments getting directly involved in the management of tech companies,” said McLaughlin. “History shows that that power tends to be abused pretty easily.”
It’s not just issues of online abuse and harassment that have cropped up in recent years. The 2016 election thrust the issue of fake news onto center stage.
“There’s a line between abuse and misinformation, and most of these companies for a while, including Twitter, were more focused on abuse,” said Ev Williams, cofounder of Twitter and CEO of Medium. “I think the misinformation thing is something that’s come up really in the last year much more dramatically.”
Williams has long believed in the internet’s role in the free exchange of information. But, lately, he said, identifying trustworthy sources is “something that we really need to work on building into these systems more.”
“Silicon Valley is a place of optimism, [but] it can be blind optimism,” Williams said. “That’s part of the evolution that we’re going through — we’re no longer as blind.”
Controlling and labeling misinformation is one of the biggest challenges facing tech companies today. Facebook, Twitter and Google increasingly must determine the difference between diverse political viewpoints and things that are just plain inaccurate, Williams said.
“That’s when some people are calling for editorial guidelines,” he said. “And you get into an area where most tech companies would be like, ‘It’s not something that really fits in our model or that we would even be good at.’”
But whether they like it or not, tech platforms are being called on to take a more active role in identifying abuse, harassment and fake news.
“There’s a principle that evil festers in darkness. One of the things you don’t want to do is basically suppress racist speech in a world where they can just go elsewhere, and do their evil in darkness,” McLaughlin said. “The right balance being struck here is when we find ways to elevate and suppress without censoring, and I actually think that is possible.”
This story originally published on October 29, 2017.
sy88pgw (New York) First published November 10, 2017: 4:58 PM ET