Privacy

‘Scraping’ Is Just Automated Access, and Everyone Does It

Deep Links - Tue, 04/17/2018 - 14:05

For tech lawyers, one of the hottest questions this year is: can companies use the Computer Fraud and Abuse Act (CFAA)—an imprecise and outdated criminal anti-“hacking” statute intended to target computer break-ins—to block their competitors from accessing publicly available information on their websites? The answer to this question has wide-ranging implications for everyone: it could impact the public’s ability to meaningfully access publicly available information on the open web, impeding investigative journalism and research. And in a world of algorithms and artificial intelligence, lack of access to data is a barrier to product innovation, and blocking access to data means blocking any chance for meaningful competition.

The CFAA was enacted in 1986, when there were only about 2,000 computers connected to the Internet. The law makes it a crime to access a computer connected to the Internet “without authorization” but fails to explain what this means. It was passed with the aim of outlawing computer break-ins, but has since metastasized in some jurisdictions into a tool to enforce computer use policies, like terms of service, which no one reads.

Efforts to use the CFAA to threaten competitors increased in 2016 following the Ninth Circuit’s poorly reasoned Facebook v. Power Ventures decision. The case involved a dispute between Facebook and a social media aggregator that Facebook users had voluntarily signed up for. Facebook did not want its users engaging with this service, so it sent Power Ventures a cease and desist letter and tried to block Power Ventures’ IP address. The Ninth Circuit found that Power Ventures had violated the CFAA by continuing to provide its services after receiving the cease and desist letter and having one of its IP addresses blocked.

After the decision was issued, companies—almost immediately—started citing the case in cease and desist letters, demanding that competitors stop using automated methods to access publicly available information on their websites. Some of these disputes have made their way to court, the most high profile of which is hiQ v. LinkedIn, which involves automated access of publicly available LinkedIn data. As law professor Orin Kerr has explained, posting information on the web and then telling someone they are not authorized to access it is “like publishing a newspaper but then forbidding someone to read it.”

The web is the largest, ever-growing data source on the planet. It’s a critical resource for journalists, academics, businesses, and everyday people alike. But meaningful access sometimes requires the assistance of technology to automate and expedite an otherwise tedious process of accessing, collecting, and analyzing public information. This process of using a computer to automatically load and read the pages of a website for later analysis is often referred to as “web scraping.”[1]

As a technical matter, web scraping is simply machine-automated web browsing. There is nothing that can be done with a web scraper that cannot be done by a human with a web browser. And it is important to understand that web scraping is a widely used method of interacting with the content on the web: everyone does it—even (and especially) the companies trying to convince courts to punish others for the same behavior.
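
To make that concrete, here is a minimal sketch—not from the original article—of what a scraper can look like in Python, using the commonly available requests and BeautifulSoup libraries and a placeholder URL. It does nothing that a person with a browser and some patience could not do by hand:

    # Minimal illustrative scraper (assumes the third-party "requests" and
    # "beautifulsoup4" packages are installed; the URL is a placeholder).
    import requests
    from bs4 import BeautifulSoup

    response = requests.get(
        "https://example.com/public-page",
        headers={"User-Agent": "research-scraper-example/0.1"},
    )
    soup = BeautifulSoup(response.text, "html.parser")

    # Print every link on the page—the same thing a person could do by reading it.
    for link in soup.find_all("a"):
        print(link.get("href"), "-", link.get_text(strip=True))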

Companies use automated web browsing products to gather web data for a wide variety of uses. Some examples from industry include manufacturers tracking the performance ranking of products in the search results of retailer websites, companies monitoring information posted publicly on social media to keep tabs on issues that require customer support, and businesses staying up to date on news stories relevant to their industry across multiple sources. E-commerce businesses use automated web browsing to monitor competitors’ pricing and inventory, and to aggregate information to help manage supply chains. Businesses also use automated web browsers to monitor websites for fraud, perform due diligence checks on their customers and suppliers, and to collect market data to help plan for the future.

These examples are not hypothetical. They come directly from Andrew Fogg, the founder of Import.io, a company that provides software that allows organizations to automatically browse the web, and are based on Import.io’s customers and users. And these examples are not the exception; they are the rule. Gartner recommends that all businesses treat the web as their largest data source and predicts that the ability to compete in the digital economy will depend on the ability to curate and leverage web data. In the words of Gartner VP Doug Laney, “Your company’s biggest database isn’t your . . . internal database. Rather it’s the Web itself.”

Journalists and information aggregators also rely on automated web browsing. The San Francisco Chronicle used automated web browsing to gather data on Airbnb properties in order to assess the impact of Airbnb listings on the San Francisco rental market, and ProPublica used automated web browsing to uncover that Amazon’s pricing algorithm was hiding the best deals from its customers. The Internet Archive’s web crawlers (crawlers are one specialized example of automated web browsing) work to archive as much of the public web as possible for future generations. Indeed, Google’s own web crawlers, which power the search tool most of us rely on every day, are simply web scraping “bots.”

During a recent Ninth Circuit hearing in hiQ v. LinkedIn, LinkedIn tried to analogize the case to United States v. Jones, arguing that hiQ’s use of automated tools to access public information is different “in kind” from manually accessing that same information, just as long-term GPS monitoring of someone’s public movements is different from merely observing someone’s public movements.

The only thing that makes hiQ’s access different is that LinkedIn doesn’t like it. LinkedIn itself acknowledges in its privacy policy that it, too, uses automated tools to “collect public information about you, such as professional-related news and accomplishments” and makes that information available on its own website—unless a user opts out by adjusting their default privacy settings. Question: How does LinkedIn gather that data on its users? Answer: web scraping.

And of course LinkedIn doesn’t like it; it wants to block a competitor’s ability to meaningfully access the information that its users post publicly online. But just because LinkedIn or any other company doesn’t like automated access, that doesn’t mean it should be a crime.

As law professor Michael J. Madison wrote, resolving the debate about the CFAA’s scope “is linked closely to what sort of Internet society has and what sort of Internet society will get in the future.” If courts allow companies to use the CFAA to block automated access by competitors, it will threaten open access to information for everyone.

Some have argued that scraping is what dooms access to public information, because websites will just place their data behind an authentication gate. But it is naïve to think that LinkedIn would put up barriers to access; LinkedIn wants to continue to allow users to make their profiles public so that a web search for a person continues to return a LinkedIn profile among the top results, so that people continue to care about the maintenance of their personal LinkedIn profiles, so that recruiters will continue to pay for access to LinkedIn recruiter products (e.g., specialized search and messaging), and so that companies will continue to pay to post job advertisements on LinkedIn. The default setting for LinkedIn profiles is public for a reason, and LinkedIn wants to keep it that way. It wants to participate in the open web to drive its business, but use the CFAA to suppress competitors and avoid accepting the web’s open access norms.

The public is already losing access to information. With the rise of proprietary algorithms and artificial intelligence, both private companies and governments are making high stakes decisions that impact lives with little to no transparency. In this context, it is imperative that courts not take lightly attempts to use the CFAA to limit access to public information on the web.

[1] The term “scraping” comes from a time before APIs, when the only way to build interoperability between computer systems was to “read” the information directly from the screen. Engineers used various terms to describe this technique, including “shredding,” “scraping,” and “reading.” Because the technique was largely only discussed in engineering circles, the choice of terminology was never widely debated. As a result, today many people still use the term “scraping,” instead of something more technically descriptive—like “screen reading” or “web reading.”

An earlier version of this article was first published by the Daily Journal on March 27, 2018.

Related Cases: hiQ v. LinkedIn; Facebook v. Power Ventures
Categories: Privacy

55 Infosec Professionals Sign Letter Opposing Georgia’s Computer Crime Bill

Deep Links - Tue, 04/17/2018 - 09:35

In a letter to Georgia Gov. Nathan Deal, 55 cybersecurity professionals from around the country are calling for a veto of S.B. 315, a state bill that would give prosecutors new power to target independent security researchers.

This isn’t just a matter of solidarity among those in the profession. Georgia is home to our nation’s third largest information security sector. The signers have clients, partners, and offices in Georgia. They attend conferences in Georgia. They teach and study in Georgia or recruit students from Georgia. And they all agree that S.B. 315, which would create a new crime of "unauthorized access," would do more harm than good.

Read the letter from the 55 information security specialists in opposition to S.B. 315.

The signers include top academics such as Harvard Kennedy School Lecturer Bruce Schneier, Kennesaw State University lecturer Andy Green, and Keith Watson, Information Security Manager at Georgia Tech's College of Computing. Executives at HackerOne, Eyra Security, Enguity Technology Corp., R3ality Inc., and Covici Computer Systems are also calling for a veto. The names include some of the top professionals in the field, such as John Vittal, former director of technology for Verizon, and Peter G. Neumann, Chief Scientist at SRI International’s Computer Science Lab, as well as engineers from Google, Cox Communications, and Dell Technologies, signing in their personal capacities.

The letter calls out two particular problems with the legislation.

First, the bill potentially “creates new liability for independent researchers that identify and disclose vulnerabilities to improve cybersecurity.” Although the bill exempts “legitimate business activities,” this term is not defined in a meaningful way, leaving it unclear how prosecutors would enforce the law.

Second, the bill includes an exemption for “active defense” measures, which is also left perilously undefined. As the researchers write, “this provision could give authority under state law to companies to ‘hack back’ or spy on independent researchers, unwitting users whose devices have been compromised by malicious hackers, or innocent people that a company merely suspects of bad intentions.”           

S.B. 315 would provide district attorneys and the attorney general with broad latitude to selectively prosecute researchers who shed light on embarrassing problems with computer systems. The signers want Gov. Deal to know that the bill would not only harm Georgia’s information security sector, but also make people nationwide less safe by chilling research that could bring light to vulnerabilities. 

We wholeheartedly agree. If you live in Georgia, please join the effort and tell Gov. Deal to veto S.B. 315 immediately.

Take Action 

Veto S.B. 315

Categories: Privacy

The California Senate Utilities Committee’s Net Neutrality Analysis Might as Well Have Been Written by AT&T

Deep Links - Mon, 04/16/2018 - 20:48

S.B. 822, Senator Scott Wiener’s net neutrality bill, is currently pending in the California legislature. It’s a bill that prioritizes consumers over large ISPs, creating strong net neutrality protections. Unsurprisingly, AT&T and the rest of the giant telecom companies don’t like it. And unfortunately for Californians, the report on the bill issued by the California Senate Committee on Energy, Utilities, and Communications parrots several misleading arguments by the large ISPs.

S.B. 822 does a lot of things, but the biggest objections AT&T has—and which the committee seems to be comfortable agreeing with—are with provisions that cut into their bottom line. S.B. 822 bans blocking, throttling, and paid prioritization for any ISP looking to get money from the state. ISPs want paid prioritization because it would let them charge companies extra for faster connections. ISPs argue that if they can’t make more money that way, they’ll have to charge customers more. ISPs also want to be given taxpayer money without these requirements. The committee’s recommendations play right into the ISPs’ hands, even though they don’t make sense.

The Committee Appears to Believe ISPs Don’t Make Enough Money

The foundational argument within the Committee’s analysis against many of the provisions that protect net neutrality is the premise that if ISPs can’t charge more parties extra, all of us will need to pay more for broadband. And while it is a fair assumption that ISPs would love to jack up the prices even more, it is not true that ISPs are drowning in debt and need the money from the new revenue streams opened up by the repeal of the FCC’s net neutrality and privacy provisions.

ISPs are not hurting for profit. Large ISPs like Comcast, AT&T, and Verizon have made billions in profits over the last ten years (according to public information EFF assembled), and have been so resilient that only hurricanes and the Great Recession put a dent in their margins. In fact, when you look at markets with competitive entry, whether from community broadband or from alternatives to the cable industry, you find low prices and 21st-century gigabit fiber infrastructure.

Small ISPs like Sonic can sell gigabit broadband at $40 a month. Cities with medium-size populations, like Chattanooga, are able to self-provision (and make a profit) while deploying gigabit broadband at $70 a month. Google Fiber was able to sell gigabit broadband at $70 a month in Orange County, California, and San Francisco’s proposed community broadband project estimates it can deliver high-speed broadband to low-income people at around $30 a month.

So how is it that these companies and local governments can provide 21st-century internet at an affordable price, while huge ISPs rake in billions per quarter charging high prices, yet policymakers in Sacramento think ISPs are in deep need of charging more for Internet access to upgrade? Perhaps they missed the reporting that revealed ISPs immediately moving to raise prices after network neutrality was repealed, but nothing indicates that having network neutrality rules will result in people paying more for Internet access. The problem isn’t that ISPs can’t afford to give us better service, it’s that we lack competition in the high-speed Internet access market and ISPs are not restrained from raising broadband prices. Network neutrality just ensures they can’t also use their gatekeeper power to stop competitive alternatives (particularly in the video market) from getting to broadband users.

The Committee Is Ignoring the Fact that Certain Zero-Rating Programs Violate Network Neutrality

Zero-rating is the process where ISPs exempt certain content from counting against your data cap. The problem EFF has been seeing with zero-rating practices is that the ISP has a huge incentive to exempt their own content while keeping competitors under the cap. When faced with content that doesn’t count towards the cap versus better content that does, people will likely settle for the zero-rated content.

The committee’s analysis acknowledges that the FCC was going to look at zero-rating practices on a case-by-case basis and then recommends that the state not look into zero-rating practices. Notably absent from the analysis are the concerns the FCC (under Chairman Wheeler) raised about serious consumer harms from current zero-rating practices (particularly AT&T’s). Those concerns didn’t change, but the FCC did.

It is very likely that the zero-rating practices engaged in by AT&T and Verizon were going to be found in violation of the 2015 Open Internet Order’s ban on data discrimination—that is, the ban on ISPs treating some data differently from other data. In late 2016, near the end of the Obama Administration, the FCC issued a detailed analysis and expressed serious concerns that both companies were probably violating network neutrality.

However, one of the first acts by President Trump’s new FCC Chairman Ajit Pai was to terminate all investigations into the conduct of AT&T and Verizon and rescind the legal findings the FCC made on zero-rating. Nothing in the history of zero-rating practices indicates that ISPs exempt content from data caps in order to benefit the consumer. Without fail, the ISPs have chosen winners and losers (very often choosing their own vertically integrated services) because the value and incentive to do so is too great to resist.

According to the CTIA’s own study, 84% of consumers look more favorably on zero-rated data and are more willing to consume zero-rated content. That means that even if the content is better, it may not succeed the way zero-rated content does, just because a consumer might avoid something that will count against their data cap. Instead of taking all of these facts into account, the California committee’s report trots out the tired ISP talking point that zero-rating is good for low-income users despite the fact that customers' overall monthly bill remains unchanged (and has never gone down in real dollars).

The Committee Appears to Believe That California Taxpayers Should Continue to Subsidize an AT&T Not Held Accountable to Net Neutrality Protections

The last favor the committee does for the big ISPs is to gut S.B. 822’s requirement that taxpayer subsidies that go to ISPs (including AT&T) be conditioned on network neutrality. The bill as currently written makes receiving taxpayer money dependent on adhering to net neutrality principles. Today, California spends hundreds of millions of dollars on ISPs, including AT&T, as part of its broadband subsidy program to help expand broadband deployment so that underserved communities finally get broadband access.

The Committee expresses concerns that ISPs will decline the money and not build broadband networks with it because they do not want to be required to operate under network neutrality. These exact same “concerns” were expressed in 2009 by AT&T when Congress created the $4.4 billion broadband grant program as part of the American Recovery and Reinvestment Act and the Obama Administration decided to condition federal broadband subsidies on network neutrality. But guess what? More than 2,000 companies applied for subsidies to build broadband with projects that were ready to deploy that very year. Without a doubt, even if AT&T declined the subsidies, alternative companies would be ready to move in and accept the state’s investment. But despite what history has shown us, the Committee appears in ready agreement with AT&T in worrying about asking too much of ISPs while handing them taxpayer money.

It’s a shame that the Committee on Energy, Utilities, and Communications hasn’t seized the chance to make California a leader in net neutrality and has instead authored a report that looks like the telecom lobby wrote it. But this is only the report; the actual committee hearing is tomorrow, and it will be broadcast on the Internet. That means there’s still time to tell the committee members to support the bill with its net neutrality protections intact and not accept this sham analysis.

Categories: Privacy

Busting Two Myths About Paid Prioritization

Deep Links - Mon, 04/16/2018 - 18:48

Eight out of 10 Americans support net neutrality, which makes opposing it a bad look for both politicians and corporate PR. So everyone says something along the lines of being in favor of net neutrality or an Internet Bill of Rights. Every time, however, giant Internet service providers (ISPs) and the politicians on their side leave room for paid prioritization.

Paid prioritization allows ISPs to charge for some Internet services to be sped up, while all the rest are slowed down. One of the common ways to describe it is that it creates Internet “fast lanes.” A better analogy is that ISPs get to extract protection money from large Internet companies in a classic “That’s a nice Facebook you have there, shame if something happened to it” fashion.

ISPs effectively get to choose the winners and losers of the Internet marketplace this way. A company like, say, Netflix can afford to pay a princely sum to make sure its service gets to users as quickly and cleanly as possible. The person in a dorm room who just invented a better version of Netflix in their spare time cannot. Paid prioritization favors the existing Internet landscape and hobbles innovation. (Mark Zuckerberg brought up this exact conundrum in his testimony in front of Congress last week.)

Needless to say, a ban on paid prioritization is essential to true network neutrality. So when bills trumpeting an “open” Internet don’t include one, they’re not being honest about their purpose. Neither Representative Marsha Blackburn’s proposed bill nor Senator John Kennedy’s does anything about paid prioritization. Both bills are called “the Open Internet Preservation Act,” playing on the fact that the FCC’s set of net neutrality protections—which included a paid prioritization ban—was called the “Open Internet Order.” But neither bill is adequate to actually secure an open Internet.

On April 17, the House Energy and Commerce Subcommittee on Communications and Technology is holding a hearing about paid prioritization, and there are two major misconceptions to get straight before that happens.

Paid Prioritization Is Not Necessary for Things Like Remote Surgery

One deeply misleading argument in favor of paid prioritization is that it’s necessary for certain services. Autonomous vehicles, remote surgery, and public safety communications all got namechecked when an AT&T executive made his case, all areas where a drop in high-speed connectivity could be devastating to people’s lives.

It’s a disingenuous argument because providing faster Internet connections to those kinds of services is not banned under net neutrality protections. In the 2015 Open Internet Order—and in proposed and passed state laws—ISPs could engage in “reasonable network management.” Reasonable network management means that ISPs can slow things down, speed things up, and even block things in the interests of making sure a service like remote surgery works as intended.

Paid prioritization isn’t the same as making sure a surgery goes off without a hitch—it’s letting ISPs double-dip by letting them charge consumers for access to Internet services and then turn around and charge Internet services for better access to consumers.

ISPs want people to be confused about the difference between the two, but make no mistake: paid prioritization isn’t about saving lives, it’s about making money.

A CDN Is Not the Same as Paid Prioritization

A content delivery network (CDN) is used by some Internet companies to improve their service. A CDN caches its customer’s content on servers at many locations around the Internet. Since the data is closer to the user and there’s less strain on any individual server, the user benefits by getting their content faster. The network benefits because its total traffic is made less redundant, leaving capacity for other data. Content providers benefit by reducing the strain on their servers (and having a first line of defense against attacks).
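
One rough way to see this caching in action—purely as an illustration, with a placeholder URL—is to fetch the same public file twice and look at the cache-related response headers many CDNs add. The sketch below assumes Python’s requests library; the exact header names (“X-Cache,” “CF-Cache-Status,” “Age”) vary by provider, and some expose none of them:

    # Illustrative probe of CDN caching (assumes the third-party "requests"
    # package; example.com is a placeholder, and header names differ by CDN).
    import requests

    url = "https://example.com/static/logo.png"
    for attempt in range(2):
        r = requests.get(url)
        print(
            "request", attempt + 1,
            "status:", r.status_code,
            "X-Cache:", r.headers.get("X-Cache"),
            "CF-Cache-Status:", r.headers.get("CF-Cache-Status"),
            "Age:", r.headers.get("Age"),
        )

A second request served from a nearby cache will often come back faster and with a nonzero Age or a cache “HIT” marker, though none of that is guaranteed for any particular site.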

By placing and maintaining these servers, CDNs create additional infrastructure and then allow companies to purchase their services in a way that makes the entire Internet faster. In essence, they’re creating a bigger pie and only charging for the pieces they tacked on.

In contrast, paid prioritization is a redistribution of the existing network services that an ISP provides. One person’s internet or application is only made faster under paid prioritization by slowing someone else’s down, and that’s not fair or healthy to the Internet or anyone who uses it.

And while CDNs do mean that some companies are paying for their service to be faster and better, CDNs improve the Internet by adding to the system in place. Paid prioritization works by slowing down or degrading other traffic and charging companies to be treated better.

Furthermore, under paid prioritization, while a company may use a CDN to actually make a service better, an ISP can undo all that work by simply refusing to deliver content without being paid extra. ISPs control that vital connection to users, acting as a chokepoint that companies have to pay.

ISPs and their advocates rely on lawmakers and the public not knowing the subtle and technical distinction between “paid prioritization,” “CDNs,” and “reasonable network management” in order to appear pro-net neutrality. A law without a ban on paid prioritization is net neutrality in name only.

Categories: Privacy

Protecting Email Privacy—A Battle We Need to Keep Fighting

Deep Links - Mon, 04/16/2018 - 14:18

On Friday, we filed an amicus brief in a federal appellate case called United States v. Ackerman, arguing something most of us already thought was a given—that the Fourth Amendment protects the contents of your emails from warrantless government searches.

Email and other electronic communications can contain highly personal, intimate details of our lives. As one court noted, through emails, “[l]overs exchange sweet nothings, and businessmen swap ambitious plans, all with the click of a mouse button.” In an age where almost all of us now communicate via email, text, or some other messaging service, electronic communications are, in effect, no different from letters, which the Supreme Court held were protected by the Fourth Amendment way back in 1878.

Most of us thought this was pretty uncontroversial, especially since another federal appellate court held as much in a 2010 case called United States v. Warshak. However, in Ackerman, the district court added a new wrinkle. It held the Fourth Amendment no longer applies once an email user violates a provider’s terms of service and the provider shuts down the user’s account.

Background on the Case

It’s hard to conceive how an agreement with your email provider to deliver and store your emails could eviscerate your Fourth Amendment rights. But that’s what the district court decided in Ackerman. AOL shut down Ackerman’s email account after its automated anti-child pornography filters were triggered by an image attached to one of his emails. Following federal law, AOL sent the email and attachments to the National Center for Missing and Exploited Children (NCMEC), which searched them, leading to an indictment on child pornography charges. Ackerman pleaded guilty but reserved his right to argue that the evidence used against him shouldn’t have been allowed because it was obtained through searches of his email without a warrant.

The case, which is now on appeal to the 10th Circuit Court of Appeals, has already been up to this court once before. In 2016, the Tenth Circuit, in an opinion written by then-judge Neil Gorsuch, determined NCMEC acted as a government agent when it opened Ackerman’s email and its attachments. The appellate court also held that because NCMEC did so without a warrant, its actions might have violated the Fourth Amendment if Ackerman had a reasonable expectation of privacy in his email. The Tenth Circuit then sent the case back to the district court to address that question.

Back at the district court, the government argued, and the court held, that Ackerman did not have a reasonable expectation of privacy in the single email and attachments after Ackerman violated AOL’s TOS and AOL shut down his account. The court reasoned that because AOL’s TOS prohibited users from engaging in illegal activity and said the company could take legal action for violations, Ackerman was on notice that there would be no objective reason for him to expect privacy over his emails once his account was suspended.

The District Court’s Logic Doesn’t Make Sense

The district court’s reasoning is simply wrong. Under the court’s logic, your Fourth Amendment rights rise or fall based on unilateral contracts with your service providers—contracts that almost none of us even read. As we argued in our brief, a company’s TOS should not dictate your constitutional rights. Companies draft terms of service to govern how their platforms may be used; these are rules about the relationship between you and your email provider, not you and the government.

Internet companies’ TOS—those lengthy, annoying, legalese missives that users must agree to—control what kind of content you can post, how you can use the platform, and how platforms can protect themselves against fraud. The terms of these contracts are extremely broad. Actions that could cause a provider to terminate your account for TOS violations include not just criminal activity such as distributing child pornography but also—as defined solely by the provider—things like sending an email containing a racial epithet, sharing a news article with your team at work without permission from the copyright holder, or marketing your small business to all of your friends without their advance consent. While some might find activities such as these objectionable or annoying, they shouldn’t result in losing your Fourth Amendment right to privacy over your emails.

Given the vast amount of storage many email providers offer, most of us now hold onto email for years. Accounts can hold tens of thousands of private, personal messages, photos, and videos, revealing intimate details about our private and professional lives. In 2010 the U.S. Circuit Court of Appeals for the Sixth Circuit recognized the important privacy interests we have in our email, and ruled in Warshak that email users have a Fourth Amendment-protected expectation of privacy in the communications they store with their email providers. This ruling has been adopted by every court that has dealt directly with the question of Fourth Amendment rights and email content. It is recognized by all the major communications service providers, who require a warrant before turning over contents of user communications to the government. And it’s followed by the government, which regularly seeks warrants to access user communications.

The trial court’s ruling in Ackerman allows private agreements like TOS to trump bedrock Fourth Amendment protections for private communications. The ruling doesn’t just affect child pornography cases—anyone whose account was shut down for any violation of TOS could lose Fourth Amendment protections over all the emails in their account. The Tenth Circuit should not allow such a sweeping invalidation of constitutional rights to stand.

Related Cases: Warshak v. USA; Warshak v. United States
Categories: Privacy

Congress Held 10 Hours of Hearings on Facebook. What’s Next?

Deep Links - Fri, 04/13/2018 - 20:02

After grilling Mark Zuckerberg for ten hours this past week, the big question facing Congress is, “What’s next?” The wide-ranging hearings covered everything from “fake news” to election integrity to the Cambridge Analytica scandal that spurred the hearings in the first place. Zuckerberg’s testimony did not give us much new information, but did underline what we already knew going in: Facebook’s surveillance-based, advertising-powered business model creates real problems for its users’ privacy rights.

But some of those problems can be fixed. As Congress considers what to do next, here are some of our suggestions.

DO Ask For Independent Audits

Facebook mentioned cooperating with FTC audits, but we’re not clear on whether or not Facebook is allowing independent auditors to inspect the data. If we allow Facebook to control the outside world’s visibility into its data collection practices, we can never be exactly sure if Facebook is actually complying with its own assertions. Facebook, along with other large tech companies that handle massive amounts of user data, should allow truly independent researchers to regularly audit their systems. Users should not have to take the company’s word on how their data is being collected, stored, and used.

DO Consider The Impact On Future Social Media Platforms

Tech giants come and go, and that is a good thing. In the mid-1990s, for example, it was hard to imagine a world where Microsoft was not the dominant force in the tech world. In the early 2000s, AOL email addresses and Instant Messenger were ubiquitous. Today, social media is dominated by a few platforms, but they too can be deposed. We need to make sure new regulations don’t forestall that possibility. If Congress decides to “do something” to address the problems it sees with Facebook, it’s worth considering how legislative proposals might help or hinder potential competitors.

For example, without Section 230 of the Communications Decency Act of 1996, Facebook could not have moved out of Mark Zuckerberg’s dorm room in 2004. Conversely, heavy-handed requirements, particularly requirements tied to specific kinds of technology (i.e., tech mandates), could stifle competition and innovation. Used without care, they could actually give even more power to today’s tech giants by ensuring that no new competitor could ever get started.

As a massive global company, Facebook has the resources to comply with anything Congress throws at it. But smaller competitors may not.

DO Watch Out For Unintended Effects On Speech

Several Senators and Representatives asked questions about how Facebook decided to remove content from their platform, accusing Facebook of bias and political censorship. Facebook has also been in the news recently for removing accounts and pages linked to Russian bots attempting to undermine American political discourse.

Creating a more transparent and neutral platform may sound like a worthy goal, but if Congress is going to write legislation, it should ensure that transparency and user control provisions don’t accidentally undermine online speech. For example, any disclosure laws must take care to protect user anonymity.

Additionally, the right to control your data should not turn into an unfettered right to control what others say about you—as so-called "right to be forgotten" approaches can often become. If true facts, especially facts that could have public importance, have been published by a third party, requiring their removal may mean impinging on others’ rights to free speech and access to information. A free and open Internet must be built on respect for the rights of all users.

DON’T Allow Big Tech To Tell Congress How To Regulate

Several times during his testimony, Mr. Zuckerberg called for privacy regulations both for ISPs and for platforms. While we agree privacy protections are important for both of these types of businesses, they shouldn’t be conflated. The rules we need for ISPs may be significantly different from those needed for platforms. In any event, Congress shouldn’t allow the tech giants to write their own rules given their strong incentives to favor the needs of shareholders over those of the public.

For example, it will be interesting to see how Facebook implements the EU’s General Data Protection Regulation (GDPR) for its non-European users. But if Congress tries to implement something similar here, we should all be watching to make sure Big Tech doesn’t gut the most important provisions.

DON’T Treat Social Media The Same As Traditional Media

The foundation of a functional democracy is the ability to communicate freely with one another and with our elected officials. Like television and radio before it, social media is now a crucial vehicle for that civic discussion. However, the rules that govern traditional media cannot be the same rules that govern social media. While that may seem obvious to some, Sen. Ted Cruz has already, incorrectly, called for the fairness doctrine to apply to digital communications platforms.

Additionally, Congress should not be taken in by the assertion that AI filters on social media platforms will magically fix all discourse problems. Overbroad censorship is inevitable, and marginalized groups will be the ones most affected. The ability of the public to freely communicate with each other, without government interference, was so important to the country's founders that not only did they put the right to free speech and a free press at the top of the list of Constitutional amendments, they also included, in the Constitution itself, an independent agency to facilitate ordinary communication: the U.S. Postal Service. We have to be able to talk to each other, and Congress should be careful to protect that essential cornerstone of democracy.

DO Talk to Technologists, Engineers, and Internet Lawyers

We’ve seen lots of jokes about the Senate hearings sounding like tech support talking to your grandparents about how to fix their Facebook. It’s not a surprise that many Senators don’t know the technical ways that Facebook works – and that’s actually okay. Participating in a large and complicated branch of government requires a different set of skills than running a technology company, and those skills don’t necessarily overlap with writing or understanding code. The country’s lawmakers didn’t have to be mechanics to legislate basic vehicle safety, nor did they have to be indigent widows to create the Social Security Administration.

What they do have to do is talk to some experts. Congress should be looking to a wide variety of technologists, engineers, and lawyers with deep experience in tech law and policy for advice on any proposals. As Rep. Chaffetz put it in a very different context, it’s time to bring in the nerds.

Bottom Line

Congress needs to get this right. Balancing our right to privacy with our rights to communicate and innovate may be hard, but it’s a task worth doing right.

Categories: Privacy

Large ISPs that Orchestrated the Repeal of the Open Internet Order Ask California’s Legislature to Stand Down and Just Let Them Win Already

Deep Links - Fri, 04/13/2018 - 18:01

The fight to protect Internet freedom is coming to California this month as the Senate Energy and Utilities Committee (April 17) and Senate Judiciary Committee (April 24) have scheduled hearings and votes on Senator Wiener’s S.B. 822, comprehensive legislation that would utilize the tools available to the state of California to promote net neutrality. As these critical dates approach the large ISPs have filed their opposition (see attached) and it is worth looking at what they say in the context of what they have been doing in D.C. and in the courts. It is also important to see what they are not saying to California Senators.

Parties that Decimated Federal Law are Decrying States Acting in Response

While opponents of S.B. 822 profess to prefer a federal solution, they have never really supported network neutrality at the federal level either. In fact, they spent more than $26 million to support the FCC’s effort to repeal network neutrality and are likely spending millions in California right now to sustain their victory. The money spent helps explain how the FCC reached a decision opposed by roughly 8 out of 10 Americans across the political spectrum.

The ultimate resolution to protecting network neutrality across the country is going to include restoring the 2015 Open Internet Order’s protections. That can happen in three ways: the FCC loses in court, the FCC reverses course, or, most likely, Congress passes a new law. Each of these scenarios is likely years in the making, and, in a matter of weeks, the so-called “Restoring Internet Freedom Order” will take effect. That leaves a very long gap of time for companies like Comcast and AT&T to strike exclusive deals with dominant Internet companies like Facebook to begin prioritizing their services and ensure no future small Internet competitors can compete and replace them (it was not that long ago that Facebook supported AT&T’s antitrust-violating merger with T-Mobile).

ISPs Oppose Net Neutrality Because They Want It to Be Legal for Them to Charge More for Access Under Paid Prioritization

The large ISPs pretend they support network neutrality by proclaiming their support for a law banning blocking and throttling. What they consistently leave out of all of their letters is their desire to legalize paid prioritization—the ability to pick winners and losers based on how much those services can pay the ISP. This is an especially serious problem when considering that the high-speed access market gives more than half of all Americans only one choice. Notably, Comcast abandoned its pledge not to engage in paid prioritization the moment the FCC began its process to repeal network neutrality protections, and no major ISP has ever fully committed to refrain from sorting the Internet by who can pay them more. They are already relying on their allies in Congress to promote their goal of charging more for Internet access simply because they have the leverage to demand more money.

Making paid prioritization legal gives Comcast, AT&T, and Verizon full control over deciding which Internet products and services get preferential treatment, and that has enormous value. In fact, a recent study by Adobe found that close to half of Internet users simply switch to a different service if it is slow to load, with up to 85 percent switching if a video service loads slowly. The power to harm online services by slowing them down relative to services willing to pay extra is the central danger to a free and open Internet, particularly as large ISPs are now vertically integrated with content companies. The temptation to self-deal and favor their own content to the detriment of alternatives is so extraordinary that it is the central antitrust claim in the Department of Justice’s lawsuit against AT&T’s merger with Time Warner. As the Department aptly stated, AT&T with control over shows like HBO has “the incentive and ability to use…that control as a weapon to hinder competition.” This is also why zero-rating is a problem (also addressed by S.B. 822) in the context of companies like AT&T exempting their own product (DirecTV) from their data caps and distorting the market.

The Biggest Myth ISPs Perpetrate on Sacramento Is That There Is No Network Neutrality Problem and That Repealing Network Neutrality Is a Return to the Status Quo

The worst talking point goes to US Telecom, which is effectively AT&T and Verizon, for saying we have never had a network neutrality problem. The history of net neutrality is full of violations by ISPs. It is almost humorous that a very old talking point used by companies like AT&T more than ten years ago finds new life at the state level. It is as if the Republican-led FCC that sanctioned Comcast for throttling BitTorrent were a figment of our imagination, or as if AT&T never blocked Skype, Google Voice, or FaceTime (let alone zero-rated its own product, DirecTV, which the FCC expressed concerns about until Chairman Ajit Pai was sworn into office).

What the FCC did in 2017 will likely go down as the worst Internet policy decision in history, because it was such a radical departure. Despite the fact that the ISP market is more concentrated than ever, and that even the Trump Administration’s Department of Justice worries about ISPs exerting power to harm competition, this FCC concluded that it was proper to absolve itself of responsibility. There is nothing normal about that decision when compared to the previous decades of FCCs that regularly promoted network neutrality and took action against ISPs that violated it. And after years of litigating—and losing—against ISPs in its efforts to promote network neutrality under Title I of the Communications Act, it is completely insincere to argue that returning ISPs to Title I status is going back to FCC regulation as intended.

If all of this nonsense large ISPs like AT&T, Comcast, and Verizon are pushing at your elected state officials in Sacramento has you upset, then you need to take action and make sure your voice is heard as SB 822 comes to a vote.

Take Action

Tell California's State Senators to Stand up for Net Neutrality

Categories: Privacy

Facebook Doesn't Need To Listen Through Your Microphone To Serve You Creepy Ads

Deep Links - Fri, 04/13/2018 - 16:04

In ten total hours of testimony in front of the Senate and the House this week, Mark Zuckerberg was able to produce only one seemingly straightforward, privacy-protective answer. When Sen. Gary Peters asked Zuckerberg if Facebook listens to users through their cell phone microphones in order to collect information with which to serve them ads, Zuckerberg confidently said, “No.”

What he left out, however, is that Facebook doesn’t listen to users through their phone microphones because it doesn’t have to. Facebook actually uses even more invasive, invisible surveillance and analysis methods, which give it enough information about you to produce uncanny advertisements all the same.

Users' fear and even paranoia about hyper-targeted ads is warranted—just not for the exact reasons they might think.

Suspicions that Facebook listens to its users’ conversations have been swirling for years, prompting statements of denial from Facebook leadership and former employees. Facebook does request microphone permissions to handle any videos you post, as well as to identify music or TV shows when you use the “Listening to” status feature. But technical investigations have confirmed that you can be confident the Facebook app is not surreptitiously turning on your phone mic and listening in on your conversations.

But how does Facebook know to serve you an ad for a specific product right after you talk about it? What explains seeing ads for things you have never searched for or communicated about online? The list is long. Instead of listening to your conversations through your phone, Facebook relies on a range of other tracking and analysis methods.

These tracking and analysis methods power not only those too on-the-nose ads, but also invasive “People You May Know” recommendations.

Users are onto this. If you have ever been creeped out by an ad for a product popping up right after you were talking out loud about it, your fear and even paranoia are warranted—just not for the exact reasons you might think. No matter how Facebook achieves its frighteningly accurate ads and suggestions, the end result is the same: an uncomfortable, privacy-invasive user experience.

But Zuckerberg’s testimony this week and other recent statements have made it clear that he is not listening to users’ legitimate feedback and concerns here. Putting words into the mouths of millions of users, Zuckerberg said during his testimony that Facebook users prefer a “relevant” ad experience—that is, a highly targeted one:

What we found is that even though some people don’t like ads, people really don’t like ads that aren’t relevant. And while there is some discomfort for sure with using information in making ads more relevant, the overwhelming feedback that we get from our community is that people would rather have us show relevant content there than not.

If that were the case, Congress would not have called Facebook’s CEO to testify on privacy concerns. And recent polls confirm that, while some users like targeted ads, the majority of users do not consider targeted ads “better” than traditional forms of advertising, and 63% would like to see less of them.

Zuckerberg condescendingly called the idea that Facebook is listening in via phone mics a “conspiracy theory.” But users are confused because Facebook has so far refused to be more up-front about how the company collects and analyzes their information. This lack of transparency about what is really going on behind the Facebook curtain is what can lead users to jump to technically inaccurate—but emotionally on-point—explanations for creepy ad phenomena.

Categories: Privacy

Building the “Great Collective Organism of the Mind” at The John Perry Barlow Symposium

Deep Links - Fri, 04/13/2018 - 11:42

Individuals from the furthest corners of cyberspace gathered Saturday to celebrate EFF co-founder, John Perry Barlow, and discuss his ideas, life, and leadership.

The John Perry Barlow Symposium, graciously hosted by the Internet Archive in San Francisco, brought together a collection of Barlow’s favorite thinkers and friends to discuss his ideas in fields as diverse as fighting mass surveillance, opposing censorship online, and copyright, in a bittersweet event that appropriately honored his legacy of Internet activism and defending freedom online.

Thanks to the magic of fair use, you can relive the Symposium any time by visiting the Internet Archive. Video begins at 48:00.

[Embedded video: a recording of the Symposium, served from archive.org.]

After a touching opening from Anna Barlow, John Perry Barlow’s daughter, EFF Executive Director Cindy Cohn kicked off the speaker portion of the event:

“To me, what Barlow did for the Internet was to articulate, more and more beautifully than almost anyone, that this new network had the possibility of connecting all of us. He saw that the Internet would not be just a geeky hobby or toy like ham radios, or only a military or academic thing, which is what most folks who knew about it believed.  Starting from the Deadheads who used it to gather, he saw it as a new lifeblood for humans who longed for connection, but had been separated.”

EFF Executive Director Cindy Cohn.

While the man himself may not have been present, Barlow’s connection—and influence—was palpable throughout the Symposium, with a dozen distinguished speakers and hundreds in attendance conversing, delivering remarks, and offering up questions about the past, the present, the future, and Barlow’s impact on all of it. The first speaker (and EFF’s co-founder along with Barlow), Mitch Kapor, told the audience: “I can feel his generous and optimistic spirit right here in the room today inspiring all of us.”

EFF co-founder Mitch Kapor with Pam Samuelson.

Barlow’s genius, said Kapor, was that in 1990, while most Internet usage was research- and military-based, he “absolutely nailed the Internet’s essential character and what was going to happen.”

Samuelson and Barlow speak with Bruce Lehman, head of the USPTO in 1996.

Pam Samuelson, Distinguished Professor of Law and Information at the University of California, Berkeley, pointed out that Barlow’s 1994 treatise on copyright in the age of the Internet, The Economy of Ideas, has been cited a whopping 742 times in legal literature. But he didn’t just give lawyers an article to cite—Barlow helped the world understand that copyright had a civil liberty dimension and galvanized people to become copyright activists at a time when traditional notions of information access would be shaken to their core.

Freedom of the Press Foundation's Trevor Timm.

Trevor Timm described Barlow as “the guiding light” and “the organizational powerhouse” of the Freedom of the Press Foundation, which he co-founded with Barlow in 2012. On the day the organization launched, Timm recalled, Barlow wrote: “When a government becomes invisible, it becomes unaccountable. To expose its lies, errors, and illegal acts is not treason, it is a moral responsibility. Leaks become the lifeblood of the Republic.” His hope was that the organization would inspire a new generation of whistleblowers—and the next speaker, Edward Snowden, made clear he’d achieved this goal, telling the audience: “He raised a message, sounded an alarm, that I think we all heard. He did not save the world, none of us can—but maybe he started the movement that will.”

Whistleblower Edward Snowden talks about Barlow's impact.

The speakers answered questions on Facebook privacy, their disagreements with Barlow (of which there were many, ranging from the role of government overall to whether copyright was alive or dead), and what comes next in our understanding of the web. Cory Doctorow, EFF Special Advisor and emcee of the Symposium alongside Cindy Cohn, answered this in “Barlovian” fashion: “We could sit here and try to spin scenarios until the cows come home and not get anything done, or we can roll up our sleeves and do something.”

EFF’s former Executive Director (and current director of the Tor Project) Shari Steele began the second panel, discussing Barlow’s deeply-held belief in the First Amendment, insistence on hearing opposing viewpoints, and interest in bringing together diverse opinions: “That’s how he thrived...He was always encouraging people to talk to each other—to have conversations where you normally maybe wouldn’t have thought this was somebody you would have something in common with. He was fascinating, dynamic, and helped us create an Internet that has all sorts of fascinating and dynamic speech in it.”

Shari Steele, John Gilmore, and Joi Ito.

John Gilmore, EFF Co-founder and Board Member, invoked French philosopher and anthropologist Teilhard de Chardin, whose ideas Barlow specifically referenced in his writings. Barlow’s interest in mind-altering experiences, like taking LSD, said Gilmore, wasn’t just related to his love of the Internet: it came from the exact same place, an interest in creating the “great collective organism of mind” that Barlow hoped we might one day become.

Steven Levy, author and editor at large at Wired.

Author Steven Levy, the writer of Hackers, suggested that though Barlow may be well known as a writer of lyrics for the Grateful Dead, he will possibly be even better known for his words about the digital revolution. In his view, Barlow was a terrific writer and a master storyteller “capable of pulling off a quadruple-axel level of nonfiction difficulty.” His gift was to be able to not only “explain what was happening to the out-of-it Mr. Joneses of the world, but to encapsulate what was happening, to celebrate it, and to warn against its dangers in a way that would enlighten even the...people who knew the digital world—and to do it in a way that the reading was a pure pleasure.”

Joi Ito, Director of the MIT Media Lab.

Joi Ito, Director of the MIT Media Lab, described Barlow’s sense of humor and optimism—the same “you see when you talk to the Dalai Lama.” Today’s dark moments for the Internet aren’t the end, he said, and reminded everyone that Barlow had an elegant way of bringing these elements together with activism and resolve. His deep sense of humor came “from knowing how terrible the world is, but still being connected to true nature.” Ito also touched upon Barlow's groundbreaking essay A Declaration of the Independence of Cyberspace as a crucial "battle cry for us to rally around," taking the budding cyberpunk movement and helping it become a socio-political one.

The second panel fielded questions on encryption, Barlow’s uncanny ability to show up in the weirdest places, and how we can inspire the next generation of Barlows. Echoing EFF’s mission of bringing together lawyers, technologists, and activists, Joi Ito said that we will need engineers, lawyers, and social scientists to come together to redesign technology and change law, and also change society—and that one of Barlow’s amazing abilities was that he could talk to, and influence, all of these people.

Twenty-seven years later, EFF continues to work at the bleeding edge of technology to protect the rights of the users in issues as diverse as net neutrality, artificial intelligence, opposing censorship, and fighting mass surveillance.

Amelia Barlow reads from the 25 Principles for Adult Behavior.

Amelia Barlow, John Perry’s daughter, thanked the “vast web” of infinitely interesting and radical human beings around the world whom he cared about and who cared about him. “Never before have you been able to draw more immediately and completely upon him—and I want you to feel that,” she said, before reading his now-famous 25 Principles for Adult Behavior.

Anna Barlow reflects on her father's life.

As Anna Barlow said in her opening remarks, Barlow’s adventures didn’t stop in his later years—they just started coming to him. Some of the most brilliant thinkers in the world showed that this will remain true even while his physical presence is missed. Perhaps the Symposium was one step towards creating the “great collective organism of mind” that Barlow hoped to see us all become. And at the very least, Anna said, he doesn’t have to be bummed about missing parties anymore—because now he can go to all of them.

Cory Doctorow gives parting words on honoring Barlow.

Cory Doctorow closed the Symposium with a request:

“This week—sit down and have the conversation with someone who’s already primed to understand the importance of technology and its relationship to human flourishing and liberty. And then I want you to go varsity. And I want you to have that conversation with someone non-technical, someone who doesn’t understand how technology could be a force for good, but is maybe becoming keenly aware of how technology could be a force for wickedness.

And ensure that they are guarded against the security syllogism. Ensure that they understand too that we need not just to understand that technology can give us problems, but we must work for ways in which technology can solve our problems too.

And if you do those things you will honor the spirit of John Perry Barlow in a profound way that will carry on from this room and honor our friend who we lost so early, and who did so much for us.”

Join EFF

Donate in honor of John Perry Barlow

Categories: Privacy

D.C. Court: Accessing Public Information is Not a Computer Crime

Deep Links - Thu, 04/12/2018 - 18:17

Good news for anyone who uses the Internet as a source of information: A district court in Washington, D.C. has ruled that using automated tools to access publicly available information on the open web is not a computer crime—even when a website bans automated access in its terms of service. The court ruled that the notoriously vague and outdated Computer Fraud and Abuse Act (CFAA)—a 1986 statute meant to target malicious computer break-ins—does not make it a crime to access information in a manner that the website doesn’t like if you are otherwise entitled to access that same information.

The case, Sandvig v. Sessions, involves a First Amendment challenge to the CFAA’s overbroad and imprecise language. The plaintiffs are a group of discrimination researchers, computer scientists, and journalists who want to use automated access tools to investigate companies’ online practices and conduct audit testing. The problem: the automated web browsing tools they want to use (commonly called “web scrapers”) are prohibited by the targeted websites’ terms of service, and the CFAA has been interpreted by some courts as making violations of terms of service a crime. The CFAA is a serious criminal law, so the plaintiffs have refrained from using automated tools out of an understandable fear of prosecution. Instead, they decided to go to court. With the help of the ACLU, the plaintiffs have argued that the CFAA has chilled their constitutionally protected research and journalism.

The CFAA makes it illegal to access a computer connected to the Internet “without authorization,” but the statute doesn’t tell us what “authorization” or “without authorization” means. Even though it was passed in the 1980s to punish computer intrusions, it has metastasized in some jurisdictions into a tool for companies and websites to enforce their computer use policies, like terms of service (which no one reads). Violating a computer use policy should by no stretch of the imagination count as a felony.

In today’s networked world, where we all regularly connect to and use computers owned by others, this pre-Internet law is causing serious problems. It has not only chilled discrimination researchers and journalists, but also security researchers, whose work is necessary to keep us all safe. It is also threatening the open web, as big companies try to use the law as a tool to block competitors from accessing publicly available data on their sites. Accessing publicly available information on the web should never be a crime. As law professor Orin Kerr has explained, publicly posting information on the web and then telling someone they are not authorized to access it is “like publishing a newspaper but then forbidding someone to read it.”

Luckily, Judge John Bates recognized the critical role that the Internet plays in facilitating freedom of expression—and that a broad reading of the CFAA “threatens to burden a great deal of expressive activity, even on publicly accessible websites.” The First Amendment protects not only the right to speak, but also the right to receive information, and the court held that the fact “[t]hat plaintiffs wish to scrape data from websites rather than manually record information does not change the analysis.” According to the court:

"Scraping is merely a technological advance that makes information collection easier; it is not meaningfully different from using a tape recorder instead of taking written notes, or using the panorama function on a smartphone instead of taking a series of photos from different positions.”

Judge Bates did not strike down the law as unconstitutional, but he did rule that the statute must be interpreted narrowly to avoid running afoul of the First Amendment. Judge Bates also said that a narrow construction was the most common sense reading of the statute and its legislative history.

Judge Bates is the second judge in the past year to recognize that a broad interpretation of the CFAA will negatively impact open access to information on the web. Last year, Judge Edward Chen found that a “broad interpretation of the CFAA invoked by LinkedIn, if adopted, could profoundly impact open access to the Internet, a result that Congress could not have intended when it enacted the CFAA over three decades ago.”

The government argued that the plaintiffs did not have standing to pursue the case, in part because there was no “plausible threat” that the government was going to prosecute them for their work. But as the judge pointed out, the government has attempted to prosecute “harmless ToS violations” in the past. 

The web is the largest, ever-growing data source on the planet. It is a critical resource for journalists, academics, businesses, and ordinary individuals alike. Meaningful access sometimes requires the assistance of technology to automate and expedite an otherwise tedious process of accessing, collecting and analyzing public information. Using technology to expedite access to publicly available information shouldn’t be a crime—and we’re glad to see another court recognize that.
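
To make concrete how mundane this kind of automated access is, here is a minimal sketch of a web scraper written in Python using only the standard library. The URL and the choice of headline tags are hypothetical placeholders, and any real project should also respect a site's rate limits and robots.txt.

    # Minimal sketch of "automated access": fetch a public page and collect its
    # headlines. The URL and the <h2> tag choice are hypothetical placeholders.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class HeadlineParser(HTMLParser):
        """Collects the text of every <h2> element on a page."""

        def __init__(self):
            super().__init__()
            self.in_headline = False
            self.headlines = []

        def handle_starttag(self, tag, attrs):
            if tag == "h2":
                self.in_headline = True

        def handle_endtag(self, tag):
            if tag == "h2":
                self.in_headline = False

        def handle_data(self, data):
            if self.in_headline and data.strip():
                self.headlines.append(data.strip())

    def scrape_headlines(url):
        # Functionally, this is the same request a browser makes when you load the page.
        with urlopen(url) as response:
            html = response.read().decode("utf-8", errors="replace")
        parser = HeadlineParser()
        parser.feed(html)
        return parser.headlines

    if __name__ == "__main__":
        # Hypothetical public page; substitute a real URL you are entitled to read.
        for headline in scrape_headlines("https://example.com/news"):
            print(headline)

The only difference from pressing refresh in a browser is that the results end up in a list instead of on a screen.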

Related Cases: hiQ v. LinkedIn
Categories: Privacy

New Hampshire Court: First Amendment Protects Criticism of “Patent Troll”

Deep Links - Thu, 04/12/2018 - 13:50

A New Hampshire state court has dismissed a defamation suit filed by a patent owner unhappy that it had been called a “patent troll.” The court ruled [PDF] that the phrase “patent troll” and other rhetorical characterizations are not the type of factual statements that can be the basis of a defamation claim. While this is a fairly routine application of defamation law and the First Amendment, it is an important reminder that patent assertion entities – or “trolls” – are not shielded from criticism. Regardless of your view about the patent system, this is a victory for freedom of expression.

The case began back in December 2016 when patent assertion entity Automated Transactions, LLC (“ATL”) and inventor David Barcelou filed a complaint [PDF] in New Hampshire Superior Court against 13 defendants, including banking associations, banks, law firms, lawyers, and a publisher. ATL and Barcelou claimed that all of the defendants criticized ATL’s litigation in a way that was defamatory. The court summarizes the claims as follows: 

The statements the plaintiffs allege are defamatory may be separated into two categories. The first consists of instances in which a defendant referred to a plaintiff as a “patent troll.” The second is composed of characterizations of the plaintiffs’ conduct as a “shakedown,” “extortion,” or “blackmail.”

These statements were made in a variety of contexts. For example, ATL complained that the Credit Union National Association submitted testimony to the Senate Committee on the Judiciary [PDF] that referred to ATL as a “troll” and suggested that its business “might look like extortion.” The plaintiffs also complained about an article in Crain’s New York Business that referred to Barcelou as a “patent troll.” The complaint alleges that the article included a photo of a troll that “paints Mr. Barcelou in a disparaging light, and is defamatory.”

ATL had filed over 50 lawsuits against a variety of banks and credit unions claiming that their ATMs infringed ATL’s patents. ATL also sent many demand letters. Some in the banking industry complained that these suits and demands lacked merit. There was some support for this view. For example, in one case, the Federal Circuit ruled that several of ATL’s asserted patent claims were invalid and that the defendants did not infringe. The defendants did not infringe because the patents were all directed to ATMs connected to the Internet and it was “undisputed” that the defendants’ products “are not connected to the Internet and cannot be accessed over the Internet.”

Given the scale of ATL’s litigation, it is not surprising that it faced some criticism. Yet, the company responded to that criticism with a defamation suit. Fortunately, the court found the challenged statements to be protected opinion. Justice Brian T. Tucker explained:

[E]ach defendant used “patent troll” to characterize entities, including ATL, which engage in patent litigation tactics it viewed as abusive. And in each instance the defendant disclosed the facts that supported its description and made ATL, in the defendant's mind, a patent troll. As such, to the extent the defendants accused the plaintiffs of being a “patent troll,” it was an opinion and not actionable. 

The court went on to explain that “patent troll” is a term without a precise meaning that “doesn’t enable the reader or hearer to know whether the label is true or false.” The court noted that the term could encompass a broad range of activity (which some might see as beneficial, while others see it as harmful).

The court also ruled that challenged statements such as “shakedown” and comparisons to “blackmail” were non-actionable “rhetorical hyperbole.” This is consistent with a long line of cases finding such language to be protected. Indeed, this is why John Oliver can call coal magnate Robert Murray a “geriatric Dr. Evil” and tell him to “eat shit.” As the ACLU has put it, you can’t sue people for being mean to you. Strongly expressed opinions, whether you find them childish or hilariously apt (or both), are part of living in a free society.

Justice Tucker’s ruling is a comprehensive victory for the defendants and free speech. ATL and Barcelou believe they are noble actors seeking to vindicate property rights. The defendants believed that ATL’s conduct made it an abusive patent troll. The First Amendment allows both opinions to be expressed.

Categories: Privacy

Day of Action: Help California Pass a Gold Standard Net Neutrality Bill

Deep Links - Thu, 04/12/2018 - 13:49

In December of 2017, contrary to the will of millions of Americans, the FCC made the decision to abandon net neutrality protections. On the first day of business in the California state legislature, State Sen. Scott Wiener introduced a bill that would bring back those protections and more for Californians.

S.B. 822 would make getting state money or using state resources contingent on the ISP adhering to net neutrality principles. This includes the practices the FCC banned in the 2015 Open Internet Order—blocking, throttling, and paid prioritization—and picks up where the FCC left off by also tackling the practice of zero rating. This bill is a gold standard of net neutrality legislation and its passage would give California the strongest protections in the country.

Naturally, big ISPs like Comcast, AT&T, and Spectrum (née Time Warner Cable) don’t want to see this pass. That’s why we’re rallying in support of this bill before its hearings in front of the members of the state senate Utilities and Energy Committee and Judiciary Committee.

Californians: use the tool below to send tweets to the members of these committees to tell them to secure a free and open Internet for your state.

Take Action

Tell California's State Senators to Stand up for Net Neutrality

Categories: Privacy

No, Section 230 Does Not Require Platforms to Be “Neutral”

Deep Links - Thu, 04/12/2018 - 13:05

One jaw-dropping moment during the Senate’s hearing on Tuesday came when Sen. Ted Cruz asked Facebook CEO Mark Zuckerberg, “Does Facebook consider itself a neutral public forum?” Unsatisfied by Zuckerberg’s response that Facebook is a “platform for all ideas,” Sen. Cruz continued, “Are you a First Amendment speaker expressing your views, or are you a neutral public forum allowing everyone to speak?”

When members of Congress recite myths about how Section 230 works, it demonstrates a frightening lack of seriousness about protecting our right to speak and gather online.

After more back-and-forth, Sen. Cruz said, “The predicate for Section 230 immunity under the CDA is that you’re a neutral public forum. Do you consider yourself a neutral public forum, or are you engaged in political speech, which is your right under the First Amendment?” It was a baffling question. Sen. Cruz seemed to be suggesting, incorrectly, that Facebook had to make a choice between enjoying protections for free speech under the First Amendment and enjoying the additional protections that Section 230 offers online platforms.

Online platforms are within their First Amendment rights to moderate the content they host however they like, and they’re additionally shielded by Section 230 from many types of liability for their users’ speech. It’s not one or the other. It’s both.

Indeed, one of the reasons why Congress first passed Section 230 was to enable online platforms to engage in good-faith community moderation without fear of taking on undue liability for their users’ posts. In two important early cases over Internet speech, courts allowed civil defamation claims against Prodigy but not against CompuServe; since Prodigy deleted some messages for “offensiveness” and “bad taste,” a court reasoned, it could be treated as a publisher and held liable for its users’ posts. Former Rep. Chris Cox recalls reading about the Prodigy opinion on an airplane and thinking that it was “surpassingly stupid.” That realization led to Cox and then-Rep. Ron Wyden introducing the Internet Freedom and Family Empowerment Act, which would later become Section 230.

The misconception that platforms can somehow lose Section 230 protections for moderating users’ posts has gotten a lot of airtime lately—even serving as the flawed premise of a recent Wired cover story. It’s puzzling that Sen. Cruz would misrepresent one of the most important laws protecting online speech—particularly just a few days after he and his Senate colleagues voted nearly unanimously to undermine that law. (For the record, it’s also puzzling that Zuckerberg claimed not to be familiar with Section 230 when Facebook was one of the largest Internet companies lobbying to undermine it.)

The context of Sen. Cruz’s line of questioning offers some insight into why he misrepresented Section 230: like several Republican members of Congress in both hearings, Sen. Cruz was raising concerns about Facebook allegedly removing posts that represented conservative points of view more often than liberal ones.

There are many good reasons to be concerned about politically motivated takedowns of legitimate online speech. Around the world, the groups silenced on Facebook and other platforms are often those that are marginalized in other areas of public life too.

It’s foolish to suggest that web platforms should lose their Section 230 protections for failing to align their moderation policies to an imaginary standard of political neutrality. Trying to legislate such a “neutrality” requirement for online platforms—besides being unworkable—would be unconstitutional under the First Amendment. In practice, creating additional hoops for platforms to jump through in order to maintain their Section 230 protections would almost certainly result in fewer opportunities to share controversial opinions online, not more: under Section 230, platforms devoted to niche interests and minority views can thrive.

What’s needed to ensure that a variety of views have a place on social media isn’t creating more legal exceptions to Section 230. Rather, companies should institute reasonable, transparent moderation policies. Platforms shouldn’t over-rely on automated filtering and unintentionally silence legitimate speech and communities in the process. And platforms should add features to give users themselves—not platform owners or third parties—more control over what types of posts they see.

When Congress passed SESTA/FOSTA, members made the mistake of thinking that they could tackle a real-world problem by shifting more civil and criminal liability to online platforms. When members of Congress recite myths about how Section 230 works, it demonstrates a frightening lack of seriousness about protecting our right to speak and gather online.

Categories: Privacy

User Privacy Isn't Solely a Facebook Issue

Deep Links - Thu, 04/12/2018 - 12:26

During Congressional hearings about Facebook’s data practices in the wake of the Cambridge Analytica fiasco, Mark Zuckerberg drew an important distinction between what we expect from our Internet service providers (ISPs, such as Comcast or Verizon) as opposed to platforms like Facebook that operate over the Internet.

Put simply, an ISP is a service you pay to access the Internet. Once you get online, you run into a whole series of edge providers. Some, like Netflix, also charge you for access to their services. Others, like Facebook and Google, are platforms that you use without paying, which support themselves using ads. There’s a whole spectrum of services that make up Internet use, but the thing they all have in common is that they are gathering data when you use them. How they use it can differ widely.

The divide between ISPs and edge providers is most obvious in the context of the net neutrality debate. Platforms, by and large, want as many people accessing the Internet as possible, as easily as possible. ISPs want to charge customers as much as possible for that access, and also want to start double-dipping by charging platforms a fee, as protection money, so the ISP doesn’t throttle or ‘de-prioritize’ your connection when you visit their websites.

Zuckerberg brought up that difference a couple of times during the hearings. He mentioned how he had no ISP choice when he founded Facebook in college and that paid prioritization would have hobbled his new company. Whatever you think of Facebook, it’s not good for the Internet to have ISPs deciding what platforms are allowed to exist and succeed.

The distinction is also apparent in the privacy context. Your ISP is your conduit to everything you do online, so it has the opportunity to be even more invasive of your privacy than Facebook. You can protect yourself with VPNs and HTTPS, but the ISP still has a privileged position and is likely to be able to put together a pretty complete picture of most subscribers’ online habits.

That privileged position means that protecting your privacy vis-à-vis an ISP is a different issue than protecting it with respect to online platforms. Besides, you’re already paying your ISP for services; the idea that you’re willingly trading your privacy in exchange for a service does not apply.

ISPs, however, have attempted to muddy the waters to avoid regulation, by insisting that Congress come up with a ‘one size fits all’ approach to online privacy.

The issue was illustrated during the hearing on Tuesday, when Senator Roger Wicker of Mississippi posed this question:

I understand with regard to suggested rules or suggested legislation, there are at least two schools of thought out there.

One would be the ISPs, the Internet service providers, who are advocating for privacy protections for consumers that apply to all online entities equally across the entire Internet ecosystem.

Now, Facebook is an edge provider on the other hand. It is my understanding that many edge providers, such as Facebook, may not support that effort, because edge providers have different business models than the ISPs and should not be considered like services.

So, do you think we need consistent privacy protections for consumers across the entire Internet ecosystem that are based on the type of consumer information being collected, used or shared, regardless of the entity doing the collecting, reusing or sharing?

ISPs are not truly advocating for privacy protections. When AT&T takes out a full-page ad in major newspapers about an “Internet Bill of Rights,” it’s not users they are seeking to protect. It’s the profits they can make from things like paid prioritization and monetizing your data. ISPs already make money by charging users; they want to double-dip by charging platforms, and triple-dip by using your data for advertising, much the way Facebook does. But unlike Facebook, ISPs don’t rely on ads for their entire revenue stream.

ISPs’ goal is a federal law that prohibits some activities but leaves them the tactics that make the most money—while preventing states from passing more stringent protections.

Both Facebook and ISPs present privacy concerns, but while Facebook is in the spotlight for its practices right now, we should not let ISPs off the hook for this.

No Escape From ISP Practices

As hard as it may be to escape Facebook, ISPs have an even tighter hold on their customers.

Most Americans don’t have a choice when it comes to high-speed Internet, as Zuckerberg mentioned in his testimony. There are a lot of historical reasons for this, but one simple one is that it’s expensive to break into a new ISP market, particularly when the incumbent can temporarily lower prices in that neighborhood and pay for it by jacking up prices elsewhere where they face no competition. Besides that, big ISPs have divided up the nation geographically to avoid competing.

Another factor is that large ISPs benefit from the regulatory landscape at the expense of small, upstart ISPs that might otherwise challenge them. For instance, ISPs did have privacy regulations applied to them, but lobbied Congress and successfully got them repealed. The end of those regulations helped cement large ISP power and block competition. Small ISPs may want to offer a service with privacy protections to users, but the market is already so uneven that they can barely compete. The market can’t provide customers with alternatives that protect privacy, and so regulation of the large ISPs is necessary.

In theory, you can leave Facebook and use Twitter or Snapchat, or a noncommercial platform like Mastodon. In practice, the company’s user base is so large that it’s able to keep users simply because it’s where friends and family already are. Zuckerberg was also asked to name Facebook’s competition, and the closest he could claim was that there are other services that overlap with some of the things Facebook offers.

Badly written laws in reaction to Cambridge Analytica could end up solidifying Facebook’s dominance, as only a company with its resources could comply. Protecting the privacy of Internet users is critically important, and a law that squashed competition to Facebook would only harm users’ privacy in the long run.

There are a number of things that can be done to make platforms like Facebook accountable for their privacy policies. Making it so that users can truly delete the data these platforms collect, take their data with them when they leave, and understand and customize the privacy policies would go a long way. There are a whole host of things—practical, useful things—that can be done without creating laws that only a company the size of Facebook can afford to follow.

In his answer to Wicker’s question, Zuckerberg said:

I would differentiate between ISPs, which I consider to be the pipes of the Internet, and the platforms like Facebook or Google or Twitter, YouTube that are the apps or platforms on top of that.

I think in general, the expectations that people have of the pipes are somewhat different from the platforms. So there might be areas where there needs to be more regulation in one and less in the other, but I think that there are going to be other places where there needs to be more regulation of the other type.

Zuckerberg wasn’t totally wrong when he said this. ISPs cannot be escaped, collect huge amounts of data by virtue of being your conduit to the Internet, and do not need to monetize that data to survive. Subscription edge providers also do not need to monetize data to make money, but still collect some data: Netflix tracks what people watch and for how long, for example. And then there are ad-supported platforms where user data is the basis of their business model.

There are all sorts of ways our privacy is impacted by what happens online. It’s vital that all companies make their policies transparent and that users have many options to choose from, so that they can pick the trade-offs they are comfortable with.

Categories: Privacy

Despite What Zuckerberg’s Testimony May Imply, AI Cannot Save Us

Deep Links - Wed, 04/11/2018 - 19:00

Yesterday and today, Mark Zuckerberg finally testified before the Senate and House, facing Congress for the first time to discuss data privacy in the wake of the Cambridge Analytica scandal. As we predicted, Congress didn’t stick to Cambridge Analytica. Congress also grilled Zuckerberg on content moderation—i.e., private censorship—and it’s clear from his answers that Facebook is putting all of its eggs in the “Artificial Intelligence” basket.

But automated content filtering inevitably results in over-censorship. If we’re not careful, the recent outrage over Facebook could result in automated censors that make the Internet less free, not more.

Facebook Has an “AI Tool” For Everything—But Says Nothing about Transparency or Accountability

Zuckerberg’s most common response to any question about content moderation was an appeal to magical “AI tools” that his team would deploy to solve any and all problems facing the platform. These AI tools would be used to identify troll accounts, election interference, fake news, terrorism content, hate speech, and racist advertisements—things Facebook and other content platforms already have a lot of trouble reliably flagging today, with thousands of human content moderators. Although Zuckerberg mentioned hiring thousands more content reviewers in the short term, it is uncertain whether human review will continue to play an integral role in Facebook’s content moderation system over the long term.

Most sizable automated moderation systems in use today rely on some form of keyword tagging, followed by human moderators. Our most advanced automated systems are far from being able to perform the functions of a human moderator accurately, efficiently, or at scale. Even the research isn’t there yet—especially not with regard to nuances of human communication like sarcasm and irony. Beyond AI tools’ immaturity, an effective system would have to adapt to regional linguistic slang and differing cultural norms as well as local regulations. In his testimony, Zuckerberg admitted Facebook still needs to hire more Burmese language speakers to moderate the type of hate speech that may have played a role in promoting genocide in Myanmar. “Hate speech is very language-specific,” Zuckerberg admits. “It's hard to do [moderation] without people who speak the local language.”
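
To see how limited that first automated pass typically is, consider a bare-bones keyword-tagging sketch in Python. The keyword list and sample posts below are invented for illustration: the filter flags a legitimate complaint because it happens to contain a listed word, while the sarcastic post slips straight through, which is exactly why human reviewers with the right language and cultural context remain essential.

    # Bare-bones sketch of keyword tagging, the crude first pass many moderation
    # pipelines rely on before human review. The keyword list is invented.
    FLAGGED_KEYWORDS = {"attack", "scam", "hate"}

    def needs_human_review(post_text):
        """Return True if any flagged keyword appears in the post."""
        words = {word.strip(".,!?").lower() for word in post_text.split()}
        return bool(words & FLAGGED_KEYWORDS)

    posts = [
        "This pricing policy is a scam and people should know it",  # flagged, though legitimate criticism
        "Wow, what a totally fair and neutral decision. Sure.",     # sarcasm slips straight through
    ]

    review_queue = [post for post in posts if needs_human_review(post)]
    print(review_queue)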

An adequate automated content moderation system would have to adapt with time as our social norms evolve and change, and as the definition of “offensive content” changes with them. This means processing and understanding social and cultural context, how they evolve over time, and how they vary between geographies. AI research has yet to produce meaningful datasets and evaluation metrics for this kind of nuanced contextual understanding.

But beyond the practical difficulties associated with automated content tagging, automatic decision-making also brings up numerous ethical issues. Decision-making software tends to reflect the prejudices of its creators and, of course, the biases embedded in its data. Released last year, Google’s state-of-the-art Perspective API for ranking comment toxicity originally gave the sentence “I am a black woman” an absurd 85% chance of being perceived as “toxic.”

Given the fact that they are likely to make mistakes, how can we hold Facebook’s algorithms accountable for their decisions? As research in natural language processing shifts towards deep learning and the training and usage of neural networks, algorithmic transparency in this field becomes increasingly difficult—yet it also becomes increasingly important. These issues of algorithmic transparency, accountability, data bias, and creator bias are particularly critical for Facebook, a massive global company whose employees speak only a fraction of the languages that its user base does.

Zuckerberg doesn’t have any good answers for us. He referred Congress to an “AI ethics” team at Facebook but didn’t disclose any processes or details. As with most of Congress’s difficult and substantive questions, he’ll have his team follow up.

"Policing the Ecosystem”

Zuckerberg promised Congress that Facebook would take “a more active view in policing the ecosystem,” but he failed to make meaningful commitments regarding the transparency or accountability of new content moderation policies. He also failed to address the problems that come hand-in-hand with overbroad content moderation, including one of the most significant problems: how it creates a new lever for online censorship that will impact marginalized communities, journalists who report on sensitive topics, and dissidents in countries with oppressive regimes.

Let’s look at some examples of overzealous censorship on Facebook. In the past year, high-profile journalists in Palestine, Vietnam, and Egypt have encountered a significant rise in content takedowns and account suspensions, with little explanation offered outside a generic “Community Standards” letter. Civil discourse about racism and harassment is often tagged as “hate speech” and censored. Reports of human rights violations in Syria and against Rohingya Muslims in Myanmar, for example, were taken down—despite the fact that this is essential journalistic content about matters of significant global public concern.

These examples are just the tip of the iceberg: high-profile journalists, human-rights activists, and other legitimate content creators are regularly censored—sometimes at the request of governments—as a result of aggressive content moderation policies.

Congress’ focus on online content moderation follows a global trend of regulators and governments around the world putting tremendous pressure on platforms like Facebook to somehow police their content, without entirely understanding that the detection of “unwanted” content, even with “AI tools,” is a massively difficult technical challenge and an open research question.

Current regulation on copyrighted content already pushes platforms like YouTube to employ over-eager filtering in order to avoid liability. Further content regulations on things that are even more nuanced and harder to detect than copyrighted content—like hate speech and fake news—would be disastrous for free speech on the Internet. This has already started with the recent passage of bills like SESTA and FOSTA.

We need more transparency.

Existing content moderation policies and processes are almost entirely opaque. How do platform content reviewers decide what is or is not acceptable speech, offensive content, falsified information, or relevant news? Who sets, controls, and provides modifications to these guidelines?

As Facebook is pressured to scale up its policing and push more work onto statistical algorithms, we need to make sure we have more visibility into how these potentially problematic decisions are made, and the sources of data collected to train these powerful algorithms.

We can’t hide from the inevitable fact that offensive content is posted on Facebook without being immediately flagged and taken down. That’s just the way the Internet works. There’s no way to reduce that time to zero—not with AI, not with human moderators—without drastically over-censoring free speech on the Internet.

Categories: Privacy

Facebook, This Is Not What “Complete User Control” Looks Like

Deep Links - Wed, 04/11/2018 - 18:49

If you watched even a bit of Mark Zuckerberg’s ten hours of congressional testimony over the past two days, then you probably heard him proudly explain how users have “complete control” via “inline” privacy controls over everything they share on the platform. Zuckerberg’s language here misses the critical distinction between the information a person actively shares, and the information that Facebook takes from users without their knowledge or consent.

Zuckerberg’s insistence that users have “complete control” neatly overlooks all the ways that users unwittingly “share” information with Facebook.

Of course, there are the things you actively choose to share, like photos or status updates, and those indeed come with settings to limit their audience. That is the kind of sharing that Zuckerberg seemed to be addressing in many of his answers to Congressmembers’ questions.

But that’s just the tip of the iceberg. Below the surface are Facebook’s often-invisible methods for collecting and generating information on users without their knowledge or consent.

Users don’t share this information with Facebook. It’s been actively—and silently—taken from them.

This stands in stark contrast to Zuckerberg’s claim, while on the record with reporters last week, that “the vast majority of data that Facebook knows about you is because you chose to share it.” And he doubled down on this talking point in his testimony to both the Senate and the House, using it to dodge questions about the full breadth of Facebook’s data collection.

Zuckerberg’s insistence that users have complete control is a smokescreen. Many members of Congress wanted to know not just how users can control what their friends and friends-of-friends see. They wanted to know how to control what third-party apps, advertisers, and Facebook itself are able to collect, store, and analyze. This goes far beyond what users can see on their pages and newsfeeds.

Facebook’s ethos of connection and growth at all costs cannot coexist with users' privacy rights. Facebook operates by collecting, storing, and making it easy to find unprecedented amounts of user data. Until that changes in a meaningful way, the privacy concerns that spurred these hearings are here to stay.

Categories: Privacy

Solutions for a Stalled NAFTA: Stop Pushing So Hard on IP, and Release the Text

Deep Links - Wed, 04/11/2018 - 17:04

The deadline for concluding a modernized North American Free Trade Agreement (NAFTA), originally scheduled for last year, has continued to slip. An eighth and final formal round of negotiations was cancelled last week, and despite earlier optimistic plans that the parties could announce an "agreement in principle" at the Summit of the Americas in Peru this Friday 13 April, these plans have since been abandoned.

An over-optimistic negotiation schedule isn't the only problem here. The other is that the United States Trade Representative (USTR) is pushing a hard line on topics such as intellectual property that neither of the other negotiating parties finds remotely palatable. As a result, although advances have been made in some other chapters, reports suggest that virtually the whole of the agreement's IP chapter remains up in the air.

In October 2016, as the Trans-Pacific Partnership (TPP) was beginning to falter, Steve Metalitz of the International Intellectual Property Alliance (IIPA) remarked with surprising frankness that "We may well have reached the high water mark of linking IP and trade." Since then, more evidence has emerged that he was correct about this. One example is the suspension of most of the intellectual property chapter from the TPP, when it became the 11-country Comprehensive and Progressive Trans-Pacific Partnership Agreement (CPTPP). Another example is Europe's backing down from its demands for a twenty-year copyright term extension in the Mercosur-EU trade agreement. Other U.S. trading partners have also been expressing more critical views about the downsides of excessively long minimum copyright terms, and most surprising of all, so have representatives of copyright holders.

The USTR could continue to press its hard line on intellectual property for round after round, in the hope that Canada and Mexico would eventually capitulate. Or, it could easily remove one huge obstacle to the successful conclusion of NAFTA simply by dropping these tough demands, including its demand for extension of the copyright term, and concentrating instead on issues of more importance to the farming and manufacturing sectors.

Transparency is Another Key to a Smoother Conclusion of NAFTA

The low public support for the TPP, which ultimately led to the United States' withdrawal from the agreement, has been attributed in part to the lack of transparency of the negotiations. Ahead of the commencement of the NAFTA negotiations, 52 members of Congress wrote to the USTR asking that the negotiations be made more open and transparent than the TPP had been. EFF wrote a similar letter.

Yet at the end of the official negotiating rounds, NAFTA is even less transparent and inclusive than the TPP had been. Not a single text proposal or consolidated draft has been released (or even leaked) to the public. The USTR has not yet appointed a Transparency Officer under the new administration, despite this being required under the Bipartisan Congressional Trade Priorities and Accountability Act of 2015. And precisely zero events have been arranged for stakeholders to brief negotiators during the NAFTA negotiations, despite this having been a common practice during the negotiation of the TPP.

This month the Congressional Progressive Caucus released its Fair Trade Agenda [PDF], which recommends:

For the remainder of the NAFTA renegotiations, the Trump Administration should make draft proposals publicly available and should solicit Congressional and public input before finalizing the proposals. Negotiating texts also must be made publicly available after each negotiating round with the opportunity for public comment, so Congress can provide input in the process and so the American people can evaluate whether their interests are being advanced.

If these suggestions seem extreme, they're really not. Similar recommendations were part of the Promoting Transparency in Trade Act that was reintroduced into Congress last July, but which has languished in committee since then. Europe has already adopted rules requiring its text proposals in trade negotiations to be released to the public, and the United Kingdom is considering going a step further, by requiring consolidated texts also to be released within ten days of each negotiation round. 

Although better transparency in NAFTA would be a way of gaining public trust in the agreement, it's understandable why the USTR takes refuge in secrecy. Keeping controversial provisions out of sight and mind of the public while they are being negotiated—for example, tough secondary liability rules on Internet platforms may be under negotiation—spares the USTR from having to defend these to the public at the same time as it attempts to sell them to our trading partners. But the problem with waiting until the provisions have been agreed before releasing them to the public is that by that stage, it is practically impossible to improve them, or if necessary, to walk them back.

When it comes to the point that even copyright holder representatives are arguing against the USTR's hard line on copyright, and when its transparency practices are falling out of step with those of our major trading partners, it's time for the USTR to consider whether a course change is required. We think that trade agreements don't have to be contentious: they could even be positive for users and innovators, if done right. But the longer the negotiations drag on without any sign that questions of transparency or IP overreach are being addressed, the harder it is to maintain this optimism.

Categories: Privacy

A New Welcome to Privacy Badger and How We Got Here

Deep Links - Tue, 04/10/2018 - 21:00

The latest update to Privacy Badger brings a new onboarding process and other improvements. The new onboarding process will make Privacy Badger easier to use and understand. These latest changes are just some of the many improvements EFF has made to the project, with more to come!

Install Privacy Badger

Join EFF and millions of users in the fight to regain your privacy rights!

Privacy Badger was created to protect users—all users—from third-party tracking across the web. To do this, Privacy Badger needed a couple of key features:

  • The ability to catch sneaky trackers without breaking your browsing experience whenever possible.
  • Simplicity, so that it is easy to use and understand.

For the first purpose, Privacy Badger uses heuristics, meaning it observes and learns who is tracking you rather than maintaining a manual list of trackers. Even if a third-party tracker is new or little-known, Privacy Badger will still see it. If your Privacy Badger sees the same tracker on three different sites, it will block that tracker, so you don't have to wait for someone to eventually update a list. It's also a matter of trust—Privacy Badger blocks by behavior, not by a third-party controlled list that might be sold to advertisers.
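
For the curious, here is a much-simplified sketch of that idea in Python. It is not Privacy Badger's actual code, just the shape of the heuristic: a third-party domain is blocked once it has been observed setting tracking cookies on three different sites you visit.

    # Much-simplified sketch of learn-by-observation blocking (not Privacy
    # Badger's real implementation): block a third-party domain once it has
    # been seen tracking you on three different first-party sites.
    from collections import defaultdict

    BLOCK_THRESHOLD = 3
    sites_seen_tracking = defaultdict(set)  # third-party domain -> first-party sites

    def observe_request(first_party_site, third_party_domain, sets_tracking_cookie):
        """Record one third-party request and decide whether to block the domain."""
        if sets_tracking_cookie:
            sites_seen_tracking[third_party_domain].add(first_party_site)
        return len(sites_seen_tracking[third_party_domain]) >= BLOCK_THRESHOLD

    # The same tracker observed on three unrelated sites gets blocked on the third.
    for site in ["news.example", "shop.example", "blog.example"]:
        blocked = observe_request(site, "tracker.example", sets_tracking_cookie=True)
        print(site, "block tracker.example?", blocked)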

Second, we try to make Privacy Badger simple and informative. Your Privacy Badger learns on its own and displays a badge showing how many trackers it has seen. If it breaks a website’s functionality, you can quickly disable Privacy Badger on that site.

When you install Privacy Badger, it doesn't block anything immediately because it needs to learn. This behavior is unusual, so many users' first reaction after installing Privacy Badger is that it doesn't seem to work. We explained this in our FAQ and onboarding pages, but we’ve improved those pages to make it clear for everyone.

To address this, the new onboarding process is simple and points out the essentials: how Privacy Badger works, what to do when something breaks, and what it means to join the team of millions of Badgers. It’s also easy to read on mobile if you are testing our beta for Firefox on Android.


We hope that these changes will help us achieve a tracking-blocking extension that is dead-simple to use for anyone and for everyone. And soon we’ll have an improved site and FAQ to guide you through more advanced settings and functions of Privacy Badger.

Why did we make this change, and what else changed?

We listen to our users a lot. We read all their feedback, check GitHub, and review error reports. We also observe how people interact with Privacy Badger and ask many questions directly: on the street, in cabs, at cafes, anywhere we can.

We’re looking to see if they have any issues installing our extension, if they understand how it works, and if they can fix something when it breaks.

Instead of focusing on shipping tons of features, we are focused on making Privacy Badger install-and-forget simple, so when we install it on our relatives' computers we don't get a call saying we broke the Internet, all while still protecting their privacy.

Sometimes it's a big visual change like the onboarding process, and sometimes it's a simple thing like moving an advanced option a few levels deeper so only curious users will find it—users who will probably also read our FAQ in more detail.

You might think this is not important, but we observed that when people open the options, they interpret the first thing they see as "I'm supposed to play with this." If you put a complicated feature there, in an application whose target is every kind of user, you are asking for trouble and confusion.

We still have a lot of improvements to come, but we hope this update sheds some light on how Privacy Badger works and makes it easier to use. We are always listening to your feedback, and we are excited to keep working to protect your privacy online and that of your family and friends.

Categories: Privacy

Too Big to Let Others Fail Us: How Mark Zuckerberg Blamed Facebook's Problems On Openness

Deep Links - Tue, 04/10/2018 - 19:12

Facebook’s first reactions to the Cambridge Analytica headlines looked very different from the now-contrite promises Mark Zuckerberg made to the U.S. Congress this week. Look closer, though, and you'll see a theme running through it all. The message coming from Facebook’s leadership is not about how it has failed its users. Instead it's about how users—and especially developers—have failed Facebook, and how Facebook needs to step in to take exclusive control over your data.

You may remember Facebook's initial response, which was to say that whatever Cambridge Analytica had gotten away with, the problem was already solved. As Paul Grewal, Facebook's deputy general counsel, wrote in that first statement, "In the past five years, we have made significant improvements in our ability to detect and prevent violations by app developers." Most significantly, he added, in 2014 Facebook "made an update to ensure that each person decides what information they want to share about themselves, including their friend list. This is just one of the many ways we give people the tools to control their experience."

By the time Zuckerberg had reached Washington, D.C., however, the emphasis was less about users controlling their experience, and more about Facebook’s responsibility to make sure those outside Facebook—namely developers and users—were doing the right thing.

A week after Grewal's statement, Zuckerberg made his first public comments about Cambridge Analytica. "I was maybe too idealistic on the side of data portability,” he told Recode's Kara Swisher and Kurt Wagner. Going forward, preventing future privacy scandals "[is] going to be solved by restricting the amount of data that developers can have access to."

In both versions, the fault supposedly lay with third parties, not with Facebook. But Mark Zuckerberg has made it clear that he now takes a “broader view” of Facebook's responsibility. As he said in his Tuesday testimony to Congress:

We didn't take a broad enough view of our responsibility... It's not enough to just connect people, we have to make sure that those connections are positive. It's not enough to just give people a voice, we need to make sure that people aren't using it to harm other people or to spread misinformation. And it's not enough to just give people control over their information, we need to make sure that the developers they share it with protect their information, too. Across the board, we have a responsibility to not just build tools, but to make sure that they're used for good.

By far the most substantial step Facebook has already taken in this direction has been to further limit how third parties can access the data it holds for its users.

Facebook began removing access to its APIs across its platforms earlier this month, from Instagram to Facebook search.

But that move just causes a new form of collateral damage. Shutting down third-party access and APIs doesn't just mean restricting creepy data snatchers like Cambridge Analytica. APIs are the ways that machines read the same content we see in our web browsers and Facebook's official apps. Even then, we're only operating one hop from a bot: your web browser is an automated client, tapping into Facebook's data feeds using Facebook's own web code. Facebook apps are just automated programs that talk to Facebook with a larger set of the interfaces that it permits others to use. Even if you're just hitting “refresh” on the feed over lunch, it's all automated in the end.
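
To underline how thin the line between a "browser" and a "bot" is, here is a short Python sketch of that kind of automated access. The endpoint, token, and response format are hypothetical placeholders, not Facebook's real Graph API; the point is simply that an official app, a web browser, and a third-party tool all boil down to the same kind of authenticated HTTP request.

    # Hypothetical sketch: fetching your own posts from an API with a token you
    # granted. The endpoint below is a placeholder, not Facebook's real Graph API.
    import json
    from urllib.request import Request, urlopen

    API_URL = "https://api.example.com/v1/me/posts"  # hypothetical endpoint
    ACCESS_TOKEN = "user-granted-token"              # hypothetical credential

    def fetch_my_posts():
        request = Request(API_URL, headers={"Authorization": "Bearer " + ACCESS_TOKEN})
        with urlopen(request) as response:
            return json.load(response)

    # A browser refreshing your feed, an official app, and a third-party client
    # all issue requests of exactly this shape; only the interface differs.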

By locking down its APIs, Facebook is giving people less power over their information and reserving that power for itself. In effect, Facebook is narrowing and eliminating the ways that legitimate users can engage with their own data. And that means that we're set to become even more dependent on Facebook's decisions about what to do with that information.

What is worse, we've seen a wave of similar rhetoric elsewhere. Twitter announced (then backed down from) a rapid shutdown of an API that creators outside the company depended upon for implementing alternative Twitter clients.

In response to a court case brought by academics to establish their right to examine services like Facebook for discrimination, Bloomberg View columnist Noah Feldman claimed last week that permitting such automated scanning would undermine Facebook's ability to protect the privacy of its users. We'd emphatically disagree. But you can expect more columns like that if Facebook decides to defend your data by doubling down on laws like the CFAA.

Here’s a concrete example. Suppose you'd like to #deletefacebook and move to another service—and you'd like to take your friends with you. That may mean persuading them to move to a new, better service with you. Or you might just want to keep the memories of your friends and your interactions with them intact as you move on from Facebook. You wouldn't want to leave Facebook without your conversations and favorite threads. You wouldn't want to find that you could take your own photographs, but not pictures of you with your family and colleagues. You'd like all that data as takeout. That's what data portability means.

But is that personal data really yours, asks the new "responsible" Facebook? Facebook's old model implied that it was: that's why the Graph API allowed you to deputize apps or third parties to examine all the data you could see yourself. Facebook's new model is that what you know about your friends is not yours to examine or extract however you'd like. Facebook's APIs now prevent you from accessing that information in an automated way. Facebook is also locking down other ways you might extract it—like searching for email addresses, or looking through old messages.

As the APIs shut down, the only way to access much of this data becomes the "archive download," a huge, sometimes incomplete slab of all your data that is unwieldy for users or their tools to parse or do anything with.

This isn't necessarily a purely self-serving step by Facebook (though anyone who has tried to delete their account before, and had their friends' pictures deployed by Facebook in poignant pleas to remain, might well be skeptical of the company's motives). It merely represents the emerging model of the modern social network's new responsibilities as the biggest hoarder of private information on the planet.

The thinking goes: There are bad people out there, and our customers may not understand privacy options well enough to prevent this huge morass of information being misused. So we need to be the sole keepers of it. We're too big to let others fail.

But that's entirely the wrong way of seeing how we got here. The problem here cannot be that people are sucking Facebook's data out of Facebook’s systems. The problem is that Facebook sucked that data from us, subject to vague assurances that a single Silicon Valley company could defend billions of people's private information en masse. Now, to better protect it from bad people, they want to lock us out of it, and prevent us taking it with us.

This new model of "responsibility" fits neatly with some lawmakers' vision of how such problems might get fixed. Fake news on Facebook? Then Facebook should take more responsibility to identify and remove false information. Privacy violations on the social media giants? Then the social media giants should police their networks, to better conform with what the government thinks should be said on them. In America, that "fix thyself" rule may well look like self-regulation: you drag Mr. Zuckerberg in front of a committee, and make him fix the mess he's caused. In Europe, it looks like shadow regulation like the EU's Code of Conduct on Hate Speech, or even the GDPR. Data is to be held in giant silos by big companies, and the government will punish them if they handle it incorrectly.

In both cases, we are unwittingly hard-wiring the existence of our present day social media giants into the infrastructure of society. Zuckerberg is the sheriff of what happens between friends and family, because he's best placed to manage that. Twitter's Jack Dorsey decides what civil discourse is, because that happens on his platform, and he is now the responsible adult. And you can't take your data out from these businesses, because smaller, newer companies can't prove their trustworthiness to Zuckerberg or Dorsey or Congress or the European Commission. And you personally certainly can't be trusted by any of them to use your own data the way you see fit, especially if you use third-party tools to achieve that.

The next step in this hard-wiring is to crack down on all data access in ways "unauthorized" by the companies holding it. That means sharpening the teeth of the notorious Computer Fraud and Abuse Act, and more misconceived laws like Georgia's new computer crime bill. An alliance of the concerns of politicians and social media giants could well make that possible. As we saw with SESTA/FOSTA, established tech companies can be persuaded to support such laws, even when they unwind the openness and freedom that let those companies prosper in the first place. That is the shift that Zuckerberg is signaling in his offers to take responsibility in Washington.

But there is an alternative: We could empower Internet users, not big Internet companies, to decide what they want to do with their private data. It would mean answering some tough questions about how we get consent for data that is personal and private for more than one person (conversations, photographs, my knowledge of you and yours of me), and how individuals can seek redress when those they trust—whether it's Facebook or a small developer—break that trust.

But beginning that long-overdue conversation now is surely going to end with a better result than making the existing business leaders of Silicon Valley the newly deputized sheriffs of our data, with shiny badges from Washington, and an impregnable new jail in which to hold it.

Categories: Privacy

Entries for the Catalog of Missing Devices, courtesy of EFF supporters like you

Deep Links - Tue, 04/10/2018 - 18:41

The Catalog of Missing Devices is a tour through some of the legitimate, useful and missing gadgets, tools and services that don't exist but should. They're technologies whose chance to exist was snuffed out by Section 1201 of the Digital Millennium Copyright Act of 1998, which makes tampering with "Digital Rights Management" into a legal no-go zone, scaring off toolsmiths, entrepreneurs, and tinkerers.

We're still adding our own designs to the Catalog, but we've also been honored by EFF supporters who've come up with their own additions. One such supporter is Dustin Rodriguez, who sends us these five great ideas for future entries. If you have great ideas for additions, send them to me and maybe you'll see them here on Deeplinks!

  • Software Scalpel - Load up the Software Scalpel and start slicing and dicing your favorite application that just has too many menus, too many options, and a cluttered interface! Download interface remixes put together by other users and simplify your apps. Boil them down to concentrated goodness, with just the pieces you actually use!
  • Gamewriter - Now available for Xbox and PlayStation platforms, coming soon for the Switch! Don't just play video games; make them yourself! Create and remix your own game variants! When you need more flexibility, use the included cable to connect your console to your laptop, desktop, or phone and use our easy toolset to develop your family's next hit game.
  • MovieMoxie - Insert the best comebacks of all time into conversations with your friends in just a blink! Quote your favorite film line in your SMS chats and we will retrieve the exact clip you're thinking of as a subtitled animated GIF and serve it up to your partner.
  • Trailer Twister - Point Trailer Twister at your favorite or not-so-favorite film, choose the genre you wish it had been, and watch Trailer Twister generate a pitch-perfect rendition of the film that never was! Tickle your fancy with a trailer that makes Saw look like a rom-com, or terrify yourself with thoughts of a darkly horrific Daddy Day Care!
  • Casting Corrector - Using the most recent advances in machine learning with generative neural networks, select your favorite actor or actress (or anyone you have several minutes of footage of!), a target film, and the character that SHOULD have been played by them and our servers will serve up an instantly recast version. Your mom in Serial Mom? Your dad as Rambo? No problem!
Categories: Privacy