Deep Links

EFF's Deeplinks Blog: Noteworthy news from around the internet

An Update on Patent Troll Shipping & Transit, LLC

Fri, 08/19/2016 - 13:02

There has been significant activity relating to cases and patent infringement claims made by Shipping & Transit, LLC, formerly known as ArrivalStar. Shipping & Transit, which we’ve written about on numerous occasions, is currently one of the most prolific patent trolls in the country. Lex Machina data indicates that, since January 1, 2016, Shipping & Transit has been named in almost 100 cases. This post provides an update on some of the most important developments in these cases.

In many of these cases, Shipping & Transit has alleged that retailers who allow their customers to track packages sent via USPS infringe various claims of its patents, despite having previously sued (and settled with) USPS itself. EFF represents a company that Shipping & Transit accused of infringing four patents.

Shipping & Transit Is Facing Numerous Alice Motions

In April 2014, the Supreme Court decided Alice v. CLS Bank, holding that “abstract ideas” are not patentable. Many courts have since applied that ruling, finding that patents are “abstract” and therefore invalid, often very early in litigation, saving significant time, money, and effort by the parties.

Several defendants have now asked courts to quickly find Shipping & Transit’s patents invalid under Alice. Neptune Cigars has filed a motion with the Central District of California, arguing that two Shipping & Transit patents (U.S. Patent Nos. 6,763,299 and 6,415,207) are invalid. That motion is pending.

Another defendant, Loginext, also filed a motion arguing that U.S. Patent 6,415,207 was invalid under Alice. Shipping & Transit quickly dismissed its case against Loginext, with Loginext paying nothing to Shipping & Transit. Loginext had also sent a “Rule 11” letter to Shipping & Transit pointing out that Loginext did not even exist when U.S. Patent No. 6,763,299 expired.

Our clients, Triple7Vaping.com LLC and Jason Cugle (together, Triple7), have also noted that the patents are likely invalid under Alice. When another party in the Southern District of Florida moved to dismiss under Alice, we asked the court to consolidate our case with that one, and provided a brief explaining in detail why the claims are invalid under Alice. The motion, however, was never decided, because the party that filed it settled with Shipping & Transit.

Unified Patents Filed an Inter Partes Review Against the ’207 Patent

On July 25, 2016, Unified Patents filed a petition for inter partes review of U.S. Patent 6,415,207 (the ’207 patent), one of the few Shipping & Transit patents that remains in force (many of Shipping & Transit’s patents expired in 2013). In its petition asking the Patent Office to review the ’207 patent, Unified Patents argues that the patent is invalid because it is obvious in light of other patents, including a different, much older, Shipping & Transit patent.

Shipping & Transit Disclaims All Liability by Triple7

On May 31, 2016, Triple7 filed a lawsuit asking for a declaratory judgment that four of Shipping & Transit’s patents were invalid and not infringed. Triple7 also asked the court to find that Shipping & Transit violated Maryland state law when it made its claims of infringement, because the claims were made in bad faith.

In response, on July 21, 2016, Shipping & Transit covenanted not to sue Triple7, meaning it has disclaimed any possible claim of infringement against Triple7. In doing so, Shipping & Transit has sought to prevent the court from deciding the merits of Shipping & Transit’s claims of infringement. Triple7 has argued that the court retains that ability as part of the Maryland claim, and the court is expected to decide the issue soon.

Shipping & Transit Reveals The Minimal Investigation It Does Before It Sends A Demand Letter 

Shipping & Transit asked the court to dismiss Triple7’s claims for violations of Maryland state law. In doing so, it submitted two affidavits detailing the investigation it engages in before sending a demand letter. In response, Triple7 argued that Shipping & Transit’s investigation was plainly deficient under binding Federal Circuit law.

While every individual case will have some differences, we hope that these materials are useful to current and future targets of Shipping & Transit’s trolling campaign.

Related Cases: Triple7Vaping.com, LLC et al. v. Shipping & Transit LLC

The Global Ambitions of Pakistan's New Cyber-Crime Act

Thu, 08/18/2016 - 18:13

Despite near universal condemnation from Pakistan's tech experts, despite the efforts of a determined coalition of activists, and despite numerous attempts by alarmed politicians to patch its many flaws, Pakistan's Prevention of Electronic Crimes Bill (PECB) passed into law last week. Its passage ends an eighteen-month battle between Pakistan's government, which saw the bill as a flagship element of its anti-terrorism agenda, and the technologists and civil liberties groups who slammed the bill as an incoherent mix of anti-speech, anti-privacy, and anti-Internet provisions.

But the PECB isn't just a tragedy for free expression and privacy within Pakistan. Its broad reach has wider consequences for Pakistani nationals abroad, and for international criminal law as it applies to the Net.

The new law creates broad crimes related to "cyber-terrorism" and its "glorification" online. It gives the authorities the opportunity to threaten, target and censor unpopular online speech in ways that go far beyond international standards or Pakistan's own free speech protections for offline media. Personal digital data will be collected and made available to the authorities without a warrant: the products of these data retention programs can then be handed to foreign powers without oversight.

PECB is generous to foreign intelligence agencies. It is far less tolerant of other foreigners, or of Pakistani nationals living abroad. Technologists and online speakers outside Pakistan should pay attention to the first clause of the new law:

  1. This Act may be called the Prevention of Electronic Crimes Act, 2016.
  2. It extends to the whole of Pakistan.
  3. It shall apply to every citizen of Pakistan wherever he may be and also to every other person for the time being in Pakistan.
  4. It shall also apply to any act committed outside Pakistan by any person if the act constitutes an offence under this Act and affects a person, property, information system or data located in Pakistan.

Poorly-written cyber-crime laws criminalize everyday and innocent actions by technology users, and the PECB is no exception. It criminalizes the violation of terms of service in some cases, and it ramps up the penalties for many actions that would be seen as harmless or even positive in the non-digital world, including unauthorized copying and access. Security researchers and consumers frequently conduct “unauthorized” acts of access and copying for legitimate and lawful reasons: to exercise their right of fair use, to expose wrongdoing in government, or to protect the safety and privacy of the public. Violating a website's terms of service may be a violation of your agreement with that site, but no nation should turn those violations into felonies.

The PECB asserts an international jurisdiction for these new crimes. It says that if you are a Pakistani national abroad (a group of over 8.5 million people, or 4% of Pakistan's total population), you too can be prosecuted for violating its vague statutes. And if a Pakistani court determines that you have violated one of the prohibitions listed in the PECB in a way that affects any Pakistani national, you can find yourself prosecuted in the Pakistani courts, no matter where you live.

Pakistan isn't alone in making such broad claims of jurisdiction. Some countries claim the power to prosecute a narrow set of serious crimes committed against their citizens abroad under international law's "passive personality principle" (the U.S. does so in some of its anti-terrorism laws). Other countries claim jurisdiction over the actions of their own nationals abroad under the "active personality principle" (for instance, in cases of treason).

But Pakistan's cyber-crime law asserts both principles simultaneously, and explicitly applies them to all cyber-crime, both major and minor, defined in PECB. That includes creating "a sense of insecurity in the [Pakistani] government" (Ch.2, 10), offering services to change a computer's MAC address (Ch.2, 16), or building tools that let you listen to licensed radio spectrum (Ch.2, 13 and 17).
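To underline how mundane some of the newly criminalized acts are, consider what a MAC-address-changing "service" (Ch.2, 16) actually involves. The following is a minimal sketch, our illustration rather than anything from the law or any real service; it assumes a Linux machine with the standard iproute2 tools, root privileges, and a placeholder interface name:

    # Minimal sketch: changing a network interface's MAC address on Linux.
    # This is a routine step in privacy, testing, and lab work, yet offering
    # it as a service is an offense under PECB Ch.2, 16. Assumptions: Linux
    # with iproute2, run as root, placeholder interface name "eth0".
    import subprocess

    def set_mac(interface: str, new_mac: str) -> None:
        # The interface must be down while its hardware address changes.
        subprocess.run(["ip", "link", "set", "dev", interface, "down"], check=True)
        subprocess.run(["ip", "link", "set", "dev", interface,
                        "address", new_mac], check=True)
        subprocess.run(["ip", "link", "set", "dev", interface, "up"], check=True)

    set_mac("eth0", "02:00:00:ab:cd:ef")  # a locally administered address

Three shell commands wrapped in a few lines of Python: under the PECB, packaging this as a service becomes a crime.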

The universal application of such arbitrary laws could have practical consequences for the thousands of overseas Pakistanis working in the IT and infosecurity industries, as well as for those in the Pakistani diaspora who wish to publicly critique Pakistani policies. It also continues the global jurisdictional trainwreck that surrounds digital issues, where every country demands that its laws apply and must be enforced across a borderless Internet.

Applying what has been described as "the worst piece of cyber-crime legislation in the world" to the world is a bold ambition, and the current Pakistani government's reach may well have exceeded its grasp, both under international law and its own constitutional limits. The broad coalition who fought PECB in the legislature will now seek to challenge it in the courts.

But until they win, Pakistan has overlaid yet another layer of vague and incompatible crimes over the Internet, and its own far-flung citizenry.



California Lawmaker Pulls Digital Currency Bill After EFF Opposition

Thu, 08/18/2016 - 12:52

For the second year in a row, EFF and a coalition of virtual currency and consumer protection organizations have beaten back a California bill that would have created untenable burdens for the emerging cryptocurrency community.

This week, the author of A.B. 1326, Assemblymember Matt Dababneh, withdrew the bill from consideration, saying in a statement:

Unfortunately, the current bill in print does not meet the objectives to create a lasting regulatory framework that protects consumers and allows this industry to thrive in our state. More time is needed and these conversations must continue in order for California to be at the forefront of this effort.

State lawmakers were poised to quickly jam through an amended version of a digital currency licensing bill with new provisions that were even worse than last year’s version.

As in the previous version, the bill required a “digital currency business” to get approval from the state before operating in California and to comply with regulations similar to those applicable to banks and money transmitters. The amended bill, however, was so carelessly drafted that it would have forced Bitcoin miners, video game makers, and even digital currency users to register with a state agency and be subject to the new regulations.

Worse, the bill failed to accomplish its intent—protecting consumers—because it would have limited the number of digital currency options available to Californians.

EFF is grateful that Assemblymember Dababneh recognized there were problems with the legislation and put the brakes on sending it through the legislature as its session winds down.

That said, the bill demonstrates that there are still too many technical and policy gaps in the current thinking about digital currencies and the need for regulation.

EFF continues to believe that before lawmakers anywhere consider legislation regulating digital currencies, they need to better understand the technology at issue and demonstrate how the legislation would actually benefit consumers. The California bill unfortunately failed in both respects.

A.B. 1326 Would Have Hurt Consumers

First, as EFF’s opposition letter to A.B. 1326 stated, the bill’s goal of protecting consumers would ironically have been frustrated by the legislation itself, as it would have restricted access to currencies that benefit consumers in ways that non-digital currencies do not.

Many digital currencies allow individuals to directly transact with one another even when they do not know or trust each other. These currencies have significant benefits to consumers as they eliminate the third parties needed in non-digital transactions that can often be the sources of fraud or other consumer harm.

Further, intermediaries in traditional currency transactions, such as payment processors, are often the targets of financial censorship, which ultimately inhibits people’s ability to support controversial causes or organizations.

Because the bill would have allowed California’s Department of Business Oversight to determine which digital currency businesses operated in California, the government would have been deciding which currencies and businesses could be used, rather than consumers. This would have significantly limited Californians’ digital currency options, to their detriment.

A.B. 1326’s Vague Terms Would Have Required Consumers to Register

The bill was also written in a manner that failed to grasp how digital currencies work, leading to broad definitions of “digital currency business” that would have regulated not just businesses transacting on behalf of digital currency users, but the users themselves.

There were many vague definitions in the bill. Take, for example, a provision requiring anyone who transmits digital currency to another person to register and comply with the bill's complex regulations.

Digital currency users often transmit digital currency value directly to others without any intermediary, meaning those users would have been subject to the regulations even though they are merely using a digital currency. Additionally, despite the bill purporting to exempt parties such as Bitcoin miners, miners would also have had to register because, in appending transactions to the blockchain, they could be viewed as transmitting digital currency.

The bill also would have required video game makers who offer in-game digital currency or goods to register, as the exemption for such activity is limited to items or currency that have no value outside of the game. The reality is that many items and currencies within games often have independent markets in which players buy, sell, or exchange items, regardless of whether a game maker allows for those transactions. Those game makers, however, would have to obtain a license under the bill even though they often do not control the outside markets. The bill would have also created roadblocks for video game companies who offer in-game currency that can be used to buy real world items, such as T-shirts or stickers.

Additionally, the bill contained no exemption for start-ups or smaller companies innovating digital currencies, giving established currencies such as Bitcoin and its more sophisticated industry a leg up over competition.

The many problems with the bill would ultimately have been bad for the state, as it would have pushed innovation elsewhere and chilled a young and quickly evolving industry.

EFF recognizes that there are risks for consumers using digital currencies and appreciates lawmakers interested in addressing them.  We think any legislative response, however, should be based on a better understanding of the state of digital currencies and narrowly focused on the situations that pose risks for consumers. Such an approach would preserve space for innovation in the industry while still protecting users.



Civil Rights Coalition files FCC Complaint Against Baltimore Police Department for Illegally Using Stingrays to Disrupt Cellular Communications

Wed, 08/17/2016 - 21:18

Civil Rights Groups Urge FCC to Issue Enforcement Action Prohibiting Law Enforcement Agencies From Illegally Using Stingrays

This week the Center for Media Justice, ColorOfChange.org, and New America’s Open Technology Institute filed a complaint with the Federal Communications Commission alleging the Baltimore police are violating the federal Communications Act by using cell site simulators, also known as Stingrays, that disrupt cellphone calls and interfere with the cellular network—and are doing so in a way that has a disproportionate impact on communities of color.

Stingrays operate by mimicking a cell tower and directing all cellphones in a given area to route communications through the Stingray instead of the nearby tower. They are especially pernicious surveillance tools because they collect information on every single phone in a given area—not just the suspect’s phone—which means they allow the police to conduct indiscriminate, dragnet searches. They are also able to locate people inside traditionally protected private spaces like homes, doctors’ offices, or places of worship. Stingrays can also be configured to capture the content of communications.

Because Stingrays operate on the same spectrum as cellular networks but are not actually transmitting communications the way a cell tower would, they interfere with cell phone communications within as much as a 500 meter radius of the device (Baltimore’s devices may be limited to 200 meters). This means that any important phone call placed or text message sent within that radius may not get through. As the complaint notes, “[d]epending on the nature of an emergency, it may be urgently necessary for a caller to reach, for example, a parent or child, doctor, psychiatrist, school, hospital, poison control center, or suicide prevention hotline.” But these and even 911 calls could be blocked.

The Baltimore Police Department could be among the most prolific users of cell site simulator technology in the country. A Baltimore detective testified last year that the BPD used Stingrays 4,300 times between 2007 and 2015. Like other law enforcement agencies, Baltimore has used its devices for major and minor crimes—everything from trying to locate a man who had kidnapped two small children to trying to find another man who took his wife’s cellphone during an argument (and later returned it). According to logs obtained by USA Today, the Baltimore PD also used its Stingrays to locate witnesses, to investigate unarmed robberies, and for mysterious “other” purposes. And like other law enforcement agencies, the Baltimore PD has regularly withheld information about Stingrays from defense attorneys, judges, and the public.

Moreover, according to the FCC complaint, the Baltimore PD’s use of Stingrays disproportionately impacts African American communities. Coming on the heels of a scathing Department of Justice report finding “BPD engages in a pattern or practice of conduct that violates the Constitution or federal law,” this may not be surprising, but it still should be shocking. The DOJ’s investigation found that BPD not only regularly makes unconstitutional stops and arrests and uses excessive force within African-American communities but also retaliates against people for constitutionally protected expression, and uses enforcement strategies that produce “severe and unjustified disparities in the rates of stops, searches and arrests of African Americans.”

Adding Stingrays to this mix means that these same communities are subject to more surveillance that chills speech and are less able to make 911 and other emergency calls than communities where the police aren’t regularly using Stingrays. A map included in the FCC complaint shows exactly how this is impacting Baltimore’s African-American communities. It plots hundreds of addresses where USA Today discovered BPD was using Stingrays over a map of Baltimore’s black population based on 2010 Census data included in the DOJ’s recent report.

The Communications Act gives the FCC the authority to regulate radio, television, wire, satellite, and cable communications in all 50 states, the District of Columbia, and U.S. territories. This includes being responsible for protecting cellphone networks from disruption and ensuring that emergency calls can be completed under any circumstances. And it requires the FCC to ensure that access to networks is available “to all people of the United States, without discrimination on the basis of race, color, religion, national origin, or sex.” Considering that the spectrum law enforcement is using without permission is public property, leased to private companies for the purpose of providing the public with next-generation wireless communications, it goes without saying that the FCC has a duty to act.

The FCC must protect the American people from law enforcement practices that disrupt emergency communications and unconstitutionally discriminate against communities based on race. The FCC is charged with safeguarding the public's interest in transparency and equality of access to communication over the airwaves. Please join us in calling on the FCC to enforce the Communications Act and put an end to the widespread network interference caused by the BPD's rampant, unauthorized Stingray transmissions.

But we should not assume that the Baltimore Police Department is an outlier—EFF has found that law enforcement agencies have been secretly using Stingrays for years, all across the country. No community should have to speculate as to whether such a powerful surveillance technology is being used on its residents. Thus, we also ask the FCC to engage in a rulemaking proceeding that addresses not only the problem of harmful interference but also the duty of every police department to use Stingrays in a constitutional way, and to publicly disclose—not hide—the facts around the acquisition and use of this powerful wireless surveillance technology.

Anyone can support the complaint by tweeting at FCC Commissioners or by signing the petitions hosted by Color of Change or MAG-Net.

Related Cases: U.S. v. Damian Patrick; State of Maryland v. Kerron Andrews

Tell Your University: Don't Sell Patents to Trolls

Wed, 08/17/2016 - 18:18

When universities invent, those inventions should benefit everyone. Unfortunately, they sometimes end up in the hands of patent trolls—companies that serve no purpose but to amass patents and demand money from others. When a university sells patents to trolls, it undermines the university’s purpose as a driver of innovation. Those patents become landmines that make innovation more difficult.

A few weeks ago, we wrote about the problem of universities selling or licensing patents to trolls. We said that the only way that universities will change their patenting and technology transfer policies is if students, professors, and other members of the university community start demanding it.

It’s time to start making those demands.

We’re launching Reclaim Invention, a new initiative to urge universities to rethink how they use patents. If you think that universities should keep their inventions away from the hands of patent trolls, then use our form to tell them.

EFF is proud to partner with Creative Commons, Engine, Fight for the Future, Knowledge Ecology International, and Public Knowledge on this initiative.

Tell your university: Don’t sell patents to trolls.

A Simple Promise to Defend Innovation

Central to our initiative is the Public Interest Patent Pledge (PIPP), a pledge we hope to see university leadership sign. The pledge says that before a university sells or licenses a patent, it will first check to make sure that the potential buyer or licensee doesn’t match the profile of a patent troll:

When determining what parties to sell or license patents to, [School name] will take appropriate steps to research the past practices of potential buyers or licensees and favor parties whose business practices are designed to benefit society through commercialization and invention. We will strive to ensure that any company we sell or license patents to does not have a history of litigation that resembles patent trolling. Instead, we will partner with those who are actively working to bring new technologies and ideas to market, particularly in the areas of technology that those patents inhabit.

One of our sources of inspiration for the pledge was the technology transfer community itself. In 2007, the Association of University Technology Managers (AUTM) released a document called Nine Points to Consider, which advocates transferring patents to companies that are actively working in the same fields of technology the patents cover, not those that will simply use them to demand licensing fees from others. More recently, the Association of American Universities (AAU) launched a working group on technology transfer policy, and that group’s early recommendations closely mirror AUTM’s (PDF). EFF has often found itself on the opposite side of policy fights from AUTM and AAU, but we largely agree with them on this issue: something needs to change.

Despite that good advice, many research universities continue to sell patents to trolls. Just a few weeks ago, we wrote about My Health, a company that appears to do nothing but file patent and trademark lawsuits. Its primary weapon is a patent from the University of Rochester. Rochester isn’t alone: dozens of universities regularly license patents to the notorious mega-troll Intellectual Ventures.

Good intentions and policy statements won’t solve the problem. Universities will change when students, professors, and alumni insist on it.

Local Organizers: You Can Make a Difference

We’re targeting this campaign at every college and university in the United States, from flagship state research institutions to liberal arts colleges. Why? Because patents affect everyone. The licensing decisions that universities make today will strengthen or sabotage the next generation of inventors and innovators. Together, we can make a statement that universities want more innovation-friendly laws and policies nationwide.

It would be impossible for any one organization to persuade every college and university to sign the pledge, so we’re turning to our network of local activists in the Electronic Frontier Alliance and beyond.

We’ve designed our petition to make it easy for local organizers to share the results with university leadership. For example, here are all of the people who’ve signed the petition with a connection to the University of South Dakota. If you volunteer for the USD digital civil liberties club—or if you’ve been looking to start it—then your group could write a letter to university leadership urging them to sign the pledge, and include the names of all of the signatories. We’re eager to work with you to make sure your voice is heard. You can write me directly with any questions.

Reclaim Invention represents a new type of EFF campaign. This is the first time we’ve launched a campaign targeting thousands of local institutions at once. It’s a part of our ongoing work to unite the efforts of grassroots digital rights activists across the country. Amazing things can happen when local activists coordinate their efforts.

Tell your university: Don’t sell patents to trolls.



With Windows 10, Microsoft Blatantly Disregards User Choice and Privacy: A Deep Dive

Wed, 08/17/2016 - 10:12

Microsoft had an ambitious goal with the launch of Windows 10: a billion devices running the software by the end of 2018. In its quest to reach that goal, the company aggressively pushed Windows 10 on its users and went so far as to offer free upgrades for a whole year. However, the company’s strategy for user adoption has trampled on essential aspects of modern computing: user choice and privacy. We think that’s wrong.

You don’t need to search long to come across stories of people who are horrified and amazed at just how far Microsoft has gone in order to increase Windows 10’s install base. Sure, there is some misinformation and hyperbole, but there are also some real concerns that current and future users of Windows 10 should be aware of. As the company is currently rolling out its “Anniversary Update” to Windows 10, we think it’s an appropriate time to focus on and examine the company’s strategy behind deploying Windows 10.

Disregarding User Choice

The tactics Microsoft employed to get users of earlier versions of Windows to upgrade to Windows 10 went from annoying to downright malicious. Some highlights: Microsoft installed an app in users’ system trays advertising the free upgrade to Windows 10. The app couldn’t be easily hidden or removed, but some enterprising users figured out a way. Then, the company kept changing the app and bundling it into various security patches, creating a cat-and-mouse game to uninstall it.

Eventually, Microsoft started pushing Windows 10 via its Windows Update system. It started off by pre-selecting the download for users and downloading it onto their machines. Not satisfied, the company eventually made Windows 10 a recommended update, so users receiving critical security updates were now also downloading an entirely new operating system onto their machines without their knowledge. Microsoft even rolled the Windows 10 ad into an Internet Explorer security patch. Suffice it to say, this is not the standard when it comes to security updates, and it isn’t how most users expect them to work. When installing security updates, users expect to patch their existing operating system, not to see an advertisement or find out that they have downloaded an entirely new operating system in the process.

In May 2016, in a move we think was highly deceptive, Microsoft actually changed the expected behavior of a dialog window, a user interface element that has been around and has behaved the same way since the birth of the modern desktop. Specifically, when prompted with a Windows 10 update, if the user chose to decline it by hitting the ‘X’ in the upper right hand corner, Microsoft interpreted that as consent to download Windows 10.

Time after time, with each update, Microsoft chose to employ questionable tactics to cause users to download a piece of software that many didn’t want. What users actually wanted didn’t seem to matter. In an extreme case, members of a wildlife conservation group in the African jungle felt that the automatic download of Windows 10 on a limited bandwidth connection could have endangered their lives if a forced upgrade had begun during a mission.

Disregarding User Privacy

The trouble with Windows 10 doesn’t end with forcing users to download the operating system. Windows 10 sends an unprecedented amount of usage data back to Microsoft, particularly if users opt in to “personalize” the software using the OS assistant called Cortana. Here’s a non-exhaustive list of data sent back: location data, text input, voice input, touch input, webpages you visit, and telemetry data regarding your general usage of your computer, including which programs you run and for how long.

While we understand that many users find features like Cortana useful, and that such features would be difficult (though not necessarily impossible) to implement in a way that doesn’t send data back to the cloud, the fact remains that many users would much prefer not to use these features in exchange for maintaining their privacy.

And while users can disable some of these settings, that is no guarantee that a computer will stop talking to Microsoft’s servers. A significant issue is the telemetry data the company receives. While Microsoft insists that it aggregates and anonymizes this data, it hasn’t explained just how it does so. Microsoft also won’t say how long this data is retained, instead providing only general timeframes. Worse yet, unless you’re an enterprise user, you have to share at least some of this telemetry data with Microsoft no matter what, and there’s no way to opt out of it.

Microsoft has tried to explain this lack of choice by saying that Windows Update won’t function properly on copies of the operating system with telemetry reporting turned down to its lowest level. In other words, Microsoft is claiming that giving ordinary users more privacy by letting them turn telemetry reporting down to its lowest level would risk their security, since they would no longer get security updates.1 (Notably, this is not something many articles about Windows 10 have touched on.)
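For readers who want to see what the telemetry levels look like in practice, here is a minimal sketch, our illustration and not a Microsoft-sanctioned opt-out, of setting the documented AllowTelemetry policy value with Python's standard winreg module. As the footnote below explains, the "security" level (0) is honored only on Enterprise editions; on Home and Professional, "Basic" (1) is the floor:

    # Minimal sketch: set Windows 10's documented AllowTelemetry policy
    # value. Levels: 0 = Security (Enterprise/Education only), 1 = Basic,
    # 2 = Enhanced, 3 = Full. Must run as Administrator. This adjusts the
    # reporting level only; it is not a complete opt-out.
    import winreg

    KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\DataCollection"

    def set_telemetry_level(level: int) -> None:
        if level not in (0, 1, 2, 3):
            raise ValueError("telemetry level must be 0-3")
        key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                                 winreg.KEY_SET_VALUE)
        try:
            winreg.SetValueEx(key, "AllowTelemetry", 0,
                              winreg.REG_DWORD, level)
        finally:
            winreg.CloseKey(key)

    set_telemetry_level(1)  # "Basic" -- the lowest level on Home/Pro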

But this is a false choice that is entirely of Microsoft’s own creation. There’s no good reason why the types of data Microsoft collects at each telemetry level couldn’t be adjusted so that even at the lowest level of telemetry collection, users could still benefit from Windows Update and secure their machines from vulnerabilities, without having to send back things like app usage data or unique IDs like an IMEI number.

And if this wasn’t bad enough, Microsoft’s questionable upgrade tactics of bundling Windows 10 into various levels of security updates have also managed to lower users’ trust in the necessity of security updates. Sadly, this has led some people to forego security updates entirely, meaning that there are users whose machines are at risk of being attacked.

There’s no doubt that Windows 10 has some great security improvements over previous versions of the operating system. But it’s a shame that Microsoft made users choose between having privacy and security.

The Way Forward

Microsoft should come clean with its user community. The company needs to acknowledge its missteps and offer real, meaningful opt-outs to the users who want them, preferably in a single unified screen. It also needs to be straightforward in separating security updates from operating system upgrades going forward, and not try to bypass user choice and privacy expectations.

Otherwise it will face backlash in the form of individual lawsuits, state attorney general investigations, and government investigations.

We at EFF have heard from many users who have asked us to take action, and we urge Microsoft to listen to these concerns and incorporate this feedback into the next release of its operating system. Otherwise, Microsoft may find that it has inadvertently discovered just how far it can push its users before they abandon a once-trusted company for a better, more privacy-protective solution.

Correction: an earlier version of the blogpost implied that data collection related to Cortana was opt-out, when in fact the service is opt in.

  • 1. Confusingly, Microsoft calls the lowest level of telemetry reporting (which is not available on Home or Professional editions of Windows 10) the “security” level—even though it prevents security patches from being delivered via Windows Update.


Demand California Fix CalGang, A Deeply Flawed Gang Database

Tue, 08/16/2016 - 13:56

CalGang is a joke.

California’s gang database contains data on more than 150,000 people whom police believe to be associated with gangs, often based on the flimsiest of evidence. Law enforcement officials would have you believe that it’s crucial to their jobs, that they use it ever so responsibly, and that it would never, ever result in unequal treatment of people of color.

But you shouldn’t take their word for it. And you don’t have to take ours either, or the dozens of other civil rights organizations calling for a CalGang overhaul. But you should absolutely listen to the California State Auditor’s investigation.

The state’s top CPA, Elaine Howle, cracked open the books and crunched the numbers as part of an audit:

This report concludes that CalGang’s current oversight structure does not ensure that law enforcement agencies (user agencies) collect and maintain criminal intelligence in a manner that preserves individuals’ privacy rights.

Brutal. But then there was more.

She wrote that CalGang receives “no state oversight” and operates “without transparency or meaningful opportunities for public input.”

She found that agencies couldn’t legitimize 23 percent of CalGang entries she reviewed. Thirteen out of 100 people had no substantiated reason for being in the database.

She found that law enforcement had ignored a five-year purging policy for more than 600 people, often extending the purge date to more than 100 years. They also frequently disregarded a law requiring police to notify the parents of minors before adding them to CalGang. 

She found that there was “little evidence” that CalGang had met standards for protecting privacy and other constitutional rights.

As a result, user agencies are tracking some people in CalGang without adequate justification, potentially violating their privacy rights.

And then the other shoe dropped:

Further, by not reviewing information as required, CalGang’s governance and user agencies have diminished the system’s crime-fighting value.

To recap the audit: CalGang violates people’s rights, operates with no oversight, is chockfull of unsubstantiated information and data that should have been purged, and has diminished value in protecting public safety.

Assemblymember Shirley Weber has the start of a solution: A.B. 2298.  

This bill would write into law all new transparency and accountability measures for the controversial CalGang database and at least 11 other gang databases managed by local law enforcement agencies in California.

For example:

  • Law enforcement would be required to notify you if they intend to add you to the database.
  • You would have the opportunity to challenge your inclusion in a gang database.
  • Law enforcement agencies would have to publish transparency reports, available to anyone, with statistics on CalGang additions, removals, and demographics.

EFF has joined dozens of civil rights groups like the Youth Justice Coalition to support this bill. If you live in California, please join us by emailing your elected officials today to put this bill on the governor’s desk.

Support Reform of California's Gang Databases

Here are some other things you should know about CalGang.

What is CalGang?

CalGang is a data collection system used by law enforcement agencies to house information on suspected gang members. At last count, CalGang contained data on more than 150,000 people. As of 2016, the CalGang database is accessible by more than 6,000 law enforcement officers across the state from the laptops in their patrol vehicles.

As the official A.B. 2298 legislative analysis explains:

The CalGang system database, which is housed by the [California Department of Justice], is accessed by law enforcement officers in 58 counties and includes 200 data fields containing personal, identifying information such as age, race, photographs, tattoos, criminal associates, addresses, vehicles, criminal histories, and activities.

Something as simple as living on a certain block can label you as a possible Crip or Hell’s Angel, subjecting you to increased surveillance, police harassment, and gang injunctions. Police use the information in the database to justify an arrest, and prosecutors use it to support their request for maximum penalties.

Many of the Californians included in the CalGang database don’t know they’re on it. What’s worse: If you’re an adult on the list, you have no right to know you’re on it or to challenge your inclusion. Law enforcement agencies have lobbied aggressively to block legislation that would make the CalGang data more accessible to the public.

How Does CalGang Work?

In use for almost 20 years, CalGang holds information collected by beat officers during traffic stops and community patrols. The officers fill out Field Identification Cards with details supporting their suspicions, which can include pictures of the person’s tattoos and clothing. They can collect this information from any person at any time, no arrest necessary. The cards are then uploaded to CalGang at the discretion of the officer. Detectives also add to the database while mapping out connections and associations to the suspects they investigate. Any officer can access the information remotely at any time. So if, during the course of writing a fix-it ticket, an officer runs the driver’s name through the database and sees an entry, that officer can potentially formulate a bias against the driver.

Ali Winston’s Reveal News article about the horrors of CalGang shows how Facebook photos with friends can lead to criminal charges.

Aaron Harvey, a 26-year-old club promoter in Las Vegas at the time, was arrested and taken back to his native city of San Diego. He was charged with nine counts of gang conspiracy to commit a felony due to the fact that a couple of his Facebook friends from the Lincoln Park neighborhood where he grew up were believed to be in a street gang. Police further suspected that those friends took part in nine shootings, all of which occurred after Harvey had moved to Nevada. Even though no suspects were ever charged in connection to the actual shootings, Harvey still spent eight months in jail before a judge dismissed the gang conspiracy charges against him as baseless. As a direct result of his unjust incarceration, he lost his job and his apartment in Las Vegas and had to move in with family in San Diego.

Asked about his experience of gang classification systems, Harvey said,  “It’s like a virus that you have, that you don’t know you have… (Someone) infected me with this disease; now I have it, and there’s no telling how many other people I have infected.”

It’s Based on Subjective Observations

The criteria used for determining gang affiliation are laughably broad. Much of the information that is considered to be evidence of gang activity is open to personal interpretation: being seen with suspected gang members, wearing “gang dress,” making certain hand signs, or simply being called a gang member by, as the CalGang procedural manual states, an “untested informant.” The presence of two of these criteria is considered enough evidence for a person to be included in the database for at least five years and to be subject to a possible gang injunction (a court order that restricts where you can go and with whom you can interact).

A.B. 2298’s legislative analysis explains the flaw in this system.  

[A]s a practical matter, it may be difficult for a minor, or a young-adult, living in a gang-heavy community to avoid qualifying criteria when the list of behaviors includes items such as “is in a photograph with known gang members,” “name is on a gang document, hit list or gang-related graffiti” or “corresponds with known gang members or writes and/or receives correspondence.” In a media-heavy environment, replete with camera phones and social network comments, it may be challenging for a teenager aware of the exact parameters to avoid such criteria, let alone a teenager unaware he or she is being held to such standards.

As we saw with Aaron Harvey, meeting three of the criteria can get you a gang conspiracy charge.

It’s Racially Biased

Patrol officers, because they directly engage the public during their daily beat, make many of the entries. The problem is that communities of color tend to be heavily policed in the first place. In a state that is 45% black and brown, Hispanic and African-American individuals make up 85% of the CalGang database. In a country where people of color are already targeted and criminally prosecuted at disproportionately higher rates, having a database that intensifies racial bias, and that penalizes thousands of Californians based on the neighborhood and community in which they live, their friends and other personal connections, what they wear, and the way they pose in pictures, is unconstitutional.

That being said, false gang ties can be attributed to anyone (with all the negative ramifications that go along with them) regardless of race. The database also includes people with tenuous ties to Asian gangs, white nationalist groups, and motorcycle clubs.

Lack of Transparency

Even though S.B. 458 was passed in 2013, requiring that the state of California notify parents of juveniles who are listed in the database (because some registrants are as young as 9 years old), a 2014 bill that would have extended that notification to adults was heavily resisted by law enforcement agencies. That bill ultimately failed. As it stands today, an adult Californian who wanted to know if they are listed in CalGang would have absolutely no recourse. There is no way to challenge incorrect assertions of gang affiliation. Most of the adults who are listed as potential gang members won’t find out until after an arrest.

In terms of governance, the State Auditor noted that because CalGang wasn’t created by a statute, there is no formal state oversight. Instead, it’s managed by two secretive committees, the CalGang Executive Board and the CalGang Node Advisory Committee. She writes:

Generally, CalGang’s current operations are outside of public view… we found that the CalGang users self‑administer the committee’s audits and that they do not meaningfully report the results to the board, the committee, or the public. Further, CalGang’s governance does not meet in public, and neither the board nor the committee invites public participation by posting meeting dates, agendas, or reports about CalGang.

The last report from the California Department of Justice explaining the data in CalGang was published way back in 2010.

Tell your elected representative to support A.B. 2298 today.

Correction: The figure regarding the number of individuals in the CalGang database has been adjusted from 200,000 to 150,000 based on updated numbers from the auditor's report.  



Rock Against the TPP heads to Portland, Seattle, and San Francisco

Tue, 08/16/2016 - 12:07

As the Rock Against the TPP tour continues its way around the country, word is spreading that it's not too late for us to stop the undemocratic Trans-Pacific Partnership (TPP) in its tracks. The tour kicked off in Denver on July 23 with a line-up that included Tom Morello, Evangeline Lilly, and Anti-Flag, before hitting San Diego the following week where Jolie Holland headlined. You can check out the powerful vibe of the kick-off show below.

[Embedded video: the Rock Against the TPP kick-off show, served from youtube.com]

And the tour isn't even half done yet! This weekend, Rock Against the TPP heads to Seattle on August 19 and Portland on August 20, featuring a number of new artists including Danbert Nobacon of Chumbawamba in Seattle, and hip-hop star Talib Kweli in Portland. The latest tour date to be announced is a stop in EFF's home city of San Francisco on September 9, featuring punk legend Jello Biafra.

EFF will be on stage for each of the three remaining dates to deliver a short message about the threats that the TPP poses to Internet freedom, creativity, and innovation both here in the United States, and across eleven other Pacific Rim countries. These threats include:

  • Doubling down on U.S. law that makes it easy for copyright owners to have content removed from the Internet without a court order, and hard for users whose content is wrongly removed.
  • Forcing six other countries to go along with our ridiculously long copyright term—life of the author plus another 70 years—which stops artists and fans from using music and art from a century ago (see the quick calculation after this list).
  • Imposing prison terms for those who disclose corporate secrets, break copyright locks, or share files, even if they are journalists, whistleblowers, or security researchers, and even if they're not making any money from it.
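Here is that quick calculation, a toy sketch (our illustration, not text from the agreement) using the common convention that copyright terms run through the end of the calendar year:

    # Toy sketch: when does a work enter the public domain under a
    # "life of the author plus N years" term? Terms conventionally run
    # through the end of the author's death year plus the term, so the
    # work becomes free to use on the following January 1.
    def first_public_domain_year(death_year: int, term: int = 70) -> int:
        return death_year + term + 1

    # An author who died in 1946, seventy years before this post, still
    # has works locked up:
    print(first_public_domain_year(1946))           # -> 2017
    # Under the older life-plus-50 term, those same works would have
    # been free to use since 1997:
    print(first_public_domain_year(1946, term=50))  # -> 1997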

In addition, the TPP completely misses the opportunity to include meaningful protections for users. It fails to require other countries to adopt an equivalent to the fair use right in U.S. copyright law, it includes only weak and unenforceable language about the importance of a free and open Internet and net neutrality, and its provisions on encryption technology and software source code fail to offer any protection against crypto backdoors.

Rock Against the TPP is an opportunity to spread the word about these problems and to stand up to the corporate lobbyists and their captive trade negotiators who have spent years pushing the TPP against the people's will. But first and foremost, it's a celebration of the creativity, passion, and energy of the artists and fans who are going to help stop this flawed agreement.

If you can make it to Portland, Seattle, or San Francisco, please join us! Did we mention that the concerts are absolutely free? Reserve your tickets now, and spread the word to all your family and friends. With your help, the TPP will soon be nothing but a footnote in history.


White House Source Code Policy Should Go Further

Mon, 08/15/2016 - 18:21

A new federal government policy will result in the government releasing more of the software that it creates under free and open source software licenses. That’s great news, but the policy doesn’t go far enough in its goals or in enabling public oversight.

A few months ago, we wrote about a proposed White House policy regarding how the government handles source code written by or for government agencies. The White House Office of Management and Budget (OMB) has now officially enacted the policy with a few changes. While the new policy is a step forward for government transparency and open access, a few of the changes in it are flat-out baffling.

As originally proposed (PDF), the policy would have required that code written by employees of federal agencies be released to the public. For code written by third-party developers, agencies would have been required to release at least 20% of it under a license approved by the Open Source Initiative—prioritizing “code that it considers potentially useful to the broader community.”

At the time, EFF recommended that OMB consider scrapping the 20% rule; it would be more useful for agencies to release everything, regardless of whether it was written by employees or third parties. Exceptions could be made in instances in which making code public would be prohibitively expensive or dangerous.

Instead, OMB went in the opposite direction: the official policy treats code written by government employees and contractors the same and puts code in both categories under the 20% rule. OMB was right the first time: code written by government employees is, by law, in the public domain and should be available to the public.

More importantly, though, a policy that emphasizes “potentially useful” code misses the point. While it’s certainly the case that people and businesses should be able to reuse and build on government code in innovative ways, that’s not the only reason to require that the government open it. It’s also about public oversight.

Giving the public access to government source code gives it visibility into government programs. With access to government source code—and permission to use it—the public can learn how government software works or even identify security problems. The 20% rule could have the unfortunate effect of making exactly the wrong code public. Agencies can easily sweep the code in most need of public oversight into the 80%. In fairness, OMB does encourage agencies to release as much code as they can “to further the Federal Government's commitment to transparency, participation, and collaboration.” But the best way to see those intentions through is to make them the rule.

Open government policy is at its best when its mandates are broad and its exceptions are narrow. Rather than trust government officials’ judgment about what materials to make public or keep private, policies like OMB’s should set the default to open. Some exceptions are unavoidable, but they should be limited and clearly defined. And when they’re invoked, the public should know what was exempted and why.

OMB has implemented the 20% rule as a three-year pilot. The office says that it will “evaluate pilot results and consider whether to allow the pilot program to expire or to issue a subsequent policy to continue, modify, or increase the minimum requirements of the pilot program.” During the next three years, we’ll be very interested to see how much code agencies release and what stays obscured.


The FCC Can't Save Community Broadband -- But We Can

Sat, 08/13/2016 - 10:39

Last year, while most of us were focused on the FCC’s Open Internet Order to protect net neutrality, the FCC quietly did one more thing: it voted to override certain state regulations that inhibit the development and expansion of community broadband projects. The net neutrality rules have since been upheld, but last week a federal appeals court rejected the FCC’s separate effort to preempt state law.

The FCC’s goals were laudable. Municipalities and local communities have been experimenting with ways to foster alternatives to big broadband providers like Comcast and Time Warner. Done right, community fiber experiments have the potential to create options that empower Internet subscribers and make Internet access more affordable. For example, Chattanooga, Tennessee, is home to one of the nation’s least expensive, most robust municipally-owned broadband networks. The city decided to build a high-speed network initially to meet the needs of the city’s electric company. Then, the local government learned that the cable companies would not be upgrading their Internet service fast enough to meet the city's needs. So the electric utility also became an ISP, and the residents of Chattanooga now have access to a gigabit (1,000 megabits) per second Internet connection. That’s far ahead of the average US connection speed, which typically clocks in at 9.8 megabits per second.

But 19 states have laws designed to inhibit experiments like these, which is why the FCC decided to take action, arguing that its mandate to promote broadband competition gave it the authority to override state laws inhibiting community broadband. The court disagreed, finding that the FCC had overstepped its legal authority to regulate.

While the communities that looked to the FCC for help are understandably disappointed, the ruling should offer some reassurance for those who worry about FCC overreach. Here, as with net neutrality rulings prior to the latest one, we see that the courts can and will rein in the FCC if it goes beyond its mandate.

But there are other lessons to be learned from the decision. One is that we cannot rely on the FCC alone to promote high speed Internet access. If a community wants the chance to take control of its Internet options, it must organize the political will to make it happen – including the will to challenge state regulations that stand in the way. Those regulations were doubtless passed to protect incumbent Internet access providers, but we have seen that a determined public can fight those interests and win. This time, the effort must begin at home. Here are a few ideas:

Light Up the Dark Fiber, Foster Competition

In most U.S. cities there is only one option for high-speed broadband access. And this lack of competition means that users can’t vote with their feet when monopoly providers like Comcast or Verizon discriminate among Internet users in harmful ways. On the flipside, a lack of competition leaves these large Internet providers with little incentive to offer better service.

It doesn't have to be that way. Right now, 89 U.S. cities provide residents with high-speed home Internet, but dozens of additional cities across the country have the infrastructure, such as dark fiber, to either offer high-speed broadband Internet to residents or lease out the fiber to new Internet access providers to bring more competition to the marketplace (the option we prefer).

“Dark fiber” refers to unused fiber optic lines already laid in cities around the country that could be used to provide high-speed, affordable Internet access to residents. In San Francisco, for example, more than 110 miles of fiber optic cable run under the city. Only a fraction of that fiber network is being used.

And San Francisco isn’t alone. Cities across the country have invested in laying fiber to connect nonprofits, schools, and government offices with high-speed Internet. That fiber can be used by Internet service startups to help deliver service to residents, reducing the expensive initial investment it takes to enter this market.

So the infrastructure to provide municipal alternatives is there in many places—we just need the will and savvy to make it a reality that works.

"Dig Once"—A No Brainer

Building the infrastructure for high-speed Internet is expensive. One big expense is tearing up the streets to build out an underground network. But cities regularly have to tear up streets for all kinds of reasons, such as upgrading sewer lines. They should take advantage of this work to create a network of conduits, and then let any company that wants to offer service route its cables through that network, cutting the cost of broadband deployment.

Challenge Artificial Political and Legal Barriers

In addition to state regulations, many cities have created their own unnecessary barriers to their efforts to light up dark fiber or extend existing networks. Take Washington, D.C., where the city’s fiber is bound up in a non-compete contract with Comcast, keeping the network from serving businesses and residents. If that's the case in your town, demand better from your representatives. And when there's a local meeting to consider new construction, demand that the plans include conduit installation.

These are just a few ideas; you can find more here, along with a wealth of resources. It’s going to take a constellation of solutions to keep our Internet open, but we don't need to wait on regulators and legislators in D.C. This is one area where we can all be leaders. We can organize locally and tell our elected officials to invest in protecting our open Internet.


EFF Asks Supreme Court To Review ‘Dancing Baby’ Copyright Case

Fri, 08/12/2016 - 17:45
Copyright Holders Must Be Held Accountable For Baseless Takedown Notices

Washington, D.C.—The Electronic Frontier Foundation (EFF) today filed a petition on behalf of its client Stephanie Lenz asking the U.S. Supreme Court to ensure that copyright holders who make unreasonable infringement claims can be held accountable if those claims force lawful speech offline.

Lenz filed the lawsuit that came to be known as the “Dancing Baby” case after she posted—back in 2007—a short video on YouTube of her toddler son in her kitchen. The 29-second recording, which Lenz wanted to share with family and friends, shows her son bouncing along to the Prince song "Let's Go Crazy," which is heard playing in the background. Universal Music Group, which owns the copyright to the Prince song, sent YouTube a notice under the Digital Millennium Copyright Act (DMCA), claiming that the family video was an infringement of the copyright.

EFF sued Universal on Lenz’s behalf, arguing that the company’s claim of infringement didn’t pass the laugh test and was just the kind of improper, abusive DMCA targeting of lawful material that so often threatens free expression on the Internet. The DMCA includes provisions designed to prevent abuse of the takedown process and allows people like Lenz to sue copyright holders for bogus takedowns.

The San Francisco-based U.S. Court of Appeals for the Ninth Circuit last year sided in part with Lenz, ruling that copyright holders must consider fair use before sending a takedown notice. But the court also held that copyright holders should be held to a purely subjective standard. In other words, senders of false infringement notices could be excused so long as they subjectively believed that the material they targeted was infringing, no matter how unreasonable that belief. Lenz is asking the Supreme Court to overrule that part of the Ninth Circuit’s decision to ensure that the DMCA provides the protections for fair use that Congress intended.

“Rightsholders who force down videos and other online content for alleged infringement—based on nothing more than an unreasonable hunch, or subjective criteria they simply made up—must be held accountable,” said EFF Legal Director Corynne McSherry. “If left standing, the Ninth Circuit’s ruling gives fair users little real protection against private censorship through abuse of the DMCA process.”

For the brief:
https://www.eff.org/document/petition-writ-lenz-v-universal

For more on Lenz v. Universal:
https://www.eff.org/cases/lenz-v-universal

Contact: Corynne McSherry, Legal Director, corynne@eff.org

We Shouldn’t Wait Another Fifteen Years for a Conversation About Government Hacking

Fri, 08/12/2016 - 13:46

With high-profile hacks in the headlines and government officials trying to reopen a long-settled debate about encryption, information security has become a mainstream issue. But we feel that one element of digital security hasn’t received enough critical attention: the role of government in acquiring and exploiting vulnerabilities and hacking for law enforcement and intelligence purposes. That’s why EFF recently published some thoughts on a positive agenda for reforming how the government obtains, creates, and uses vulnerabilities in our systems for a variety of purposes, from overseas espionage and cyberwarfare to domestic law enforcement investigations.

Some influential commentators like Dave Aitel at Lawfare have questioned whether we at EFF should be advocating for these changes, because pursuing any controls on how the government uses exploits would be “getting ahead of the technology.” But anyone who follows our work should know we don’t call for new laws lightly.

To be clear: We are emphatically not calling for regulation of security research or exploit sales. Indeed, it’s hard to imagine how any such regulation would pass constitutional scrutiny. We are calling for a conversation around how the government uses that technology. We’re fans of transparency; we think technology policy should be subject to broad public debate, heavily informed by the views of technical experts. The agenda in the previous post outlined calls for exactly that.

There’s reason to doubt anyone who claims that it’s too soon to get this process started.

Consider the status quo: The FBI and other agencies have been hacking suspects for at least 15 years without real, public, and enforceable limits. Courts have applied an incredible variety of ad hoc rules around law enforcement’s exploitation of vulnerabilities, with some going so far as to claim that no process at all is required. Similarly, the government’s (semi-)formal policy for acquisition and retention of vulnerabilities—the Vulnerabilities Equities Process (VEP)—was apparently motivated in part by public scrutiny of Stuxnet (widely thought to have been developed at least in part by the U.S. government) and the long history of exploiting vulnerabilities in its mission to disrupt Iran's nuclear program. Of course, the VEP sat dormant and unused for years until after the Heartbleed disclosure. Even today, the public has seen the policy in redacted form only thanks to FOIA litigation by EFF.

The status quo is unacceptable.

If the Snowden revelations taught us anything, it’s that the government is in little danger of letting law hamstring its opportunistic use of technology. Nor is the executive branch shy about asking Congress for more leeway when hard-pressed. That’s how we got the Patriot Act and the FISA Amendments Act, not to mention the impending changes to Federal Rule of Criminal Procedure 41 and the endless encryption “debate.” The notable and instructive exception is the USA Freedom Act, the first statute substantively limiting the NSA’s power in decades, born out of public consternation over the agency’s mass surveillance.

So let’s look at some of the arguments for not pursuing limits on the government’s use of particular technologies here.

On vulnerabilities, the question is whether the United States should have any sort of comprehensive, legally mandated policy requiring disclosure in some cases where the government finds, acquires, creates, or uses vulnerabilities affecting the computer networks we all rely on. That is, should we take a position on whether it is beneficial for the government to disclose vulnerabilities to those in the security industry responsible for keeping us safe? 

In one sense, this is a strange question to be asking, since the government says it already has a considered position, as described by White House Cybersecurity Coordinator Michael Daniel: “[I]n the majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest.” Other knowledgeable insiders—from former National Security Council Cybersecurity Directors Ari Schwartz and Rob Knake to President Obama’s hand-picked Review Group on Intelligence and Communications Technologies—have also endorsed clear, public rules favoring disclosure.

But Aitel says all those officials are wrong. He argues that we as outsiders have no evidence that disclosure increases security. To the contrary, Aitel says it’s a “fundamental misstatement” and a “falsehood” that vulnerabilities exploited by the government might overlap with vulnerabilities used by bad actors. “In reality,” he writes, “the vulnerabilities used by the U.S. government are almost never discovered or used by anyone else.”

If Aitel has some data to back up his “reality,” he doesn’t share it. And indeed, in the past, Aitel himself has written that “bugs are often related, and the knowledge that a bug exists can lead [attackers] to find different bugs in the same code or similar bugs in other products.” This suggests that coordinated disclosure by the government to affected vendors wouldn’t just patch the particular vulnerabilities being exploited, but rather would help them shore up the security of our systems in new, important, and possibly unexpected ways. We already know, in non-intelligence contexts, that “bug collision,” while perhaps not common, is certainly a reality. We see no reason, and commentators like Aitel have pointed to none, that exploits developed or purchased by the government wouldn’t be subject to the same kinds of collision.

In addition, others with knowledge of the equities process, like Knake and Schwartz, are very much concerned about the risk of these vulnerabilities falling into the hands of groups “working against the national security interest of the United States.” Rather than sit back and wait for that eventuality—which Aitel dismisses without showing his work—we agree with Daniel, Knake, Schwartz, and many others that the VEP needs to put defense ahead of offense in most cases.

Democratic oversight won't happen in the shadows

Above all, we can’t have the debate all sides claim to want without a shared set of data. And if outside experts are precluded from participation because they don’t have a TS/SCI clearance, then democratic oversight of the intelligence community doesn’t stand much chance.

On its face, the claim that vulnerabilities used by the U.S. are in no danger of being used by others seems particularly weak when combined with the industry’s opposition to “exclusives,” clauses accompanying exploit purchase agreements giving the U.S. exclusive rights to their use. In a piece last month, Aitel’s Lawfare colleague Susan Hennessey laid out her opposition to any such requirements. But we know for instance that the NSA buys vulnerabilities from the prolific French broker/dealer Vupen. Without any promises of exclusivity from sellers like Vupen, it’s implausible for Aitel to claim that exploits the US purchases will “almost never” fall into others’ hands. 

Suggesting that no one else will happen onto exploits used by the U.S. government seems overconfident at best, given that collisions of vulnerability disclosure are well-documented in the wild. And if disclosing vulnerabilities will truly burn “techniques” and expose “sensitive intelligence operations,” that seems like a good argument for formally weighing the equities on both sides on an individualized basis, as we advocate.

In short, we’re open to data suggesting we’re wrong about the substance of the policy, but we’re not going to let Dave Aitel tell us to “slow our roll.” (No disrespect, Dave.)

Our policy proposal draws on familiar levers—public reports and congressional oversight. Even those who say that the government’s vulnerability disclosure works fine as is, like Hennessey, have to acknowledge that there’s too much secrecy. EFF shouldn’t have had to sue to see the VEP in the first place, and we shouldn’t still be in the dark about certain details of the process. As recently as last year, the DOJ claimed under oath that merely admitting that the U.S. has “offensive” cyber capabilities would endanger national security. Raising the same argument about simply providing insight into that process is just as unpersuasive to us. If the government truly does weigh the equities and disclose the vast majority of vulnerabilities, we should have some way of seeing its criteria and verifying the outcome, even if the actual deliberations over particular bugs remain classified. 

Meanwhile, the arguments against putting limits on government use of exploits and malware—what we referred to as a “Title III for hacking”—hold up to even less scrutiny.

The FBI’s use of malware raises serious constitutional and legal questions, and the warrant issued in the widely publicized Playpen case arguably violates both the Fourth Amendment and Rule 41. Further problems arose at the trial stage in one Playpen prosecution when the government refused to disclose all evidence material to the defense, because it “derivatively classified" the exploit used by the FBI. The government would apparently prefer dismissal of prosecutions to disclosure, under court-supervised seal, of exploits that would reveal intelligence sources and methods, even indirectly. Thus, even where exploits are widely used for law enforcement, the government’s policy appears to be driven by the Defense Department, not the Justice Department. That ordering of priorities is incompatible with prosecuting serious crimes like child pornography. Hence, those who ask us to slow down should recognize that the alternative to a Title III for hacking is actually a series of court rulings putting a stop to the government’s use of such exploits.

Adapting Title III to hacking is also a case where public debate should inform the legislative process. We’re not worried about law enforcement and the intelligence community advocating for their vision of how technology should be used. Given the calls to slow down, however, we are very concerned that there be input from the public, especially technology experts charged with defending our systems—not just exploit developers with Top Secret clearances.

Related Cases: The Playpen Cases: Mass Hacking by U.S. Law Enforcement; EFF v. NSA, ODNI - Vulnerabilities FOIA

Illinois Sets New Limits On Cell-Site Simulators

Thu, 08/11/2016 - 19:27

Illinois has joined the growing ranks of states limiting how police may use cell-site simulators, invasive technology devices that masquerade as cell phone towers and turn our mobile phones into surveillance devices. By adopting the Citizen Privacy Protection Act, Illinois last month joined half a dozen other states—as well as the Justice Department and one federal judge—that have reiterated the constitutional requirement for police to obtain a judicial warrant before collecting people's location and other personal information using cell-site simulators.

By going beyond a warrant requirement and prohibiting police from intercepting data and voice transmissions or conducting offensive attacks on personal devices, the Illinois law establishes a new high-water mark in the battle to prevent surveillance technology from undermining civil liberties. Illinois also set an example for other states to follow by providing a powerful remedy when police violate the new law by using a cell-site simulator without a warrant: wrongfully collected information is inadmissible in court, whether to support a criminal prosecution or any other government proceeding.

Tools to monitor cell phones

Cell-site simulators are sometimes called “IMSI catchers” because they seize the unique International Mobile Subscriber Identity of every cell phone within a particular area and force those phones to connect to them instead of to real cell towers.

Early versions of the devices—such as the Stingray, used by police in major U.S. cities since at least 2007 after having been used by federal authorities since at least the 1990s—were limited to tracking location and capturing and recording data and voice traffic transmitted by phones. Later versions, however, added further capabilities, which policymakers in Illinois have become the first to address.

Cell phone surveillance tools have eroded constitutional rights guaranteed under the Fourth Amendment’s protection from unreasonable searches and seizures in, at minimum, tens of thousands of cases. Stingrays were deployed thousands of times in New York City alone—and even more often in Baltimore—without legislative or judicial oversight, until in 2011 a jailhouse lawyer accused of tax fraud discovered the first known reference to a “Stingray” in court documents relating to the 2008 investigation that led to his arrest and conviction.

Meanwhile, government and corporate secrecy surrounding police uses of Stingrays has undermined Fifth and Sixth Amendment rights to due process, such as the right to challenge evidence used by one’s accusers. Contracts with police departments demanded by corporate device manufacturers imposed secrecy so severe that prosecutors walked away from legitimate cases across the country rather than risk revealing Stingrays to judges by pursuing prosecutions based on Stingray-collected evidence.

Citing the constraint of a corporate non-disclosure agreement, a police officer in Baltimore even risked contempt charges by refusing to answer judicial inquiries about how police used the devices. Baltimore public defender Deborah Levi explains, “They engage in a third-party contract to violate people’s constitutional rights.”

Several states agree: Get a warrant

In one respect, Illinois is walking well-settled ground.

By requiring that state and local police agents first seek and secure a judicial order based on individualized probable cause of criminal misconduct before using a cell-site simulator, Illinois has joined half a dozen other states (including California, Washington, Utah, Minnesota, and Virginia) that have already paved that road.

At the federal level, the Justice Department took action in 2015 to require federal agencies to seek warrants before using the devices. And just two weeks before Illinois enacted its new law, a federal judge in New York ruled for the first time that defendants could exclude from trial evidence collected from an IMSI-catcher device by police who failed to first obtain a judicial order.

These decisions vindicate core constitutional rights as well as the separation of powers, and they underscore that warrants are constitutionally crucial.

It's true that warrants are not particularly difficult for police to obtain when based on legitimate reasons for suspicion. When New York Court of Appeals Chief Judge Sol Wachtler observed in 1985 that any prosecutor could persuade a grand jury to “indict a ham sandwich,” he was talking about the ease with which the government can satisfy the limited scrutiny applied in any one-sided process, including that through which police routinely secure search warrants.

But while judicial warrants do not present a burdensome constraint on legitimate police searches, they play an important role in the investigative process. Warrantless searches are conducted essentially by fiat, without independent review, and potentially arbitrarily. Searches conducted pursuant to a warrant, however, bear the stamp of impartial judicial review and approval.

Warrants ensure, for instance, that agencies do not treat their public safety mandate as an excuse to pursue personal vendettas, or the kinds of stalking “LOVEINT” abuses to which NSA agents and contractors have occasionally admitted. Requiring authorization from a neutral magistrate, put simply, maintains civilian control over police.

Despite its importance and the ease with which authorities can satisfy it, the warrant requirement has ironically suffered frequent erosion by the courts—making efforts by states like Illinois to legislatively reiterate and expand it all the more important.

But in two important respects beyond the warrant requirement, the Illinois Citizen Privacy Protection Act breaks new ground. 

Breaking new ground: Allowing an exclusionary remedy

First, the Illinois law is the first policy of its kind in the country that carries a price for law enforcement agencies that violate the warrant requirement. If police use a cell-site simulator to gather information without securing a judicial order, then courts will suppress that information and exclude it from any consideration at trial.

This vindicates the rights of accused individuals by enabling them to exclude illegally collected evidence. It also helps ensure that police use their powerful authorities only for legitimate reasons based on probable cause to suspect criminal activity, rather than for fishing expeditions without real proof of misconduct or, for that matter, in service of the personal, racial, or financial biases of police officers.

Like the warrant requirement created to limit the powers of police agencies, the exclusionary rule on which the judiciary relies to enforce the warrant requirement has endured doctrinal erosion over the past generation. Courts have carved out one exception after another, allowing prosecutors to use “fruits of the poisonous tree” in criminal trials despite violations of constitutional rights committed by police when collecting them.

In this context, the new statute in Illinois represents a crucial public policy choice explicitly extending the critical protections of the warrant requirement and exclusionary rule.

Breaking new ground: Prohibiting offensive uses

The new Illinois law also limits the purposes for which cell-site simulators may be used, even pursuant to a judicial order. It flatly prohibits several particularly offensive uses that remain largely overlooked elsewhere.

When Stingrays (and their frequent secret use by local police departments across the country) first attracted attention, most concerns addressed the location tracking capabilities of the device’s first generation, obtained by domestic police departments as early as 2003.

But while Stingrays presented profound constitutional concerns 10 years ago, they present even greater concerns now, because of technology advancements in the past decade enabling stronger surveillance and even militaristic offensive capabilities. Unlike early versions of the devices that could be used only for location monitoring or gathering metadata, later versions, such as the Triggerfish, Hailstorm and Stargrazer series, can be used to intercept voice communications or browsing history in real-time, mount offensive denial of service attacks on a phone, or even plant malware on a device.

Recognizing how invasive the latest versions of IMSI-catchers can be, legislators in Illinois authorized police to use cell-site simulators in only two ways: after obtaining a warrant, police may use the devices to locate or track a known device, or instead to identify an unknown device.

The Citizen Privacy Protection Act affirmatively bans all other uses of these devices, even when supported by a judicial order. Prohibited activities include intercepting the content or metadata of phone calls or text messages, planting malware on someone’s phone, and blocking a device from communicating with other devices.

The use limitations enshrined in Illinois law are among the first of their kind in the country.

The Illinois statute also requires police to delete any data (within 24 hours after location tracking, or within 72 hours of identifying a device) incidentally obtained from third parties, such as non-targets whose devices are forced to connect to a cell-site simulator. These requirements are similar to those announced by a federal magistrate judge in Illinois who in November 2015 imposed on a federal drug investigation minimization requirements including an order to “immediately destroy all data other than the data identifying the cell phone used by the target. The destruction must occur within forty-eight hours after the data is captured.”
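
To make the statute's timing rules concrete, here is a minimal sketch in Python of how an agency's data-handling system might compute those deletion deadlines. The retention table, field names, and function names are our own illustration, not anything specified by the law:

    from datetime import datetime, timedelta

    # Hypothetical retention windows matching the statute as described
    # above: 24 hours after location tracking, 72 hours after device
    # identification. Names and structure are illustrative only.
    RETENTION = {
        "location_tracking": timedelta(hours=24),
        "device_identification": timedelta(hours=72),
    }

    def deletion_deadline(captured_at: datetime, use_type: str) -> datetime:
        """Return the time by which incidentally captured third-party
        data must be deleted for the given authorized use."""
        return captured_at + RETENTION[use_type]

    # Example: third-party data swept up while tracking a known device
    # at noon on August 11 must be purged by noon on August 12.
    print(deletion_deadline(datetime(2016, 8, 11, 12, 0), "location_tracking"))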

Enhancing security through transparency

Beyond enforcing constitutional limits on the powers of law enforcement agencies, and protecting individual rights at stake, the new law in Illinois also appropriately responds to an era of executive secrecy.

The secrecy surrounding law enforcement uses of IMSI-catchers has also compromised security. As the ACLU’s Chris Soghoian has explained alongside Stephanie Pell from West Point’s Army Cyber Institute and Stanford University, “the general threat that [any particular] technology poses to the security of cellular networks” could outweigh its “increasing general availability at decreasing prices.” With respect to cell-site simulators, in particular:

[C]ellular interception capabilities and technology have become, for better or worse, globalized and democratized, placing Americans’ cellular communications at risk of interception from foreign governments, criminals, the tabloid press and virtually anyone else with sufficient motive to capture cellular content in transmission. Notwithstanding this risk, US government agencies continue to treat practically everything about this cellular interception technology, as a closely guarded, necessarily secret “source and method,” shrouding the technical capabilities and limitations of the equipment from public discussion….

Given the persistent secrecy surrounding IMSI-catchers and the unknown risks they pose to both individual privacy and network security, the statutory model adopted by Illinois represents a milestone not only for civil liberties but also for the security of our technological devices. Khadine Bennett from the ACLU of Illinois explained the new law’s importance in terms of the secrecy pervading how police have used cell-site simulators:

For so long, uses of IMSI-catchers such as Stingrays have been behind the scenes, enabling searches like the pat down of thousands of cell phones at once without the users ever even knowing it happened. It’s exciting to see Illinois adopt a measure to ensure that these devices are used responsibly and appropriately, and I hope to see more like it emerge around the country.

EFF enthusiastically agrees with Ms. Bennett. If you’d like to see the Citizen Privacy Protection Act’s groundbreaking requirements adopted in your state, you can find support through the Electronic Frontier Alliance.


EFF Announces 2016 Pioneer Award Winners: Malkia Cyril of the Center for Media Justice, Data Protection Activist Max Schrems, the Authors of ‘Keys Under Doormats,’ and the Lawmakers Behind CalECPA

Tue, 08/09/2016 - 15:45
Ceremony for Honorees on September 21 in San Francisco

San Francisco - The Electronic Frontier Foundation (EFF) is pleased to announce the distinguished winners of the 2016 Pioneer Awards: Malkia Cyril of the Center for Media Justice, data protection activist Max Schrems, the authors of the “Keys Under Doormats” report that counters calls to break encryption, and the lawmakers behind CalECPA—a groundbreaking computer privacy law for Californians.

The award ceremony will be held the evening of September 21 at Delancey Street’s Town Hall Room in San Francisco. The keynote speaker is award-winning investigative journalist Julia Angwin, whose work on corporate invasions of privacy has uncovered the myriad ways companies collect and control personal information. Her recent articles have sought to hold algorithms accountable for the important decisions they make about our lives. Tickets are $65 for current EFF members, or $75 for non-members.  

Malkia A. Cyril is the founder and executive director of the Center for Media Justice and co-founder of the Media Action Grassroots Network, a national network of community-based organizations working to ensure racial and economic justice in a digital age. Cyril is one of the few leaders of color in the movement for digital rights and freedom, and a leader in the Black Lives Matter Network—helping to bring important technical safeguards and surveillance countermeasures to people across the country who are fighting to reform systemic racism and violence in law enforcement. Cyril is also a prolific writer and public speaker on issues ranging from net neutrality to the communication rights of prisoners. Their comments have been featured in publications like Politico, Motherboard, and Essence Magazine, as well as three documentary films. Cyril is a Prime Movers fellow, a recipient of the 2012 Donald H. McGannon Award for work to advance the roles of women and people of color in the media reform movement, and won the 2015 Hugh Hefner 1st Amendment Award for framing net neutrality as a civil rights issue.

Max Schrems is a data protection activist, lawyer, and author whose lawsuits over U.S. companies’ handling of European Union citizens’ personal information have changed the face of international data privacy. Since 2011 he has worked on the enforcement of EU data protection law, arguing that untargeted wholesale spying by the U.S. government on Internet communications undermines the EU’s strict data protection standards. One lawsuit that reached the European Court of Justice led to the invalidation of the “Safe Harbor” agreement between the U.S. and the EU, forcing governments around the world to grapple with the conflict between U.S. government surveillance practices and the privacy rights of citizens around the world. Another legal challenge is a class action lawsuit with more than 25,000 members currently pending at the Austrian Supreme Court. Schrems is also the founder of “Europe v Facebook,” a group that pushes for social media privacy reform at Facebook and other companies, calling for data collection minimization, opt-in policies instead of opt-outs, and transparency in data collection.

The “Keys Under Doormats” report has been central to grounding the current encryption debates in scientific realities. Published in July of 2015, it emerged just as calls to break encryption with “backdoors” or other access points for law enforcement were becoming pervasive in Congress, but before the issue came into the global spotlight with the FBI’s efforts against Apple earlier this year. “Keys Under Doormats” both reviews the underlying technical considerations of the earlier encryption debate of the 1990s and examines the modern systems realities, creating a compelling, comprehensive, and scientifically grounded argument to protect and extend the availability of encrypted digital information and communications. The authors of the report are all security experts, building the case that weakening encryption for surveillance purposes could never allow for any truly secure digital transactions. The “Keys Under Doormats” authors are Harold Abelson, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Whitfield Diffie, John Gilmore, Matthew Green, Susan Landau, Peter G. Neumann, Ronald L. Rivest, Jeffrey I. Schiller, Bruce Schneier, Michael Specter, and Daniel J. Weitzner. Work on the report was coordinated by the MIT Internet Policy Research Initiative.

CalECPA—the California Electronic Communications Privacy Act—is a landmark law that safeguards privacy and free speech rights. CalECPA requires that a California government entity get a warrant to search electronic devices or compel access to any electronic information, like email, text messages, documents, metadata, and location information—whether stored on the electronic device itself or online in the “cloud.” CalECPA gave California the strongest digital privacy law in the nation and helps prevent abuses before they happen. In many states without this protection, police routinely claim the authority to search sensitive electronic information about who we are, where we go, and what we do—without a warrant. CalECPA was introduced by California State Senators Mark Leno (D-San Francisco) and Joel Anderson (R-Alpine), who both fought for years to get stronger digital privacy protections for Californians. Leno has been a champion of improved transportation, renewable energy, and equal rights for all, among many other issues. Anderson regularly works across party lines to protect consumer privacy in the digital world.

“We are honored to announce this year’s Pioneer Award winners, and to celebrate the work they have done to make communications private, safe, and secure,” said EFF Executive Director Cindy Cohn. “The Internet is an unprecedented tool for everything from activism to research to commerce, but it will only stay that way if everyone can trust their technology and the systems it relies on. With this group of pioneers, we are building a digital future we can all be proud of.”

Awarded every year since 1992, EFF’s Pioneer Awards recognize the leaders who are extending freedom and innovation on the electronic frontier. Previous honorees have included Aaron Swartz, Citizen Lab, Richard Stallman, and Anita Borg.

Sponsors of the 2016 Pioneer Awards include Adobe, Airbnb, Dropbox, Facebook, and O’Reilly Media.

To buy tickets to the Pioneer Awards:
https://www.eff.org/Pioneer2016

Contact: Rebecca Jeschke, Media Relations Director and Digital Rights Analyst, press@eff.org

Stand Up for Open Access. Stand Up for Diego.

Tue, 08/09/2016 - 14:30

Diego Gomez is a recent biology graduate from the University of Quindío, a small university in Colombia. His research interests are reptiles and amphibians. Since the university where he studied didn’t have a large budget for access to academic databases, he did what any other science grad student would do: he found the resources he needed online. Sometimes he shared the research he discovered, so that others could benefit as well.

In 2011, Diego shared another student’s Master’s thesis with colleagues over the Internet. That simple act—something that many people all over the world do every day—put Diego at risk of spending years in prison. In Colombia, copying and distributing copyrighted works without permission can lead to criminal penalties of up to eight years in prison if the prosecution can show the infringement hurt the commercial rights of the author (derechos patrimoniales).

We’ve been following Diego’s trial over the past year, and closing arguments are scheduled for this week. Today, we join open access allies all over the world in standing with Diego.

Support Open Access Worldwide

EFF believes that extreme criminal penalties for copyright infringement can chill people’s right of free expression, limit the public’s access to knowledge, and quell scientific research. That’s particularly true in countries like Colombia that pay lip service to free speech and access to education (which are expressly recognized as basic rights in Colombia’s Constitution) but don’t have the robust fair use protections that help ensure copyright doesn’t stymie those commitments.

Diego’s case also serves as a wake-up call: it’s time for open access to become the global standard for academic publishing.

The movement for open access is not new, but it seems to be accelerating. Even since we started following Diego’s case in 2014, many parts of the scientific community have begun to fully embrace open access publishing. Dozens of universities have adopted open access policies requiring that university research be made open, either through publishing in open access journals or by archiving papers in institutional repositories. This year’s groundbreaking discovery on gravitational waves—certainly one of the most important scientific discoveries of the decade—was published in an open access journal under a Creative Commons license. Here in the U.S., it’s becoming more and more clear that an open access mandate for federally funded research will be written into law; it’s just a matter of when. The tide is changing, and open access will win.

But for researchers like Diego who face prison time right now, the movement is not accelerating quickly enough. Open access could have saved Diego from the risk of spending years in prison.

Many people reading this remember the tragic story of Aaron Swartz. When Aaron died, he was facing severe penalties for accessing millions of articles via MIT’s computer network without "authorization." Diego’s case differs from Aaron’s in a lot of ways, but in one important way, they’re exactly the same: if all academic research were published openly, neither of them would have been in trouble for anything.

When laws punish intellectual curiosity and scientific research, everyone suffers; not just researchers, but also the people and species who would benefit from their research. Copyright law is supposed to foster innovation, not squash it.

Please join us in standing with Diego. Together, we can fight for a time when everyone can access and share the world’s research.

Support Open Access Worldwide


DRM: You have the right to know what you're buying!

Fri, 08/05/2016 - 15:06

Today, EFF and a coalition of organizations and individuals asked the US Federal Trade Commission (FTC) to explore fair labeling rules that would require retailers to warn you when the products you buy come locked down by DRM ("Digital Rights Management" or "Digital Restrictions Management").

These digital locks train your computerized devices to disobey you when you ask them to do things the manufacturer didn't specifically authorize -- even when those things are perfectly legal. Companies that put digital locks on their products -- ebook, game, and music publishers, video companies, companies that make hardware from printers to TVs to cat litter trays -- insist that DRM benefits their customers by allowing the companies to offer products at a lower price in exchange for taking away some of the value: you can "rent" an ebook or a movie, or get a printer at a price that only makes sense if you also have to buy expensive replacement ink.

We don't buy it. The evidence suggests that customers don't much care for DRM (when was the last time you woke up and said, "Gosh, I wish there was a way I could do less with my games"?). Studies agree.

The FTC is in charge of making sure that Americans don't get ripped off when they buy things. We've written the Commission a letter, drafted and signed by a diverse coalition of public interest groups, publishers, and rightsholders, calling on the agency to instruct retailers to inform potential customers of the restrictions on the products they're selling. In a separate letter, we detail the stories of 22 EFF supporters who unwittingly purchased DRM-encumbered products and later found themselves unable to enjoy their purchases (a travel guide that required a live Internet connection to unlock, making it unreadable on holiday), or locked into an abusive relationship with their vendors (a cat litter box that only worked if resupplied with expensive detergent), or even had other equipment they owned rendered permanently inoperable by the DRM in a new purchase (for example, a game that "bricked" a customer's DVD-RW drive).

Now that the FTC has been equipped with evidence that there are real harms, and that rightsholders are willing to adopt fair labeling practices, the agency should act. And if the DRM companies are so sure that their customers love their products, why would they object?

EFF is currently suing the US government to invalidate Section 1201 of the DMCA, a law that has been used to threaten research into the security risks of DRM and inhibit the development of products and tools that break digital locks -- again, even if the purpose is otherwise legal (like letting you read your books on an alternate reader, or put a different brand of perfume in your cat litter box). Until we win our lawsuit, people who buy DRM-locked products are unlikely to be rescued from their lock-in by add-ons that restore functionality to their property. That makes labeling especially urgent: it's bad enough to be stuck with a product that is defective by design, but far worse if those defects can't be fixed without risking legal retaliation.

For the full letter to the FTC about labeling:
https://www.eff.org/document/eff-letter-ftc-re-drm-labeling

For the full letter to the FTC with the stories of people who've been harmed by DRM they weren't informed of:
https://www.eff.org/files/2016/08/06/eff_request_for_investigation_re_labeling_drm-limited_products.pdf


EFF to FTC: Online Retailers Must Label Products Sold with Digital Locks

Fri, 08/05/2016 - 13:24
Consumers Need Warning If Movies, Music, Games Restrict When and How They Are Used

San Francisco - The Electronic Frontier Foundation (EFF) and a coalition of consumer groups, content creators, and publishers asked the Federal Trade Commission (FTC) today to require online retailers to label the ebooks, songs, games, and apps that come with digital locks restricting how consumers can use them.
 
In a letter sent to the FTC today, the coalition said companies like Amazon, Google, and Apple have a duty to inform consumers if products for sale are locked with some kind of "digital rights management" or DRM. Companies use DRM to purportedly combat copyright infringement, but DRM locks can also block you from watching the movie you bought in New York when you go to Asia on vacation, or limit which devices can play the songs you purchased.
 
"Without DRM labeling, it’s nearly impossible to figure out which products have digital locks and what restrictions these locks impose," said EFF Special Advisor Cory Doctorow. "We know the public prefers DRM-free e-books and other electronic products, but right now buyers are in the dark about DRM locks when they go to make purchases online. Customers have a right to know about these restrictions before they part with their money, not after."
 
The letter is accompanied by a request that the FTC investigate and take action on behalf of consumers who find themselves deprived of the enjoyment of their property every day, due to a marketplace where products limited by DRM are sold without adequate notice. The request details the stories of 20 EFF supporters who bought products—ebooks, videos, games, music, devices, even a cat-litter box—that came with DRM that caused them grief. They report that DRM left them with broken, orphaned, or useless devices and in some cases even incapacitated other devices.
 
The FTC oversees fair packaging and labeling rules that are supposed to prevent consumers from being deceived and facilitate value comparisons. Today’s letter argues that the FTC should require electronic sellers to use a simple, consistent, and straightforward label about DRM locks for digital media. For example, "product detail" lists—which appear on digital product pages and disclose such basic information as serial number, file size, publisher, and whether certain technological features are enabled—should include a category stating whether a product is DRM-free or DRM-restricted. The latter designation should include a link to a clear explanation of the restrictions imposed on the product.
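
To illustrate (the letter proposes no particular format), such a disclosure could be one more entry in the structured product data retailers already display. Below is a minimal sketch in Python, with entirely hypothetical field names and URLs:

    # A hypothetical "product detail" record with a DRM disclosure added.
    # Field names and values are invented for illustration.
    product_detail = {
        "title": "Example Travel Guide (ebook)",
        "publisher": "Example Press",
        "file_size_mb": 12,
        "drm_status": "DRM-restricted",  # or "DRM-free"
        "drm_details_url": "https://retailer.example/drm/12345",
    }

    def drm_label(detail: dict) -> str:
        """Render the simple, consistent label described above."""
        if detail.get("drm_status") == "DRM-free":
            return "DRM-free"
        return "DRM-restricted (restrictions: %s)" % detail.get(
            "drm_details_url", "not disclosed")

    print(drm_label(product_detail))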
 
"The use of DRM is controversial among creators, studios, and audiences. What shouldn’t be controversial is the right of consumers to know which products have DRM locks. If car companies made vehicles that only drove on certain streets, they’d have to disclose this to consumers. Likewise, digital media products with DRM restrictions should be clearly labeled," said Doctorow.
 
Signers of today’s letter include the Consumer Federation of America, Public Knowledge, the Free Software Foundation, McSweeney’s, and No Starch Press.
 
For the full letter to the FTC about labeling:
https://www.eff.org/document/eff-letter-ftc-re-drm-labeling

For the full letter to the FTC with the stories of people who've been harmed by DRM they weren't informed of:
https://www.eff.org/files/2016/08/06/eff_request_for_investigation_re_labeling_drm-limited_products.pdf

Contact: Cory Doctorow, EFF Special Advisor, doctorow@craphound.com

Join Us for the Great California Database Hunt

Fri, 08/05/2016 - 11:29

Imagine if local governments were like restaurants, where you could pick up a menu of public datasets, read the names and descriptions, then order whatever suits your open data appetite.

This transparency advocate’s fantasy became reality in California on July 1, when a new law took effect. S.B. 272 added a section to the California Public Records Act that requires local agencies (except school districts) to publish inventories of “enterprise systems” on their websites. We are talking about catalogs of every database that holds information on the public or serves as a primary source of government data. 

And we need your help on Saturday, Aug. 27 to—as the saying goes—catch ‘em all.

What: California Database Hunt

Date: Saturday, August 27, 2016
Time: 11 a.m. - 3 p.m. PT/ 2 p.m. - 6 p.m. ET
Where: San Francisco, Washington, D.C., and Remotely
RSVP Link

Similar policies are in place on the federal level due to President Obama's 2013 Open Data Policy, which requires every federal agency to compile an inventory of its data resources and say what's public and what's not.

Under the new California law, these catalogs don’t simply list the names of databases. They also contain information such as: the purpose of the system; the type of data collected; how often data is collected and updated; the name of the software product being used; and the vendor supplying it. An example of what one entry might look like follows below.
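
Here is a rough illustration of a single catalog entry, sketched in Python. The statute specifies the categories of information, not a format, and every value here is invented:

    import csv, io

    # One hypothetical S.B. 272 catalog entry with the categories
    # listed above; all values are made up for illustration.
    entry = {
        "system_name": "Parking Citation Management System",
        "purpose": "Track issuance and payment of parking citations",
        "data_collected": "Names, addresses, license plates, payments",
        "collection_frequency": "Daily",
        "software_product": "CiteTrack",       # hypothetical product
        "vendor": "Example Gov Systems Inc.",  # hypothetical vendor
    }

    # Agencies publish in many formats; CSV is one common choice.
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=entry.keys())
    writer.writeheader()
    writer.writerow(entry)
    print(out.getvalue())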

The passage of S.B. 272 was a victory on multiple fronts. Now, the public can look through these catalogs in order to file records requests for data sets. Privacy and civil liberties activists can also learn what kind of data is being collected on the public, including police databases and certain surveillance systems.

So far, there’s little consistency between local agencies publishing these sets. For example, the City of Manhattan Beach provides its inventory of 13 enterprise systems as a .pdf file.  Meanwhile, the City and County of San Francisco offers a robust inventory of 451 data systems that can be filtered, searched, sorted, and exported in multiple formats.

Currently, however, all of these catalogs reside on individual agency websites.

The Electronic Frontier Foundation, the Data Foundation, and the Sunlight Foundation are now teaming up to collect links to all these data catalogs in a single repository. And we need your help.

Join us on Aug. 27 for a sprint to track down and index these catalogs across California. We’ll be holding events in San Francisco and Washington, D.C., but you can also join us remotely from wherever you are in the world.

To register for the event or for more information, just sign up. (If you plan on attending in-person in DC, please also register with the Data Foundation for logistical coordination.) 


FCC Settlement Requires TP-Link to Support 3rd-Party Firmware

Thu, 08/04/2016 - 21:30

In a win for the open source community, router maker TP-Link will be required to allow consumers to install third-party firmware on their wireless routers, the Federal Communications Commission (FCC) announced Monday. The announcement comes on the heels of a settlement requiring TP-Link to pay a $200,000 fine for failing to properly limit their devices' transmission power on the 2.4GHz band to within regulatory requirements. On its face, new rules about open source firmware don't seem to have much to do with TP-Link's compliance problems. But the FCC's new rule helps fix an unintended consequence of a policy the agency made last year, which had led to open source developers being locked out of wireless routers entirely.

The FCC set forth a list of Software Security Requirements in March 2015 that included language appearing to encourage manufacturers to restrict third-party firmware—in particular the popular DD-WRT—that could be used to operate devices outside their authorized radio frequency parameters. The purpose of the requirements was to prevent wireless routers from interfering with other communications. In November, the FCC clarified that it was not in fact seeking to ban open source software from wireless routers—but by that point the damage had already been done. TP-Link had already begun paving the way for locking out third-party firmware as a way of bringing itself into compliance. Meanwhile, other manufacturers such as Linksys had sought to work with the open-source firmware community to allow consumers to install custom firmware without violating FCC rules.

This decision is a welcome one for the open-source firmware community, which has worked hard to support the wide range of routers in circulation. It's good for security, too. Manufacturers often leave their device firmware neglected after flashing it at the factory, leaving users completely unprotected from security vulnerabilities that are frequently discovered. Just last month, TP-Link let the domain registration lapse for a site allowing consumers to configure their devices over the Internet, potentially exposing a large swath of its users to credentials-stealing or malware attacks. Many open-source firmware projects, on the other hand, release regular updates that allow users to make sure vulnerabilities on their devices get patched. In addition, third-party firmware allows users to take more fine-grained control of their routers than is typically granted by manufacturer firmware. This opens a whole range of possibilities, from power-users wishing to extend the range of their home Wi-Fi by setting up repeaters throughout their homes, to community members wishing to take part in innovative community-based mesh-networking firmware projects.

Although the FCC statement guarantees that TP-Link will allow installation of open-source firmware, the agency has also made clear that manufacturers have to do something to ensure compliance with a second set of rules relating to the U-NII radio bands. This could leave manufacturers with a hard choice: locking down the separate, low-level firmware that controls the router's radio so that users cannot tamper with it, or limiting the capabilities of the radio itself at the point of manufacture. The first option would prevent users from taking full control of their hardware by replacing the firmware that controls it with open-source alternatives. It means that even if the high-level firmware on the router is open-source, the device can never be fully controlled by the user, because the low-level firmware controlling the hardware is encumbered by closed-source binaries. After the unfortunate reaction of router manufacturers to the FCC's 2015 policy, the agency should have been more careful not to create new incentives to lock down router firmware.

Overall, the FCC has sent a clear message with the TP-Link settlement: work with the community, not against it, to improve your devices and ensure compliance. But it should be clearer about how router makers can comply while allowing for the possibility of fully open-source routers, right down to the firmware.

Update 8/8: TP-Link has issued a statement on the settlement explaining how they will allow third-party firmware to be installed on their devices, but (following the suggestion of the FCC) "any third-party software/firmware developers must demonstrate how their proposed designs will not allow access to the frequency or power level protocols in our devices."  This seems to confirm earlier concerns of an open source software advocate that "FCC is trying to do something through an settlement agreement that they can't do through law: regulate what ALL software can do if it interacts with radio devices."


Does DARPA's Cyber Grand Challenge Need A Safety Protocol?

Thu, 08/04/2016 - 18:55

Today, DARPA (the Defense Advanced Research Projects Agency, the R&D arm of the US military) is holding the finals for its Cyber Grand Challenge (CGC) competition at DEF CON. We think that this initiative by DARPA is very cool, very innovative, and could be a little dangerous.

In this post, we’re going to talk about why the CGC is important and interesting (it's about building automated systems that can break into computers!); about some of the dangers posed by this line of automated security research; and about the sorts of safety precautions that may become appropriate as endeavors in this space become more advanced. We think there may be some real policy concerns down the road about systems that can automate the process of exploiting vulnerabilities. But rather than calling for external policy interventions, we think the best people to address these issues are the people doing the research themselves—and we encourage them to come together now to address these questions explicitly.

The DARPA Cyber Grand Challenge

In some ways, the Cyber Grand Challenge is a lot like normal capture the flag (CTF) competitions held at hacker and computer security events. Different teams all connect their computers to the same network and place a special file (the “flag”) in a secure location on their machines. The goal is to secure your team's machines to make sure nobody else can hack into them and retrieve the flag, while simultaneously trying to hack the other teams' machines and exfiltrate their flag. (And of course, your computer has to stay connected to the network the whole time, possibly serving a website or providing some other network service.)

The difference with DARPA's Cyber Grand Challenge, though, is that the “hackers” participating in the competition are automated systems. In other words, human teams get to program completely automated offensive and defensive systems which are designed to automatically detect vulnerabilities in software and either patch them or exploit them, using various techniques including fuzzing, static analysis, and machine learning. Then, during the competition, these automated systems face off against each other with no human participation or help. Once the competition starts, it's all up to the automated systems.
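
Of those techniques, fuzzing is the simplest to illustrate. The toy Python sketch below is nothing like a CGC entrant, but it shows the core loop: mutate inputs, run the target, and keep any input that triggers a crash. The target function and its planted bug are invented for the example:

    import random

    def target(data: bytes) -> None:
        """Stand-in for software under test, with a planted bug:
        it crashes on any input beginning with b'FUZ'."""
        if data[:3] == b"FUZ":
            raise RuntimeError("planted crash")

    def mutate(seed: bytes) -> bytes:
        """Randomly overwrite one byte of the seed input."""
        data = bytearray(seed)
        data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    def fuzz(seed: bytes, iterations: int = 10000) -> list:
        """Run mutated inputs against the target, collecting crashers."""
        crashes = []
        for _ in range(iterations):
            candidate = mutate(seed)
            try:
                target(candidate)
            except Exception:
                crashes.append(candidate)
        return crashes

    # Many one-byte mutations of this seed leave the b'FUZ' prefix
    # intact, so the loop quickly accumulates crashing inputs.
    print(len(fuzz(b"FUZZER")))

A real system layers coverage feedback, crash triage, and automatic exploit generation or patching on top of a loop like this; the hard parts of the CGC live in those layers.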

In principle, autonomous vulnerability detection research like this is only an incremental step beyond the excellent fuzzing work being done at Google, Microsoft and elsewhere, and may be good from a cybersecurity policy perspective, particularly if it serves to level the playing field between attackers and defenders when it comes to computer and network security. To date, attackers have tended to have the advantage because they often only need to find one vulnerability in order to compromise a system. No matter how many vulnerabilities a defender patches, if there's even one critical bug they haven't discovered, an attacker could find a way in. Research like the Cyber Grand Challenge could help even the odds by giving defenders tools which will automatically scan all exposed software, and not only discover vulnerabilities, but assist in patching them, too. Theoretically, if automated methods became the best way of finding bugs, it might negate some of the asymmetries that often make defensive computer security work so difficult.

But this silver lining has a cloud. We are going to start seeing tools that don't just identify vulnerabilities, but automatically write and launch exploits for them. Using these same sorts of autonomous tools, we can imagine an attacker creating (perhaps even accidentally) a 21st century version of the Morris worm that can discover new zero-day vulnerabilities to help itself propagate. How do you defend the Internet against a worm that continuously finds new vulnerabilities as it attacks new machines? The obvious answer would be to use one of the automated defensive patching systems we just described, but unfortunately, in many cases such a system just won't be effective or deployable.

Why not? Because not all computer systems can be patched easily. A multitude of Internet of Things devices have already been built and sold for which a remote upgrade simply isn't possible, particularly embedded systems where the software is flashed onto a microcontroller and upgrading requires an actual physical connection. Other devices might technically have the capability to be upgraded, but the manufacturer might not have designed or implemented an official remote upgrade channel.[1] And even when there is an official upgrade channel, many devices continue to be used long after manufacturers decide it isn't profitable to continue providing security updates.[2]

In some cases, it may be possible to do automated defensive patching on the network, before messages get to vulnerable end systems. In fact, some people closely familiar with the DARPA CGC have suggested to us that developing these kinds of defensive proxies may be one of the CGC’s long-term objectives. But such defensive patching at the network layer is only possible for protocols that are not encrypted, or on aggressively managed networks where encryption is subject to man-in-the-middle inspection by firewalls, and where endpoints are configured to trust man-in-the-middle CAs. Both of these situations have serious security problems of their own.
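
As a rough sketch of what patching “on the network” can mean, imagine a relay sitting in front of an unpatchable device that refuses to forward any payload matching a known exploit signature. The addresses and byte pattern below are hypothetical, and a real intrusion-prevention system is far more sophisticated:

    # A toy network-layer "virtual patch": a one-shot TCP relay that
    # drops traffic matching a known-bad byte pattern before it can
    # reach the vulnerable device behind it.
    import socket
    import threading

    DEVICE = ("192.168.1.50", 8080)     # hypothetical unpatchable device
    EXPLOIT_SIGNATURE = b"\x90" * 32    # stand-in for a known-bad pattern

    def relay(client: socket.socket) -> None:
        data = client.recv(65536)
        if EXPLOIT_SIGNATURE in data:
            client.close()              # the exploit never arrives
            return
        upstream = socket.create_connection(DEVICE)
        upstream.sendall(data)
        client.sendall(upstream.recv(65536))
        upstream.close()
        client.close()

    def main() -> None:
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("0.0.0.0", 8080))
        listener.listen()
        while True:
            conn, _ = listener.accept()
            threading.Thread(target=relay, args=(conn,), daemon=True).start()

    if __name__ == "__main__":
        main()

Note that this sketch only works because the traffic is plaintext; once the payload is inside a TLS session, the relay can't see it without exactly the man-in-the-middle arrangements described above.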

Right now, attacking the long tail of vulnerable devices, such as IoT gadgets, isn't worthwhile for many sophisticated actors: the benefit to the would-be hacker is far lower than the effort it would take to make the attack successful. Imagine a hacker thinking about attacking a model of Internet-connected thermostat that's not very popular. It would probably take days or weeks of work, the number of compromised systems would be very low compared to a more popular model, and the compromised thermostats wouldn't be very valuable in and of themselves. For the hacker, focusing on this particular target just isn't worth it.

But now imagine an attacker armed with a tool that discovers and exploits new vulnerabilities in any software it encounters. Such an attacker could attack an entire class of systems (all Internet of Things devices using a certain microprocessor architecture, say) much more easily. And unlike when the Morris worm went viral in 1988, today everything from Barbie dolls to tea kettles is connected to the Internet, along with parts of our transportation infrastructure like gas pumps and traffic lights. If a 21st century Morris worm could learn to attack these systems before we replaced them with patchable, upgradable versions, the results would be highly unpredictable and potentially very serious.

Precautions, Not Prohibitions

Does this mean we should cease performing this sort of research and stop investigating automated cybersecurity systems? Absolutely not. EFF is a pro-innovation organization, and we certainly wouldn’t ask DARPA or any other research group to stop innovating. Nor is it even clear how such research could be stopped if we wanted to stop it; plenty of other actors are capable of pursuing it on their own.

Instead, we think the right thing, at least for now, is for researchers to proceed cautiously and be conscious of the risks. When thematically similar concerns have been raised in other fields, researchers have spent some time reviewing their safety precautions and risk assessments, then resumed their work. That's the right approach for automated vulnerability detection, too. At the moment, autonomous computer security research is still the purview of a small community of extremely experienced and intelligent researchers. As long as our civilization's cybersecurity systems remain so fragile, we believe it is the moral and ethical responsibility of our community to think through the risks that come with the technology it develops, as well as how to mitigate those risks, before that technology falls into the wrong hands.

For example, researchers should probably ask questions like:

  • If this tool is designed to find and patch vulnerabilities, how hard would it be for someone who got its source code to turn it into a tool for finding and exploiting vulnerabilities? The difference may be small but is still important. For instance, does the tool need a copy of the source code or binary it's analyzing? Does it just identify problematic inputs that may crash programs, or places in their code that may require protections, or does it go further and automate exploitation of the bugs it has found? (For one way of drawing that line, see the sketch after this list.)
  • What architectures or types of systems does this tool target? Are they widespread? Can these systems be easily patched and protected?
  • What is the worst-case scenario if this tool's source code were leaked to, say, an enemy nation-state or authors of commercial cryptoviruses? What would happen if the tool escaped onto the public Internet?
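
On the first of those questions, the line between finding crashes and exploiting them can be sketched roughly as a triage pass: re-run saved crashing inputs and flag only which ones deserve closer human review, without attempting exploitation. The heuristic and the ./target binary below are hypothetical, and real triage tools inspect registers and faulting addresses rather than just signal numbers:

    # A crude crash-triage sketch: classify saved crashes by the signal
    # that killed the target. Memory-safety signals get flagged; this
    # stops far short of generating an exploit.
    import glob
    import signal
    import subprocess

    MEMORY_SIGNALS = {int(signal.SIGSEGV), int(signal.SIGBUS), int(signal.SIGILL)}

    def triage(path: str) -> str:
        try:
            result = subprocess.run(["./target", path],
                                    capture_output=True, timeout=5)
        except subprocess.TimeoutExpired:
            return "hang (possible infinite loop)"
        if result.returncode >= 0:
            return "no crash"
        if -result.returncode in MEMORY_SIGNALS:
            return "memory corruption: review for exploitability"
        return "crash, likely lower risk (e.g., a failed assertion)"

    if __name__ == "__main__":
        for crash_file in sorted(glob.glob("crash_*.bin")):
            print(crash_file, "->", triage(crash_file))

A tool that stops at this sort of classification is much harder to repurpose than one that ships a working exploit for every crash it finds.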

To be clear, we're not saying that researchers should stop innovating in cases where the answers to those questions are more pessimistic. Rather, we're saying that they may want to take precautions proportional to the risk. In the same way biologists take different precautions, ranging from wearing a mask and gloves to isolating samples in a sealed negative-pressure environment, security researchers may need to vary theirs, from using full-disk encryption all the way to doing the research only on air-gapped machines, depending on the risk involved.

For now, though, the field is still quite young and such extreme precautions probably aren't necessary. DARPA's Cyber Grand Challenge illustrates some of the reasons why: the tools in the CGC aren't designed to target the sort of software that runs on everyday laptops or smartphones. Instead, DARPA developed a simplified open source operating system extension expressly for the CGC. In part, this was intended to make the work of CGC contestants easier. But it was also done so that any tools built for the CGC would need significant modification before they could be used in the real world; as a result, they don't pose much of a danger as-is, and no additional safety precautions are likely necessary.

But what if, a few years from now, subsequent rounds of the contest target commonplace software? As the field moves in that direction, the designers of systems capable of automatically finding and exploiting vulnerabilities should take the time to think through the possible risks, and strategies for minimizing them, in advance. That's why we think the experts in this field should come together, discuss the issues we're flagging here (and perhaps raise new ones), and come up with a strategy for handling the safety considerations around any risks they identify. In other words, we’d like to encourage the field to fully think through the ramifications of new research as it’s conducted. Much like the genetics community did in 1975, we think researchers working at the intersection of AI, automation, and computer security should come together to hold a virtual “Autonomous Cybersecurity Asilomar Conference.” Such a conference would serve two purposes. It would allow the community to develop internal guidelines or suggestions for performing autonomous cybersecurity research safely, and it would reassure the public that the field isn't proceeding blindly forward, but instead moving thoughtfully, with an eye toward bettering computer security for all of us.

  • [1] Of course, manufacturers could turn loose autonomous patching viruses which patch users' devices as they propagate through the Internet, but this could open up a huge can of worms if users aren't expecting their devices to undergo these sorts of aggressive pseudo-attacks (not to mention the possible legal ramifications under the CFAA).
  • [2] We're looking at you, Android device manufacturers, mobile carriers, and Google.