Privacy

EFF to FTC: Online Retailers Must Label Products Sold with Digital Locks

Deep Links - Fri, 08/05/2016 - 13:24
Consumers Need Warning If Movies, Music, Games Restrict When and How They Are Used

San Francisco - The Electronic Frontier Foundation (EFF) and a coalition of consumer groups, content creators, and publishers asked the Federal Trade Commission (FTC) today to require online retailers to label the ebooks, songs, games, and apps that come with digital locks restricting how consumers can use them.
 
In a letter sent to the FTC today, the coalition said companies like Amazon, Google, and Apple have a duty to inform consumers if products for sale are locked with some kind of "digital rights management" or DRM. Companies use DRM to purportedly combat copyright infringement, but DRM locks can also block you from watching the movie you bought in New York when you go to Asia on vacation, or limit which devices can play the songs you purchased.
 
"Without DRM labeling, it’s nearly impossible to figure out which products have digital locks and what restrictions these locks impose," said EFF Special Advisor Cory Doctorow. "We know the public prefers DRM-free e-books and other electronic products, but right now buyers are in the dark about DRM locks when they go to make purchases online. Customers have a right to know about these restrictions before they part with their money, not after."
 
The letter is accompanied by a request that the FTC investigate and take action on behalf of consumers who find themselves deprived of the enjoyment of their property every day, due to a marketplace where products limited by DRM are sold without adequate notice. The request details the stories of 20 EFF supporters who bought products—ebooks, videos, games, music, devices, even a cat-litter box—that came with DRM that caused them grief. They report that DRM left them with broken, orphaned, or useless devices and in some cases even incapacitated other devices.
 
The FTC oversees fair packaging and labeling rules that are supposed to prevent consumers from being deceived and facilitate value comparisons. Today’s letter argues that the FTC should require electronic sellers to use a simple, consistent, and straightforward label about DRM locks for digital media. For example, "product detail" lists—which appear on digital product pages and disclose such basic information as serial number, file size, publisher, and whether certain technological features are enabled—should include a category stating whether a product is DRM-free or DRM-restricted. The latter designation should include a link to a clear explanation of the restrictions imposed on the product.
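The "product detail" approach the letter describes can be sketched concretely. The record below is a hypothetical illustration only: the field names (`drm`, `drm_details_url`) and values are invented for this sketch and are not drawn from any retailer's actual schema.

```python
# Hypothetical sketch of a "product detail" record extended with the
# DRM disclosure the letter proposes. All field names are illustrative.
product_detail = {
    "title": "Example E-book",
    "publisher": "Example Press",
    "file_size_kb": 1024,
    "drm": "DRM-restricted",  # or "DRM-free"
    # For DRM-restricted products, the letter asks for a link to a
    # clear explanation of the specific restrictions.
    "drm_details_url": "https://example.com/drm-restrictions",
}

def drm_label(detail):
    """Return a one-line disclosure suitable for a product page."""
    if detail["drm"] == "DRM-free":
        return "DRM: DRM-free"
    return f"DRM: DRM-restricted (details: {detail['drm_details_url']})"

print(drm_label(product_detail))
```

The point of a fixed vocabulary ("DRM-free" vs. "DRM-restricted") is that it makes the label machine-readable, so comparison-shopping tools could filter on it just as they filter on price or file size.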
 
"The use of DRM is controversial among creators, studios, and audiences. What shouldn’t be controversial is the right of consumers to know which products have DRM locks. If car companies made vehicles that only drove on certain streets, they’d have to disclose this to consumers. Likewise, digital media products with DRM restrictions should be clearly labeled," said Doctorow.
 
Signers of today’s letter include the Consumer Federation of America, Public Knowledge, the Free Software Foundation, McSweeney’s, and No Starch Press.
 
For the full letter to the FTC about labeling:
https://www.eff.org/document/eff-letter-ftc-re-drm-labeling

For the full letter to the FTC with the stories of people who've been harmed by DRM they weren't informed of:
https://www.eff.org/files/2016/08/06/eff_request_for_investigation_re_labeling_drm-limited_products.pdf

Contact: Cory Doctorow, EFF Special Advisor, doctorow@craphound.com
Categories: Privacy


Join Us for the Great California Database Hunt

Deep Links - Fri, 08/05/2016 - 11:29

Imagine if local governments were like restaurants: you could pick up a menu of public datasets, read the names and descriptions, then order whatever suits your open data appetite.

This transparency advocate’s fantasy became reality in California on July 1, when a new law took effect. S.B. 272 added a section to the California Public Records Act that requires local agencies (except school districts) to publish inventories of “enterprise systems” on their websites. We are talking about catalogs of every database that holds information on the public or serves as a primary source of government data. 

And we need your help on Saturday, Aug. 27 to—as the saying goes—catch ‘em all.

What: California Database Hunt

Date: Saturday, August 27, 2016
Time: 11 a.m. - 3 p.m. PT/ 2 p.m. - 6 p.m. ET
Where: San Francisco, Washington, D.C., and Remotely
RSVP Link

Similar policies are in place on the federal level due to President Obama's 2013 Open Data Policy, which requires every federal agency to compile an inventory of its data resources and say what's public and what's not.

Under the new California law, these catalogs don't simply list the names of databases. They also contain information such as the purpose of the system; the type of data collected; how often data is collected and updated; the name of the software product being used; and the vendor supplying it.
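As a rough illustration, one such catalog entry might be represented like this. All field names and values here are hypothetical, invented for the sketch, and are not taken from any agency's actual inventory.

```python
import json

# Illustrative sketch of a single S.B. 272 enterprise-system catalog
# entry, covering the kinds of fields the law requires agencies to
# disclose. Every value below is made up.
catalog_entry = {
    "system_name": "Permit Tracking System",
    "purpose": "Track building permit applications and inspections",
    "data_collected": "Applicant names, addresses, permit status",
    "collection_frequency": "Daily",
    "update_frequency": "Daily",
    "product": "PermitTrack 3.0",
    "vendor": "Example Software Co.",
}

# An agency's full inventory is then just a list of such entries,
# which can be exported as JSON or CSV for cross-agency aggregation.
print(json.dumps(catalog_entry, indent=2))
```

Structured entries like this are what make the San Francisco-style inventory filterable and sortable, while a scanned PDF of the same information is far harder to aggregate.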

The passage of S.B. 272 was a victory on multiple fronts. Now, the public can look through these catalogs in order to file records requests for data sets. Privacy and civil liberties activists can also learn what kind of data is being collected on the public, including police databases and certain surveillance systems.

So far, there’s little consistency between local agencies publishing these sets. For example, the City of Manhattan Beach provides its inventory of 13 enterprise systems as a .pdf file.  Meanwhile, the City and County of San Francisco offers a robust inventory of 451 data systems that can be filtered, searched, sorted, and exported in multiple formats.

Currently, however, all these databases reside on individual websites.

The Electronic Frontier Foundation, the Data Foundation, and the Sunlight Foundation are now teaming up to collect links to all these data catalogs in a single repository. And we need your help.

Join us on Aug. 27 for a sprint to track down and index these catalogs across California. We’ll be holding events in San Francisco and Washington, DC, but you will also be able to join us remotely from where you are in the world.

To register for the event or for more information, just sign up. (If you plan on attending in-person in DC, please also register with the Data Foundation for logistical coordination.) 

Categories: Privacy

FCC Settlement Requires TP-Link to Support 3rd-Party Firmware

Deep Links - Thu, 08/04/2016 - 21:30

In a win for the open source community, router maker TP-Link will be required to allow consumers to install third-party firmware on their wireless routers, the Federal Communications Commission (FCC) announced Monday. The announcement comes on the heels of a settlement requiring TP-Link to pay a $200,000 fine for failing to properly limit their devices' transmission power on the 2.4GHz band to within regulatory requirements. On its face, new rules about open source firmware don't seem to have much to do with TP-Link's compliance problems. But the FCC's new rule helps fix an unintended consequence of a policy the agency made last year, which had led to open source developers being locked out of wireless routers entirely.

The FCC set forth a list of Software Security Requirements in March 2015 that included specific language which appeared to encourage restrictions on third-party firmware—in particular the popular DD-WRT—that could be used to circumvent bandwidth requirements. The purpose of the requirements was to prevent wireless routers from interfering with other communications. In November, the FCC clarified that it was not in fact seeking to ban open source software from wireless routers—but by that point the damage had already been done. TP-Link had already begun paving the way for locking out third-party firmware as a way of bringing itself into compliance. Meanwhile, other manufacturers such as Linksys had sought to work with the open-source firmware community to allow consumers to install custom firmware without violating FCC rules.

This decision is a welcome one for the open-source firmware community, which has worked hard to support the wide range of routers in circulation. It's good for security, too. Manufacturers often leave their device firmware neglected after flashing it at the factory, leaving users completely unprotected from security vulnerabilities that are frequently discovered. Just last month, TP-Link let the domain registration lapse for a site allowing consumers to configure their devices over the Internet, potentially exposing a large swath of its users to credentials-stealing or malware attacks. Many open-source firmware projects, on the other hand, release regular updates that allow users to make sure vulnerabilities on their devices get patched. In addition, third-party firmware allows users to take more fine-grained control of their routers than is typically granted by manufacturer firmware. This opens a whole range of possibilities, from power-users wishing to extend the range of their home Wi-Fi by setting up repeaters throughout their homes, to community members wishing to take part in innovative community-based mesh-networking firmware projects.

Although the FCC statement guarantees TP-Link will allow installation of open-source firmware, it has also made clear that manufacturers have to do something to ensure compliance with a second set of rules, relating to the U-NII radio band. This could leave manufacturers with a hard choice: locking down the separate, low-level firmware that controls the router radio so that users cannot tamper with it, or limiting the capabilities of the radio itself at the point of manufacture. The first option would prevent users from taking full control of their hardware by replacing the firmware that controls it with open-source alternatives. It means that even if the high-level firmware on the router is open-source, the device can never be fully controlled by the user because the low-level firmware controlling the hardware is encumbered by closed-source binaries. After the unfortunate reaction of router manufacturers to the FCC's 2015 policy, the agency should have been more careful not to create new incentives to lock down router firmware.

Overall, the FCC has sent a clear message with the TP-Link settlement: work with the community, not against it, to improve your devices and ensure compliance. But it should be clearer about how router makers can comply while allowing for the possibility of fully open-source routers, right down to the firmware.

Update 8/8: TP-Link has issued a statement on the settlement explaining how they will allow third-party firmware to be installed on their devices, but (following the suggestion of the FCC) "any third-party software/firmware developers must demonstrate how their proposed designs will not allow access to the frequency or power level protocols in our devices."  This seems to confirm earlier concerns of an open source software advocate that "FCC is trying to do something through an settlement agreement that they can't do through law: regulate what ALL software can do if it interacts with radio devices."

Categories: Privacy

Does DARPA's Cyber Grand Challenge Need A Safety Protocol?

Deep Links - Thu, 08/04/2016 - 18:55

Today, DARPA (the Defense Advanced Research Projects Agency, the R&D arm of the US military) is holding the finals for its Cyber Grand Challenge (CGC) competition at DEF CON. We think that this initiative by DARPA is very cool, very innovative, and could be a little dangerous.

In this post, we’re going to talk about why the CGC is important and interesting (it's about building automated systems that can break into computers!); about some of the dangers posed by this line of automated security research; and the sorts of safety precautions that may become appropriate as endeavors in this space become more advanced. We think there may be some real policy concerns down the road about systems that can automate the process of exploiting vulnerabilities. But rather than calling for external policy interventions, we think the best people to address these issues are the people doing the research themselves—and we encourage them to come together now to address these questions explicitly.

The DARPA Cyber Grand Challenge

In some ways, the Cyber Grand Challenge is a lot like normal capture the flag (CTF) competitions held at hacker and computer security events. Different teams all connect their computers to the same network and place a special file (the “flag”) in a secure location on their machines. The goal is to secure your team's machines to make sure nobody else can hack into them and retrieve the flag, while simultaneously trying to hack the other teams' machines and exfiltrate their flag. (And of course, your computer has to stay connected to the network the whole time, possibly serving a website or providing some other network service.)

The difference with DARPA's Cyber Grand Challenge, though, is that the “hackers” participating in the competition are automated systems. In other words, human teams get to program completely automated offensive and defensive systems which are designed to automatically detect vulnerabilities in software and either patch them or exploit them, using various techniques including fuzzing, static analysis or machine learning. Then, during the competition, these automated systems face off against each other with no human participation or help. Once the competition starts, it's all up to the automated systems.
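Fuzzing, one of the techniques mentioned above, can be illustrated with a minimal sketch: generate random inputs, run them through the target, and record which inputs crash it. Everything here is a toy invented for illustration (including the `fragile_parser` target and its planted bug); CGC-grade systems add coverage guidance, symbolic execution, and automated patching on top of this basic loop.

```python
import random

def fragile_parser(data: bytes):
    """Toy target with a planted bug: raises on inputs starting with b'!'."""
    if data[:1] == b"!":
        raise ValueError("parser crash")
    return len(data)

def fuzz(target, trials=10000, seed=0):
    """Minimal black-box fuzzer: throw random inputs at the target
    and collect every input that makes it raise an exception."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            target(data)
        except Exception:
            crashes.append(data)
    return crashes

crashes = fuzz(fragile_parser)
print(f"found {len(crashes)} crashing inputs")
```

A defensive system would use the crashing inputs to locate and patch the bug; an offensive one would try to turn the same crashes into working exploits, which is exactly the dual-use tension discussed below.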

In principle, autonomous vulnerability detection research like this is only an incremental step beyond the excellent fuzzing work being done at Google, Microsoft and elsewhere, and may be good from a cybersecurity policy perspective, particularly if it serves to level the playing field between attackers and defenders when it comes to computer and network security. To date, attackers have tended to have the advantage because they often only need to find one vulnerability in order to compromise a system. No matter how many vulnerabilities a defender patches, if there's even one critical bug they haven't discovered, an attacker could find a way in. Research like the Cyber Grand Challenge could help even the odds by giving defenders tools which will automatically scan all exposed software, and not only discover vulnerabilities, but assist in patching them, too. Theoretically, if automated methods became the best way of finding bugs, it might negate some of the asymmetries that often make defensive computer security work so difficult.

But this silver lining has a cloud. We are going to start seeing tools that don't just identify vulnerabilities, but automatically write and launch exploits for them. Using these same sorts of autonomous tools, we can imagine an attacker creating (perhaps even accidentally) a 21st century version of the Morris worm that can discover new zero days to help itself propagate. How do you defend the Internet against a virus that continuously finds new vulnerabilities as it attacks new machines? The obvious answer would be to use one of the automated defensive patching systems we just described—but unfortunately, in many cases such a system just won't be effective or deployable.

Why not? Because not all computer systems can be patched easily. A multitude of Internet of Things devices have already been built and sold where a remote upgrade simply isn't possible—particularly on embedded systems where the software is flashed onto a microcontroller and upgrading requires an actual physical connection. Other devices might technically have the capability to be upgraded, but the manufacturer might not have designed or implemented an official remote upgrade channel.1 And even when there is an official upgrade channel, many devices continue to be used long after manufacturers decide it isn't profitable to continue to provide security updates.2

In some cases, it may be possible to do automated defensive patching on the network, before messages get to vulnerable end systems. In fact, some people closely familiar with the DARPA CGC have suggested to us that developing these kinds of defensive proxies may be one of the CGC’s long-term objectives. But such defensive patching at the network layer is only possible for protocols that are not encrypted, or on aggressively managed networks where encryption is subject to man-in-the-middle inspection by firewalls and endpoints are configured to trust man-in-the-middle CAs. Both of these situations have serious security problems of their own.

Right now, attacking the long tail of vulnerable devices, such as IoT gadgets, isn't worthwhile for many sophisticated actors because the benefit for the would-be hacker is far lower than the effort it would take to make the attack successful. Imagine a hacker thinking about attacking a model of Internet-connected thermostat that's not very popular. It would probably take days or weeks of work, and the number of compromised systems would be very low (compared to compromising a more popular model)—not to mention the systems themselves wouldn't be very useful in and of themselves. For the hacker, focusing on this particular target just isn't worth it.

But now imagine an attacker armed with a tool which discovers and exploits new vulnerabilities in any software it encounters. Such an attacker could attack an entire class of systems (all Internet of Things devices using a certain microprocessor architecture, say) much more easily. And unlike when the Morris worm went viral in 1988, today everything from Barbie dolls to tea kettles is connected to the Internet—as well as parts of our transportation infrastructure like gas pumps and traffic lights. If a 21st century Morris worm could learn to attack these systems before we replaced them with patchable, upgradable versions, the results would be highly unpredictable and potentially very serious.

Precautions, Not Prohibitions

Does this mean we should cease performing this sort of research and stop investigating automated cybersecurity systems? Absolutely not. EFF is a pro-innovation organization, and we certainly wouldn’t ask DARPA or any other research group to stop innovating. Nor is it even really clear how you could stop such research if you wanted to; plenty of actors could do it if they wanted.

Instead, we think the right thing, at least for now, is for researchers to proceed cautiously and be conscious of the risks. When thematically similar concerns have been raised in other fields, researchers spent some time reviewing their safety precautions and risk assessments, then resumed their work. That's the right approach for automated vulnerability detection, too. At the moment, autonomous computer security research is still the purview of a small community of extremely experienced and intelligent researchers. As long as our civilization's cybersecurity systems remain so fragile, we believe it is the moral and ethical responsibility of our community to think through the risks that come with the technology they develop, as well as how to mitigate those risks, before it falls into the wrong hands.

For example, researchers should probably ask questions like:

  • If this tool is designed to find and patch vulnerabilities, how hard would it be for someone who got its source code to turn it into a tool for finding and exploiting vulnerabilities? The differences may be small but still important. For instance, does the tool need a copy of the source code or binary it's analyzing? Does it just identify problematic inputs that may crash programs, or places in their code that may require protections, or does it go further and automate exploitation of the bugs it has found?
  • What architectures or types of systems does this tool target? Are they widespread? Can these systems be easily patched and protected?
  • What is the worst-case scenario if this tool's source code were leaked to, say, an enemy nation-state or authors of commercial cryptoviruses? What would happen if the tool escaped onto the public Internet?

To be clear, we're not saying that researchers should stop innovating in cases where the answers to those questions are more pessimistic. Rather, we're saying that they may want to take precautions proportional to the risk. In the same way biologists take different precautions ranging from just wearing a mask and gloves to isolating samples in a sealed negative-pressure environment, security researchers may need to vary their precautions from using full-disk encryption, all the way to only doing the research on air-gapped machines, depending on the risk involved.

For now, though, the field is still quite young and such extreme precautions probably aren't necessary. DARPA's Cyber Grand Challenge illustrates some of the reasons for this: the tools in the CGC aren't designed to target the same sort of software that runs on everyday laptops or smartphones. Instead, DARPA developed a simplified open source operating system extension expressly for the CGC. In part, this was intended to make the work of CGC contestants easier. But it was also done so that any tools designed for use in the CGC would need to be significantly modified for use in the real world—so they don't really pose much of a danger as is, and no additional safety precautions are likely necessary.

But what if, a few years from now, the subsequent rounds of the contest target commonplace software? As they move in that direction, the designers of systems capable of automatically finding and exploiting vulnerabilities should take the time to think through the possible risks, and strategies for how to minimize them in advance. That's why we think the people who are experts in this field should come together, discuss the issues we're flagging here (and perhaps raise new ones), and come up with a strategy for handling the safety considerations for any risks they identify. In other words, we’d like to encourage the field to fully think through the ramifications of new research as it’s conducted. Much like the genetics community did in 1975, we think researchers working in the intersection of AI, automation, and computer security should come together to hold a virtual “Autonomous Cybersecurity Asilomar Conference.” Such a conference would serve two purposes. It would allow the community to develop internal guidelines or suggestions for performing autonomous cybersecurity research safely, and it would reassure the public that the field isn't proceeding blindly forward, but instead proceeding in a thoughtful way with an eye toward bettering computer security for all of us.

  • 1. Of course, manufacturers could turn loose autonomous patching viruses which patch users' devices as they propagate through the Internet, but this could open up a huge can of worms if users aren't expecting their devices to undergo these sorts of aggressive pseudo-attacks (not to mention the possible legal ramifications under the CFAA).
  • 2. We're looking at you, Android device manufacturers, mobile carriers, and Google.
Categories: Privacy

Malware Linked to Government of Kazakhstan Targets Journalists, Political Activists, Lawyers: EFF Report

Deep Links - Thu, 08/04/2016 - 13:31
Editors Who Exposed Corruption, Political Opponents of Authoritarian Government’s President, and Their Legal Teams Were Sent Malware

San Francisco—Journalists and political activists critical of Kazakhstan’s authoritarian government, along with their family members, lawyers, and associates, have been targets of an online phishing and malware campaign believed to be carried out on behalf of the government of Kazakhstan, according to a new report by the Electronic Frontier Foundation (EFF).

Malware was sent to Irina Petrushova and Alexander Petrushov, publishers of the independent newspaper Respublika, which was forced by the government of Kazakhstan to stop printing after years of exposing corruption but has continued to operate online. Also targeted are family members and attorneys of Mukhtar Ablyazov, co-founder and leader of opposition party Democratic Choice of Kazakhstan, as well as other prominent dissidents.

The campaign—which EFF has called “Operation Manul,” after endangered wild cats found in the grasslands of Kazakhstan—involved sending victims spearphishing emails that tried to trick them into opening documents which would covertly install surveillance software capable of recording keystrokes, recording through the webcam, and more. Some of the software used in the campaign is commercially available to anyone and sells for as little as $40 online.

Spearphishing emails and malware sent to members of the Ablyazov family while they were in exile in Italy may have helped track the whereabouts of Mukhtar Ablyazov’s wife and young daughter. Despite having legal European resident permits, the two were taken into custody in Italy in 2013 and forcibly deported to Kazakhstan. Many targets of the malware campaign are also involved in litigation with the government of Kazakhstan, including the publishers of Respublika noted above. EFF represented Respublika in a U.S. lawsuit during the course of which the government has attempted to censor the site and discover Respublika’s confidential sources.

Kazakhstan is a former Soviet republic that heavily restricts freedom of speech and assembly, and where torture is a serious problem, according to Human Rights Watch. The republic was ranked 160 out of 180 countries tracked by Reporters Without Borders for attacks on journalistic freedom and independence.

“The use of malware to spy on and intimidate dissidents beyond their borders is an increasingly common tactic employed by oppressive governments,” said Eva Galperin, Global Policy Analyst at EFF and one of the report’s authors. “As we have seen in places like Syria and Vietnam, journalists and political opposition leaders are being attacked in both the physical and digital worlds. Regimes are turning to covertly installed malware to track, harass, and silence those who seek to expose corruption and inform the public about human rights abuses—especially targets that have moved beyond the regime's sphere of control. Based on available evidence, we believe this campaign is likely to have been carried out on behalf of the government of Kazakhstan.”

EFF researchers, along with technologists at First Look Media and Amnesty International, examined data about suspected espionage groups and found overlaps between Operation Manul and Appin Security Group, an Indian company that has been linked with several other attack campaigns.

“Appin has been linked by cybersecurity firm Norman Shark to cyber-attacks against a Norwegian telecom company, Punjabi separatists, and others,” said EFF Staff Technologist Cooper Quintin. “We found that some of the technology infrastructure used in those cyber attacks overlapped with the infrastructure used in Operation Manul.”

“Our research shows that such cheap, commercially available malware can have a real impact on vulnerable populations,” said Galperin. “Much of the past research in this area has exposed campaigns carried out by governments using spy software which they have purchased. In this case, the evidence suggests that the government of Kazakhstan hired a company to carry out the attacks on their behalf.”

For the report:
https://www.eff.org/files/2016/08/03/i-got-a-letter-from-the-government.pdf

var mytubes = new Array(1); mytubes[1] = '%3Ciframe src=%22https://www.youtube.com/embed/OuhYIeX7OqY??autoplay=1%22 allowfullscreen=%22%22 frameborder=%220%22 height=%22315%22 width=%22560%22%3E%3C/iframe%3E'; Contact:  EvaGalperinGlobal Policy Analysteva@eff.org CooperQuintinStaff Technologistcooperq@eff.org
Share this: Join EFF
Categories: Privacy

Malware Linked to Government of Kazakhstan Targets Journalists, Political Activists, Lawyers: EFF Report

EFF Press Releases - Thu, 08/04/2016 - 13:31
Editors Who Exposed Corruption, Political Opponents of Authoritarian Government’s President, and Their Legal Teams Were Sent Malware

San Francisco—Journalists and political activists critical of Kazakhstan’s authoritarian government, along with their family members, lawyers, and associates, have been targets of an online phishing and malware campaign believed to be carried out on behalf of the government of Kazakhstan, according to a new report by the Electronic Frontier Foundation (EFF).

Malware was sent to Irina Petrushova and Alexander Petrushov, publishers of the independent newspaper Respublika, which was forced by the government of Kazakhstan to stop printing after years of exposing corruption but has continued to operate online. Also targeted are family members and attorneys of Mukhtar Ablyazov, co-founder and leader of opposition party Democratic Choice of Kazakhstan, as well as other prominent dissidents.

The campaign—which EFF has called “Operation Manul,” after endangered wild cats found in the grasslands of Kazakhstan—involved sending victims spearphishing emails that tried to trick them into opening documents which would covertly install surveillance software capable of recording keystrokes, recording through the webcam, and more. Some of the software used in the campaign is commercially available to anyone and sells for as little as $40 online.

Spearphishing emails and malware sent to members of the Ablyazov family while they were in exile in Italy may have helped track the whereabouts of Mukhtar Ablyazov’s wife and young daughter. Despite holding legal European residence permits, the two were taken into custody in Italy in 2013 and forcibly deported to Kazakhstan. Many targets of the malware campaign are also involved in litigation with the government of Kazakhstan, including the publishers of Respublika noted above. EFF represented Respublika in a U.S. lawsuit during the course of which the government has attempted to censor the site and discover Respublika’s confidential sources.

Kazakhstan is a former Soviet republic that heavily restricts freedom of speech and assembly, and where torture is a serious problem, according to Human Rights Watch. It ranked 160th out of the 180 countries tracked by Reporters Without Borders for attacks on journalistic freedom and independence.

“The use of malware to spy on and intimidate dissidents beyond their borders is an increasingly common tactic employed by oppressive governments,” said Eva Galperin, Global Policy Analyst at EFF and one of the report’s authors. “As we have seen in places like Syria and Vietnam, journalists and political opposition leaders are being attacked in both the physical and digital worlds. Regimes are turning to covertly installed malware to track, harass, and silence those who seek to expose corruption and inform the public about human rights abuses—especially targets that have moved beyond the regime's sphere of control. Based on available evidence, we believe this campaign is likely to have been carried out on behalf of the government of Kazakhstan.”

EFF researchers, along with technologists at First Look Media and Amnesty International, examined data about suspected espionage groups and found overlaps between Operation Manul and Appin Security Group, an Indian company that has been linked with several other attack campaigns.

“Appin has been linked by the cybersecurity firm Norman Shark to cyber attacks against a Norwegian telecom company, Punjabi separatists, and others,” said EFF Staff Technologist Cooper Quintin. “We found that some of the technology infrastructure used in those cyber attacks overlapped with the infrastructure used in Operation Manul.”

“Our research shows that such cheap, commercially available malware can have a real impact on vulnerable populations,” said Galperin. “Much of the past research in this area has exposed campaigns carried out by governments using spy software which they have purchased. In this case, the evidence suggests that the government of Kazakhstan hired a company to carry out the attacks on their behalf.”

For the report:
https://www.eff.org/files/2016/08/03/i-got-a-letter-from-the-government.pdf

Contact: Eva Galperin, Global Policy Analyst, eva@eff.org; Cooper Quintin, Staff Technologist, cooperq@eff.org
Categories: Privacy

Copyright Office Jumps Into Set-Top Box Debate, Says Hollywood Should Control Your TV

Deep Links - Thu, 08/04/2016 - 02:36

The Federal Communications Commission has a plan to bring much-needed competition and consumer choice to the market for set-top boxes and television-viewing apps. Under the FCC’s proposed rule change, pay-TV customers would be able to choose devices and apps from anywhere rather than being forced to use the box and associated software provided by the cable company, ending cable companies’ and major TV studios’ monopoly in the field.

But major entertainment companies are trying to derail this effort and keep control over TV technology. Central to their argument is a set of misleading claims about copyright law. Hollywood thinks that copyright holders should be able to use licensing agreements to place whatever restrictions they like on how people can access their content.

Unfortunately, the Copyright Office has sent a letter to Congress supporting those claims. The letter is wrong as a matter of law, and it’s also bad policy. Rather than promote innovation, the Copyright Office offers ideas that would be hostile to choice and innovation in all kinds of information technology, not just pay TV.

Congress and the courts have repeatedly rejected that vision, and so should the FCC.

The FCC’s plan would let cable and satellite subscribers choose the devices and apps they can use to access pay TV content instead of being limited to the leased set-top boxes and walled-garden apps provided by the cable and satellite companies. That’s not just a great goal; it’s also the law—Congress ordered the FCC to pursue this goal all the way back in 1996, but cable companies and TV producers have fought against it for over 20 years. Choice and competition threaten cable and content companies’ power to control what programming gets seen or ignored, how we can search for it, and who can build the hardware and software.

Currently, that power over the design of personal TV technology derives from a confluence of unfair private agreements and monopoly power, not from copyright law. Copyright gives rightsholders power to control copying, but not technology design; in fact, that sort of control is antithetical to copyright’s purpose. Over thirty years ago, in Sony v. Universal, the Supreme Court refused to allow movie studios to “extend [their] monopoly” into “control over an article of commerce”—the videocassette recorder—“that is not the subject of copyright protection.” You can search all 280 pages of the Copyright Act, and you won’t find anything that says a copyright holder has the power to control search functionality, or channel placement, or to decide who can build a DVR or video app.

Unlocking competition in pay TV hardware and software isn’t a copyright issue; it’s a competition issue. But the Copyright Office mistakenly suggests that a copyright holder “generally has full control as to whether and how to exploit his or her work.” Once a copyright holder has released their work to paying customers, like cable subscribers, those customers have their own set of rights: to view TV programs at home or on the go, to skip around within the programs as they wish, to search for and organize the programs and other content they’re entitled to see, and to choose tools that enable them to do these things.

The Copyright Office’s letter implies that cable and content companies could create new rights for themselves just by writing them into private contracts between each other: the right to control which “platforms and devices” customers can use, the right to limit time-shifting and other fair uses, and the right to “exclude” other software from a customer’s device. While private companies are free to negotiate conditions like these between each other, nothing in the law gives copyright holders the power to impose those conditions on the whole world, snuffing out the rights of users.

If the law were actually as the Copyright Office says it is, the Internet as we know it would be impossible. Instead, it would look more like today’s cable TV. Imagine that a popular news website made an agreement with your Internet service provider saying that no one should be able to save a local copy of a news article, or to email a link to a friend. Under the Copyright Office’s theory, it might be illegal for you, the subscriber, to do those things. And websites could create other rules dictating subscribers’ activity just by putting them in a secret contract. When you apply the Copyright Office’s reasoning to media in which healthy competition exists, it’s easy to see the logic break down.

Re-branding cable and content companies’ private deals as “copyright” issues risks stalling all sorts of efforts to promote competition and innovation that can lead to new markets for creative work. And it’s simply incorrect.

Copyright law gives owners specific rights—namely, to control copying and redistribution of their works. Copyright holders cannot control the technologies that customers use to lawfully access their works, nor can they invent new restrictions and rights out of thin air. The Copyright Office should have seen through Hollywood’s attempt to shut out competition through a misinterpretation of copyright law. We hope the FCC does.

 

Categories: Privacy

EFF at the Eleventh Hope

Deep Links - Tue, 08/02/2016 - 17:10

Last weekend EFF took part in the Eleventh Hackers On Planet Earth (HOPE) conference in New York City and got to meet so many of our wonderful supporters. We've collected the HOPE talks given by EFF staff below, with the official program abstract, video, and where applicable, the original slides. Once you're done watching those, you can also try your hand at our Capture The Flag competition—the challenges are still up at https://eff-ctf.org, even though the contest is over.

Keynote Address

Cory Doctorow


Program abstract: We are so stoked to have Cory Doctorow as our keynote this year. We've been trying to get the stars to align for many HOPEs, and this time they did. But we're glad we waited until now, since so much has happened in the past few years that Cory has been on top of (Snowden, Manning, privacy, copyright issues, surveillance), and his talk will no doubt open your eyes even more. As co-editor of Boing Boing, special advisor to the Electronic Frontier Foundation, a prolific writer of both fiction and non-fiction, and a vocal proponent of changing our copyright laws, Cory really has a lot of super-important and relevant thoughts to share with our HOPE audience.

Slides: https://drive.google.com/file/d/0BxbYd30UHZqHNGU2R2ZMWkppTmc/view

Video:  http://livestream.com/internetsociety/hopeconf/videos/130727866

Ask the EFF: The Year in Digital Civil Liberties

Kurt Opsahl, Jacob Hoffman-Andrews, Vivian Brown, Parker Higgins


Program abstract: Get the latest information about how the law is racing to catch up with technological change from staffers at the Electronic Frontier Foundation, the nation's premiere digital civil liberties group fighting for freedom and privacy in the computer age. This session will include updates on current EFF issues such as surveillance online, encryption (and backdoors), and fighting efforts to use intellectual property claims to shut down free speech and halt innovation. The panel will also include a discussion on their technology project to protect privacy and speech online, updates on cases and legislation affecting security research, and much more. Half the session will be given over to question-and-answer, so it's your chance to ask EFF questions about the law and technology issues that are important to you.

Video: http://livestream.com/internetsociety/hopeconf/videos/130646436

The Next Billion Certificates: Let's Encrypt and Scaling the Web PKI

Jacob Hoffman-Andrews


Program abstract: Let's Encrypt, launched in December 2015, is a free and automated certificate authority working to encrypt the web. Jacob will explain why HTTPS is important to Internet freedom and the role certificate authorities play. He'll give an introduction to the ACME protocol that Let's Encrypt uses to automate validation and issuance, discuss Let's Encrypt's progress by the numbers, and outline some of its future plans.

Slides: https://jacob.hoffman-andrews.com/next-billion/#/

Video:  http://livestream.com/internetsociety/hopeconf/videos/130816207
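For readers curious about the automation the abstract mentions: the heart of ACME's http-01 validation is a "key authorization" string that the client must serve from the domain being validated, proving control of both the domain and the ACME account key. The following is a toy Python sketch of how that string is constructed under RFC 8555; it is not code from the talk, and the sample account key and token are made up for illustration.

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # Unpadded base64url encoding, as the ACME spec requires
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jwk_thumbprint(jwk: dict) -> str:
    # RFC 7638-style thumbprint: SHA-256 over the JWK serialized with
    # sorted keys and no whitespace (a simplification for this sketch)
    canonical = json.dumps(jwk, sort_keys=True, separators=(",", ":"))
    return b64url(hashlib.sha256(canonical.encode()).digest())

def key_authorization(token: str, jwk: dict) -> str:
    # The client serves this string at
    # http://<domain>/.well-known/acme-challenge/<token>
    # so the CA can fetch it and validate control of the domain
    return f"{token}.{jwk_thumbprint(jwk)}"

# Hypothetical account key and CA-issued token, for illustration only
jwk = {"kty": "EC", "crv": "P-256", "x": "example-x", "y": "example-y"}
token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"

print(key_authorization(token, jwk))
```

A real ACME client such as Certbot wraps this exchange in signed JSON requests to the CA; the sketch covers only how the challenge response itself is derived.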

Privacy Badger and Panopticlick vs. the Trackers, Round 1

William Budington, Cooper Quintin


Program abstract: Increasingly, as you navigate the web, your movements are being tracked. Even when you reject browser cookies, you transmit unique information that makes your browser personally identifiable. Ad tech and tracking companies are transforming the web into a platform where your user data is brokered and exchanged freely without your consent or even knowledge, and there is a true absence of limits to the methods trackers are willing to use to get that data from you. Luckily, there is hope. The Electronic Frontier Foundation (EFF) has been developing technologies that let you know exactly how much of this data you are giving out as you browse, as well as releasing tools to help you protect yourself against the trackers. Panopticlick and Privacy Badger help you keep your personal data private, and this talk will show you how.

Slides: https://www.eff.org/files/privacy-badger-panopticlick-v-trackers.pdf

Video: http://livestream.com/internetsociety/hopeconf/videos/130664570
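To make the abstract's point concrete, here is a toy Python sketch (not EFF code; the attribute names and values are illustrative) of how a tracker can derive a stable identifier from browser attributes alone, with no cookies involved. Panopticlick measures how much identifying information such signals carry in practice.

```python
import hashlib

def browser_fingerprint(attrs: dict) -> str:
    # Serialize attribute name/value pairs in a fixed order and hash them.
    # Real trackers combine many more signals: canvas rendering, installed
    # fonts, WebGL, plugins, audio stack quirks, and so on.
    blob = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(blob.encode()).hexdigest()[:16]

alice = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/48.0",
    "screen": "1920x1080",
    "timezone": "UTC-5",
    "fonts": "DejaVu Sans,Liberation Serif",
}
# An otherwise identical setup with one extra installed font
bob = dict(alice, fonts="DejaVu Sans,Liberation Serif,Comic Sans MS")

print(browser_fingerprint(alice))
print(browser_fingerprint(bob))  # a single differing attribute changes the ID
```

Because the identifier is recomputed from the same attributes on every visit, clearing cookies does nothing to reset it; that is why tools like Privacy Badger focus on blocking the trackers themselves.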

Categories: Privacy

EFF Asks Court to Uphold Invalidation of Podcasting Patent

Deep Links - Tue, 08/02/2016 - 13:34
Thursday Hearing in EFF’s Case Against Patent That Threatened Podcasting

Washington, D.C.—The Electronic Frontier Foundation (EFF) will urge a federal appeals court at a hearing Thursday to find that the U.S. Patent and Trademark Office (USPTO) correctly invalidated key claims of a patent owned by Personal Audio, which had used the patent to threaten podcasters big and small.

EFF is defending a USPTO ruling it won last year in its petition challenging the validity of key claims of Personal Audio’s patent. EFF argued, and the USPTO agreed, that the claimed invention existed before Personal Audio filed its patent application.

Personal Audio maintained that it invented the process of updating a website regularly with new, related content, creating a series of episodes—basically podcasting—in 1996. Personal Audio began sending letters to podcasters in 2013, demanding licensing fees from creators such as comedian Adam Carolla and three major television networks. In its challenge to the patent, EFF showed that putting a series of episodes online for everyone to enjoy was not a new idea when the patent application was filed.

Personal Audio asked the U.S. Court of Appeals for the Federal Circuit in Washington, D.C. to overturn the USPTO ruling. At a hearing on Thursday, EFF's pro bono counsel will ask the court to reject Personal Audio’s argument that the USPTO erred when it invalidated the patent claims.

What: Court hearing in Personal Audio LLC v. Electronic Frontier Foundation

When:  Thursday, August 4, 10 am

Where:  U.S. Court of Appeals for the Federal Circuit
             Courtroom 401, Panel J
             717 Madison Place, N.W.              
             Washington, D.C.  20439

 For more on EFF’s Personal Audio challenge:
https://www.eff.org/cases/eff-v-personal-audio-llc

Contact: Daniel Nazer, Staff Attorney and Mark Cuban Chair to Eliminate Stupid Patents, daniel@eff.org
Categories: Privacy

Victory! Oregon Supreme Court Agrees that Violating a Company Rule is Not a Computer Crime

Deep Links - Tue, 08/02/2016 - 12:55

Can you imagine being prosecuted for checking personal email while at work because your employer says you can only use your computer for “company business”? Of course not. Violating a company rule is not—and should not be—a computer crime. Prosecutors have tried to use the federal Computer Fraud and Abuse Act (CFAA) and parallel state criminal laws to target violations of company rules, but courts are increasingly calling foul on the misuse of statutes intended to criminalize computer break-ins.

The Oregon Supreme Court is one of them, saying “no” to prosecutors who tried to hold Caryn Nascimento liable under Oregon’s computer crime law for a violation of her employer’s computer use policy. EFF filed an amicus brief in the case, State v. Nascimento, and the court specifically cited our argument that “the state’s reading of the statute—which arguably criminalizes any computer use in violation of an employer’s personnel or computer use policies—is unworkably broad because it gives private entities the power to decide what conduct in the workplace is criminal and what is not.”

Nascimento worked as a cashier at the deli counter of a convenience store. As part of her job, she was authorized to access a lottery terminal in the store to sell and validate lottery tickets for paying customers. Store policy prohibited employees from purchasing lottery tickets for themselves or validating their own lottery tickets while on duty. A store manager noticed a discrepancy in the receipts from the lottery terminal and discovered that Nascimento had printed lottery tickets for herself without paying for them. She was charged with, and convicted of, not only first-degree theft but also computer crime, on the ground that she accessed the lottery terminal “without authorization.”

Nascimento took her case to the Oregon Supreme Court, where we filed a brief in her support. We did not challenge the theft conviction but explained to the court that the state’s interpretation of Oregon's computer crime statute was unworkable because it turned employees into criminals for reading personal email or checking a baseball game's score while at work, in violation of company policy. And, we explained, because Facebook’s terms of use prohibit users from providing false personal information, a Facebook user could be prosecuted for shaving a few years off her age in her profile.

The Oregon Supreme Court heeded our advice, rejecting the lower court’s expansive interpretation of the statute. The court held that violating an employer’s personnel or computer use policies could “lead to personnel actions or other private discipline or to possible proceedings under other statutes, but it does not violate” Oregon’s computer crime law. According to the court, the law’s history demonstrated that it was intended to criminalize access or use of a computer by someone who had no authority to do so—“the kind of intrusion or access to a computer by unauthorized third parties commonly referred to as ‘hacking.’” Meanwhile, “Nothing in the legislative history suggests that the statute was intended to reach a person who was trained and authorized to use a particular computer, but did so for an unpermitted purpose.”

As the court recognized, a company can restrict a person’s “authorization” to access or use a computer by setting up a password requirement or other authentication or security procedures. But here, Nascimento’s employer had done nothing to restrict her authorization. Because there was no evidence that she had “circumvented any computer security measures, misused another employee’s password, or accessed any protected data,” she was not guilty of violating the state’s computer crime statute.

The prosecutor’s interpretation of the statute would have transformed innocent employees and Internet users into criminals on the basis of innocuous, everyday behavior. We’re happy the Oregon Supreme Court took to heart our warnings about the dangers of such an expansive interpretation of the law and adopted a clear rule that limits the discretion of overzealous prosecutors.

We also hope this decision sets an example for other courts—including the Ninth Circuit Court of Appeals, which just issued two decisions (here and here) that have eviscerated the clarity of CFAA law in the nine states its rulings affect. The decisions both involve password sharing, rather than Nascimento’s direct use of her employee credentials, but together they raise all sorts of questions about when an authorized user can give an outside person authorization to use their account and how and when a computer owner can revoke that authorization. We hope the Ninth Circuit rehears both cases and recognizes—just like the Oregon Supreme Court did with its state computer crime statute—that the CFAA should be limited to the purpose intended by Congress: targeting computer break-ins.

Special thanks to our local counsel, J. Ashlee Albies of Creighton & Rose, PC in Portland, Oregon.

Related Cases: United States v. David Nosal; Oregon v. Nascimento; Facebook v. Power Ventures
Categories: Privacy

Don’t Wrap Anti-Competitive Pay-TV Practices In A Copyright Flag

Deep Links - Mon, 08/01/2016 - 16:36

The Federal Communications Commission has proposed to break cable and satellite TV companies’ monopoly over the hardware and software used by their subscribers. Those companies are fighting back hard, probably to preserve the $20 billion in revenue they collect every year from set-top box rental fees. Major TV producers and copyright holders are pushing back too. They want to control how you can search for TV shows and discover new ones, and the order in which shows appear to you. And they want to limit the features of your home and mobile TV setups, like how and when you can control the playback.

One tactic these major media companies are using to try to derail the FCC’s proposal is to claim that allowing customers to buy pay-TV viewing technology from independent vendors (something that Congress actually ordered the FCC to do way back in 1996) somehow violates “principles of copyright law.”

As we explained to the FCC along with top legal scholars, the plan to break the set-top box monopoly doesn’t change copyright law or allow anyone to get pay-TV content without paying for it. But by crying “copyright,” cable companies and TV producers have rallied opposition to the FCC’s plan from some members of Congress, and possibly from the Copyright Office. It’s a misleading tactic.

Today, TV studios influence the design and features of home video equipment by specifying them as terms in the deals they make with cable companies. The cable companies have to accept those terms because under copyright law, they need permission from major copyright holders (the TV studios) to transmit programming to subscribers. And because cable companies have a monopoly over the technology on the subscriber’s end—the set-top boxes and apps that can access cable channels—the TV studios effectively have veto power over that technology.

TV studios’ power over the design of personal TV technology derives from that confluence of market agreements and monopoly—not from the law. Copyright gives rightsholders power to control copying, but not technology design. In fact, that sort of control is the antithesis of copyright’s purpose. Over thirty years ago, in Sony v. Universal, the Supreme Court refused to allow movie studios to “extend [their] monopoly” into “control over an article of commerce”—the videocassette recorder—“that is not the subject of copyright protection.” Today, you can search all 280 pages of the Copyright Act, and you won’t find anything that says a copyright holder has the power to control search functionality, or channel placement, or to decide who can build a DVR or video app.

The studios claim that things are different this time, because the successors to the VCR—today’s smart TVs, DVRs, set-top boxes, and mobile apps—are more sophisticated and have an online component. But the law remains the same, and for good reason. Allowing pay-TV subscribers to choose the devices and software they want to use doesn’t permit or encourage copyright infringement. Illegal copying is still illegal, and under every version of the FCC’s plan, pay-TV content will continue to be wrapped in user-unfriendly DRM at every step. (That raises other problems, including privacy and security threats.) A competitive set-top box or video app will be subject to the same copyright law as a TV, DVR, or home audio system is today.

In short, this isn’t a copyright issue. Yet TV studios and other opponents of the Unlock the Box proposal have draped their existing contracts and market relationships in the rhetoric of copyright and creativity in order to preserve their veto power over the design of consumer technology. When two businesses enter an agreement, they can include almost any terms they want to include. Adding terms to a copyright license doesn’t automatically make them copyright issues.

Imagine, if you will, that a movie studio refused to let its film play in theaters unless the theaters promised to serve a particular brand of cola. The studio has licensed its copyrighted movie to the theater with certain conditions, but no one would claim that movie-goers drinking Coke instead of Pepsi offends principles of copyright law, or hurts artists, or requires intervention by the Copyright Office and members of Congress. It’s simply an agreement between businesses.

That’s essentially what defenders of the set-top box monopoly mean when they argue that the proposal will harm “property rights” and interfere with “licensing.” Not coincidentally, the license terms that they want to maintain are the ones that preserve the competition-free status quo that the FCC’s plan seeks to transform. At best, the only new devices and apps that would be allowed under the cable industry’s latest proposal will be so much like today’s set-top boxes that no real competition will be possible.

Cloaking those anti-competitive contracts and practices in the language of copyright may lead the Copyright Office and certain members of Congress to toss monkey wrenches across the National Mall in the direction of the FCC building. Fortunately, it seems that FCC Chairman Tom Wheeler sees this tactic for what it is — an attempt at misdirection.

Bringing competition to pay-TV technology is a complex issue. Crafting good rules on consumer privacy, and closing off sneaky avenues of cable company influence over consumer technology, will take care and cooperation to get right. That’s why the FCC should not allow misleading copyright rhetoric to derail those discussions, and the Copyright Office should keep its thumb off of the scales.

Share this: Join EFF
Categories: Privacy

What to Do About Lawless Government Hacking and the Weakening of Digital Security

Deep Links - Mon, 08/01/2016 - 12:47

In our society, the rule of law sets limits on what government can and cannot do, no matter how important its goals. To give a simple example, even when chasing a fleeing murder suspect, the police have a duty not to endanger bystanders. The government should pay the same care to our safety in pursuing threats online, but right now we don’t have clear, enforceable rules for government activities like hacking and "digital sabotage." And this is no abstract question—these actions increasingly endanger everyone’s security.

The problem became especially clear this year during the San Bernardino case, involving the FBI’s demand that Apple rewrite its iOS operating system to defeat security features on a locked iPhone. Ultimately the FBI exploited an existing vulnerability in iOS and accessed the contents of the phone with the help of an "outside party." Then, with no public process or discussion of the tradeoffs involved, the government refused to tell Apple about the flaw. Despite the obvious fact that the security of the computers and networks we all use is both collective and interwoven—other iPhones used by millions of innocent people presumably have the same vulnerability—the government chose to withhold information Apple could have used to improve the security of its phones.

Other examples include intelligence activities like Stuxnet and Bullrun, and law enforcement investigations like the FBI’s mass use of malware against Tor users engaged in criminal behavior. These activities are often disproportionate to stopping legitimate threats, resulting in unpatched software for millions of innocent users, overbroad surveillance, and other collateral effects. 

That’s why we’re working on a positive agenda to confront governmental threats to digital security. Put more directly, we’re calling on lawyers, advocates, technologists, and the public to demand a public discussion of whether, when, and how governments can be empowered to break into our computers, phones, and other devices; sabotage and subvert basic security protocols; and stockpile and exploit software flaws and vulnerabilities.  

Smart people in academia and elsewhere have been thinking and writing about these issues for years. But it’s time to take the next step and make clear, public rules that carry the force of law to ensure that the government weighs the tradeoffs and reaches the right decisions.

This long post outlines some of the things that can be done. It frames the issue, then describes some of the key areas where EFF is already pursuing this agenda—in particular formalizing the rules for disclosing vulnerabilities and setting out narrow limits for the use of government malware. Finally it lays out where we think the debate should go from here.    

Recognizing That Government Intrusion and Subversion of Digital Security Is a Single Issue

The first step is to understand a wide range of government activities as part of one larger threat to security. We see the U.S. government attempt to justify and compartmentalize its efforts with terms like "lawful hacking" and "computer network attack." It is easy for the government to argue that the FBI’s attempts to subvert the security of Apple iOS in the San Bernardino case are entirely unrelated to the NSA’s apparent sabotage of the Dual_EC_DRBG algorithm. Likewise, the intelligence community’s development of the Stuxnet worm to target the Iranian nuclear program was governed by a set of rules entirely separate from the FBI’s use of malware to target criminals using Tor hidden services.

These activities are carried out by different agencies with different missions. But viewing them as separate—or allowing the government to present them that way—misses the forest for the trees. When a government takes a step to create, acquire, stockpile or exploit weaknesses in digital security, it risks making us all less safe by failing to bolster that security. 

Each of these techniques requires weighing the tradeoffs involved, and none of them should be viewed as risk-free to the public. They require oversight and clear rules for usage, including consideration of the safety of innocent users of affected technologies.

There is hope, albeit indirect. In the United States, high-ranking government officials have acknowledged that "cyber threats" are the highest priority, and that we should be strengthening our digital security rather than weakening it to facilitate government access. In some cases, this is apparently reflected in government policy. For instance, in explaining the government’s policy on software vulnerabilities, the cybersecurity coordinator for the White House and the Office of the Director of National Intelligence have both stated in blog posts that there is a "strong presumption" in favor of disclosing these vulnerabilities to the public so they can be fixed.

But the government shouldn’t engage in "policy by blog post." Government action that actively sabotages or even collaterally undermines digital security is too important to be left open to executive whim.

Finding Models for Transparency and Limits on When Government Can Harm Digital Security

While government hacking and other activities that have security implications for the rest of us are not new, they are usually secret. We should demand more transparency and real, enforceable rules.

Fortunately, this isn’t the first time that new techniques have required balancing public safety along with other values. Traditional surveillance law gives us models to draw from. The Supreme Court’s 1967 decision in Berger v. New York is a landmark recognition that electronic wiretapping presents a significant danger to civil liberties. The Court held that because wiretapping is both invasive and surreptitious, the Fourth Amendment required "precise and discriminate" limits on its use.

Congress added considerable structure to the Berger Court’s pronouncements with the Wiretap Act, first passed as Title III of the Omnibus Crime Control and Safe Streets Act of 1968. First, Title III places a high bar for applications to engage in wiretapping, so that it is more of an exception than a rule, to be used only in serious cases. Second, it imposes strict limits on using the fruits of surveillance, and third, it requires that the public be informed on a yearly basis about the number and type of government wiretaps.

Other statutes concerned with classified information also find ways of informing the public while maintaining basic secrecy. For example, the USA Freedom Act, passed in 2015 to reform the intelligence community, requires that significant decisions of the FISA Court either be published in redacted form or be summarized in enough detail to be understood by the public.

These principles provide a roadmap that can be used to prevent government from unnecessarily undermining our digital security. Here are a few areas where EFF is working to craft these new rules:

Item 1: Rules for When Government Stockpiles Vulnerabilities

It’s no secret that governments look for vulnerabilities in computers and software that they can exploit for a range of intelligence and surveillance purposes. The Stuxnet worm, which was notable for causing physical or "kinetic" damage to its targets, relied on several previously unknown vulnerabilities, or "zero days," in Windows. Similarly, the FBI relied on a third party’s knowledge of a vulnerability in iOS to access the contents of the iPhone in the San Bernardino case.

News reports suggest that many governments—including the U.S.—collect these vulnerabilities for future use. The problem is that if a vulnerability has been discovered, it is likely that other actors will also find out about it, meaning the same vulnerability may be exploited by malicious third parties, ranging from nation-state adversaries to simple thieves. This is only exacerbated by the practice of selling vulnerabilities to multiple buyers, sometimes even multiple agencies within a single government.

Thanks to a FOIA suit by EFF, we have seen the U.S. government’s internal policy on how to decide whether to retain or disclose a zero day, the Vulnerabilities Equities Process (VEP). Unfortunately, the VEP is not a model of clarity, setting out a bureaucratic process without any substantive guidelines in favor of disclosure. More concerning, we’ve seen no evidence of how the VEP actually functions. As a result, we have no confidence that the government discloses vulnerabilities as often as claimed. The lack of transparency fuels an ongoing divide between technologists and the government.

A report published in June by two ex-government officials—relying heavily on the document from EFF’s lawsuit—offers a number of helpful recommendations for improving the government’s credibility and fueling transparency.   

These proposals serve as an excellent starting point for legislation that would create a Vulnerabilities Equities Process with the force of law, formalizing and enforcing a presumption in favor of disclosure. VEP legislation should also:

  • Mandate periodic reconsideration of any decision to retain a vulnerability;
  • Require the government to publish the criteria used to decide whether to disclose;
  • Require regular reports to summarize the process and give aggregate numbers of vulnerabilities retained and disclosed in a given period;
  • Preclude contractual agreements that sidestep the VEP, as in the San Bernardino case, where the FBI apparently signed a form of non-disclosure agreement with the "outside party." The government should not be allowed to enter such agreements, because when the government buys a zero day, we should not have to worry about defending ourselves from a hostile state exploiting the same vulnerability. If tax dollars are going to be used to buy and exploit vulnerabilities, the government should also eventually use them to patch the security of affected systems, with benefits to all.

Above all, formalizing the VEP will go a long way to reassuring the public, especially members of the technology industry, that the U.S. government takes its commitment to strengthening digital security seriously.

Item 2:  Preventing Disproportionate Use of Government Malware and Global Hacking Warrants

EFF has also long been concerned about state-sponsored malware. It’s at the heart of our suit against the government of Ethiopia. Even in the United States, when the government seeks court permission to use malware to track and surveil suspects over the Internet, it can endanger innocent users as well as general network security.

A particularly egregious example is the Playpen case, involving an FBI investigation into a Tor hidden service that hosted large amounts of child pornography. The FBI seized the site’s server and operated it as a honeypot for visitors. A single warrant authorized the FBI to install malware on any and all visitors’ computers in order to breach the anonymity otherwise provided by Tor. By not specifying particular users—even though the list of users and logs of their activity were available to the FBI—the warrant totally failed to satisfy the Fourth Amendment requirement that warrants particularly describe the persons and places to be searched.

What’s more, the FBI asked the court to trust that it would operate its malware safely, without accidentally infecting innocent users or causing other collateral damage. Once defendants began to be charged in these cases, the government staunchly refused to turn over certain information about how the malware operated to the defense, even under seal, arguing that it would compromise other operations. As a result, defendants are left unable to exercise their right to challenge the evidence against them. And of course, anyone else whose computer is vulnerable to the same exploit remains at risk.

In these cases, the FBI flouted existing rules: the Playpen warrant violated both the Fourth Amendment and Rule 41 of the Federal Rules of Criminal Procedure. Other cases have involved similarly overbroad uses of malware. EFF has been working to explain the danger of this activity to courts, asking them to apply Fourth Amendment precedent and require that the FBI confront serious threats like Playpen in a constitutional manner. We have also been leaders of a coalition to stop an impending change that would loosen the standards for warrants under Rule 41 and make it easier for the FBI to remotely hack users all over the world. 

Item 3:  A "Title III for Hacking"

Given the dangers posed by government malware, the public would likely be better served by the enactment of affirmative rules, something like a "Title III for Hacking." The legislative process should involve significant engagement with technical experts, soliciting a range of opinions about whether the government can ever use malware safely and if so, how. Drawing from Title III, the law should:

  • Require that the government not use invasive malware when more traditional methods would suffice or when the threats being addressed are relatively insignificant;
  • Establish strict minimization requirements, so that the targets of hacking are identified with as much specificity as the government can possibly provide;
  • Include public reporting requirements so that the public has a sense of the scope of hacking operations; and
  • Mandate a consideration of the possible collateral effects—on individuals and the public interest as a whole—in the decision to unleash malware that takes advantage of known or unknown vulnerabilities. Even if the VEP itself does not encompass publicly known vulnerabilities ("N-days"), using remote exploits should impose an additional requirement on the government to mitigate collateral damage, through disclosure and/or notice to affected individuals. 

The same principles should apply to domestic law enforcement activities and foreign intelligence activities overseen by the FISA Court or conducted under the guidelines of Executive Order 12333.

Of course, these sorts of changes will not happen overnight. But digital security is an issue that affects everyone, and it’s time that we amplify the public’s voice on these issues. We’ve created a single page that tracks our work as we fight in court and pursue broader public conversation and debate in the hopes of changing government practices of sabotaging digital security. We hope you join us. 

Related Cases: The Playpen Cases: Mass Hacking by U.S. Law Enforcement; EFF v. NSA, ODNI - Vulnerabilities FOIA; Apple Challenges FBI: All Writs Act Order (CA)

Protecting the Fourth Amendment in the Information Age: A Response to Robert Litt

Deep Links - Sun, 07/31/2016 - 12:06

The Yale Law Journal has published a short essay that I wrote in response to an article by Robert Litt, General Counsel to the Office of the Director of National Intelligence, on the Fourth Amendment in the digital age. Mr. Litt uses EFF's NSA spying case Jewel v. NSA and the Klayman v. Obama case, where I argued as amicus, as examples, so it seemed only reasonable that EFF reply. It's here, and it's only 10 pages long:

Protecting the Fourth Amendment in the Information Age: A Response to Robert Litt

In the article, I agree with a couple of Mr. Litt's observations about how the Fourth Amendment, as currently interpreted, is not suited to the digital age. But, as you might expect, I disagree very sharply with how he’d like it to change.

Specifically, Mr. Litt and I agree that the Fourth Amendment's current “reasonable expectation of privacy” test and the third party doctrine do not work well and should likely be dispensed with for digital search and seizure. Where we disagree, though, is that Mr. Litt removes the reasonable expectation of privacy without offering a replacement, leaving just a balancing test where the people being affected by surveillance have to show that they were individually harmed by the government's activities but the government only has to show potential benefit from its surveillance. Mr. Litt likens his formulation to an insurance policy, which protects its holder even when no claim is filed.

This shift would also eliminate the core protections against general warrants, which are one of the reasons the Fourth Amendment exists at all, as well as the presumption that searches of content are "per se" unreasonable. So while the "reasonable expectation of privacy" formulation is a problem, we need to search for suitable, privacy-protective replacements, not just eliminate it entirely.

Mr. Litt agrees that the third party doctrine, under which the government claims that your data in the hands of third parties like your ISP or Facebook or Google or Amazon simply loses all Fourth Amendment protection, should go. On that we agree. But he proposes something worse: that computer searches through masses of data—like those done by the NSA when it searches through the data carried on fiber-optic cables via its Upstream program at issue in Jewel v. NSA—just shouldn't count for purposes of the Fourth Amendment. I call this the "human eyes" thesis: the idea that no search or seizure occurs until human eyes actually see your communications. I point out why that proposal, variations of which the government has made and lost in other contexts, is dangerous.

On both points, I note that a better place to start than Mr. Litt's suggestions is the Necessary and Proportionate Principles, an interpretation of international human rights law written by an international team of privacy advocates and attorneys and signed on by over 400 organizations, international experts, politicians and political parties around the world. Updating the Fourth Amendment is critically needed, but as I say in the piece:

What is clear is that if we are going to address where the Fourth Amendment should be in the digital age, we must do better than a free-form balancing test where the government will always be perched on the heavy end of the scales, and where the substitution of computers for humans somehow eliminates our Fourth Amendment right to be secure from unreasonable seizures and searches of our most private communications.

Related Cases: Smith v. Obama; Jewel v. NSA; First Unitarian Church of Los Angeles v. NSA

New Tool to Help Notify Users When Their Content is Taken Offline

Deep Links - Fri, 07/29/2016 - 17:22

When user content is threatened with removal from the Internet, it's unlikely that anyone is going to put up more of a fight than the user who uploaded it. That's what makes it so critically important that the user is informed whenever an Internet intermediary is asked to remove their content from its platform, or decides to do so on its own account.

Unfortunately this doesn't consistently happen. In the case of content taken down for copyright infringement under the DMCA or its foreign equivalents, the law typically requires the user to be informed. But for content that allegedly infringes other laws (such as defamation, privacy, hate speech, or obscenity laws), or content that isn't alleged to be illegal but merely against the intermediary's terms of service, there is often no requirement that the user be informed, and some intermediaries don't make a practice of doing so.

Another problem is that even when intermediaries do pass on notices about allegedly illegal content to the user who uploaded it, the notice might be inaccurate or incomplete. This led to the situation in Canada where ISPs were passing on misleading notices from US-based rightsholders, falsely threatening Canadian users with penalties that are not even applicable under Canadian law.

As a result of the failure to accurately inform users about why their content is being targeted for removal, users remain confused about their rights, and may fail to defend themselves against removal requests that are mistaken or abusive. The ultimate result of this is that much legitimate content silently disappears from the Internet.

To help with this, EFF and our Manila Principles partners have this week released a tool to help intermediaries generate more accurate notices to their users, when those users' content is threatened with removal. An alpha release of the tool was previewed at this year's RightsCon (on the first anniversary of the launch of the Manila Principles), and yesterday at the Asia-Pacific Regional Internet Governance Forum it was finally launched in beta.

The tool is simply a Web form that an intermediary can complete, giving basic details of what content was (or might be) removed and why, and what the user can do about it. Submitting the questionnaire will crunch the form data and produce a draft notice that the intermediary can copy, review, and send to the user. (Note that the form itself doesn't send anything automatically, and the form data is not stored for longer than required to generate the draft notice.)
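To illustrate the kind of template logic such a form might use, here is a minimal sketch in Python. All field names, wording, and the validation step are hypothetical illustrations, not the actual Manila Principles tool; the real form asks its own questions and generates its own text.

```python
# Hypothetical sketch of a notice generator like the one described above.
# Field names and notice wording are illustrative assumptions only.
from string import Template

NOTICE_TEMPLATE = Template(
    "Dear $user,\n\n"
    'Your content "$content" has been $action from $platform.\n'
    "Reason given: $reason.\n"
    "What you can do: $remedy.\n"
)

def generate_notice(form_data: dict) -> str:
    """Fill the notice template from submitted form fields.

    Mirrors the tool's behavior as described: nothing is stored or
    sent automatically; the intermediary copies, reviews, and sends
    the draft itself.
    """
    required = {"user", "content", "action", "platform", "reason", "remedy"}
    missing = required - form_data.keys()
    if missing:
        raise ValueError(f"missing form fields: {sorted(missing)}")
    return NOTICE_TEMPLATE.substitute(form_data)

draft = generate_notice({
    "user": "example_user",
    "content": "my-video.mp4",
    "action": "removed",
    "platform": "ExampleHost",
    "reason": "alleged copyright infringement",
    "remedy": "file a counter-notice with the platform",
})
print(draft)
```

The point of validating the fields up front is the same point the Manila Principles make about notices generally: a notice that omits what was removed, why, and what the user can do about it is not a meaningful notice at all.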

We don't expect that this form will be needed by most large intermediaries, who will have staff to write their own notices to users. Further information to help users restore content taken down for terms of service violations by several of these large platforms, including Facebook, Twitter, and YouTube, is also available on onlinecensorship.org.

But bearing in mind that small businesses and hobbyists can also be intermediaries who host other users' content, this form may provide a useful shortcut for them to generate a draft notice that covers most of the important information that a user needs to know. The form remains in beta, and we welcome your suggestions for improvement!


First Aereo, Now FilmOn: Another Fight for Innovation and Competition in TV Technology

Deep Links - Fri, 07/29/2016 - 13:35

Why is it so hard to see our local TV stations these days? Even as more and more people watch TV via the Internet, streaming local TV stations to our Internet-enabled devices is next to impossible in most places. Companies that try to bring local TV to the Internet have faced relentless legal challenges from major media companies and the broadcast stations they own. The latest is FilmOn (formerly called Aereokiller), which is fighting in multiple lawsuits around the U.S. for the right to capture local TV broadcasts and stream them to paying subscribers, much as a traditional cable company does. This week, EFF and Public Knowledge filed a brief at the Court of Appeals for the District of Columbia Circuit to explain why copyright law doesn’t favor big pay-TV players over newer, Internet-based services like FilmOn.

For over four years, major TV producers like Comcast, Viacom, Fox, Time Warner, and Disney, along with TV station owners like Comcast, Fox, Disney, and Sinclair, and cable companies like--well, Comcast--have fought in court to shut down new services that deliver local broadcast TV via the Internet. In 2014, the Supreme Court ruled that one of those services, Aereo, performed a function that was so similar to a traditional cable system that, like a cable system, it needed permission from copyright holders for the TV programs it transmitted.

After the Supreme Court ruled, the titans of television pressed to tip the playing field of competition in their favor. Cable and satellite TV companies don’t have to ask permission from the thousands of copyright holders whose works they transmit to paying subscribers every day. Using a “statutory license” built into the Copyright Act, today’s major pay-TV services can simply file some paperwork, pay a fee set by the government, and transmit TV shows to their hearts’ content. (Under Federal Communications Commission rules, pay-TV services have to get permission from broadcast TV stations to retransmit their signals, but this is more feasible, since there are far fewer broadcast stations than there are copyright holders.)

Neither cable companies, nor satellite TV companies, nor phone companies like AT&T and Verizon who sell pay-TV, have ever had to negotiate licenses with every copyright holder for every TV show on every channel they carry.

Unfortunately, several courts have now ruled that new pay-TV services that use the Internet, like FilmOn and the now-defunct ivi and Aereo, can’t use the statutory license and pay the government-set fee. In order to stream local broadcast TV at all, say these courts, Internet-based services must perform the nearly impossible task of getting permission from every copyright holder whose TV shows are broadcast on the local channels.

As we explained this week in our brief to the appeals court, those rulings give established pay-TV companies an unfair advantage over newer competitors like FilmOn. When it passed the current Copyright Act back in 1976, Congress intended the rules to be technology-neutral, applying equally to pay-TV systems whether they used copper wires, microwaves, or other technologies to reach customers’ homes. Though the established players may not like it, that includes the Internet.

We also explained to the court that it doesn’t need to defer to the opinions of the Copyright Office on this issue. The Copyright Office has written several reports in which it said that Internet-based pay-TV services shouldn’t be able to use the statutory license for cable companies. But while the Copyright Office acts as an advisor to the government on copyright issues, it has no legal authority to decide how to interpret Congress’s rules on most issues, including this one. That means that courts should use their own judgment.

Finally, we explained why copyright provisions in trade agreements negotiated in secret shouldn’t control the outcome of a U.S. case. The lower federal court in D.C. pointed out that several recent trade agreements between the U.S. and other countries contained language that seems to bar the signing countries from creating statutory licenses for Internet streaming of broadcast TV. But, as we said in our brief, trade agreements don’t change U.S. law unless Congress explicitly makes a change. And when Congress ratified the recent trade agreements, it said explicitly that existing U.S. law would not change. That means the statutory licenses for pay-TV, which have existed since 1978, still apply to Internet-based services within the U.S., in spite of the trade agreements. Allowing secretive trade negotiations to affect the outcome of lawsuits in U.S. courts, between U.S. companies, would be undemocratic. That’s not the way the law works.

This battle is likely to continue for a while yet. Major media companies are pressing their lawsuits against FilmOn in three appeals courts. Whether or not FilmOn is allowed to keep streaming broadcast TV in different areas of the country, we’ll continue to push for copyright law that’s friendly to innovation and competition.


Stupid Patent of the Month: Solocron Education Trolls With Password Patent

Deep Links - Fri, 07/29/2016 - 09:01

Another month, another terrible patent being asserted in the Eastern District of Texas. Solocron Education LLC, a company whose entire “education” business is filing lawsuits, owns U.S. Patent No. 6,263,439, titled “Verification system for non-traditional learning operations.” What kind of “verification system” does Solocron claim to have invented? Passwords.

The patent describes a mundane process for providing education materials through video cassettes, DVDs, or online. Students are sent course materials, take tests, and, if they pass the tests, are allowed to continue on to the next part of the course. At various times, students confirm their identity by entering their biographical details and passwords.

Solocron did not invent distance education, encryption, or passwords. The patent doesn’t describe any new technology; it simply applies existing technology in a routine way to education materials. That should not be enough to get a patent. Unfortunately, the Patent Office does not do enough to prevent obvious patents from issuing, which is how we get patents on white-background photography or on filming a yoga class.

The extraordinary breadth of Solocron’s patent is clearest in its first claim. The claim, with added comments, is below:

1. A process which comprises the steps of:

encoding at least one personal identifier onto a user interface media [i.e. set up an interface requiring a particular user ID];

displaying a prompt on said user interface media for the at least one personal identifier which requires a match of the at least one personal identifier encoded on the user interface media [i.e. ask the user to enter their user ID];

encoding at least one password onto a data storage media [i.e. encrypt or otherwise password-lock a file];

encoding the at least one password from the data storage media onto the user interface media [i.e. set up the user interface so it can check if the password is correct]; and

displaying a prompt on the user interface media for entering the at least one password which requires a match of the at least one password from the data storage media with the at least one password encoded on the user interface media [i.e. require users to enter their passwords into the interface].

Although the claim runs 119 words, it describes nothing more than an ordinary system for accessing content by entering a user ID and password. Systems like this for user identification predate the patent by many, many years. The claim is not even limited to education materials but, by its terms, applies to any kind of “data storage media.” The Patent Office should not allow itself to be hoodwinked by overly verbose language that, when read closely, describes an obvious process.
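Stripped of its verbiage, the claimed process is what virtually every login system already did long before this patent issued: compare a stored identifier and a stored password against what the user types in. A minimal sketch in Python makes the point (every name and value here is hypothetical, for illustration only, and is not drawn from the patent itself):

```python
import hashlib

# "Personal identifier encoded onto a user interface media" -- a stored user ID.
STORED_USER_ID = "student42"

# "Password encoded onto a data storage media" -- here, a hash of the password.
STORED_PASSWORD_HASH = hashlib.sha256(b"hunter2").hexdigest()


def verify(user_id: str, password: str) -> bool:
    """Grant access only if both the entered ID and password match the stored ones.

    This is the whole of claim 1: prompt for an identifier, prompt for a
    password, and require both to match before unlocking the content.
    """
    id_matches = user_id == STORED_USER_ID
    password_matches = (
        hashlib.sha256(password.encode()).hexdigest() == STORED_PASSWORD_HASH
    )
    return id_matches and password_matches
```

A handful of lines that any working programmer could write from the claim language alone, which is precisely why the claim should have been rejected as obvious.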

Solocron is asserting its stupid patent aggressively. It has sued dozens of companies, including many new suits filed this year. As with so many patents we have featured in this series, it is suing in the Eastern District of Texas, taking advantage of the court’s patent-owner-friendly rules. We need fundamental patent reform, including venue reform, to stop patents like this from being granted and from being abused in the courts.


China Bans Internet News Reporting As Media Crackdown Widens

Your rights online - Mon, 07/25/2016 - 10:40
Earlier this month we learned that China had banned the use of social media as a news source. The government feared that if news outlets reported based on signals coming from social media, fake stories, non-credible claims, and rumors might slip through the filter. That rationale was absurd, to say the least, considering the government itself has reportedly been caught posting copious amounts of misleading information on domestic social media platforms. In the latest wrinkle, the world's most populous nation is now banning original internet news reporting. Longtime reader schwit1 shares a Bloomberg report: China's top internet regulator ordered major online companies including Sina Corp. and Tencent Holdings Ltd. to stop original news reporting, the latest effort by the government to tighten its grip over the country's web and information industries. The Cyberspace Administration of China imposed the ban on several major news portals, including Sohu.com Inc. and NetEase Inc., Chinese media reported in identically worded articles citing an unidentified official from the agency's Beijing office. The companies have "seriously violated" internet regulations by carrying plenty of news content obtained through original reporting, causing "huge negative effects," according to a report that appeared in The Paper on Sunday. The agency instructed the operators of mobile and online news services to dismantle "current-affairs news" operations on Friday, after earlier calling a halt to such activity at Tencent, according to people familiar with the situation. Like its peers, Asia's largest internet company had developed a news operation and grown its team. Henceforth, they and other services can only carry reports provided by government-controlled print or online media, the people said, asking not to be identified because the issue is politically sensitive.

Read more of this story at Slashdot.


Microsoft Can't Shield User Data From Government, Says Government

Your rights online - Mon, 07/25/2016 - 09:00
Microsoft is arguing in court that its customers have a right to know when the government is reading their e-mail. But "the U.S. said federal law allows it to obtain electronic communications without a warrant or without disclosure of a specific warrant if it would endanger an individual or an investigation," according to Bloomberg. An anonymous reader quotes their report: The software giant's lawsuit alleging that customers have a constitutional right to know if the government has searched or seized their property should be thrown out, the government said in a court filing... The U.S. says there's no legal basis for the government to be required to tell Microsoft customers when it intercepts their e-mail... The Justice Department's reply Friday underscores the government's willingness to fight back against tech companies it sees as obstructing national security and law enforcement investigations... Secrecy orders on government warrants for access to private e-mail accounts generally prohibit Microsoft from telling customers about the requests for lengthy or even unlimited periods, the company said when it sued. At the time, federal courts had issued almost 2,600 secrecy orders to Microsoft alone, and more than two-thirds had no fixed end date, meaning the company may never be able to tell customers about those cases, even after an investigation is completed.

Read more of this story at Slashdot.
