Privacy

Axon’s Ethics Board Must Keep the Company in Check

Deep Links - Thu, 04/26/2018 - 15:38

EFF, together with 41 national, state, and local civil rights and civil liberties groups, sent a letter today urging the ethics board of police technology and weapons developer Axon to hold the company accountable to the communities its products impact—and to itself.

Axon, based in Scottsdale, Arizona, is responsible for making and selling some of the most widely used police products in the United States, including Tasers and body-worn cameras. Over the years, the company has taken significant heat for how those tools have been used in police interactions with the public, especially given law enforcement’s documented history of racial discrimination. Axon is now considering developing new technologies, such as face recognition and artificial intelligence, and incorporating them into its existing products. It has set up an “AI Ethics Board” made up of outside advisors and says it wants to confront the privacy and civil liberties issues associated with police use of these invasive technologies.

As we noted in the letter, “Axon has a responsibility to ensure that its present and future products, including AI-based products, don’t drive unfair or unethical outcomes or amplify racial inequities in policing.” Given this, our organizations called on the Axon Ethics Board to adhere to the following principles at the outset of its work:

  • Certain products are categorically unethical to deploy.
  • Robust ethical review requires centering the voices and perspective of those most impacted by Axon’s technologies.
  • Axon must pursue all possible avenues to limit unethical downstream uses of its technologies.
  • All of Axon’s digital technologies require ethical review.

With these guidelines, we urge Axon’s Ethics Board to steer the company in the right direction for all its current and future products. For example, the Ethics Board must advise Axon against pairing real-time face recognition analysis with the live video captured by body-worn cameras:

Real-time face recognition would chill the constitutional freedoms of speech and association, especially at political protests. In addition, research indicates that face recognition technology will never be perfectly accurate and reliable, and that accuracy rates are likely to differ based on subjects’ race and gender. Real-time face recognition therefore would inevitably misidentify some innocent civilians as suspects. These errors could have fatal consequences—consequences that fall disproportionately on certain populations.

For these reasons, we believe “no policy or safeguard can mitigate these risks sufficiently well for real-time face recognition ever to be marketable.”

Similarly, we urge Axon’s ethical review process to include the voices of those most impacted by its technologies:

The Board must invite, consult, and ultimately center in its deliberations the voices of affected individuals and those that directly represent affected communities. In particular, survivors of mass incarceration, survivors of law enforcement harm and violence, and community members who live closely among both populations must be included.

Finally, we believe that all of Axon’s products, both hardware and software, require ethical review. The Ethics Board has a large responsibility for the future of Axon. We hope its members will listen to our requests and hold Axon accountable for its products.[1]

Letter signatories include Color of Change, UnidosUS, South Asian Americans Leading Together, Detroit Community Technology Project, Algorithmic Justice League, Data for Black Lives, NAACP, NC Statewide Police Accountability Project, Urbana-Champaign Independent Media Center, and many more. All are concerned about the misuse of technology to entrench or expand harassment, prejudice, and bias against the public.

You can read the full letter here.

 

[1] EFF’s Technology Policy Director, Jeremy Gillula, has chosen to join Axon’s Ethics Board in his personal capacity. He recused himself from writing or reviewing this blog post and the letter, and his participation on the board should not be attributed to EFF.

Categories: Privacy

Facebook Inches Toward More Transparency and Accountability

Deep Links - Thu, 04/26/2018 - 14:03

Facebook took a step toward greater accountability this week, expanding the text of its community standards and announcing the rollout of a new system of appeals. Digital rights advocates have been pushing the company to be more transparent for nearly a decade, and many welcomed the announcements as a positive move for the social media giant.

The changes are certainly a step in the right direction. Over the past year, following a series of controversial decisions about user expression, the company has begun to offer more transparency around its content policies and moderation practices, such as the “Hard Questions” series of blog posts offering insight into how the company makes decisions about different types of speech.

The expanded community standards released on Tuesday offer a much greater level of detail about what’s verboten and why. The standards are broken down into six overarching categories—violence and criminal behavior, safety, objectionable content, integrity and authenticity, respecting intellectual property, and content-related requests—and each section comes with a “policy rationale” and bulleted lists of “do not post” items.

But as Sarah Jeong writes, the guidelines “might make you feel sorry for the moderator who’s trying to apply them.” Many of the items on the “do not post” lists are incredibly specific—just take a look at the list contained in the section entitled “Nudity and Adult Sexual Activity”—and the carved-out exceptions are often without rationale.

And don’t be fooled: The new community standards do nothing to increase users’ freedom of expression; rather, they will hopefully provide users with greater clarity as to what might run afoul of the platform’s censors.

Facebook’s other announcement—that of expanded appeals—has received less media attention, but for many users, it's a vital development. In the platform’s early days, content moderation decisions were final and could not be appealed. Then, in 2011, Facebook instituted a process through which users whose accounts had been suspended could apply to regain access. That process remained in place until this week.

Through Onlinecensorship.org, we often hear from users of Facebook who believe that their content was erroneously taken down and are frustrated with the lack of due process on the platform. In its latest announcement, VP of Global Policy Management Monika Bickert explains that over the coming year, Facebook will be building the ability for people to appeal content decisions, starting with posts removed for nudity/sexual activity, hate speech, or graphic violence—presumably areas in which moderation errors occur more frequently.

Some questions about the process remain (will users be able to appeal content decisions while under temporary suspension? Will the process be expanded to cover all categories of speech?), but we congratulate Facebook on finally instituting a process for appealing content takedowns, and encourage the company to expand the process quickly to include all types of removals.

Categories: Privacy

Platform Censorship Won't Fix the Internet

Deep Links - Wed, 04/25/2018 - 21:31

The House Judiciary Committee will hold a hearing on “The Filtering Practices of Social Media Platforms” on April 26. Public attention to this issue is important: calls for online platform owners to police their members’ speech more heavily inevitably lead to legitimate voices being silenced online. Here’s a quick summary of a written statement EFF submitted to the Judiciary Committee in advance of the hearing.

Our starting principle is simple: Under the First Amendment, social media platforms and other online intermediaries have the right to decide what kinds of expression they will carry. But just because companies can act as judge and jury doesn’t mean they should.

We all want an Internet where we are free to meet, create, organize, share, associate, debate, and learn. We want to make our voices heard in the way that technology now makes possible. No one likes being lied to or misled, or seeing hateful messages directed against them or flooded across our newsfeeds. We want our elections to be free from manipulation, and we want the speech of women and marginalized communities not to be silenced by harassment.

But we won’t make the Internet fairer or safer by pushing platforms into ever more aggressive efforts to police online speech. When social media platforms adopt heavy-handed moderation policies, the unintended consequences can be hard to predict. For example, Twitter’s policies on sexual material have resulted in posts on sexual health and condoms being taken down. YouTube’s bans on violent content have resulted in journalism on the Syrian war being pulled from the site. It can be tempting to attempt to “fix” certain attitudes and behaviors online by placing increased restrictions on users’ speech, but in practice, web platforms have had more success at silencing innocent people than at making online communities healthier.

Indeed, for every high profile case of despicable content being taken down, there are many, many more stories of people in marginalized communities who are targets of persecution and violence. The powerless struggle to be heard in the first place; social media can and should help change that reality, not reinforce it.

That’s why we must remain vigilant when platforms decide to filter content. We are worried about how platforms are responding to new pressures to filter the content on their services. Not because there’s a slippery slope from judicious moderation to active censorship, but because we are already far down that slope.

To avoid slipping further, and maybe even reverse course, we’ve outlined steps platforms can take to help protect and nurture online free speech. They include:

  • Better transparency
  • Fostering innovation and competition, e.g., by promoting interoperability
  • Clear notice and consent procedures
  • Robust appeal processes
  • Greater user control
  • Protection for anonymity

You can read our statement here for more details.

For its part, rather than instituting more mandates for filtering or speech removal, Congress should defend safe harbors, protect anonymous speech, encourage platforms to be open about their takedown rules and to follow a consistent, fair, and transparent process, and avoid promulgating any new intermediary requirements that might have unintended consequences for online speech.

EFF was invited to participate in this hearing and we were initially interested. However, before we confirmed our participation, the hearing shifted in a different direction. We look forward to engaging in further discussions with policymakers and the platforms themselves.

Categories: Privacy

California Can Build Trust Between Police and Communities By Requiring Agencies to Publish Their Policies Online

Deep Links - Wed, 04/25/2018 - 19:30

If we as citizens are better informed about police policies and procedures, and can easily access and study those materials online, the result will be greater accountability and better relations between our communities and the police departments that serve us. EFF supports a bill in the California legislature that aims to do exactly that.

S.B. 978, introduced by Sen. Steven Bradford, would require law enforcement agencies to post their current standards, practices, policies, operating procedures, and education and training materials online. As we say in our letter of support:

[The bill] will help address the increased public interest and concern about police policies in recent years, including around the issues of use of force, less-lethal weapons, body-worn cameras, anti-bias training, biometric identification and collection, and surveillance (such as social media analysis, automated license plate recognition, cell-site simulators, and drones).

Additionally, policies governing police activities should be readily available for review and scrutiny by the public, policymakers, and advocacy groups. Not only will this transparency measure result in well-informed policy decisions, but it will also provide the public with a clearer understanding of what to expect and how to behave during police encounters.

Last year, Gov. Jerry Brown vetoed a previous version of this bill, which had broad support from both civil liberties groups and law enforcement associations. The new bill is meant to address his concerns about the earlier bill’s scope, and it removes a few state law enforcement agencies from the law’s purview, such as the Department of Alcoholic Beverage Control and the California Highway Patrol.

We hope that the legislature will once again pass this important bill, and that Gov. Brown will support transparency and accountability between law enforcement and Californians.

Categories: Privacy

A Tale of Two Poorly Designed Cross-Border Data Access Regimes

Deep Links - Wed, 04/25/2018 - 14:08

On Tuesday, the European Commission published two legislative proposals that could further cement an unfortunate trend towards privacy erosion in cross-border state investigations. Building on a foundation first established by the recently enacted U.S. CLOUD Act, these proposals would compel tech companies and service providers to ignore critical privacy obligations in order to facilitate easy access when facing data requests from foreign governments. These initiatives collectively signal the increasing willingness of states to sacrifice privacy as a way of addressing pragmatic challenges in cross-border access that could be better solved with more training and streamlined processes.

The EU proposals (which consist of a Regulation and a Directive) apply to a broad range of companies [1] that offer services in the Union and that have a “substantial connection” to one or more Member States. [2] Practically, that means companies like Facebook, Twitter, and Google, though not based in the EU, would still be affected by these proposals. The proposals create a number of new data disclosure powers and obligations, including:

  • European court orders that compel internet companies and service providers to preserve data they already stored at the time the order is received (European preservation orders);
  • European court orders for content and ‘transactional’ data [3] for investigation of a crime that carries a custodial sentence of at least 3 years (European production orders for content data);
  • European orders for some metadata defined as “access data” (IP addresses, service access times) and customer identification data (including name, date of birth, billing data, and email addresses) that could be issued for any criminal offense (European production orders for access and subscriber data); [4]
  • An obligation for some service providers to appoint an EU legal representative who will be responsible for complying with data access demands from any EU Member State.

Unlike the CLOUD Act, the package of proposals does not address real-time access to communications.

Who Is Affected and How?

Such orders would affect Google, Facebook, Microsoft, Twitter, instant messaging services, voice over IP, apps, Internet Service Providers, and e-mail services, as well as cloud technology providers, domain name registries, registrars, privacy and proxy service providers, and digital marketplaces.

Moreover, tech companies and service providers would have to comply with law enforcement orders for data preservation and delivery within 10 days or, in the case of an imminent threat to life or physical integrity of a person or to a critical infrastructure, within just six hours. Complying with these orders would be costly and time-consuming.

Alarmingly, the EU proposals would compel affected companies (which include diverse entities ranging from small ISPs and burgeoning startups to multibillion dollar global corporations) to develop extensive resources and expertise in the nuances of many EU data access regimes. A small regional German ISP  will need the capacity to process demands from France, Estonia, Poland, or any other EU member state in a manner that minimizes legal risks. Ironically, the EU proposals are presented as beneficial to businesses and service providers on the basis that they provide ‘legal certainty and clarity’. In reality, they do the opposite, forcing these entities to devote resources to understanding the law of each member state. Even worse, the proposal would immunize businesses from liability in situations where good faith compliance with a data request might conflict with EU data protection laws. This creates a powerful incentive to err on the side of compliance with a data demand at cost to privacy. There is no comparable immunity from the heavy fines that could be levied for ignoring a data access request on the basis of good-faith compliance with EU data protection rules.

No such liability limitation is available to companies and service providers subject to non-EU privacy protections. In some instances, companies would be forced to choose between complying with EU data demands issued under EU standards and complying with legal restrictions on data exposure imposed by other jurisdictions. For example, mechanisms requiring service providers to disclose customer identification data on the basis of a prosecutorial demand could conflict with Canada’s data protection regime. The Personal Information Protection and Electronic Documents Act (PIPEDA), a Canadian privacy law, has been held to prevent service providers from identifying customers associated with anonymous online activity in the absence of a court order. As the European proposals purport to apply to domain name registries as well, these mechanisms could also interfere with efforts at ICANN to protect anonymity in website registration by shielding customer registration information.

The EU package could also compel U.S.-based providers to violate the Stored Communications Act (SCA), which prevents the disclosure of stored communications content in the absence of a court order. [5] The recent U.S. CLOUD Act created a new mechanism for bypassing these safeguards—allowing certain foreign nations (if the United States enters into an “executive agreement” with them under the CLOUD Act) to compel data production from U.S.-based providers without following U.S. law or getting an order from a U.S. judge. However, the United States has not entered into any such agreement with the EU or any EU member states at this stage, and the European package would require compliance even in the absence of one.

No Political Will to Fix the MLAT Process

The unfortunate backdrop to this race to subvert other states’ privacy standards is a regime that already exists for navigating cross-border data access. The Mutual Legal Assistance Treaty (MLAT) system creates global mechanisms by which one state can access data hosted in another while still complying with privacy safeguards in both jurisdictions. The MLAT system is in need of reform, as the volume of cross-border requests in modern times has strained some of its procedural mechanisms to the point where delays in responses can be significant. However, the fundamental basis of the MLAT regime remains sound and the pragmatic flaws in its implementation are far from insurmountable. Instead of reforming the MLAT regime in a way that would retain the current safeguards it respects, the European Commission and the United States seem to prefer to jettison these safeguards.

Perhaps ironically, much of the delay within the MLAT system arises from a lack of expertise among state agencies and officials in the data access laws of foreign states. Developing such expertise would allow state agencies to formulate foreign data access requests faster and more efficiently. It would also allow state officials to process incoming requests with greater speed. The EU proposals seek to bypass this need for expertise by effectively privatizing the legal assessment process, meaning we lose a real judge making real judgments. Under the proposals, service providers would need to decide whether foreign requests are properly formulated under foreign laws. Yet judicial authorities and state agencies are far better placed to make these assessments—not only from a resource management perspective, but also from a legitimacy perspective.

Contrary to this trend, European courts have continued to assert their own domestic privacy standards when protecting EU individuals’ data from access by foreign state agencies. Late last week, an Irish court questioned whether U.S. state agencies (particularly the NSA and FBI, which are granted broad surveillance powers under U.S. foreign intelligence law) are sufficiently restrained in their ability to access EU individuals’ data. The matter was referred to the EU’s highest court, and an adverse finding could prevent global communications platforms from exporting EU individuals’ data to the U.S. Such a finding could even prevent those same platforms from complying with some U.S. data demands regarding EU individuals’ data if additional privacy safeguards and remedies are not added. It is not yet clear what role such restrictions might ultimately play in any EU-U.S. agreement that might be negotiated under the U.S. CLOUD Act.

Ultimately, both the U.S. CLOUD Act and the EU proposals are a missed opportunity to work towards a cross-border data access regime that facilitates efficient law enforcement access while respecting privacy, due process, and freedom of expression.

Conclusion

Unlike the last-minute rush to approve the U.S. CLOUD Act, there is still a long way to go before the EU proposals are finalized. Both documents need to be reviewed by the European Parliament and the Council of the European Union, and will be subject to amendment. Once approved by both institutions, the Regulation will become immediately enforceable as law in all Member States simultaneously, overriding all national laws dealing with the same subject matter. The Directive, however, will need to be transposed into national law.

We call on EU policy-makers to avoid the privatization of law enforcement and work instead to enhance judicial cooperation within and outside the European Union.

[1] Specifically listed are: providers of electronic communications services, social networks, online marketplaces, hosting service providers, and Internet infrastructure providers such as IP address and domain name registries. See Article 2, Definitions.

[2] A substantial connection is defined in the Regulation as having an establishment in one or more Member States. In the absence of an establishment in the Union, a substantial connection can be established by a significant number of users in one or more Member States, or by the targeting of activities towards one or more Member States (indicated by factors such as the use of a language or currency generally used in a Member State, the availability of an app in the relevant national app store, the provision of local advertising or advertising in the language used in a Member State, or the use of information originating from persons in Member States in the course of the service’s activities, among others). See Article 3, Scope of the Regulation.

[3] Transactional data is “generally pursued to obtain information about the contacts and whereabouts of the user and may serve to establish a profile of an individual concerned”. The Regulation describes transactional data as “the source and destination of a message or another type of interaction, data on the location of the device, date, time, duration, size, route, format, the protocol used and the type of compression, unless such data constitutes access data”.

[4] The draft Regulation states that access data is “typically recorded as part of a record of events (in other words a server log) to indicate the commencement and termination of a user access session to a service. It is often an individual IP address (static or dynamic) or other identifier that singles out the network interface used during the access session.”

[5] Most large U.S. providers insist on a warrant based on probable cause before disclosing content, although the SCA allows disclosure on a weaker standard in some cases.
Categories: Privacy

Supreme Court Upholds Patent Office Power to Invalidate Bad Patents

Deep Links - Tue, 04/24/2018 - 19:19

In one of the most important patent decisions in years, the Supreme Court has upheld the power of the Patent Office to review and cancel issued patents. This power to take a “second look” is important because, compared to courts, administrative avenues provide a much faster and more efficient means for challenging bad patents. If the court had ruled the other way, the ruling would have struck down various Patent Office procedures and might even have resurrected many bad patents. Today’s decision [PDF] in Oil States Energy Services, LLC v. Greene’s Energy Group, LLC is a big win for those who want a more sensible patent system.

Oil States challenged the inter partes review (IPR) procedure before the Patent Trial and Appeal Board (PTAB). The PTAB is a part of the Patent Office and is staffed by administrative patent judges. Oil States argued that the IPR procedure is unconstitutional because it allows an administrative agency to decide a patent’s validity, rather than a federal judge and jury.

Together with Public Knowledge, Engine Advocacy, and the R Street Institute, EFF filed an amicus brief [PDF] in the Oil States case in support of IPRs. Our brief discussed the history of patents being used as a public policy tool, and how Congress has long controlled how and when patents can be canceled. We explained how the Constitution sets limits on granting patents, and how IPR is a legitimate exercise of Congress’s power to enforce those limits.

Our amicus brief also explained why IPRs were created in the first place. The Patent Office often does a cursory job reviewing patent applications, with examiners spending an average of about 18 hours per application before granting 20-year monopolies. IPRs allow the Patent Office to make sure it didn’t make a mistake in issuing a patent. The process also allows public interest groups to challenge patents that harm the public, like EFF’s successful challenge to Personal Audio’s podcasting patent. (Personal Audio has filed a petition for certiorari asking the Supreme Court to reverse, raising some of the same grounds argued by Oil States. That petition will likely be decided in May.)

The Supreme Court upheld the IPR process in a 7-2 decision. Writing for the majority, Justice Thomas explained:

Inter partes review falls squarely within the public rights doctrine. This Court has recognized, and the parties do not dispute, that the decision to grant a patent is a matter involving public rights—specifically, the grant of a public franchise. Inter partes review is simply a reconsideration of that grant, and Congress has permissibly reserved the PTO’s authority to conduct that reconsideration. Thus, the PTO can do so without violating Article III.

Justice Thomas noted that IPRs essentially serve the same interest as initial examination: ensuring that patents stay within their proper bounds.

Justice Gorsuch, joined by Chief Justice Roberts, dissented. He argued that only Article III courts should have the authority to cancel patents. If that view had prevailed, it likely would have struck down IPRs, as well as other proceedings before the Patent Office, such as covered business method review and post-grant review. It would also have left the courts with difficult questions regarding the status of patents already found invalid in IPRs. 

In a separate decision [PDF], in SAS Institute v. Iancu, the Supreme Court ruled that, if the PTAB institutes an IPR, it must decide the validity of all challenged claims. EFF did not file a brief in that case. While the petitioner had tenable arguments under the statute (indeed, it won), the result seems to make the PTAB’s job harder and creates a variety of problems (what is supposed to happen with partially-instituted IPRs currently in progress?). Since it is a statutory decision, Congress could amend the law. But don’t hold your breath for a quick fix.

Now that IPRs have been upheld, we may see a renewed push from Senator Coons and others to gut the PTAB’s review power. That would be a huge step backwards. As Justice Thomas explained, IPRs protect the public’s “paramount interest in seeing that patent monopolies are kept within their legitimate scope.” We will defend the PTAB’s role serving the public interest.

Categories: Privacy

Stop Egypt’s Sweeping Ridesharing Surveillance Bill

Deep Links - Tue, 04/24/2018 - 18:11

The Egyptian government is currently debating a bill that would compel all ride-sharing companies to store any Egyptian user data within Egypt. It would also create a system giving the authorities real-time access to passenger and trip information. If passed, companies such as Uber and its Dubai-based competitor Careem would be forced to grant unfettered direct access to their databases to unspecified security authorities. Such a sweeping surveillance measure is particularly ripe for abuse in a country known for its human rights violations, including attempts to use surveillance against civil society. The bill is expected to pass a final vote in Egypt’s House of Representatives on May 14th or 15th.

Article 10 of the bill requires companies to relocate their servers containing all Egyptian users’ information to within the borders of the Arab Republic of Egypt. Compelled data localization has frequently served as an excuse for enhancing a state’s ability to spy on its citizens.  

Even more troubling, article 9 of the bill forces these same ride-sharing companies to electronically link their local servers directly to unspecified authorities, from police to intelligence agencies. Direct access to a server would provide the Egyptian government unrestricted, real-time access to data on all riders, drivers, and trips. Under this provision, the companies themselves would have no ability to monitor the government’s use of their network data.

Effective computer security is hard, and no system will be free of bugs and errors.  As the volume of ride-sharing usage increases, risks to the security and privacy of ridesharing databases increase as well. Careem just admitted on April 23rd that its databases had been breached earlier this year. The bill’s demand to grant the Egyptian government unrestricted server access greatly increases the risk of accidental catastrophic data breaches, which would compromise the personal data of millions of innocent individuals. Careem and Uber must focus on strengthening the security of their databases instead of granting external authorities unfettered access to their servers.

Direct access to the databases of any company without adequate legal safeguards undermines the privacy and security of innocent individuals, and is therefore incompatible with international human rights obligations. For any surveillance measure to be legal under international human rights standards, it must be prescribed by law. It must be “necessary” to achieve a legitimate aim and “proportionate” to the desired aim. These requirements are vital in ensuring that the government does not adopt surveillance measures which threaten the foundations of a democratic society.

The European Court of Human Rights, in Zakharov v. Russia, made clear that direct access to servers is prone to abuse:

“...a system which enables the secret services and the police to intercept directly the communications of each and every citizen without requiring them to show an interception authorisation to the communications service provider, or to anyone else, is particularly prone to abuse.”                                                                                             

Moreover, the Court of Justice of the European Union (CJEU) has also discussed the importance of independent authorization prior to government access to electronic data. In Tele2 Sverige AB v. Post- och telestyrelsen, the court held:

“it is essential that access of the competent national authorities to retained data should, as a general rule, (...) be subject to a prior review carried out either by a court or by an independent administrative body, and that the decision of that court or body should be made following a reasoned request by those authorities submitted...”.

Unrestricted direct access to the data of innocent individuals using ridesharing apps, by its very nature, eradicates any consideration of proportionality and due process. Egypt must turn back from the dead-end path of unrestricted access and uphold its international human rights obligations. Sensitive data demands strong legal protections, not an all-access pass. Hailing a rideshare should never come with blanket access for your government to follow you. We hope Egypt’s House of Representatives rejects the bill.

Categories: Privacy

California Bill Would Guarantee Free Credit Freezes in 15 Minutes

Deep Links - Tue, 04/24/2018 - 15:09

 

After the shocking news of the massive Equifax data breach, which has now ballooned to jeopardize the privacy of nearly 148 million people, many Americans are rightfully scared and struggling to figure out how to protect themselves from the misuse of their personal information.

To protect against credit fraud, many consumer rights and privacy organizations recommend placing a ‘credit freeze’ with the credit bureaus. When criminals seek to use breached data to borrow money in the name of a breach victim, the potential lender normally runs a credit check with a credit bureau. If there’s a credit freeze in place, then it’s harder to obtain the loan.

But placing a credit freeze can be cumbersome, time-consuming, and costly. The process can also vary across states. It can be an expensive time-suck if a consumer wants to place a freeze across all credit bureaus and for all family members.

Fortunately, California now has an opportunity to dramatically streamline the credit freeze process for its residents, thanks to a state bill introduced by Sen. Jerry Hill, S.B. 823. EFF is proud to support it.

The bill would allow Californians to place, temporarily lift, and remove credit freezes easily and at no charge. Credit reporting agencies would be required to carry out the request in 15 minutes or less if the consumer uses the company’s website or mobile app.

The bill would also cut the response time for written requests from three days to just 24 hours. Additionally, credit reporting agencies would have to offer consumers the option of passing along credit freeze requests to other credit reporting agencies, saving Californians time and reducing the likelihood of the misuse of their information.

You can read our support letter for the bill here.

Free and convenient credit freezes are becoming even more important as many consumer credit reporting agencies push their inferior “credit lock” products. These products don’t offer the same protections built into credit freezes by law, and to use some of them, consumers have to agree to let their personal information be used for targeted ads.

The bill has passed the California Senate and will soon be heading to the Assembly for a vote. EFF endorses this effort to empower consumers to protect their sensitive information.

Categories: Privacy

Net Neutrality Did Not Die Today

Deep Links - Mon, 04/23/2018 - 17:01

When the FCC’s “Restoring Internet Freedom Order,” which repealed net neutrality protections the FCC had previously issued, was published on February 22nd, it was interpreted by many to mean it would go into effect on April 23. That’s not true, and we still don’t know when the previous net neutrality protections will end.

On the Federal Register’s website, the official daily journal of the United States federal government where all proposed and adopted rules are published, the so-called “Restoring Internet Freedom Order” has an “effective date” of April 23. But that date only applies to a few cosmetic changes. Most of the rules governing the Internet, including the prohibitions on blocking, throttling, and paid prioritization, remain in place.

Before the FCC’s repeal of those protections can take effect, the Office of Management and Budget has to approve the new order, which it hasn’t done yet. Once that happens, we’ll get another notice in the Federal Register. And that’s when we’ll know for sure when ISPs will be able to legally start changing their behavior.

If your Internet experience hasn’t changed today, don’t take that as a sign that ISPs aren’t going to start acting differently once the rule actually does take effect. For example, Comcast changed the wording of its net neutrality pledge almost immediately after last year’s FCC vote.

Net neutrality protections didn’t end today, and you can help make sure they never do. Congress can still stop the repeal from going into effect by using the Congressional Review Act (CRA) to overturn the FCC’s action. All it takes is a simple majority vote held within 60 legislative working days of the rule being published. The Senate is only one vote short of the 51 votes necessary to stop the rule change, but there is a lot more work to be done in the House of Representatives. See where your members of Congress stand and voice your support for the CRA here.

Take Action

Save the net neutrality rules

Categories: Privacy

Stupid Patent of the Month: Suggesting Reading Material

Deep Links - Mon, 04/23/2018 - 16:49

Online businesses—like businesses everywhere—are full of suggestions. If you order a burger, you might want fries with that. If you read Popular Science, you might like reading Popular Mechanics. Those kinds of suggestions are a very old part of commerce, and no one would seriously think of them as a patentable technology.

Except, apparently, for Red River Innovations LLC, a patent troll that believes its patents cover the idea of suggesting what people should read next. Red River filed a half-dozen lawsuits in East Texas throughout 2015 and 2016. Some of those lawsuits were against retailers like home improvement chain Menards, clothier Zumiez, and cookie retailer Ms. Fields. Those stores all got sued because they have search bars on their websites.

In some lawsuits, Red River claimed the use of a search bar infringed US Patent No. 7,958,138. For example, in a lawsuit against Zumiez, Red River claimed [PDF] that “after a request for electronic text through the search box located at www.zumiez.com, the Zumiez system automatically identifies and graphically presents additional reading material that is related to a concept within the requested electronic text, as described and claimed in the ’138 Patent.” In that case, the “reading material” is text like product listings for jackets or skateboard decks.

In another lawsuit, Red River asserted a related patent, US Patent No. 7,526,477, which is our winner this month. The ’477 patent describes a system of electronic text searching, where the user is presented with “related concepts” to the text they’re already reading. The examples shown in the patent display a kind of live index, shown to the right of a block of electronic text. In a lawsuit against Infolinks, Red River alleged [PDF] infringement because “after a request for electronic text, the InText system automatically identifies and graphically presents additional reading material that is related to a concept within the requested electronic text.”   

Suggesting and providing reading material isn’t an invention, but rather an abstract idea. The final paragraph of the ’477 patent’s specification makes it clear that the claimed method could be practiced on just about any computer. Under the Supreme Court’s decision in Alice v. CLS Bank, an abstract idea doesn’t become eligible for a patent merely because you suggest performing it with a computer. But hiring lawyers to make this argument is an expensive task, and it can be daunting to do so in a faraway locale, like the East Texas district where Red River has filed its lawsuits so far. That venue has historically attracted “patent troll” entities that see it as favorable to their cases.

The ’477 patent is another of the patents featured in Unified Patents’ prior art crowdsourcing project Patroll. If you know of any prior art for the ’477 patent, you can submit it (before April 30) to Unified Patents for a possible $2,000 prize.

The good news for anyone being targeted by Red River today is that it’s not going to be as easy to drag businesses from all over the country into a court of their choice. The Supreme Court’s TC Heartland decision, combined with a Federal Circuit case called In re Cray, mean that patent owners have to sue in a venue where defendants actually do business.

This case is also a good example of why fee-shifting in patent cases, and upholding the case law of the Alice decision, are so important. Small companies using basic web technologies shouldn’t have to go through a multi-million dollar jury trial to get a chance to prove that a patent like the ’477 patent is abstract and obvious.

Categories: Privacy

We’re in the Uncanny Valley of Targeted Advertising

Deep Links - Fri, 04/20/2018 - 14:22

Mark Zuckerberg, Facebook’s founder and CEO, thinks people want targeted advertising. The “overwhelming feedback,” he said multiple times during his congressional testimony, was that people want to see “good and relevant” ads. Why, then, are so many Facebook users, including lawmakers in the U.S. Senate and House, so fed up and creeped out by the uncannily on-the-nose ads? Targeted advertising on Facebook has gotten to the point that it’s so “good,” it’s bad—for users, who feel surveilled by the platform, and for Facebook, which is rapidly losing its users’ trust. But there’s a solution, which Facebook must prioritize: stop collecting data from users without their knowledge or explicit, affirmative consent.

Right now, most users don’t have a clear understanding of all the types of data that Facebook collects or how it’s analyzed and used for targeting (or for anything else). While the company has heaps of information about its users to comb through, if you as a user want to know why you’re being targeted for an ad, for example, you’re mostly out of luck. Sure, there’s a “why was I shown this” option on an individual ad, but each generally reveals only bland categories like “Over 18 and living in California”—and to get an even semi-accurate picture of all the ways you can be targeted, you’d have to click through various sections, one at a time, on your “Ad Preferences” page.

[Image: text from Facebook explaining why an ad has been shown to the user]

Even more opaque are categories of targeting called “Lookalike audiences.” Because Facebook has so many users—over 2 billion each month—it can automatically take a list of people supplied by advertisers, such as current customers or people who like a Facebook page, and then do behind-the-scenes magic to create a new audience of similar users to beam ads at.

Facebook does this by identifying “the common qualities” of the people in the uploaded list, such as their related demographic information or interests, and finding people who are similar to (or "look like") them, to create an all-new list. But those comparisons are made behind the curtain, so it’s impossible to know what data, specifically, Facebook is using to decide you look like another group of users. And to top it off: much of what’s being used for targeting generally isn’t information that users have explicitly shared—it’s information that’s been actively—and silently—taken from them.
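
To make that mechanism a bit more concrete, here is a minimal, hypothetical sketch of how a “lookalike” expansion could work in principle: represent each user as a vector of interest and demographic signals, average the advertiser’s seed list into a profile of “common qualities,” and rank everyone else by similarity to it. The feature names, users, and threshold below are invented for illustration; Facebook’s actual system is proprietary and vastly more complex.

```python
# Hypothetical illustration of "lookalike" audience expansion.
# Feature names, users, and the similarity threshold are invented;
# the real system is proprietary and far more sophisticated.
import math

FEATURES = ["age_18_24", "likes_outdoor_gear", "urban", "frequent_traveler"]

users = {
    "alice": [1, 1, 1, 0],
    "bob":   [1, 1, 0, 1],
    "carol": [0, 0, 1, 0],
    "dave":  [1, 1, 1, 1],
}

seed_audience = ["alice", "bob"]  # the list an advertiser uploads


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


# Average the seed users into a single "common qualities" profile.
profile = [
    sum(users[u][i] for u in seed_audience) / len(seed_audience)
    for i in range(len(FEATURES))
]

# Rank everyone outside the seed list by similarity to that profile.
candidates = sorted(
    ((cosine(users[u], profile), u) for u in users if u not in seed_audience),
    reverse=True,
)

for score, user in candidates:
    if score >= 0.6:  # arbitrary cut-off for inclusion in the new audience
        print(f"{user}: similarity {score:.2f} -> added to lookalike audience")
```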

[Image: telling the user that targeting data is provided by a third party like Acxiom doesn’t give any useful information about the data itself, instead bringing up more unanswerable questions about how data is collected]

Just as vague is targeting that uses data provided by third-party “data brokers.” In March, Facebook announced it would discontinue one aspect of this data sharing, called “partner categories,” in which data brokers like Acxiom and Experian combine their own massive datasets with Facebook’s to target users. These are the kinds of changes Facebook has touted as helping “improve people’s privacy”—but they won’t have a meaningful impact on our knowledge of how data is collected and used.

As a result, the ads we see on Facebook—and in other places online where behaviors are tracked to target users—creep us out. Whether they’re for shoes that we’ve been considering buying to replace ours, for restaurants we happened to visit once, or even for toys that our children have mentioned, the ads can indicate a knowledge of our private lives that the company has consistently failed to admit to having. Moreover, that knowledge is supplied by Facebook’s AI, which makes inferences about people—such as their political affiliation and race—that are clearly outside many users’ comfort zones. This AI-based ad targeting on Facebook is so obscured in its functioning that even Zuckerberg thinks it’s a problem. “Right now, a lot of our AI systems make decisions in ways that people don't really understand,” he told Congress during his testimony. “And I don't think that in 10 or 20 years, in the future that we all want to build, we want to end up with systems that people don't understand how they're making decisions.”

But we don’t have 10 or 20 years. We’ve entered an uncanny valley of opaque algorithms spinning up targeted ads that feel so personal and invasive that members of both the House and the Senate repeated the spreading myth that the company wiretaps its users’ phones. It’s understandable that users reach conclusions like this, given the creeped-out feelings they rightfully experience. The concern that you’re being surveilled persists, essentially, because you are being surveilled—just not via your microphone. Facebook seems to possess an almost human understanding of us. Like the unease and discomfort people sometimes experience interacting with a not-quite-human-like robot, being targeted highly accurately by machines based on private, behavioral information that we never actively gave out feels creepy, uncomfortable, and unsettling.

The trouble isn’t that personalization is itself creepy. When AI is effective, it can produce amazing results that feel personalized in a delightful way—but only when we have actively participated in teaching the system what we like and don’t like. AI-generated playlists, movie recommendations, and other algorithm-powered suggestions work to benefit users because the inputs are transparent and based on information we knowingly give those platforms, like songs and television shows we like. AI that feels accurate, transparent, and friendly can bring users out of the uncanny valley to a place where they no longer feel unsettled, but instead, assisted.

But apply a similar level of technological prowess to other parts of our heavily surveilled, AI-infused lives, and we arrive in a world where platforms like Facebook creepily, uncannily, show us advertisements for products we only vaguely remember considering purchasing or people we had only just met once or just thought about recently—all because the amount of data being hoovered up and churned through obscure algorithms is completely unknown to us.

Unlike the feeling that a friend put together a music playlist just for us, Facebook’s hyper-personalized advertising—and other AI that presents us with surprising, frighteningly accurate information specifically relevant to us—leaves us feeling surveilled, but not known. Instead of feeling wonder at how accurate the content is, we feel like we’ve been tricked.

To keep us out of the uncanny valley, advertisers and platforms like Facebook must stop compiling data about users without their knowledge or explicit consent. Zuckerberg multiple times told Congress that “an ad-supported service is the most aligned with [Facebook’s] mission of trying to help connect everyone in the world.” As long as Facebook’s business model is built around surveillance and offering access to users’ private data for targeting purposes to advertisers, it’s unlikely we’ll escape the discomfort we get when we’re targeted on the site. Steps such as being more transparent about what is collected, though helpful, aren’t enough. Even if users know what Facebook collects and how they use it, having no way of controlling data collection, and more importantly, no say in the collection in the first place, will still leave us stuck in the uncanny valley.

Even Facebook’s “helpful” features, such as reminding us of birthdays we had forgotten, showing pictures of relatives we’d just been thinking of (as one senator mentioned), or displaying upcoming event information we might be interested in, will continue to occasionally make us feel like someone is watching. We'll only be amazed (and not repulsed) by targeted advertising—and by features like this—if we feel we have a hand in shaping what is targeted at us. But it should never be the user’s responsibility to have to guess what’s happening behind the curtain.

While advertisers must be ethical in how they use tracking and targeting, a more structural change needs to occur. For the sake of the products, platforms, and applications of the present and future, developers must not only be more transparent about what they’re tracking, how they’re using those inputs, and how AI is making inferences about private data. They must also stop collecting data from users without their explicit consent. With transparency, users might be able to make their way out of the uncanny valley—but only to reach an uncanny plateau. Only through explicit affirmative consent—where users not only know but have a hand in deciding the inputs and the algorithms that are used to personalize content and ads—can we enjoy the “future that we all want to build,” as Zuckerberg put it.

Arthur C. Clarke famously said that “any sufficiently advanced technology is indistinguishable from magic”—and we should insist that the magic makes us feel wonder, not revulsion. Otherwise, we may end up stuck on the uncanny plateau, becoming increasingly distrustful of AI in general and, instead of enjoying its benefits, fearing its unsettling, not-quite-human understanding.

Categories: Privacy

Minnesota Supreme Court Ruling Will Help Shed Light on Police Use of Biometric Technology

Deep Links - Fri, 04/20/2018 - 12:43

A decision by the Minnesota Supreme Court on Wednesday will help the public learn more about how law enforcement uses privacy-invasive biometric technology.

The decision in Webster v. Hennepin County is mostly good news for the requester in the case, who sought the public records as part of a 2015 EFF and MuckRock campaign to track mobile biometric technology use by law enforcement across the country. EFF filed a brief in support of Tony Webster, arguing that the public needed to know more about how officials use these technologies.

Across the country, law enforcement agencies have been adopting technologies that allow cops to identify subjects by matching their distinguishing physical characteristics to giant repositories of biometric data. This could include images of faces, fingerprints, irises, or even tattoos. In many cases, police use mobile devices in the field to scan and identify people during stops. However, police may also use this technology when a subject isn’t present, such as grabbing images from social media, CCTV, or even lifting biological traces from seats or drinking glasses.

Webster’s request to Hennepin County officials sought a variety of records, and included a request for the agencies to search officials’ email messages for keywords related to biometric technology, such as “face recognition” and “iris scan.”

Officials largely ignored the request and when Webster brought a legal challenge, they claimed that searching their email for keywords would be burdensome and that the request was improper under the state’s public records law, the Minnesota Government Data Practices Act.

Webster initially prevailed before an administrative law judge, who ruled that the agencies had failed to comply with the Data Practices Act in several respects. The judge also ruled that requesting a keyword search of email records was proper under the law and was not burdensome.

County officials appealed that decision to a state appellate court. That court agreed that Webster’s request was proper and not burdensome. But it disagreed that the agencies had violated the Data Practices Act by not responding to Webster’s request or that they had failed to set up their records so that they could be easily searched in response to records requests.

Webster appealed to the Minnesota Supreme Court, which on Wednesday agreed with him that the agencies had failed to comply with the Data Practices Act by not responding to his request. The court, however, agreed with the lower appellate court that county officials did not violate the law in how they had configured their email service or arranged their records systems.

In a missed opportunity, however, the court declined to rule on whether searching for emails by keywords was appropriate under the Data Practices Act and not burdensome. The court claimed that it didn’t have the ability to review that issue because Webster had prevailed in the lower court and county officials failed to properly raise the issue.

Although this means that the lower appellate court’s decision affirming that email keyword searches are proper and not burdensome still stands, it would have been nice if the state’s highest court weighed in on the issue.

EFF is nonetheless pleased with the court’s decision as it means Webster can finally access records that document county law enforcement’s use of biometric technology. We would like to thank attorneys Timothy Griffin and Thomas Burman of Stinson Leonard Street LLP for drafting the brief and serving as local counsel.

For more on biometric identification, such as face recognition, check out EFF’s Street-Level Surveillance project.

Categories: Privacy

Dear Canada: Accessing Publicly Available Information on the Internet Is Not a Crime

Deep Links - Thu, 04/19/2018 - 23:00

Canadian authorities should drop charges against a 19-year-old Canadian accused of “unauthorized use of a computer service” for downloading thousands of public records hosted and available to all on a government website. The whole episode is an embarrassing overreach that chills the right of access to public records and threatens important security research.

At the heart of the incident, as reported by CBC News this week, is the Nova Scotian government’s embarrassment over its own failure to protect the sensitive data of 250 people who used the province’s Freedom of Information Act (FOIA) process to request their own government files. These documents were hosted on the same government web server that hosted public records containing no personal information. Every response hosted on the server had a very similar URL, differing only in a single document ID number at the end. The teenager took a known ID number and then, by modifying the URL, retrieved and stored all of the FOIA documents available on the Nova Scotia FOIA website.
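
To illustrate just how mundane the alleged conduct is, here is a minimal, hypothetical sketch of that kind of URL enumeration in Python. The host, path, and ID range are invented placeholders; the point is that the script simply requests publicly reachable URLs one after another, with no password or access control to circumvent.

```python
# Hypothetical sketch of downloading documents whose public URLs differ
# only by a sequential ID. The host, path, and ID range are placeholders;
# no authentication is bypassed because none was in place.
import urllib.error
import urllib.request

BASE_URL = "https://foi.example.ca/documents/{doc_id}"  # placeholder URL pattern

for doc_id in range(1, 7001):
    url = BASE_URL.format(doc_id=doc_id)
    try:
        with urllib.request.urlopen(url) as response:
            data = response.read()
    except urllib.error.URLError as err:
        # IDs with no published document simply fail; skip them.
        print(f"skipped {url}: {err}")
        continue
    with open(f"foi_{doc_id}.pdf", "wb") as f:
        f.write(data)
    print(f"saved {url}")
```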

Beyond the absurdity of charging someone with downloading public records that were available to anyone with an Internet connection, if anyone is to blame for this mess, it’s Nova Scotia officials. They insecurely set up their public records server in a way that permitted public access to others’ private information. Officials should accept responsibility for failing to secure such sensitive data rather than ginning up a prosecution. The fact that the government was publishing documents that contained sensitive data on a public website without any passwords or access controls demonstrates its own failure to protect the private information of individuals. Moreover, it does not appear that the site deployed even minimal technical safeguards to exclude widely known indexing tools such as Google search and the Internet Archive from archiving the records published on the site, as both appear to have cached some of the documents.

The lack of any technical safeguards shielding the Freedom of Information responses from public access would make it difficult for anyone to know that they were downloading material containing private information, much less provide any indication that such activity was “without authorization” under the criminal statute. According to the report, more than 95% of the 7,000 Freedom of Information responses in question included redactions for any information properly excluded from disclosure under Nova Scotia’s FOI law. Freedom of Information laws are about furthering public transparency, and information released through the FOI process is typically considered to be public to everyone.

But beyond the details of this case, automating access to publicly available freedom of information requests is not conduct that should be criminalized: Canadian law criminalizes unauthorized use of  computer systems, but these provisions are only intended to be applied when the use of the service is both unauthorized and carried out with fraudulent intent. Neither element should be stretched to meet the specifics in this case. The teenager in question believed he was carrying out a research and archiving role, preserving the results of freedom of information requests. And given the setup of the site, he likely wasn’t aware that a few of the documents contained personal information. If true, he would not have had any fraudulent intent.

“The prosecution of this individual highlights a serious problem with Canada’s unauthorized intrusion regime,”  Tamir Israel, Staff Lawyer at CIPPIC, told us. “Even if he is ultimately found innocent, the fact that these provisions are sufficiently ambiguous to lay charges can have a serious chilling effect on innovation, free expression and legitimate security research.”

The deeper problem with this case is that it highlights how concerns about computer crime can lead to absurd prosecutions. The law the Canadian police are using to prosecute the teen was implemented after Canada signed the Budapest Cybercrime Convention. The convention’s original intent was to punish those who break into protected computers to steal data or cause damage.

Criminalizing access to publicly available data over the Internet twists the Cybercrime Convention’s purpose. Laws that offer the possibility of imposing criminal liability on someone simply for engaging with freely available information on the web pose a continuing threat to the openness and innovation of the Internet. They also threaten legitimate security research. As technology law professor Orin Kerr describes it, publicly posting information on the web and then telling someone they are not authorized to access it is “like publishing a newspaper but then forbidding someone to read it.”

Canada should follow the lead of the United States federal court’s decision in Sandvig v. Sessions, which made clear that using automated tools to access freely available information is not a computer crime. As the court wrote:

“Scraping is merely a technological advance that makes information collection easier; it is not meaningfully different from using a tape recorder instead of taking written notes, or using the panorama function on a smartphone instead of taking a series of photos from different positions.”

The same is true in the case of the Canadian teen.

We've long defended the use of “automated scraping,” which is the process of using web crawlers or bots — applications that run automated tasks over the Internet—to extract content and data from a website. Scraping provides a wide range of valuable tools and services that Internet users, programmers, journalists, and researchers around the world rely on every day to the benefit of the broader public.

The value of automated scraping goes well beyond curious teenagers seeking access to freedom of information requests. The Internet Archive has long been scraping public portions of the world wide web and preserving them for future researchers. News aggregation tools, including Google’s Crisis Map, which aggregated critical information about California’s October 2016 wildfires, involve scraping. ProPublica journalists used automated scrapers to investigate Amazon’s algorithm for ranking products by price and uncovered that Amazon’s pricing algorithm was hiding the best deals from many of its customers. The researchers who studied racial discrimination on Airbnb also used bots, and found that guests with distinctively African American names were 16 percent less likely to be accepted than identical guests with distinctively white names.

Charging the Canadian teen with a computer crime for what amounts to his scraping publicly available online content has severe consequences for him and the broader public. As a result of the charges against him, the teen is banned from using the Internet and is concerned he may not be able to complete his education.

More broadly, the prosecution is a significant deterrent to anyone who wanted to use common tools such as scraping to collect public government records from websites, as the government’s own failure to adequately protect private information can now be leveraged into criminal charges against journalists, activists, or anyone else seeking to access public records.

Even if the teen is ultimately vindicated in court, this incident calls for a re-examination of Canada’s unauthorized intrusion regime and law enforcement’s use of it. The law was not intended for cases like this, and should never have been raised against an innocent Internet user.

Categories: Privacy

A Little Help for Our Friends

Deep Links - Thu, 04/19/2018 - 21:01

In periods like this one, when governments seem to ignore the will of the people as easily as companies violate their users’ trust, it’s important to draw strength from your friends. EFF is glad to have allies in the online freedom movement like the Internet Archive. Right now, donations to the Archive will be matched automatically by the Pineapple Fund.

Founded 21 years ago by Brewster Kahle, the Internet Archive has a mission to provide free and universal access to knowledge through its vast digital library. Its work has helped capture the massive—yet now too often ephemeral—proliferation of human creativity and knowledge online. Popular tools like the Wayback Machine have allowed people to do things like view deleted and altered webpages and recover public statements to hold officials accountable.
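
For instance, the Wayback Machine exposes a public "availability" endpoint that returns the closest archived snapshot of a page. A minimal query might look like the sketch below; the endpoint and response fields follow the Internet Archive's public documentation, but treat the details as illustrative rather than authoritative.

    # Sketch: ask the Wayback Machine for the closest archived snapshot of a
    # page. Endpoint and response layout per the Internet Archive's public
    # documentation at the time of writing.
    import json
    import urllib.parse
    import urllib.request

    def closest_snapshot(page_url, timestamp=""):
        query = urllib.parse.urlencode({"url": page_url, "timestamp": timestamp})
        api_url = f"https://archive.org/wayback/available?{query}"
        with urllib.request.urlopen(api_url, timeout=30) as resp:
            payload = json.load(resp)
        closest = payload.get("archived_snapshots", {}).get("closest")
        return closest["url"] if closest and closest.get("available") else None

    # print(closest_snapshot("https://www.eff.org/", "20180101"))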

EFF and the Internet Archive have stood together in a number of digital civil liberties cases. We fought back when the Archive became the recipient of a National Security Letter, a tool often used by the FBI to force Internet providers and telecommunications companies to turn over the names, addresses, and other records about their customers, and frequently accompanied by a gag order. EFF and the Archive have worked together to fight threats to free expression, online innovation, and the free flow of information on the Internet on numerous occasions. We have even collaborated on community gatherings like EFF’s own Pwning Tomorrow speculative fiction launch and the recent Barlow Symposium exploring EFF co-founder John Perry Barlow’s philosophy of the Internet.

EFF co-founder John Perry Barlow with the Internet Archive’s Brewster Kahle.

This month, the Bitcoin philanthropist behind the Pineapple Fund is challenging the world to support the Internet Archive and the movement for online freedom. The Pineapple Fund will match up to $1 million in donations to the Archive through April 30. (EFF was also the grateful recipient of a $1 million Pineapple Fund grant in January of this year.) If you would like to support the future of libraries and preserve online knowledge for generations to come, consider giving to the Internet Archive today. We salute the Internet Archive for supporting privacy, free expression, and the open web.

Categories: Privacy

Patent Office Throws Out GEMSA’s Stupid Patent on a GUI For Storage

Deep Links - Thu, 04/19/2018 - 18:14

The Patent Trial and Appeal Board has issued a ruling [PDF] invalidating claims from US Patent No. 6,690,400, which had been the subject of the June 2016 entry in our Stupid Patent of the Month blog series. The patent owner, Global Equity Management (SA) Pty Ltd. (GEMSA), responded to that post by suing EFF in Australia. Eventually, a U.S. court ruled that EFF’s speech was protected by the First Amendment. Now the Patent Office has found key claims from the ’400 patent invalid.

The ’400 patent described its “invention” as “a Graphic User Interface (GUI) that enables a user to virtualize the system and to define secondary storage physical devices through the graphical depiction of cabinets.” In other words, virtual storage cabinets on a computer. eBay, Alibaba, and Booking.com filed a petition for inter partes review arguing that claims from the ’400 patent were obvious in light of the Partition Magic 3.0 User Guide (1997) from PowerQuest Corporation. Three administrative patent judges from the Patent Trial and Appeal Board (PTAB) agreed.

The PTAB opinion notes that Partition Magic’s user guide teaches each part of the patent’s Claim 1, including the portrayal of a “cabinet selection button bar,” a “secondary storage partitions window,” and a “cabinet visible partition window.” This may be better understood through diagrams from the opinion. The first diagram below reproduces a figure from the patent labeled with claim elements. The second is a figure from Partition Magic, labeled with the same claim elements.

GEMSA argued that the ’400 patent was non-obvious because the first owner of the patent, a company called Flash Vos, Inc., “moved the computer industry a quantum leap forward in the late 90’s when it invented Systems Virtualization.” But the PTAB found that “Patent Owner’s argument fails because [it] has put forth no evidence that Flash Vos or GEMSA actually had any commercial success.”

The constitutionality of inter partes review is being challenged in the Supreme Court in the Oil States case. (EFF filed an amicus brief in that case in support of the process.) A decision is expected in Oil States before the end of June. The successful challenge to GEMSA’s patent shows the importance of inter partes review. GEMSA had sued dozens of companies alleging infringement of the ’400 patent. GEMSA can still appeal the PTAB’s ruling. If the ruling stands, however, it should end those suits as to this patent.

Related Cases: EFF v. Global Equity Management (SA) Pty Ltd
Categories: Privacy

New York Judge Makes the Wrong Call on Stingray Secrecy

Deep Links - Thu, 04/19/2018 - 15:15

A New York judge has ruled that the public and the judiciary shouldn’t second-guess the police when it comes to secret snooping on the public with intrusive surveillance technologies.

He couldn’t be more wrong. 

A core part of EFF’s mission is questioning the decisions of our law enforcement and intelligence agencies over digital surveillance. We’ve seen too many cases where police have abused databases, hidden the use of invasive technologies, targeted people exercising their First Amendment rights, disparately burdened immigrants and people of color, and captured massive amounts of unnecessary information on innocent people. 

We’re outraged about New York Judge Shlomo Hager’s recent ruling against the New York Civil Liberties Union in a public records case. The judge upheld the New York Police Department’s decision to withhold records about its purchases of cell-site simulator equipment (colloquially known as Stingrays), including the names of surveillance products and how much they cost taxpayers. 

As the judge said in the hearing [PDF]: 

The case law is clear … "It is bad law and bad policy to second-guess the predictive judgments made by the government’s intelligence agencies" … Therefore, this Court will defer to Detective Werner, as well as to Inspector Gregory Antonsen’s expertise, that disclosure of the names of the StingRay devices, as well as the prices, would pose a substantial threat and would reveal the nonroutine information to bad actors that would use it to evade detection.

We wholeheartedly disagree. Holding police accountable and shining light on the criminal justice system is absolutely good law, good policy, and good for community relations. Questioning authority is one of the most important ways to defend democracy.

Up until a few years ago, a lot of law enforcement agencies around the country went to extreme lengths to hide the existence of cell-site simulators. These devices mimic cell towers in order to connect to people’s phones. Police would reject public records requests about this technology, while prosecutors would sometimes drop cases rather than let information come to light. One of the main vendors, Harris Corp., even had agencies sign non-disclosure agreements.

Transparency advocates sued and the technology’s capabilities began to surface. Police departments were using the technology to track phones without a warrant. They were sucking up data on thousands of innocent phone owners with each use. They were surveilling protesters. The technology reportedly interferes with cellphone coverage, which disparately impacts people of color because police much more frequently deploy cell-site simulators in their neighborhoods. 

In California, legislators were so outraged by the secrecy that they passed a law requiring any agency using a cell-site simulator to publish a privacy and usage policy online and hold public meetings before acquiring the technology. California also passed a law requiring a warrant before police can use a cell-site simulator as well as mandating annual public disclosures about these warrants. 

What’s good enough for California should be good for New York. Transparency in New York City about high-tech spying is especially important, given the NYPD’s track record of civil liberties violations—including illegal surveillance of Muslims and the practice of “testilying.” 

The argument that transparency is going to put more information in the hands of criminals is a weak diversion. By that logic, nothing law enforcement does should be open to public scrutiny, and we should resign ourselves to an Orwellian America monitored only by secret police. That argument failed to hold water in California. In the years since California legislators mandated greater transparency about acquisition and use of cell-site simulators, there is no evidence that these laws contributed to any crime. In recent years, many other agencies have handed over documents about cell-site simulators with little objection. 

The judge’s misguided ruling is a reminder that we must seek transparency through all available means. That’s why we support efforts in the New York City Council to pass the Public Oversight of Surveillance Technology (POST) Act. This measure would require the NYPD to publish a use policy for each electronic surveillance technology it has or seeks to use in the future. We’re also supporting a variety of measures across the country that would require even stronger oversight of spy tech, including a public process before equipment is acquired. Already, Santa Clara County, Davis, and Berkeley in California have passed such ordinances.

The time for secrecy over cell-site simulators has passed. The Stingray is out of the bag, and we’re going to keep fighting to make sure it remains in the open.

Learn more about cell-site simulators at EFF’s Street-Level Surveillance project.

Categories: Privacy

Hearing Monday in Groundbreaking Lawsuit Over Border Searches of Laptops and Smartphones

Deep Links - Thu, 04/19/2018 - 12:46
EFF and ACLU Fight Government’s Move to Dismiss Case

Boston – The Electronic Frontier Foundation (EFF) and the American Civil Liberties Union (ACLU) will appear in federal court in Boston Monday, fighting the U.S. government’s attempts to block their lawsuit over illegal laptop and smartphone searches at the country’s borders.

The case, Alasaad v. Nielsen, was filed last fall on behalf of 10 U.S. citizens and one lawful permanent resident who had their digital devices searched without a warrant. The lawsuit challenges the government’s fast-growing practice of searching travelers’ electronics at airports and other border crossings—often confiscating the items for weeks or months at a time—without any individualized suspicion that a traveler has done anything wrong.

The government has moved to dismiss this case.  In court on Monday, EFF Senior Staff Attorney Adam Schwartz will argue that the plaintiffs have legal standing to challenge these illegal searches, and ACLU Staff Attorney Esha Bhandari will argue that the searches are unconstitutional, violating the First and Fourth Amendments.

What:
Hearing in Alasaad v. Nielsen

When:
Monday, April 23
3 pm

Where:
Courtroom 11 (Judge Casper)
Moakley U.S. Courthouse
1 Courthouse Way
Boston, Massachusetts

For more information on this case:
https://www.eff.org/cases/alasaad-v-duke
https://www.aclu.org/cases/alasaad-v-neilsen-challenge-warrantless-phone-and-laptop-searches-us-border

Contact: Adam Schwartz, Senior Staff Attorney, adam@eff.org
Categories: Privacy

Privacy as an Afterthought: ICANN's Response to the GDPR

Deep Links - Wed, 04/18/2018 - 14:44

Almost three years ago, the global domain name authority ICANN chartered a working group to consider how to build a replacement for the WHOIS database, a publicly-accessible record of registered domain names. Because it includes the personal information of millions of domain name registrants with no built-in protections for their privacy, the legacy WHOIS system exposes registrants to the risk that their information will be misused by spammers, identity thieves, doxxers, and censors.
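
WHOIS itself is a very simple, very public protocol (RFC 3912): a client opens a TCP connection to port 43 on a registry's WHOIS server, sends a domain name, and receives the full plain-text record, personal details included. The sketch below shows a minimal lookup; the server named handles .com registrations, and other registries run their own servers.

    # A minimal WHOIS lookup over the raw protocol (RFC 3912): open a TCP
    # connection to port 43, send the domain name, and read back the
    # plain-text record. The server shown handles .com registrations; other
    # registries run their own WHOIS servers.
    import socket

    def whois_query(domain, server="whois.verisign-grs.com"):
        with socket.create_connection((server, 43), timeout=30) as sock:
            sock.sendall(domain.encode("ascii") + b"\r\n")
            chunks = []
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                chunks.append(chunk)
        return b"".join(chunks).decode("utf-8", errors="replace")

    # print(whois_query("example.com"))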

But at the same time, the public availability of the information contained in the WHOIS database has come to be taken for granted, not only by its regular users, but by a secondary industry that repackages and sells access to its data, providing services like bulk searches and reverse lookups for clients as diverse as marketers, anti-abuse experts, trademark attorneys, and law enforcement authorities.

The working group tasked with replacing this outdated system, formally known as the Next Generation gTLD RDS to Replace WHOIS PDP Working Group, did not get far. Despite holding 90-minute weekly working meetings for more than two years, deep divisions within the group have resulted in glacial progress, even as the urgency of its work has increased. A key privacy advocate within that Working Group, EFF Pioneer Award winner Stephanie Perrin, ended up resigning from the group in frustration this March, saying "I believe this process is fundamentally flawed and does not reflect well on the multi-stakeholder model."

With Europe's General Data Protection Regulation (GDPR) set to take effect on May 25, making the continued operation of the existing WHOIS system illegal under European law, ICANN's board has been forced to step in. On April 3, members of the Working Group were informed that it had been "decided to suspend WG meetings until further notice while we await guidance from the Board regarding how this WG will be affected by the GDPR compliance efforts."

ICANN Board Cookbook

With this, the Board has floated its own interim solution aimed at bringing the legacy WHOIS system into compliance with the GDPR. The ingredients of this so-called "Cookbook" proposal [PDF] are drawn from responses to a call for public submissions, to which EFF contributed [PDF]. In short, it would make the following changes to the WHOIS regime:

  • Although full contact information of domain name registrants will still be collected, most of this information will become hidden from public view, unless the registrant affirmatively "opts in" to displaying that information publicly. A tiered access model will be put in place to ensure that only parties who have a "legitimate interest" in obtaining access to a registrant's address, phone number, or email address, will be able to do so.
  • Although email addresses will not be displayed in the public WHOIS data record, they will be replaced by a contact form or anonymized email address, which would still allow members of the public to make contact with a domain owner if they need to. (This idea is one of those that EFF had suggested in our submission, with the additional suggestion that the contact form be protected by a CAPTCHA to minimize the potential for misuse.) A rough sketch of how this redaction and anonymization could work follows this list.
  • No attempt is made to differentiate between domains registered to individuals and those registered to companies. This makes sense, because many company domain records do include personal contact information for individuals who act as the administrative or technical contacts for the domain. In practical terms, it would be impossible to weed out the entries that do contain such personal information from those that don't.
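
To illustrate roughly what the first two points could mean in practice, here is a minimal sketch of a tiered-access view of a registration record: the public view redacts personal fields and substitutes an anonymized forwarding address, while accredited requesters with a "legitimate interest" receive the full record. The field names and anonymization scheme are illustrative only, not part of ICANN's proposal.

    # A rough sketch of the tiered-access model described above: the same data
    # is collected, but the public view redacts personal fields and swaps the
    # registrant email for an anonymized forwarding address. Field names and
    # the anonymization scheme are illustrative, not ICANN's.
    import hashlib

    PERSONAL_FIELDS = {"registrant_name", "registrant_address", "registrant_phone"}

    def public_view(record, forwarding_domain="anonymized.example"):
        redacted = dict(record)
        for field in PERSONAL_FIELDS:
            if field in redacted:
                redacted[field] = "REDACTED FOR PRIVACY"
        if "registrant_email" in redacted:
            token = hashlib.sha256(redacted["registrant_email"].encode()).hexdigest()[:12]
            redacted["registrant_email"] = f"{token}@{forwarding_domain}"
        return redacted

    def accredited_view(record, has_legitimate_interest):
        # Tiered access: only requesters with an accepted "legitimate interest"
        # receive the unredacted record.
        return dict(record) if has_legitimate_interest else public_view(record)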

The board proposal is an improvement on the status quo, but it doesn't go as far in protecting privacy as we would like. For example, it leaves it up to individual registrars whether to apply these privacy protections to all domain owners worldwide, or to attempt to limit them to those within the European Economic Area. It also contains an overly expansive list of suggested acceptable purposes for the collection and processing of WHOIS personal data, including "to address issues involving domain name registrations, including but not limited to: consumer protection, investigation of cybercrime, DNS abuse, and intellectual property protection."

The ICANN board's Cookbook proposal was submitted to the European Data Protection Authorities, who come together in a group called the Article 29 Working Party, for consideration at its next meeting which took place on April 10-11. The board had hoped to receive [PDF] the group's agreement to a moratorium of enforcement of the GDPR over WHOIS until ICANN is able to get its act together and establish its interim accreditation program. But the Working Party's reply of April 11 [PDF] offers no such moratorium, and instead affirms that the purposes for data collection listed by the board are too broad and will require further work if they are to comply with the GDPR.

Another fundamental limitation of the Cookbook proposal is that while it sets up the idea that there should be an accreditation program for "legitimate" users, it leaves unanswered key questions about how that accreditation program should operate in practice, and in particular how it would assess the legitimacy of claimants seeking access to user data. Since there is not enough time to develop an accreditation system before May 25, the board floats the option of an interim self-accreditation process, which somewhat undermines the purposes of limiting access. The other option is that, by default, access to WHOIS data would "go dark" for all users, until a suitable accreditation system was in place.

Business and IP Constituencies Accreditation and Access Model

This prospect has disturbed stakeholders accustomed to receiving free access to registrant data; one goes so far as to describe the Cookbook proposal as "the most serious threat to the open and public Internet for decades." ICANN's Business and Intellectual Property constituencies have responded by proposing an accreditation and access model [PDF] aimed at keeping the WHOIS door open for three loosely-defined categories of actors: cybersecurity and opsec investigators, intellectual property rights holders and their agents, and law enforcement and other government agencies. It attempts to fill in the gaps of the Board's proposal by suggesting how these users might be accredited.

The biggest problem with the Business and IP constituencies' proposal is that the bar for accreditation to access full registrant data would be set so low that it would become essentially meaningless, while still managing to exclude the wider public and keep them in the dark about who might be viewing their personal data. For example, it could allow anyone who has registered a trademark to enjoy carte blanche access to the entire WHOIS database. In a token effort to prevent misuse of WHOIS access there would be random audits, but penalties for misuse might be limited to de-accreditation.

The proposal would structurally elevate the financial interests of intellectual property owners above the privacy and access rights of ordinary users. While the GDPR does allow data sharing that is necessary for the purposes of legitimate interests of third parties, these interests must be balanced with and can be overridden by the interests, rights or freedoms of the domain name registrant. This proposed accreditation and access model doesn't even attempt to strike such a balance.

Although EFF would have preferred a model requiring a court order or warrant for access to such personal information, it seems inevitable that tiered access will be based on some kind of ICANN-administered accreditation system. Community discussions on what that accreditation program should look like continue on a new ICANN discussion list, using the Business and IP constituencies' proposal as a starting point. But this is work that should have been finished long ago. The commencement date of the GDPR has been known since the rule was adopted on April 27, 2016. Although its edges will be difficult for ICANN to navigate, its basic outlines are not rocket science; it has been obvious for over two years that more would need to be done to secure the personal information of domain name registrants.

Unfortunately, ICANN's version of a multi-stakeholder process has broken down over this contentious issue of registrant data privacy. It therefore falls to ICANN's board to make the interim changes necessary to ensure that the WHOIS system is brought into compliance with European Union law. While this interim model may be replaced by a community-based access model in the future, institutional inertia is likely to see to it that the Board's "interim" policy constrains the outlines of that future model. This makes it all the more important that the ICANN Board listens to all segments of its community, and to the advice of the Article 29 Working Party, in order to ensure that the solutions developed strike an appropriate balance between stakeholders' competing interests, and that the human rights of users are put first.

Categories: Privacy

Congressmembers Raise Doubts About the “Going Dark” Problem

Deep Links - Tue, 04/17/2018 - 19:58

In the wake of a damning report by the DOJ Office of Inspector General (OIG), Congress is asking questions about the FBI’s handling of the locked iPhone in the San Bernardino case and its repeated claims that widespread encryption is leading to a “Going Dark” problem. For years, DOJ and FBI officials have claimed that encryption is thwarting law enforcement and intelligence operations, pointing to large numbers of encrypted phones that the government allegedly cannot access as part of its investigations. In the San Bernardino case specifically, the FBI maintained that only Apple could assist with unlocking the shooter’s phone.

But the OIG report revealed that the Bureau had other resources at its disposal, and on Friday members of the House Judiciary Committee sent a letter to FBI Director Christopher Wray that included several questions to put the FBI’s talking points to the test. Not mincing words, committee members write that they have “concerns that the FBI has not been forthcoming about the extent of the ‘Going Dark’ problem.”

In court filings, testimony to Congress, and in public comments by then-FBI Director James Comey and others, the agency claimed that it had no possible way of accessing the San Bernardino shooter’s iPhone. But the letter, signed by 10 representatives from both parties, notes that the OIG report  “undermines statements that the FBI made during the litigation and consistently since then, that only the device manufacturer could provide a solution.” The letter also echoes EFF’s concerns that the FBI saw the litigation as a test case: “Perhaps most disturbingly, statements made by the Chief of the Cryptographic and Electronic Analysis Unit appear to indicate that the FBI was more interested in forcing Apple to comply than getting into the device.”

Now, more than two years after the Apple case, the FBI continues to make similar arguments. Wray recently claimed that the FBI confronted 7,800 phones it could not unlock in 2017 alone. But as the committee letter points out, in light of recent reports about “the availability of unlocking tools developed by third-parties and the OIG report’s findings that the Bureau was uninterested in seeking available third-party options, these statistics appear highly questionable.” For example, a recent Motherboard investigation revealed that law enforcement agencies across the United States have purchased—or have at least shown interest in purchasing—devices developed by a company called Grayshift. The Atlanta-based company sells a device called GrayKey, a roughly 4x4 inch box that has allegedly been used to successfully crack several iPhone models, including the most recent iPhone X.

The letter ends by posing several questions to Wray designed to probe the FBI’s Going Dark talking points—in particular whether it has actually consulted with outside vendors to unlock encrypted phones it says are thwarting its investigations and whether third-party solutions are in any way insufficient for the task.

EFF welcomes this line of questioning from House Judiciary Committee members and we hope members will continue to put pressure on the FBI to back up its rhetoric about encryption with actual facts.

Related Cases: Apple Challenges FBI: All Writs Act Order (CA)
Categories: Privacy