Category Archives: Internet

New Report Shows European Data Protection Authorities are Taking Facebook’s Questionable Terms of Service Seriously

by News on February 26, 2015, no comments


Grumblings about changes in Facebook’s layout and policies are standard practice for everyone familiar with the social media giant. But some European governments are taking Facebook’s practices more seriously. This week, interdisciplinary scholars and researchers in Belgium issued a draft report entitled “From social media service to advertising network: A critical analysis of Facebook’s Revised Policies and Terms.” The report is provisional, and “will be updated after further research, deliberation and commentary.”

The report was based on an “extensive analysis of Facebook’s revised policies and terms,” conducted “at the request of the Belgian Privacy Commission.” The Commission is part of a task force of European Union (EU) data protection authorities created specifically to address Facebook’s shifting policies, which also includes Germany and the Netherlands.

This thorough analysis is useful both because it provides an in-depth explanation of items of note in the newly revised 2015 terms and because it explains how the terms fit in with European law. To be fair, it’s not all new: the report reiterates some long-standing concerns that have not been affected by recent changes. The report also notes that Facebook has improved the degree of clarity around how it uses data, though rather large holes remain.

Overall, the report found that many of Facebook’s policies and practices with regards to data are of questionable legality under European Union law.1 This is of increasing concern because:

Facebook’s data processing capabilities have increased both horizontally and vertically. By horizontal we refer to the increase of data gathered from different sources. Vertical refers to the deeper and more detailed view Facebook has on its users.

In particular, this expansion has happened because Facebook has acquired new companies like Instagram and Whatsapp, and because more and more websites use Facebook plug-ins and other services. The report also noted that much of how Facebook uses data is simply opaque.

Privacy Settings

Although Facebook’s privacy settings haven’t changed, the report notes that:

users are able to choose from several granular settings which regulate access by other individuals, but cannot exercise meaningful control over the use of their personal information by Facebook or third parties. This gives users a false sense of control.

That false sense of control is key, since the report emphasizes the many ways in which users cannot actually limit use of their data. What’s more, Facebook’s default settings for “behavioural profiling and advertising” do not constitute legally valid consent because “consent cannot be inferred from the data subject’s inaction,” and this concept of explicit consent, taken from applicable EU law, recurs throughout the report.

Data use

To be legally valid under European Union law, consent to processing and use of user generated data must be “freely given”, “specific”, “informed” and “unambiguous.” The report stresses, “it is highly questionable whether Facebook’s current approach satisfies these requirements.”

Facebook’s practices regarding how it combines data from a variety of sources and shares data with other parties are also of questionable legality, according to the report. For example, the report describes a use case in which Facebook combines its own data with data from third-party data brokers. The report notes “Facebook only offers an opt-out system for its users in relation to profiling for third-party advertising purposes,” which in the authors’ view is insufficient to meet legal requirements.

Facebook’s use of user-generated content, such as photos, is also problematic. Facebook’s terms grant “a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license” to Facebook to use such content. The report notes that this may contradict EU and Belgian law, and has been held “invalid and therefore not enforceable under German Law.” Similarly, “[i]ndividuals have the right to control use of their image,” but the lack of clarity in Facebook’s terms and settings makes this hard to do. That’s why the report recommends that users be specifically required to opt in to the use of their images in ads.

Unfair Contract Terms

In addition to the concerns noted above with how Facebook utilizes user data, the report indicates that some portions of Facebook’s terms may violate European consumer protection law, in particular the Unfair Contract Terms Directive (UCTD).

One stands out: Facebook’s right to stop providing access to its service without warning. Although the terms indicate that Facebook will notify users by email or the next time they try to log in, under the UCTD, “terms that enable ‘the seller or supplier to terminate a contract of indeterminate duration without reasonable notice except where there are serious grounds for doing so’ may be unfair.”

As we’ve noted before, Facebook has terminated or suspended many accounts under its names policy. One of the things that users find especially frustrating is the experience of attempting to log in and not being able to access content they may have spent years amassing—all because they weren’t given a warning. Under European law, Facebook’s method of dealing with name violations may not be simply unfair. It may actually be illegal.

In addition to concerns about termination, the report identifies several other problematic terms. It points out that Facebook’s terms require disputes to be settled in California, under California law, even though the company has offices in three EU member states. This is likely unlawful under European Parliament regulations. Also, under the UCTD, the terms that limit Facebook’s liability to $100, disclaim any warranty for content and software, and reserve the right to unilaterally change the terms themselves are all likely unlawful. Lastly, the clause that “obliges users to indemnify Facebook for any expenses incurred, including legal fees, as a result of a violation of the terms of service” is unlawful in some EU countries.

Tracking and location data

Finally, the report notes that Facebook has increased the ways in which it collects data from users beyond cookies, and collects locational data from a wide variety of sources.

Although Facebook is more explicit in the 2015 terms about gathering locational data, it remains “vague and broad” in its description of what it will do with that data. And that’s a big gap. Users can only turn access to location data like GPS and WiFi on or off wholesale in the mobile app; they can’t share location data for some purposes but not others. What’s more, Facebook may collect location data not only through explicit means like GPS, but also through other means like the location data in a photograph—and there are no settings that address this. The report recommends offering “granular in-app settings for sharing of location data, with all parameters turned off by default,” and minimizing collection of location data in the first place.

When it comes to tracking, Facebook tracks users through several means, including social plug-ins, fingerprinting, and mobile apps. Social plug-ins are things like Facebook’s “like” button on a news organization’s page. While outside websites can limit the degree of tracking done by plug-ins, the report concludes that Facebook’s current scheme doesn’t provide for legal consent, and that “Facebook should design its social plug-ins in way which are privacy-friendly by default.”

Other forms of tracking are also of questionable legality. Facebook’s practice of fingerprinting (using information like operating system and browser settings to create a “fingerprint” of a device) requires collection and use of device information that is likely not legal under article 5(3) of the e-Privacy Directive. And because tracking through apps can only be controlled by opting out, like other areas where this is the only option, the report concludes that Facebook’s terms don’t “provide for legally valid consent” in this area, either.
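To make the idea of fingerprinting concrete, here is a minimal, illustrative sketch of how a handful of browser and device attributes can be hashed into a stable identifier that survives cookie clearing. This is not Facebook’s actual method, and the attribute names are hypothetical examples.

    import hashlib

    def device_fingerprint(attributes):
        """Hash a sorted set of browser/device attributes into one short identifier."""
        canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

    # Two visits that present the same attributes produce the same identifier,
    # even if the user clears cookies or blocks them entirely.
    print(device_fingerprint({
        "user_agent": "Mozilla/5.0 (Windows NT 6.1; rv:35.0) Gecko/20100101 Firefox/35.0",
        "screen": "1366x768x24",
        "timezone": "UTC+1",
        "language": "en-US",
    }))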

Facebook isn’t going away anytime soon, but users should be clear on how the social media giant really operates. You can read the entire report here [PDF]. Hopefully Facebook is reading it too, and plans to address the serious issues raised. We’ve already given them a few suggestions on how to do so.

  • 1. Specifically, the report noted the Unfair Contract Terms Directive, the work of the Article 29 Working Party, and the e-Privacy Directive.


Dear Software Vendors: Please Stop Trying to Intercept Your Customers’ Encrypted Traffic

by News on February 25, 2015, no comments


Over the past week many more details have emerged about the HTTPS-breaking Superfish software that Lenovo pre-installed on its laptops for several months. As is often the case with breaking security incidents, most of what we know has come from security engineers volunteering their time to study the problem and sharing their findings via blogs and social media.

Unfortunately, the security implications have gone from bad to worse the more we’ve learned. For instance, researchers have determined that the software library Superfish uses to intercept traffic—developed by a company known as Komodia—is present in more than a dozen other software products, including parental control software and (supposed) privacy-enhancing/ad-blocking software. All of these products have the same vulnerability that Superfish does: anyone with a little technical know-how could intercept and modify your otherwise secure HTTPS traffic.

What’s worse is that these attacks are even easier than researchers originally thought, because of the way Komodia’s software handles invalid certificates: it alters the part of the certificate which specifies what website the certificate is for—for example changing www.eff.org to verify_fail.www.eff.org—and then signs the certificate and sends it on to your browser. Since the website listed on the certificate (verify_fail.www.eff.org) doesn’t match the website the user is actually visiting (www.eff.org), the browser shows a warning to the user.

But certificates have another field, called the Subject Alternative Name, which is used to list alternative domain names for which the certificate can be used (so that website operators can re-use the same certificate across all of their domain names). EFF, for example, uses the same certificate for eff.org, www.eff.org, and *.eff.org. Even if the “main” domain name listed in the certificate doesn’t match the domain name of the website the user is browsing, the certificate will still be accepted as long as one of the alternative names matches. And because Komodia’s software signs the certificate (and tells your browser that it should trust certificates it signs if they’re otherwise valid), the certificate will pass all the browser’s checks, and come up smelling like roses.
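To illustrate why the alternative-name trick works, here is a minimal sketch of browser-style hostname matching. It is not Komodia’s or any browser’s actual code, and the helper names are hypothetical; it simply shows that a check which accepts any matching alternative name never looks at the rewritten common name.

    def hostname_matches(cert, hostname):
        """Accept the certificate if any SAN DNS entry (or, failing that, the CN) matches."""
        sans = [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"]
        candidates = sans if sans else [cert.get("commonName", "")]
        return any(_name_matches(pattern, hostname) for pattern in candidates)

    def _name_matches(pattern, hostname):
        # Exact match, plus simple single-label wildcard support ("*.eff.org" covers "www.eff.org").
        if pattern.startswith("*."):
            return hostname.split(".", 1)[-1].lower() == pattern[2:].lower()
        return pattern.lower() == hostname.lower()

    # A re-signed certificate whose common name was rewritten to "verify_fail.www.eff.org"
    # but whose alternative names still include the real target domain is accepted.
    resigned_cert = {
        "commonName": "verify_fail.www.eff.org",
        "subjectAltName": (("DNS", "verify_fail.www.eff.org"), ("DNS", "www.eff.org")),
    }
    print(hostname_matches(resigned_cert, "www.eff.org"))  # True -- no browser warning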

This means that an attacker doesn’t even need to know which Komodia-based product a user has (and thus which Komodia private key to use to sign their evil certificate)—they just have to create an invalid certificate with the target domain as one of the alternative names, and every Komodia-based product will cause it to be accepted.

Evidence of Man-in-the-Middle Attacks in the Decentralized SSL Observatory

We searched the Decentralized SSL Observatory for examples of certificates that Komodia should have rejected, but which it ended up causing browsers to accept, and found over 1600 entries. Affected domains included sensitive websites like Google (including mail.google.com, accounts.google.com, and checkout.google.com), Yahoo (including login.yahoo.com), Bing, Windows Live Mail, Amazon, eBay (including checkout.payments.ebay.com), Twitter, Netflix, Mozilla’s Add-Ons website, www.gpg4win.org, several banking websites (including mint.com and domains from HSBC and Wells Fargo), several insurance websites, the Decentralized SSL Observatory itself, and even superfish.com.1
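The post doesn’t spell out the query we ran, but a simplified sketch of that kind of search is below. The record format is hypothetical and far simpler than the Observatory’s actual schema; it keys off the tell-tale “verify_fail” prefix described above.

    def looks_komodia_resigned(cert_record):
        """Flag certificates whose subject names carry the tell-tale 'verify_fail' prefix."""
        names = [cert_record.get("commonName", "")] + list(cert_record.get("altNames", []))
        return any(name.startswith("verify_fail.") for name in names)

    observatory_records = [
        {"commonName": "verify_fail.mail.google.com", "altNames": ["mail.google.com"]},
        {"commonName": "www.example.org", "altNames": []},
    ]
    print(sum(looks_komodia_resigned(record) for record in observatory_records))  # 1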

While it’s likely that some of these domains had legitimately invalid certificates (due to configuration errors or other routine issues), it seems unlikely that all of them did. Thus it’s possible that Komodia’s software enabled real MitM attacks which gave attackers access to people’s email, search histories, social media accounts, e-commerce accounts, bank accounts, and even the ability to install malicious software that could permanently compromise a user’s browser or read their encryption keys.

To make matters worse, Komodia isn’t the only software vendor that’s been tripped up by this sort of problem. Another piece of software known as PrivDog is also vulnerable. Ostensibly, PrivDog is supposed to protect your privacy by intercepting your traffic and substituting ads from “untrusted sources” with ads from a “trusted” source, namely AdTrustMedia. Like Komodia’s software, PrivDog installs a root certificate when it’s installed, which it then uses to sign the certificates it intercepts. However, a bug in certain versions of PrivDog causes it to sign all certificates, whether they’re valid or not. Simply put, this means that any certificate your browser sees while PrivDog is installed could be the result of a man-in-the-middle attack, and you’d have no way of knowing. The Decentralized SSL Observatory has collected over 17,000 different certificates from PrivDog users, any one of which could be from an attack. Unfortunately, there’s no way to know for sure.
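As a stylized illustration of that failure mode (not PrivDog’s actual code), the sketch below models certificates as plain dictionaries. The bug is that the proxy re-signs every intercepted certificate with its own locally trusted root, regardless of whether upstream validation succeeded.

    def proxy_resign(upstream_cert, upstream_is_valid, local_root="local interception root"):
        """Re-sign an intercepted certificate with the proxy's own trusted root."""
        # A careful proxy would refuse to re-sign a certificate that failed validation.
        # Re-signing unconditionally means the browser can no longer distinguish a
        # man-in-the-middle certificate from a genuine one.
        return dict(upstream_cert, issuer=local_root)  # BUG: upstream_is_valid is never consulted

    forged = {"subject": "www.example.com", "issuer": "Evil CA"}
    print(proxy_resign(forged, upstream_is_valid=False))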

So what can we learn from this Lenovo/Superfish/Komodia/PrivDog debacle? For users, we’ve learned that you can’t trust the software that comes preinstalled on your computers—which means reinstalling a fresh OS will now have to be standard operating procedure whenever someone buys a new computer.

But the most important lesson is for software vendors, who should learn that attempting to intercept their customers’ encrypted HTTPS traffic will only put their customers’ security at risk. Certificate validation is a very complicated and tricky process which has taken decades of careful engineering work by browser developers.2 Taking certificate validation outside of the browser and attempting to design any piece of cryptographic software from scratch without painstaking security audits is a recipe for disaster.

Let the events of the last week serve as a warning: attempting to insert backdoors into encryption as Komodia attempted to do (and as others have called for in other contexts) will inevitably put users’ privacy and security at risk.

  • 1. Based on the “verify_fail” pattern, we also found certificates that purport to be from five pieces of software which, to our knowledge, haven’t yet been identified as using Komodia’s proxy software. The issuer fields for these certificates were: “O=Sweesh LTD, L=Tel Aviv, ST=Tel Aviv, C=IL, CN=Sweesh LT”, “O=Kinner lake Gibraltar, L=My Town, ST=State or Providence, C=GI, CN=Kinner lake Gibraltar”, “C=US, ST=California, L=SanDiego, O=EdgeWave.com, OU=Security, CN=EdgeWave.com/emailAddress=support@edgewave.com”, “O=NordNet/emailAddress=cert-ssl@nordnet.net, L=HEM, ST=HEM, C=FR, CN=Nordnet.fr”, and “O=PSafe Tecnologia S.A./emailAddress=psafe@psafe.com, L=Rio de janeiro, ST=Rio de janeiro, C=BR, CN=PSafe Tecnologia S.A.”. While we were unable to identify any organizations associated with the first two certificates, EdgeWave, NordNet, and PSafe appear to sell antivirus or web filtering products.
  • 2. Just last year, for example, researchers found a number of bugs in certificate validation libraries [PDF] through fuzz testing.


Congress’s Copyright Review Should Strengthen Fair Use—Or At Least Do No Harm

by News on February 25, 2015, no comments


The Internet is celebrating Fair Use Week, and it’s a great time to look at what Congress might do this year to help or hurt the fair use rights of artists, innovators, and citizens. After nearly two years of U.S. House Judiciary Committee hearings and vigorous conversations within government, industry, and the public, it seems like we might see some real proposals. But other than a few insiders, nobody knows for sure whether major changes to copyright law are coming this year, and what they might be.

Fair use is one of copyright’s essential safeguards for free speech, because it allows people to use copyrighted works without permission or payment in many circumstances. It’s critical to education, journalism, scholarship, and many, many uses of digital technology. Because copyright applies to trillions of files and streams, whether trivial or profound, that flow through the Internet every day, and because nearly every transmission or use of digital data involves making a copy, copyright pervades the Internet. Now more than ever, limits like fair use are critical to protect Internet users from runaway copyright liability.

If Congress does take up copyright reform this year, there are changes that would strengthen fair use, and thereby strengthen freedom of speech. One is to fix copyright’s draconian, unpredictable civil penalties. As we explained in our 2014 whitepaper, copyright holders can seek “statutory damages” of up to $150,000 per work without providing any proof of actual harm. The law gives almost no guidance to judges and juries in selecting the right amount. That means that money damages awarded to winning plaintiffs in copyright lawsuits vary wildly, and can be shockingly large. Free Republic, a nonprofit conservative commentary website, was penalized $1 million for posting copies of several Washington Post and Los Angeles Times articles in an effort to illustrate media bias. And a firm sued for making copies of 240 financial news articles for internal use was ordered to pay $19.7 million, or $82,000 per article. It’s hard to see any connection between these massive penalties and any actual harm suffered by the copyright holders. They are far above even the sort of reasonable “punitive damages” multiplier that sometimes gets applied in personal injury cases.

High and unpredictable penalties can make relying on fair use a game of financial Russian roulette for artists and innovators. Although many fair uses are clear and obvious, brave artists and innovators often use copyrighted works in ways that no court has ever considered – and that means a risk of lawsuits. When a loss could mean bankruptcy, many won’t take the risk, even if a court might ultimately confirm their fair use. Copyright’s penalty regime is a major reason why filmmakers must spend months, and thousands of dollars, obtaining licenses for trivial or incidental video and music clips that appear in their films – even when those appearances are likely to be fair use.

Congress could help fix these problems by clarifying that statutory damages should never apply to a copyright user who relies on a fair use defense in good faith, even if the defense is unsuccessful. That would make relying on fair use a predictable, manageable risk that more artists and innovators will be able to take.

Another way that Congress could strengthen fair use is to fix Section 1201, the anti-circumvention provision of the Digital Millennium Copyright Act. Section 1201 prohibits breaking or bypassing DRM and other digital locks that control access to creative works, including the software in many personal devices. This law is a major roadblock for fair use because it can make breaking or bypassing DRM illegal even when we need to break DRM to make fair use of the locked-up material. Essentially, DRM and Section 1201 can take away fair use for artists, innovators, and consumers.

The Copyright Office can grant three-year exemptions to Section 1201. EFF is asking for new exemptions for amateur video, mobile device and car owners, and video game enthusiasts. But exemptions don’t completely fix the harms to fair use. They are difficult to get, are often written narrowly, and they don’t protect people who make tools that enable fair use by others. Any fix for copyright law should include making it legal to bypass digital locks to make fair uses of creative work.

There are also aspects of today’s copyright law that Congress shouldn’t change. One of the most important features of fair use is its adaptability to new technologies and uses that no legislature or expert could foresee. Because this adaptability is so important, Congress should reject calls to replace fair use with specific categories of exceptions to copyright for education, commentary, and the like (a “fair dealing” approach like that used in some other countries). Without a flexible fair use doctrine that lets courts apply copyright’s principles to new situations, artists and innovators would need to ask Congress for permission before they create.

Congress should also reject the disastrous notion of requiring Internet intermediaries like ISPs and websites to filter user-posted content for allegedly infringing material. As we’ve seen over and over again with voluntary filters, filtering software is terrible at recognizing fair uses—and it always will be. Proposals to replace the DMCA’s safe harbors for Internet intermediaries with a regime that requires proactive blocking or filtering based on infringement accusations will inevitably catch many more fair use “dolphins” in the infringement “tuna driftnet.”

Will there be comprehensive copyright reform this year? The answer’s not clear. Major changes take time, and that’s often a good thing. What’s for sure is that any changes must help, and not hurt, fair use.



Laura Poitras’ Acceptance Speech For CITIZENFOUR’s Academy Award

by News on February 23, 2015, no comments


Laura Poitras won an Academy Award for her documentary CITIZENFOUR. At the ceremony, she gave a brief speech thanking everyone who helped make the film as well as acknowledging the bravery of Edward Snowden and other whistleblowers.

Here is Poitras’ acceptance speech:1

Thank you so much to the Academy. I’d like to first thank the documentary community. It’s an incredible joy to work among people who support each other so deeply, risk so much, and do such incredible work. We don’t stand here alone. The work we do to (unveil?) what needs to be seen by the public is possible through the brave organizations that support us. We’d like to thank Radius, Participant, HBO, BritDoc, and the many, many, many organizations who had our back making this film.
The disclosures that Edward Snowden reveals don’t only expose a threat to our privacy but to our democracy itself. When the most important decisions being made affecting all of us are made in secret, we lose our ability to check the powers that control. Thank you to Edward Snowden for his courage, and for the many other whistleblowers. And I share this with Glenn Greenwald and other journalists who are exposing truth.


Disclosures: I serve on the board of directors of Freedom of the Press Foundation, a nonprofit working to champion press freedom, along with filmmaker Laura Poitras, her colleague Glenn Greenwald, and whistleblower Edward Snowden.

  • 1. I transcribed this as best I could, but if I made errors then they are wholly mine.


Abuse and Harassment: What Could Twitter Do?

by News on February 20, 2015, no comments


The mainstream media has paid a lot more attention to abuse and harassment on Twitter lately, including a recent story by Lindy West on This American Life about her experience confronting an especially vitriolic troll. She isn’t alone—and it appears that for the company at least, the number of Twitter users speaking out about harassment has reached critical mass. In an internal memo obtained by The Verge earlier this month, Twitter CEO Dick Costolo acknowledged Twitter’s troubled history with harassment, writing:

We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years. It’s no secret and the rest of the world talks about it every day. We lose core user after core user by not addressing simple trolling issues that they face every day. I’m frankly ashamed of how poorly we’ve dealt with this issue during my tenure as CEO. It’s absurd. There’s no excuse for it. I take full responsibility for not being more aggressive on this front. It’s nobody else’s fault but mine, and it’s embarrassing.

We’re glad to see Twitter taking abuse1 seriously. In the past, Twitter’s senior executives have been reluctant to engage critics on the harassment issue. Mr. Costolo is right. Abuse has (understandably) driven some users off of Twitter and other platforms. And as a digital civil liberties organization, whose concerns include both privacy and free speech, we have some thoughts about what Twitter should and shouldn’t do to combat its abuse problem.

Clearer Policies, Better Tools

Twitter’s rules concerning abusive behavior appear to be very straightforward:

Users may not make direct, specific threats of violence against others, including threats against a person or group on the basis of race, ethnicity, national origin, religion, sexual orientation, gender, gender identity, age, or disability. Targeted abuse or harassment is also a violation of the Twitter Rules and Terms of Service.

In truth, Twitter’s abuse policies are open to interpretation. And the way Twitter does interpret them can be difficult for outsiders to understand—which perhaps explains complaints from users that they seem to be enforced inconsistently. Users have argued that abuse reports often disappear without response, or take months to resolve. On the other hand, it appears that sometimes tweets are removed or accounts are suspended without any explanation to the user of the applicable policy or reference to the offending content.

Twitter could work to address these criticisms by bringing more transparency and accountability to its abuse practices. We understand why the company would want to avoid bringing more attention to controversial or offensive content. But Twitter could expand its transparency report to include information about content takedowns and account suspensions related to abuse complaints, such as the number of complaints, type of complaint, whether or not the complaint resulted in the takedown of content or the suspension of an account, and so on. Transparency can be a rocky road, and can draw attention to the flaws in a company’s approach as well as highlight its success. But if Twitter wants a better process, and not just less criticism, transparency is a powerful feedback loop that can improve responsiveness and user trust.

Twitter could also give users better options for controlling the type of content they see on the service in the first place. Several proposals along these lines have been enumerated by Danilo Campos, including options to block accounts that are less than 30 days old, block users with a low follower count, block keywords in @ replies, and share lists of blocked users with friends. Not all of these solutions need to start from scratch, either. Applications such as Block Together, which grants many of these options to Twitter users, already exist. But Twitter could and should build these abilities directly into its web and mobile interfaces, as sketched below.
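As a rough sketch of how such user-side controls might work, a client could hide mentions based on account age, follower count, and keywords. The thresholds and field names here are hypothetical examples, not Twitter’s API.

    from datetime import datetime, timedelta

    def should_hide_mention(account, mention_text, blocked_keywords,
                            min_age_days=30, min_followers=10):
        """Return True if a mention should be hidden under the proposed filters."""
        too_new = datetime.utcnow() - account["created_at"] < timedelta(days=min_age_days)
        too_few_followers = account["followers"] < min_followers
        has_blocked_keyword = any(word.lower() in mention_text.lower() for word in blocked_keywords)
        return too_new or too_few_followers or has_blocked_keyword

    # Example: a week-old account with three followers gets filtered.
    new_account = {"created_at": datetime.utcnow() - timedelta(days=7), "followers": 3}
    print(should_hide_mention(new_account, "@someone you are great", ["slur1"]))  # True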

Similarly, Twitter could make it easier to control which notifications users see. Currently, Twitter’s web interface allows people to easily filter notifications so that they see either all notifications or only notifications from people they follow. The mobile interface and app, however, bury this capability deep in their menus. Users facing harassment should be treated as first-class users of the platform, not special cases. That means features meant to help them should be easy to access.

Twitter’s Role in Policing Content

Handling abuse complaints individually is an enormous challenge for a global corporation like Twitter. It’s unclear how Twitter can scale the responses users expect and deserve when dealing with thousands of cases per week, across hundreds of languages and millions of users. That’s one reason why we support seeking solutions to harassment that don’t rely on centralized platforms. If Twitter wants to do any policing of its platform, it will need the human touch—not just an automated algorithm. To do that requires well-staffed abuse teams, with the resources and the tools to be responsive, examine complaints in context, and clearly explain the link between every takedown or suspension and Twitter’s policies to the affected user.

Those abuse teams will need to understand the context from which their users speak. Research in the United States shows that women, African-American, and Hispanic users report disproportionate levels of harassment. The pattern that certain groups—ethnic, economic, political or gendered—receive higher levels of abuse online looks to hold true globally. Companies like Twitter have already publicly recognized the importance of diversity in their workplace. Diversity in their abuse team could be useful in providing necessary context to understand and react correctly to harassment reports. Before that happens, mistakes are going to be made—and we worry that mistakes will disproportionately affect those targeted groups.

What Twitter Shouldn’t Do

There’s plenty there for Twitter to consider in tackling its abuse problem head-on, but there are also a lot of ways in which these efforts can go wrong. Looking a little further down in Costolo’s memo, he writes:

We’re going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them.

This is a dangerous sentiment. Kicking people off left and right is exactly the opposite of the kind of contextual, nuanced examination of complaints that Twitter needs to do if it intends to suspend accounts or take down content. And it’s an attitude that will inevitably lead to poor decisions. Mr. Costolo’s conflation of trolls and abuse is an indicator of this. While some trolls may also be abusers, as we’ve noted before, ugly or offensive speech doesn’t always rise to the level of harassment.

Solutions that empower users, rather than ones that seek to bury the platform’s problem by swiftly ejecting and silencing users, are more scalable and less arbitrary in the long term. We’re glad to see evidence that Twitter is planning to get aggressive about the challenge of harassment, but we hope that aggression doesn’t come at the expense of being smart.

  • 1. We see abuse as synonymous with our definition of harassment—extreme levels of targeted hostility; exposure of private lives; violent, personalized imagery, and threats of violence.


Anonymous Companies Fighting National Security Letters Back Twitter’s NSL Battle

by News on February 17, 2015, no comments

EFF’s Clients’ Identities Must Remain Secret, But Still Speak Out About Unconstitutional Gag Orders

San Francisco – Two companies who must remain anonymous about their fight against secret government demands for information known as national security letters (NSLs) are backing Twitter’s lawsuit over its rights to publish information about NSLs it may have received. The companies—a telecom and an Internet company—are represented by the Electronic Frontier Foundation (EFF).

Twitter filed its suit in October, saying users deserved to know certain basic facts about NSLs that the government did or did not serve on the social media company. NSLs—issued by the federal government but not approved by a judge—almost always contain a gag order barring the companies from notifying their customers or the public that any demands have been made.

The companies represented by EFF also want to go public with some details of their fights against NSLs, including their corporate identities and what they have done to protect their customers from unreasonable collection of information. In an amicus brief filed today, they argue that the gag orders are an unconstitutional prior restraint on free speech and a serious infringement of their First Amendment rights. However, the government continues to maintain that even identifying EFF’s clients as having received an NSL might endanger national security.

“The Supreme Court as well as courts across the land have recognized that a prior restraint—preventing speech in the first instance instead of imposing a penalty after the speech—is a serious and dangerous step,” said EFF Legal Fellow Andrew Crocker. “Yet with NSLs, we have prior restraints imposed at the government’s whim, without any judicial oversight or review. Our clients want to talk about their experience with these NSLs, but the government is unconstitutionally shielding itself from any criticism or critique of their procedures.”

In 2013, a federal district court judge in San Francisco agreed with EFF and its clients that the NSL provisions were unconstitutional, and barred any future NSLs and accompanying gag orders. That ruling was stayed pending appeal, however, and the district court has subsequently enforced additional NSLs while EFF is arguing the case in the United States Court of Appeals for the Ninth Circuit.

“The district court in our case against national security letters was right—the First Amendment forbids the FBI from gagging service providers from openly discussing such invasive, secretive, and unaccountable activities,” said EFF Deputy General Counsel Kurt Opsahl. “On behalf of our clients, we are asking this court to reach the same conclusion, and allow the public to get information they need about law enforcement activities.”

For the full brief in Twitter v. Holder:
https://www.eff.org/document/amicus-brief-26

For more on NSLs:
https://www.eff.org/issues/national-security-letters

Contacts:

Kurt Opsahl
Deputy General Counsel
Electronic Frontier Foundation
kurt@eff.org

Andrew Crocker
Legal Fellow
Electronic Frontier Foundation
andrew@eff.org



Administration’s New Cyber Threat Center Replaces Old Cyber Threat Center

by News on February 12, 2015, no comments


This week the Obama administration is releasing its second Executive Order in as many years on computer (“cyber”) security, which will create a new department in the intelligence community to handle computer security threat information sharing. Officials are hailing the center as “new” and unprecedented.

It’s not. We already have significant information sharing avenues, which makes the new center redundant. Companies can definitely look forward to more red tape when it comes to sharing computer security threats. And it’s not just a question of seemingly unnecessary bureaucracy. We’re concerned that the whole point of the new center is to be IN the intelligence community, and thus all but eliminate any transparency and accountability.

In a press release the Administration lauded the center, formally called the Cyber Threat Intelligence Integration Center, saying:

No single government entity is responsible for producing coordinated cyber threat assessments, ensuring that information is shared rapidly among existing cyber centers and other [government] elements, and supporting the work of operators and policymakers with timely intelligence about the latest cyber threats and threat actors

The description looks awfully familiar. It should; the Department of Homeland Security (DHS) has an entire department called the National Cybersecurity and Communications Integration Center (NCCIC) that seems to do pretty much everything the Administration thinks needs doing. The NCCIC is a bridge between government, private sector, and international network defense communities. Its About page states that the “NCCIC analyzes cybersecurity and communications information, shares timely and actionable information, and coordinates response, mitigation and recovery efforts.”

Digging deeper, NCCIC in turn houses US-CERT (United States Computer Emergency Readiness Team) and ICS-CERT (Industrial Control Systems Cyber Emergency Response Team). Both teams also handle computer security information sharing and threat analysis. Specifically, US-CERT “leads efforts to improve the nation’s cybersecurity posture, coordinate cyber information sharing, and proactively manage cyber risks.”

The descriptions speak for themselves.

Current Public Sharing

More confusing is trying to reconcile what this new center will contribute to the current public and private information-sharing regime. In 2013 the President signed EO 13636, which created the Enhanced Cybersecurity Services program, or ECS. The ECS focuses on sharing computer security information from the government to critical infrastructure and other “commercial service providers.” At the time, it was hailed as a critical step toward improving information sharing and coordinating responses to cyberattacks, since the private sector owns about 85% of America’s critical infrastructure. Two years later, we’ve heard little about its implementation.

The bottom line is that ECS, US-CERT, ICS-CERT, NCCIC, and other departments appear to be tasked with doing exactly what this new “Cyber Threat Agency” will be doing. And there’s more—the DHS programs complement DOD programs like the DIBNET, or Defense Industrial Base Network, where defense contractors share computer security information between themselves and with the government.

Current Private Sharing

All of this is on top of private-sector hubs known as Information Sharing and Analysis Centers (ISACs). ISACs are often sector-specific and facilitate information sharing; they’ve been noted as working “very well” and are supplemented by public reports and private communications, like the recently launched ThreatExchange. Private sharing was further encouraged when the FTC and DOJ stated they would not prosecute companies under antitrust law for sharing computer security information. Combined, these private centers facilitate sharing and are core parts of the existing information-sharing regime.

What’s New About the New Center?

Given the apparent redundancy of the new center, it’s hard not to believe that its main reason for being is its location: inside the intelligence community and shrouded in near-impenetrable secrecy. Keep in mind that it’s long been settled that a civilian agency should lead the country’s computer security—so settled that even former NSA chief General Keith Alexander declared that civilian agencies should take the lead on government computer security.

If the government wants more information sharing, then it should expand the ECS or use the existing information-sharing regimes in US-CERT and the private sector—or explain why it can’t be done in DHS. And of course, as we’ve often said, it’s not at all clear that information sharing is where we should be putting our security dollars and attention. Many of the past years’ breaches were due to neglected low-hanging fruit: failing to encrypt personal information, sending passwords in unencrypted emails, and letting employees download malware. For instance, the New York Times reported that the JP Morgan hack occurred due to a server that hadn’t been updated.

Devils are in the Details

The exact details of the center will be released later this week, but as of now the new center seems redundant. If we want to improve computer security and the sharing of threat information we must encourage companies and the government to use the already existing information sharing regimes. Creating another new bureaucracy inside the intelligence community will probably hinder, not help, the computer security landscape.

