Tracking Big Foot: Why GPS Location Requires a Warrant | Center for Democracy & Technology

In a case that raises as many questions as the average sighting of Big Foot, a panel of the Sixth Circuit Court of Appeals ruled earlier this week that law enforcement officers didn’t need a warrant to obtain GPS location information generated by a suspect’s cell phone.

The court’s analysis has been roundly criticized as legally incorrect, lazy, shallow, and vague. I’d like to focus on one aspect of the case that the court missed: the Department of Justice recommends that police obtain warrants in the scenario presented by this case, it does so for good reason, and the government had facts sufficient to obtain the warrant that the Department of Justice recommends investigators obtain.

In this case, U.S. v. Skinner, law enforcement officers obtained an order that allowed them to monitor for 60 days the location of a pre-paid cell phone they had good cause to believe was being used by Big Foot, the nickname given to the trucker eventually identified as Melvin Skinner, who they alleged was transporting marijuana.  They obtained a court order under which the provider, Sprint/Nextel, acting at the behest of law enforcement, pinged the phone repeatedly over a three-day period so it would reveal its location, and eventually activated the phone’s GPS functionality to obtain its GPS coordinates.  (Sprint/Nextel recently developed a web portal through which law enforcement can do this automatically for the duration of the court authorization, without contacting the provider each time officers ping the phone.)

The court found that there was “… no Fourth Amendment violation because Skinner did not have a reasonable expectation of privacy in the data given off by his voluntarily procured … cell phone.”  But, as Jennifer Granick points out, cell phones don’t normally “give off” the kind of GPS location data that law enforcement used to locate Skinner.  Unless the user is employing location services – and Skinner wasn’t – the GPS location data has to be created.  In this case, the provider, under court order, remotely activated the GPS function of Skinner’s phone so the police could track him.

There’s a critical difference between GPS location information and cell tower location information a mobile phone creates during normal use.  The GPS data in this case is created at the request of law enforcement for tracking purposes and not through the normal use of the mobile phone. The GPS data doesn’t even exist until the provider prompts the device to deliver its GPS location to the provider so law enforcement can access it.  In contrast, providers maintain cell tower location information for business reasons.  Because providers do not normally maintain GPS location information and because it was not voluntarily conveyed to the provider, it is not a “business record” and does not fit into the third party records doctrine, which says that a person has no Fourth Amendment interest in information that is voluntarily revealed to, and held by, a third party.  While the third party doctrine should probably be re-examined, for now we have to live with it, but not for GPS data created by providers at the behest of law enforcement.  For that data, we retain our Fourth Amendment rights against warrantless GPS tracking.  

Blind Eye to Justice

Apparently recognizing that GPS is different, the Justice Department recommends that prosecutors obtain a warrant to get GPS location information from mobile communications service providers.  For example, in this PowerPoint presentation, the Associate Director of the Justice Department Office of Enforcement Operations recommends that prosecutors use search warrants to get prospective GPS location information (referred to as “lat/long data,” or latitudinal and longitudinal data) for constitutional, not statutory, reasons, and because “anything less presents significant risks of suppression.”  In addition, the Justice Department’s Associate Deputy Attorney General testified in April last year that when the government seeks to compel disclosure of prospective GPS coordinates generated by cell phones, it relies on a warrant.

The Sixth Circuit missed this point entirely.  It blithely rejected Skinner’s Fourth Amendment claims and implicitly bought into the government’s argument that orders under the Stored Communications Act provision at 18 USC 2703(d) can be used to obtain prospective location information that has never been stored.  It did not consider whether the information sought was within the third party records doctrine and it cited no statutory authority for the proposition that the government can compel a provider to create the GPS information for the government to seize.  

Perhaps most ironically, it seems pretty clear that the government had facts establishing probable cause and could have obtained a warrant if it had applied for one.  As the concurring opinion in Skinner noted, law enforcement officials were watching the drug operation for months, had recorded conversations about an upcoming drug run, learned that the courier was carrying a particular phone that they could track, and that a half ton of marijuana was in transit.  

A warrant requirement for location information, as advocated by the Digital Due Process coalition, would still mean a drug courier like Skinner would get caught.  If followed, a statutory warrant requirement decreases the chances a criminal would elude jail because the seized evidence would not be at risk of suppression, as it is now for Big Foot if he appeals this decision. 

For updates, follow us on Twitter at @CenDemTech.


https://www.cdt.org/blogs/greg-nojeim/1708tracking-big-foot-why-gps-location-…


Oversight of Government Privacy, Security Rules for Health Data Questioned | Center for Democracy & Technology

Oversight and accountability for following federal privacy and security rules are critical if the public is going to trust that the next generation of electronic health care providers, insurers, and billing services can protect the privacy of patients’ medical information.  A recent report by the Government Accountability Office questions whether sufficient work is being done to build that public trust.

The GAO report says the Department of Health and Human Services has failed to issue new rules for protecting personal health information and lacks a long-term plan for ensuring that those new rules are being followed.  The HHS Office for Civil Rights (OCR), which is responsible for overseeing these efforts, acknowledged these concerns but noted that rules are winding their way through government channels and that they have “taken the necessary first steps towards establishing a sustainable” oversight program.   

The report’s two main concerns are: (1) the urgent need for guidance on de-identification methods, and (2) lack of a long-term plan for auditing covered entities and business associates for compliance with federal privacy and security rules (specifically, HIPAA and HITECH).

De-Identification Guidance

De-identification is a tool that enables health data to be used for a broad range of purposes while minimizing the risks to individual privacy.  Under HIPAA, there are two methods that can be used to de-identify health data. The first is the safe harbor method, which merely requires the removal of 18 specific categories of identifiers, such as name, address, dates of birth or health care services, and other unique identifiers.  The second is the expert determination method that certifies that the data, in the hands of the intended recipient, raises a very small risk of re-identification. The safe harbor method is static and presumes that the removal of the 18 categories of identifiers translates into very low risk of re-identification in all circumstances.
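
To illustrate the mechanics of the safe harbor approach, here is a minimal, purely hypothetical sketch in TypeScript.  The field names are invented for the example and cover only a few of the 18 identifier categories, so this is a sketch of the idea rather than a compliant implementation:

    // Hypothetical example only: field names are invented, and a real safe harbor
    // pass must remove all 18 HIPAA identifier categories, not just these few.
    interface HealthRecord {
      name?: string;
      streetAddress?: string;
      birthDate?: string;
      serviceDate?: string;
      medicalRecordNumber?: string;
      diagnosisCode?: string; // clinical content, retained
      [field: string]: string | undefined;
    }

    const IDENTIFIER_FIELDS = [
      "name", "streetAddress", "birthDate", "serviceDate", "medicalRecordNumber",
    ];

    function stripIdentifiers(record: HealthRecord): HealthRecord {
      const cleaned: HealthRecord = { ...record };
      for (const field of IDENTIFIER_FIELDS) {
        delete cleaned[field]; // drop the identifying field entirely
      }
      return cleaned;
    }

    // Prints { diagnosisCode: "E11.9" }: identifiers removed, clinical data kept.
    console.log(stripIdentifiers({
      name: "Jane Doe",
      birthDate: "1970-02-14",
      diagnosisCode: "E11.9",
    }));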

In HITECH, Congress directed HHS to complete a study of the HIPAA de-identification standard by February 2010.  Though covered entities rely more on the safe harbor method because it is easier to understand and more accessible, OCR aimed to produce guidance that would “clarify guidelines for conducting the expert determination method of de-identification to reduce entities reliance on the Safe Harbor method,” according to the report.  Two years later and notwithstanding its good intentions, OCR has not released this guidance.  

CDT has met with industry and consumer stakeholders about how to improve federal policy regarding de-identified health data since 2009. CDT also recently published an article in JAMIA proposing a number of policies to strengthen HIPAA de-identification standards and ensure accountability for unauthorized re-identification.  

The OCR should issue the required guidance on de-identification without further delay and continue seeking public feedback on how to build trust in uses of de-identified data.  Foot-dragging on this issue risks impeding progress on the ability to monitor the public’s health in ways that go far beyond mere notification and routine reporting of symptoms, diagnoses, and the like.  With these new capabilities in place, public health officials could move beyond traditional outbreak detection and response, detecting disease earlier and taking a more active role in monitoring health issues from cancer screening to adult immunizations to HIV.

Ensuring Compliance

Routine audits help ensure that covered entities and business associates comply with HIPAA and HITECH regulations.  Audits also provide OCR with important information about how entities covered by HIPAA and HITECH are implementing critically important privacy and security protections; they can also surface issues that need further regulatory guidance and help OCR better determine when penalties for noncompliance are warranted.

HITECH directed HHS to audit entities covered by HIPAA for compliance with HIPAA and new HITECH requirements; OCR officials began those audits earlier this year. The report states that OCR has no plan to sustain these audits beyond 2012; the report also notes that HHS does not have a defined plan for including HIPAA business associates in its audits. HHS responded that OCR plans to review the pilot audit program at the end of this year and move forward with an audit program after that step is complete.

If the public is to trust that the privacy of their health information is well protected, it must know where that information is going and how it’s being used. The report highlights the importance of audits as an effective mechanism for accountability. CDT is encouraged by the progress OCR has made to date in its pilot audit program, and we are pleased to see HHS commit to learning from the pilots as it develops and implements a sustained plan for auditing compliance with federal privacy and security regulations.

For updates, follow us on Twitter at @CenDemTech.

https://www.cdt.org/blogs/suchismita-pahi/1607oversight-government-privacy-se…


Vulnerabilities Allow Attacker to Impersonate Any Website | Threat Level | Wired.com

Vulnerabilities Allow Attacker to Impersonate Any Website


LAS VEGAS — Two researchers examining the processes for issuing web
certificates have uncovered vulnerabilities that would allow an attacker to
masquerade as any website and trick a computer user into providing him with
sensitive communications.

Normally when a user visits a secure website, such as Bank of America, PayPal
or eBay, the browser examines the website’s certificate to verify its
authenticity.

However, IOActive researcher Dan Kaminsky and independent researcher Moxie
Marlinspike, working separately, presented nearly identical findings in separate
talks at the Black Hat security conference on Wednesday. Each showed how an
attacker can legitimately obtain a certificate with a special character in the
domain name that would fool nearly all popular browsers into believing an
attacker is whichever site he wants to be.

The problem occurs in the way that browsers implement Secure Socket Layer
communications.

“This is a vulnerability that would affect every SSL implementation,”
Marlinspike told Threat Level, “because almost everybody who has ever tried to
implement SSL has made the same mistake.”

Certificates for authenticating SSL communications are obtained through
Certificate Authorities (CAs) such as VeriSign and Thawte and are used to
initiate a secure channel of communication between the user’s browser and a
website. When an attacker who owns his own domain — badguy.com — requests a
certificate from the CA, the CA, using contact information from Whois records,
sends him an email asking to confirm his ownership of the site. But an attacker
can also request a certificate for a subdomain of his site, such as
PayPal.com\0.badguy.com, using the null character in the URL.

The CA will issue the certificate for a domain like PayPal.com\0.badguy.com
because the hacker legitimately owns the root domain badguy.com.

Then, due to a flaw found in the way SSL is implemented in many browsers,
Firefox and others theoretically can be fooled into reading his certificate as
if it were one that came from the authentic PayPal site. Basically when these
vulnerable browsers check the domain name contained in the attacker’s
certificate, they stop reading any characters that follow the “\0” in the
name.
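
To make that failure mode concrete, here is a minimal illustrative sketch in
TypeScript (not any browser’s actual code) of a name check that treats the
certificate’s Common Name as a C-style null-terminated string:

    // Hypothetical sketch only: mimics a name check that treats the certificate's
    // Common Name as a C-style null-terminated string (the flaw described above).
    function naiveCnMatches(certCommonName: string, requestedHost: string): boolean {
      // Flawed: everything after the first NUL byte is silently ignored,
      // just as a strcmp()-style comparison would do.
      const truncated = certCommonName.split("\u0000")[0];
      return truncated.toLowerCase() === requestedHost.toLowerCase();
    }

    // A certificate legitimately issued to the attacker for
    // "www.paypal.com\0.badguy.com" ...
    const attackerCn = "www.paypal.com\u0000.badguy.com";
    // ... passes the flawed check when the user visits the real site:
    console.log(naiveCnMatches(attackerCn, "www.paypal.com")); // true

    // A stricter check compares the full name and rejects embedded NULs:
    function strictCnMatches(certCommonName: string, requestedHost: string): boolean {
      if (certCommonName.includes("\u0000")) return false;
      return certCommonName.toLowerCase() === requestedHost.toLowerCase();
    }
    console.log(strictCnMatches(attackerCn, "www.paypal.com")); // false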

More significantly, an attacker can also register a wildcard domain, such as
*\0.badguy.com, which would then give him a certificate that would allow him to
masquerade as any site on the internet and intercept communication.

Marlinspike said he will be releasing a tool soon that automates this
interception.

It’s an upgrade to a tool he released a few years ago called SSLSniff. The
tool sniffs traffic going to secure web sites that have an https URL in order to
conduct a man-in-the-middle attack. The user’s browser examines the attacker’s
certificate sent by SSLSniff, believes the attacker is the legitimate site and
begins sending data, such as log-in information, credit card and banking details
or any other data through the attacker to the legitimate site. The attacker sees
the data unencrypted.

A similar man-in-the-middle attack would allow someone to hijack software
updates for Firefox or any other application that uses Mozilla’s update library.
When the user’s computer initiates a search for a Firefox upgrade, SSLSniff
intercepts the search and can send back malicious code that is automatically
launched on the user’s computer.

Marlinspike said Firefox 3.5 is not vulnerable to this attack and that
Mozilla is working on patches for 3.0.

With regard to the larger problem involving the null character, Marlinspike
said since there is no legitimate reason for a null character to be in a domain
name, it’s a mystery why Certificate Authorities accept them in a name. But
simply stopping Certificate Authorities from issuing certificates to domains
with a null character wouldn’t stop the ones that have already been issued from
working. The only solution is for vendors to fix their SSL implementation so
that they read the full domain name, including the letters after the null
character.

(Dave Bullock contributed to this article.)

Photo of Moxie Marlinspike by Dave Bullock.

WikiLeaks Attacks Reveal Surprising, Avoidable Vulnerabilities | Threat Level | Wired.com

WikiLeaks Attacks Reveal Surprising, Avoidable Vulnerabilities

Some online service providers are in the cross hairs this week for allegedly
abandoning WikiLeaks after it published secret U.S. diplomatic cables and drew
retaliatory technical, political and legal attacks. But the secret-spilling
site’s woes may be attributable in part to its own technical and administrative
missteps as well as outside attempts at censorship.

Struggling with denial-of-service attacks on its servers earlier this week,
WikiLeaks moved to Amazon’s EC2 cloud-based data-storage service only to be
summarily booted off on Wednesday, ostensibly for violations of Amazon’s terms
of service. Then on Thursday its domain-name service provider, EveryDNS, stopped
resolving WikiLeaks.org, amid a new DoS attack apparently aimed at the DNS
provider.

While WikiLeaks was clearly targeted, its weak countermeasures drew criticism
from network engineers. They questioned its use of a free DNS service such as
EveryDNS, as well as other avoidable errors that seem to clash with WikiLeaks’
reputation as a tech-savvy and cautious enterprise hardened to withstand any
concerted technical attack on its systems.

“If they wanted to help users get past their DNS problems, they could tweet
for assistance, tweet their IP addy and ask to be re-tweeted, ask owners of
authorities to set up wikileaks.$FOO.com to ‘crowd source’ their name, etc.,” observed one
poster
to the mailing list for the North American Network Operators Group.
“So at the very least, they are guilty of not being imaginative.”

“IMHO it is a gambit to ask for money,” wrote another.

WikiLeaks’ downtime was short-lived, with the site
announcing Friday on Twitter that it was operational on WikiLeaks.de,
WikiLeaks.fi, WikiLeaks.nl and WikiLeaks.ch — the country codes respectively for
Germany, Finland, the Netherlands and Switzerland. The scattering followed a
Thursday outage of WikiLeaks.org and the “Cablegate” subsite, which occurred when
EveryDNS cut off the secret-spilling site.

Unlike the incident this week in which Amazon unceremoniously booted
WikiLeaks from its servers, the latest outage appears to have had less to do
with censorship than with WikiLeaks’ inattention to the more-mundane side of
running an organization.

EveryDNS is a free, donation-supported service run by New Hampshire’s Dyn
Inc. Like thousands of other DNS providers it does the small but crucial job of
mapping a user-friendly internet domain name, like wired.com, to a numeric IP
address that actually means something to the internet’s underlying
infrastructure.
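
As a minimal illustration of that mapping step (TypeScript on Node.js, included
purely for explanation), this is the lookup a DNS provider answers on a site’s
behalf:

    // Illustrative only: look up the IPv4 addresses behind a hostname, the job a
    // DNS provider such as EveryDNS performs for the sites it serves.
    import { resolve4 } from "node:dns/promises";

    async function showAddresses(hostname: string): Promise<void> {
      const addresses = await resolve4(hostname); // A-record lookup
      console.log(`${hostname} -> ${addresses.join(", ")}`);
    }

    // If the name servers stop answering, this lookup fails and browsers can no
    // longer find the site by name, even though the web servers behind the IPs may
    // still be running; visitors can only reach a published numeric address
    // (e.g. http://88.80.13.160) directly.
    showAddresses("wired.com").catch((err) => console.error("lookup failed:", err));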

It’s unclear why WikiLeaks went with a free provider, instead of paying for
bulletproof DNS that could withstand attack. But according to EveryDNS, the
distributed denial-of-service attacks that have been dogging WikiLeaks were
threatening to overrun EveryDNS’s servers, which serve some 500,000 sites.

The company responded by notifying WikiLeaks on Wednesday that it was going
to drop the organization in 24 hours, according to a statement on EveryDNS’ website. It reached out to
WikiLeaks on the e-mail address associated with the account, on Twitter, and
even visited the group’s encrypted chat room to try and pass word to the
staff.

That should have been more than enough time for WikiLeaks to move its DNS.
Instead, Thursday night, visitors could no longer reach WikiLeaks.org.

“Any downtime of the wikileaks.org website has resulted from its failure to,
with plentiful advance notice, use another DNS solution,” reads EveryDNS’s
statement.

Rather than tweeting the IP addresses of WikiLeaks hosts, which would allow
visitors to continue to reach the site uninterrupted, WikiLeaks initially used
the outage to encourage donations, tweeting instead: “WikiLeaks.org domain
killed by US everydns.net after claimed mass attacks KEEP US STRONG
https://donations.datacell.com/”.


And a follow-up tweet noted: “You can also easily support WikiLeaks via
http://collateralmurder.com/en/support.html”.


WikiLeaks fans on Twitter discovered and circulated WikiLeaks’ working
addresses on their own, until about three hours after the outage began, when the
organization tweeted: “WIKILEAKS: Free speech has a number:
http://88.80.13.160”.

WikiLeaks followed that up by promoting WikiLeaks.ch as an alternative
address, but that domain, too, turned out to be resolved by EveryDNS, which shut
it down.

WikiLeaks had the four regional domains working on Friday, resolving to hosts
in Sweden and France. Domain-registration records show that WikiLeaks still has
control of the WikiLeaks.org, but for whatever reason, the organization still
has EveryDNS set as its name server for that domain.

The incident isn’t the first time WikiLeaks has suffered from a bureaucratic
snafu. On June 12, WikiLeaks’ secure submission page stopped working when the
site failed to renew its SSL certificate, a basic web protection that costs
less than $30 a
year and takes only hours to set up.

And for years WikiLeaks promised would-be leakers that they’d enjoy the
protection of strong journalist shield laws in Sweden, where WikiLeaks maintains
some of its servers. It wasn’t until August of this year that it emerged that
WikiLeaks hadn’t
registered as a media outlet
in Sweden, and thus wasn’t protected.

That latter disclosure sent founder Julian Assange to Stockholm in August in
an effort to correct the oversight. His romantic entanglements on that trip led
to an ongoing sex-crime investigation and the issuance this week of an Interpol “red
notice”
putting Assange on the international police agency’s wanted
list.

Photo: Julian Assange
Lily Mihalik/Wired.com

Kevin
Poulsen is a senior editor at Wired.com and editor of the award-winning Threat
Level blog. His new book on cybercrime, KINGPIN, comes out February 22, 2011
from Crown.
Follow @kpoulsen on
Twitter.

http://www.wired.com/threatlevel/2010/12/wikileaks-domain/

Missile Launched And 50 Nukes Reported AWOL | via @FIRETOWN

Missile Launched And 50 Nukes Reported AWOL


This is the ideal setup to claim Iranian cybersquatters have attacked nuclear
facilities so we can launch an attack on them:

Communication
With 50 Nuke Missiles Dropped in ICBM Snafu

The Air Force swears there was no panic. But for three-quarters of an hour
Saturday morning, launch control officers at F.E. Warren Air Force Base in
Wyoming couldn’t reliably communicate or monitor the status of 50 Minuteman III
nuclear missiles. Gulp.

Backup security and communications systems, located elsewhere on the base,
allowed the intercontinental ballistic missiles to be continually monitored. But
the outage is considered serious enough that the very highest rungs on the chain
of command — including the President — are being briefed on the incident
today.

A single hardware failure appears to have been the root cause of the
disruption, which snarled communications on the network that links the five launch control centers and 50
silos of the 319th Missile Squadron. Multiple error codes were reported,
including “launch facility down.”

It was a “significant disruption of service,” an Air Force official familiar
with the incident tells Danger Room. But not unprecedented: “Something similar
happened before at other missile fields.”

A disruption of this magnitude, however, is considered an anomaly of
anomalies.

“Over the course of 300 alerts — those are 24-hour shifts in the capsule — I
saw this happen to three or four missiles, maybe,” says John Noonan, a former
U.S. Air Force missile launch officer who first tweeted word of the issue. “This
is 50 ICBMs dropping off at once. I never heard of anything like it.”

“There are plans and procedures available to deal with individual broken
missiles,” Noonan adds, “but they are wholly inadequate to handle an entire
squadron of missiles dropping offline.”

The incident comes at a particularly tricky time for the Obama administration, which is struggling to get the
Senate to ratify a nuclear arms reduction treaty with Russia. In conservative
political circles, there’s a distrust of the nuclear cuts — and a demand that
they be matched with investments in atomic weapon upgrades. Saturday’s shutdown
will undoubtedly bolster that view.

The disruption is also dark news for the Air Force, which has been hustling
to restore the “zero defects” culture that was the hallmark of its nuclear
forces during the Cold War.

After a series of mishaps — including nosecone fuses mistakenly sent to
Taiwan, and warheads temporarily MIA — the Air Force has made restoring
confidence in its nuclear enterprise a top priority. Officers have been fired
and disciplined for nuclear lapses. The Air Force’s top general and civilian
chief have been replaced. A new Global Strike Command has been put in place, to
oversee all nuclear weapons. Nuclear Surety Inspections, once relatively lax, have become pressure
cookers. These days, a few misfiled papers or a few out-of-place troops means
the entire wing flunks the NSI.

“Any anecdotal exposure of a weakness … could result in an unsafe, unsure,
unsecure or unreliable nuclear weapon system,” Maj. Gen. Don Alston, who
oversees the Air Force’s entire ICBM arsenal, told Danger Room last year. “And I
am not encouraged when people can rationalize: ‘but for that mistake, we were,
y’know, kicking ass.’ Well, but for that mistake, you would have passed. But you
didn’t. You failed. Tough business. And it needs to stay that way.”

Yet the Air Force official claims there was “no angst” about Saturday’s
incident.

“Every crew member and every maintainer seemed to follow their checklists and procedures in order to establish
normal communications,” the official says. “I haven’t detected anyone being
particularly upset with what happened.”

http://www.firetown.com/blog/2010/11/09/missile-launched-and-50-nukes-reporte…

file under Dept. of Duh!

love, your favorite cybersquatter at large :)) edd

New Gaza War Reports Combine Tweets, Maps, SMS | Danger Room | Wired.com

New Gaza War Reports Combine Tweets, Maps, SMS

Getting tweets from the war zone is so 2008. The latest social
media advance combines tools like Twitter, text messaging, and online mapping to
gather up first-hand reports, straight from Gaza.

The effort, from Al Jazeera Labs, just got started; the reporting is still
spotty, and the technology is
very much in the testing phase. But the idea is for residents of Israel, Gaza
and the West Bank to send
quick updates
about the conflict from their computers or mobile phones,
through SMS or Twitter. The results are then verified, and posted to a Microsoft
Virtual Earth map.

“Heavy fighting continues in eastern Gaza. Loud explosions,
presumed to be artillery, and machine gun fire heard,” one
report
says. “Palestinian medics confirm Israeli soldiers have shot and
killed a
20-year-old man during clashes at a Gaza protest in
Qalqilya, West Bank,” reads another.
“Nine Egyptian lorries carrying medicine and aid have entered Gaza,”
notes a third.

As Zero Intelligence Agents notes, Al Jazeera is using an open-source software tool
called Ushahidi (Swahili for “testimony”) for the online reporting
experiment. The program was created in early 2008 to document the post-election
violence in Kenya. Coders in Kenya, South Africa, Malawi, Ghana, the Netherlands
and the
United States have contributed to its development.

Meanwhile, that oh-so-old school way to follow war news,
Twitter, is bubbling with minute-to-minute updates from both sides of the
Israel-Hamas divide.


http://www.wired.com/dangerroom/2009/01/getting-tweets/

JUST CHECKING TO VERIFY IT IS A #404 ON NATIONAL SECURITY. IT IS NOT THE SITE AND [FOR ONCE] IT IS NOT JUST ~MY~ SERVER]

Lawsuit Targets Mobile Advertiser Over Sneaky HTML5 Pseudo-Cookies | Threat Level | Wired.com

Lawsuit Targets Mobile Advertiser Over Sneaky HTML5
Pseudo-Cookies

A New York mobile-web advertising
company was hit Wednesday with a proposed class action lawsuit over its use of
an HTML5 trick to track iPhone and iPad users across a number of websites, in
what is believed to be the first privacy lawsuit of its kind in the mobile
space.

The company, Ringleader Digital, uses HTML5’s client-side
database-storage capability as a substitute for the traditional cookie tracking
employed by all major online ad companies. Mobile Safari users visiting sites
with Ringleader ads are assigned a unique ID number which is stored by the
browser, and recalled by Ringleader whenever they revisit.
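
As a rough, hypothetical sketch of that pattern (TypeScript for the browser; it
uses localStorage for brevity rather than the HTML5 database storage described
here, and is not Ringleader’s actual code):

    // Hypothetical sketch of persistent client-side tracking outside the cookie jar.
    // The storage key echoes the "RLDGUID" label reported below, but the mechanism
    // shown (localStorage) is only a stand-in for HTML5 database storage.
    function getOrCreateTrackingId(storageKey: string = "RLDGUID"): string {
      let id = window.localStorage.getItem(storageKey);
      if (id === null) {
        // First visit: mint an ID and persist it. Clearing cookies does not touch
        // this store, so the same ID reappears on later visits.
        id = crypto.randomUUID();
        window.localStorage.setItem(storageKey, id);
      }
      return id;
    }

    // Every ad request can then carry the same identifier across sites and sessions.
    const trackingId = getOrCreateTrackingId();
    console.log(`would attach id=${trackingId} to ad requests`);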

But the tracker, labeled RLDGUID, does not go away when one
clears cookies from the browser. Our sister site Ars Technica reported last week that users savvy enough to
find and delete the database have found it returning mysteriously with the same
ID number as before — a result the lawyers suing Ringleader say they’ve
reproduced.

“You can’t get rid of that database,” says Majed Nachawati,
a Dallas attorney behind the Ringleader lawsuit. “You’re left with this database
tracking you and your phone and your viewing habits on the net, which is a
violation of federal privacy laws.”

Ringleader said it committed no wrongdoing. “To the extent
that the plaintiffs are alleging that Ringleader violated any laws relating to
consumers’ privacy, Ringleader intends to defend its practices vigorously,” Bob
Walczak, CEO of Ringleader Digital, said in an e-mail.

The lawsuit lodged Wednesday in Los Angeles federal court also
names as defendants a number of companies who’d allegedly been serving the
Ringleader trackers on the mobile versions of their sites: Surfline,
WhitePages.com, The Travel Channel, CNN Money, Go2 and Merriam-Webster’s
dictionary site.

The lawsuit comes in the wake of a similar suit filed in
July against MTV, ESPN, MySpace, Hulu, ABC, NBC and Scribd for using storage in
Adobe’s Flash player to re-create cookies deleted by users of nonmobile devices,
allegedly in violation of federal computer-intrusion law.

In Threat Level’s testing Thursday, the RLDGUID uncookie was
still being served from The Travel Channel, Go2 and Merriam-Webster, but not the
other sites named in the lawsuit. In our tests, the database entry did not
reappear. It’s not known if Ringleader has changed its system’s behavior.

HTML5’s database storage is a highly touted feature designed
to allow websites to locally store data on the user’s computer — a boon for
offline use of a browser app.

The Ringleader site provides an opt-out action that can be
implemented by pointing your mobile phone’s browser to a special page on its
website referenced in its privacy policy. How anybody would know that is unclear,
because the sites in Ringleader’s networks do not inform consumers of that fact,
according to the lawsuit.

“Please note that opting out does not stop advertisements
from being served to your mobile device, rather, it prevents us from associating
non–personally identifiable data with your device’s browser starting from the
time you implement the opt-out utility,” reads Ringleader’s opt-out page. “It
does not affect data collected prior to that time.”


http://www.wired.com/threatlevel/2010/09/html5-safari-exploit/

iSPY CyberSpies Twitter – SSL Certificate Errors: “We believe this issue has been resolved.” – and you would be wrong!

Jul 27th, Tue

SSL Certificate Errors (1 day ago)

We’re looking into expired SSL certificate errors now and working quickly to
resolve it. The problem affects a small portion of TweetDeck API requests.

Update (11:05 PST, 18:05 UTC): We believe this issue has
been resolved.


“Currently disabled”

via slashdot.org

FUCK A DUCK. REQUESTING BACK-UP. #404 “LINK CORRUPTED”

“CERTIFICATE HAS ALREADY BEEN TAKEN… IT WAS PROBABLY AN HONEST MISTAKE”

“Whoa there: this triggered algorithm”

iSPY CyberSpies. CyberCrimes perhaps? hi daddy, have YOU met trevor?

[data read via http://www.google-analytics]

http://status.twitter.com/post/867430677/ssl-certificate-errors