Hardware hackers building interactive gadgets based on Arduino microcontrollers are finding that a recent driver update Microsoft deployed over Windows Update has bricked some of their hardware, leaving it inaccessible to most software on both Windows and Linux. This came to us via the hardware hacking site Hack A Day.
The latest version of FTDI’s driver, released in August, contains some new language in its EULA and a feature that has caught people off guard: it reprograms counterfeit chips, rendering them largely unusable. Its license notes that:
Use of the Software as a driver for, or installation of the Software onto, a component that is not a Genuine FTDI Component, including without limitation counterfeit components, MAY IRRETRIEVABLY DAMAGE THAT COMPONENT
The license is tucked away inside the driver files; normally nobody would ever see this unless they were explicitly looking for it.
The result is that well-meaning hardware developers updated their systems through Windows Update and then found that the serial controllers they relied on stopped working. Worse, it’s not simply that the driver refuses to work with the chips: the driver reportedly rewrites the counterfeit chip’s USB product ID, so the chips stop working with Linux systems as well. This has happened even to developers who thought they had bought legitimate FTDI parts.
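Reports attribute the cross-platform bricking to the driver zeroing out the counterfeit chip’s USB product ID (PID). A minimal sketch of why that disables the chip everywhere, using an illustrative driver-matching table rather than real OS internals: operating systems bind USB devices to drivers by (vendor ID, product ID) pairs, and a PID rewritten to 0x0000 matches nothing.

```python
# Illustrative sketch only: operating systems match USB devices to
# drivers by vendor/product ID. The table below is hypothetical, not
# a real OS driver database.

# (vendor_id, product_id) -> driver name
DRIVER_TABLE = {
    (0x0403, 0x6001): "ftdi_sio",  # IDs used by genuine FTDI FT232 chips
}

def find_driver(vid, pid):
    """Return the driver bound to this device, or None if no match."""
    return DRIVER_TABLE.get((vid, pid))

print(find_driver(0x0403, 0x6001))  # ftdi_sio -- working chip
print(find_driver(0x0403, 0x0000))  # None -- PID zeroed, no driver binds
```

Since the lookup fails on every operating system that matches by ID, the chip is effectively dead until its PID is rewritten back.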
Nice four-hander here: the rights of the end user, the rights and duties of the vendor, the rights and liabilities of the legitimate parts maker, and the potential liabilities of Microsoft for serving up malware-to-counterfeits via Windows Update.
According to the affidavit from FBI Special Agent Thomas M. Dalton, the person who sent a fake bomb threat to cause Harvard to evacuate several buildings during exams used a throwaway email address from Guerrilla Mail, which he contacted via Tor. The FBI caught him anyway because the sender of the bomb threat accessed Tor via the Harvard wireless network.
The Guerrilla Mail FAQ says that “Logs are deleted after 24 hours,” but the FBI apparently got there inside that window. Presumably using the Guerrilla Mail logs, the FBI determined that the sender of the emails used Tor, an anonymization tool, to connect to Guerrilla Mail. Although the affidavit doesn’t spell any of this out, Harvard’s logs allowed it to figure out who had been using its wireless network to connect to Tor. The FBI then somehow fingered a suspect (or suspects?), perhaps by correlating the limited pool of Tor users with the people who had exams in the buildings evacuated due to the bomb threat. I’d love to know how many people were in the intersection of those two sets. When confronted by the FBI, a Harvard undergrad confessed. One has to wonder, though, whether there would have been sufficient evidence to convict beyond a reasonable doubt without that confession. After all, there are other ways to connect to Tor.
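The correlation step speculated about above boils down to a set intersection. A toy sketch, with entirely hypothetical names and data sources, just to illustrate how quickly two coarse filters can narrow a suspect pool:

```python
# All names and sets here are hypothetical; this merely illustrates
# the deanonymization logic the affidavit seems to imply.

tor_users_on_campus_wifi = {"alice", "bob", "student_x"}    # from network logs
exam_takers_in_evacuated_buildings = {"carol", "student_x"} # from registrar data

candidates = tor_users_on_campus_wifi & exam_takers_in_evacuated_buildings
print(candidates)  # {'student_x'}
```

The smaller the population of Tor users on the network, the more damning membership in that intersection looks.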
Tor is widely considered to be the best tool available for online anonymity, so this serves as a cautionary lesson on how difficult it is to be anonymous online.
The secure sockets layer (SSL) credentials were digitally signed by a valid certificate authority, an imprimatur that caused most mainstream browsers to display an HTTPS prefix in front of the addresses and show other indicators certifying that the connection was the one authorized by Google. In fact, the certificates were unauthorized duplicates, issued in violation of rules established by browser makers and certificate authority services.
The certificates were issued by an intermediate certificate authority linked to the Agence nationale de la sécurité des systèmes d’information, the French cyberdefense agency better known as ANSSI. After Google brought the certificates to the attention of agency officials, the officials said the intermediate certificate was used in a commercial device on a private network to inspect encrypted traffic with the knowledge of end users, Google security engineer Adam Langley wrote in a blog post published over the weekend. Google updated its Chrome browser to reject all certificates signed by the intermediate authority and asked other browser makers to do the same. Mozilla, developer of Firefox, and Microsoft, developer of Internet Explorer, have followed suit. ANSSI later blamed the mistake on human error and said it had no security consequences for the French administration or the general public, but the agency has revoked the certificate anyway.
An intermediate certificate authority is a crucial link in the “chain of trust” at the heart of connections protected by SSL and its successor protocol, transport layer security (TLS). Because intermediate certificates are signed by a root certificate embedded in the browser, they have the ability to mint an unlimited number of digital certificates for virtually any site, and those individual certificates will be accepted by default by most browsers.
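The chain-of-trust logic, and why blocking one intermediate neutralizes every certificate it minted, can be sketched in a few lines. This is a deliberately simplified model with hypothetical certificate names: real validation also checks cryptographic signatures, expiry dates, hostnames, and revocation, none of which appear here.

```python
# Toy model of certification-path validation. A browser accepts a leaf
# certificate iff walking the issuer chain reaches a trusted root
# without passing through a blocked authority. Names are hypothetical.

TRUSTED_ROOTS = {"Example Root CA"}  # roots embedded in the browser

# subject -> the authority that signed it (the issuer chain)
SIGNED_BY = {
    "forged www.google.com cert": "Rogue Intermediate CA",
    "Rogue Intermediate CA": "Example Root CA",
}

def chain_is_trusted(subject, blocked=frozenset()):
    """Walk up the issuer chain; trust iff we reach an unblocked root."""
    while subject not in TRUSTED_ROOTS:
        if subject in blocked or subject not in SIGNED_BY:
            return False
        subject = SIGNED_BY[subject]
    return subject not in blocked

# Before the browser update: the forged leaf chains to a trusted root.
print(chain_is_trusted("forged www.google.com cert"))  # True
# After blocking the intermediate, the same chain is rejected.
print(chain_is_trusted("forged www.google.com cert",
                       blocked={"Rogue Intermediate CA"}))  # False
```

This is why Google’s fix targeted the intermediate itself rather than chasing individual forged certificates: cutting one link invalidates everything below it.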
Maybe it’s time to dust off and update my article on digital signatures and digital certificates, The Essential Role of Trusted Third Parties in Electronic Commerce, 75 Ore. L. Rev. 49 (1996). I think this was the first article published in a US law review on the topic, and even though it’s held up well, there have been many developments in nearly 20 years. On the other hand, there are three new papers I need to finish first…
U.S. law puts the onus on the individual to protect his or her own privacy with only a small number of exceptions (e.g. attorney-client privilege). In order to protect privacy, one usually has three possible strategies: to change daily behavior to avoid privacy-destroying cameras or online surveillance; to contract for privacy; or to employ Privacy Enhancing Technologies (PETs) and other privacy-protective technologies. The first two options are very frequently unrealistic in large swaths of modern life. One would thus expect great demand for, and widespread deployment of, PETs and other privacy-protective technologies. But in fact that does not appear to be the case. This paper argues that part of the reason is a set of government and corporate policies which discourage the deployment of privacy technology. This paper describes some of those policies, notably: (1) requiring that communications facilities be wiretap-ready and engage in customer data retention; (2) mandatory identification both online and off; (3) technology-limiting rules; and also (4) various other rules that have anti-privacy side effects.
The paper argues that a government concerned with protecting personal privacy and enhancing user security against ID theft and other fraud should support and advocate for the widespread use of PETs. In fact, however, whatever official policy may be, the U.S. government’s actions reflect a prevailing attitude that PETs and other privacy-protecting technology must be kept on a leash.
A last-minute update reconsiders the argument in light of the Snowden revelations about the widespread dragnet surveillance conducted by the NSA.