The title of the presentation is:
Disclosing Security Vulnerabilities: How to Do It Responsibly
I'm currently engaged in case law research for a presentation on this topic and using Westlaw in addition to my usual journalism searches. Not surprisingly, most of the cases I've read are more about violations of non-disclosure agreements and disclosure of trade secrets by disgruntled employees. However, my friend, Brenno de Winter, recently published an article titled, "Researchers Show How to Crack Popular Smart Cards." I was interested to read that researchers at universities in The Netherlands and in Germany have broadened the research done by the MIT undergrads who were not permitted to discuss or release their source code.
What I'm discovering from my research about computer security disclosure is that a lot of the heat is focused primarily on academia. Remember Professor Ed Felten of Princeton's computer science department and the SDMI challenge? His team won the challenge, but its members faced prosecution if they talked about it or tried to publish their academic research. The challenge explicitly stated: "So here's the invitation: Attack the proposed technologies. Crack them."
However, what if the vendor producing an insecure product does not issue an outright challenge, but simply puts the product into the marketplace? A good example is the Mifare Classic and NXP Semiconductors. They fought the battle against the MIT students and, for the most part, won, because the students' source code was not distributed.
However, a group from the Dutch Radboud University Nijmegen recently assembled and published complete research that would allow someone to build a cloned card. The Dutch court said that "...researchers shouldn't fall victim to mistakes made by suppliers," and allowed publication. I was also amazed to read that a university in Germany cut down an actual chip, and by viewing the IC layers under a microscope, its researchers were able to figure out how the chip works and derive the algorithm.
Two different ways of figuring out security vulnerabilities, but with the same result: it's now out there and readily available to a determined attacker. On the other hand, some might say that it's also readily available to a security researcher who can assess the vulnerabilities and produce a better design the next time around.
This debate is nothing new--consider Copernicus's and Kepler's revolutionary teachings and publications. But what surprises me is precisely the fact that it is nothing new: the prosecutions--including criminal ones--for teaching, discussing, and publishing are still a reality. Where would we be now if Galileo's Dialogue Concerning the Two Chief World Systems hadn't been published or discussed because he feared being burned at the stake? We'd still be in a Ptolemaic system where the planets revolved around us--a pretty egocentric way to view life (picture of a geocentric universe by Portuguese cosmographer and cartographer Bartolomeu Velho, 1568 [Bibliothèque Nationale, Paris] above).
Considering a modern perspective on security vulnerability disclosures, it would be a more insecure world without discussions about design flaws. Professors like Ed Felten, although perhaps not (yet!) as influential as Galileo, are to be lauded, not threatened with criminal prosecution.