
    Sunday, November 16, 2008

    Discussion with Far McKon, The Hacktory

    In the car on the way home from The Last HOPE in NYC in July 2008, three of us (Nothingface, Professor Rad, and Infochown) decided we wanted a hacker space in Maine. We were inspired by a presentation at HOPE about other hacker spaces.


    Since then, we’ve attended four 2600 meetings where we’ve spent a bit of time discussing the hacker space idea. Funding and a name (“hack” or not “hack” is the question) are the sticking points for establishing the space. With all of the companies going under in Southern Maine, there is a plethora of rental space (both business and industrial) available, so we have plenty of options, just not the funds yet.


    To find a solution to these issues and learn from others’ experiences, I’ve summoned wise hacker space organizers and asked if they could share their ideas, organizational structure, and “don’t make these mistakes like we did” stories with me. (The “hippy problem” stories are always the funniest.)


    The following is a discussion I had today with Far McKon, organizer of Hacktory in Philadelphia. (I took notes while we were talking, so forgive me, Far, if I don’t have everything exactly as you told me.) Far also said that the numbers are a general ballpark, so please don’t take this as a price quote.


    If Maine’s hacker space can pair with the art community and, perhaps, also attract the entrepreneur/writer community like IndiHall in Philadelphia, I think we could have a pretty awesome co-working space!


    Discussion with Far McKon, November 16, 2008


    I spoke with Far, organizer of Hacktory in Philadelphia, about how they set up and manage their space. They are moving out of a free space that was loaned to them by a company: it is too small, and they cannot use heavy machinery and big tools there. They can’t get that equipment up to their 3rd-floor space anyway; the building has a crazy, narrow winding staircase, and zoning would not allow it.


    To solve their space problem, they grouped together with the art community in Philadelphia. (Brilliant! This is something that Maine’s Hacker Space surely could do—we have a HUGE artist community in Portland.) They will soon have a tech incubator space in the basement of a building that is zoned for heavy machinery/industry.


    Hacktory’s affiliation with the art community is crucial for what they are doing. The hacker space in Somerville, Massachusetts also paired with the art community. Many artists use heavy machinery and tools for their art, and they either don’t have the space in their homes/apartments, can’t use fire and big machinery without violating zoning or fire codes, or can’t afford their own large studio.


    The deal with the two other organizations grouping together with Hacktory works like this: one of the groups (not Hacktory) is always there (9-5) to control access, sign for package deliveries, answer phones, provide secretarial services, etc.


    Physical Space:


    They have a large industrial space with a big, open work area in the middle. Locking studios (cubicles?) are grouped around the central open space. Those studios are small (about 50-100 sq. ft.). Some of the locked studios contain the more dangerous or expensive equipment; you get a key by taking 1-2 hour classes on “how not to break the stuff.” Some of the studios are rented to hobbyists who want a locked space, and others are rented to for-profit businesses; there is a different fee structure for these two groups. All of the studio renters get the benefit of an address, secretary, shipping, etc.


    Another idea, although not what Hacktory is doing, is that some hacker spaces charge about $50-75/month for you to keep your own Craftsman rolling case there. You work at the shared benches using the tools in your rolling case; when you’re done, you put your stuff away in the case and roll it into a locked area.


    Fee Structure for Studios:


    $1.25/sq. ft. for hobbyists

    $2.25/sq. ft. for for-profit businesses (includes secretary and shipping services)
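
    To make the ballpark math concrete (again, these are Far’s rough numbers, not a quote), here is a minimal sketch in Python of what a studio would run per month under the two tiers:

        # Rough monthly studio rent under Hacktory's two fee tiers.
        # The per-square-foot rates are the ballpark figures above, not a quote.
        RATES_PER_SQFT = {
            "hobbyist": 1.25,
            "for_profit": 2.25,  # includes address, secretary, and shipping services
        }

        def monthly_rent(sq_ft, tier):
            """Estimated monthly rent for a studio of the given size."""
            return sq_ft * RATES_PER_SQFT[tier]

        # The studios run about 50-100 sq. ft.:
        print(monthly_rent(100, "hobbyist"))    # 125.0 -> $125/month
        print(monthly_rent(100, "for_profit"))  # 225.0 -> $225/month

    So a 100 sq. ft. studio is roughly $125/month for a hobbyist and $225/month for a for-profit business.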


    Organizational Structure:


    Hacktory is a 501(c)(3) non-profit. They have also registered with the state as an educational organization. With that status, they pay a lot less for insurance, which runs about $180-200/month.

    As for paying members, they have about 5 who are active and 7 who are occasional.


    How Does Hacktory Raise Funds?

    They offer classes and they have paying members. The classes they run cost about $15.00 per hour of class time. The class a member needs to take to operate the locked machinery is about $30.00 for 2 hours of instruction.


    Membership Rates for Hobbyists:

    • $15.00/month for open hours (“open” means when there is an organizer/manager there);
    • A few students pay no fees in exchange for watching the space, which could allow for more open hours;
    • $65-85/month for Saturday and Sunday access only;
    • $125.00/month for open access with your own key card.

    Saturday, November 15, 2008

    Maine's Biggest Cyber Crimes Case--James Wieland

    All day Thursday of this week, I got e-mails from friends in different computer law, hacker, cyber crimes, and computer forensics communities. They'd ask, “Did you see the hacker case? What do you think? Did he do it?”


    The case they are referring to is, to date, Maine’s biggest cyber crimes case. This is bigger than the warez sales going on in Portland back in 2005.


    But there is something different about this case that’s intriguing to me. In the warez case, there was a group out to profit from copying software; it was an unsophisticated, but high volume, operation. The warez trade isn’t too difficult to do and it’s not terribly interesting.


    This case is different. Although I’ve spoken to James, we knew better than to discuss the case. The facts about which I write are only those available to the public via news organizations. The news articles--from here down to the D.C. area--all have the same “evil hacker breaks in and steals stuff” theme, but only time will tell what’s really going on behind the headlines as the facts, especially the technical aspects of the case, are presented in court. Was this curiosity and experimentation that went beyond rational bounds or, on the other hand, a well-designed and calculated attack with fraudulent intent?


    The news reports state that James Wieland, a student at the University of Maine, spread a Trojan horse program (I don’t know if it was a worm or a virus, but this will matter in the case) by adding it as an attachment to an e-mail that contained a video game. When the recipients opened the attachment, the malicious program was executed. The Trojan reportedly contained a keystroke logger, and it was reported that James had been collecting and storing the logged data since August 2007. But the intriguing part--and I’m sure an aspect of the case that will be highly debated in court--is why James allegedly collected this information and why he didn’t do anything with it. There were no reports that he used anything he allegedly collected.


    There are some hypotheses in the case: 1) James was the victim of a botnet attack on his computer, meaning he didn’t know about the Trojan or what it would do; 2) he released the Trojan as a curious experiment (didn’t write the code but, in script kiddie fashion, applied and released it) and didn’t quite know what it was doing or how to stop it (the classic Morris Worm scenario); 3) he wrote the code and released the Trojan with malicious intentions, wanting to collect private data from the victims and either sell it or use it for nefarious or fraudulent purposes.


    What’s striking to me about this case is that James has a lot to lose. He just got engaged in Italy last month, worked for a Christian school, has his own business consulting and web design company, and seems to be just starting his professional and family life. He just doesn’t fit the typical profile of a malicious cracker. If convicted, these felony charges could mean more than 5 years in prison. The District Attorney states that the 5 year sentencing estimate may be just a start and that more incriminating data might be found now that they’ve cleaned his place out of all electronic equipment.


    I wonder if the University of Maine’s policy of attaching students’ and faculty’s first and last names to our host names (HEY DEFENSE COUNSEL, YOU LISTENING?) had anything to do with them tracing an IP address to James Wieland. This can be spoofed, and if James was clever enough to write the program and orchestrate this attack, wouldn’t he have also been able to obscure his IP address?
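
    For the curious, here is a minimal Python sketch of the kind of lookup anyone (including an investigator) can do; the campus-style hostname in the comment is hypothetical, and the example needs network access:

        import socket

        def reverse_lookup(ip):
            """Return the PTR (reverse DNS) hostname registered for an IP."""
            hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
            return hostname

        # If a university names hosts after their users, a record might look
        # like "jane-doe.resnet.example.edu" -- the IP alone then hands an
        # investigator a person's name.
        print(reverse_lookup("8.8.8.8"))  # e.g. "dns.google"

    One reverse lookup, and a host name that embeds a first and last name does the investigator’s work for them.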


    What I can do is bring as much as I can from Wieland’s case to our computer law and ethics class, COS 499, at the University of Southern Maine in spring 2009. As with everything in my class, we analyze computer law and cyber crimes cases from an unbiased perspective. I have hackers as guest speakers, as well as a few influential FBI and CIA agents, discussing different perspectives on electronic crimes and how to prevent them with better computer security built, from the ground up, into the software, hardware, and network design practices we teach at U. Maine.


    I will be going to as many of Wieland’s hearings as I can; most certainly, I’ll be at the arraignment in January 2009. No matter the outcome, there is a lot that security researchers, information technologists, and computer scientists can learn from this case. A lot of lessons will be learned both by James Wieland and by the University of Maine.


    I urge readers to please hold judgment until the facts—especially the technical aspects—are presented. I want to read more articles or hear in court that this is more than tracing an IP address to Wieland. Right now, there are no details beyond that.


    When we hear about the Trojan’s code (such as how it worked--I want the functions of the key algorithms discussed in court!), how and where the data was obtained and stored, 4th Amendment practices (forensic hashing of the hard drive), access to the data files, and the Internet access to Wieland’s computer network, then we can discuss what went wrong.
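
    On the forensic hashing point: the standard practice is to image the drive and record a cryptographic hash of the image at acquisition time, so the examiner can later prove the evidence wasn’t altered. Here is a minimal sketch of that verification step (the image file name is hypothetical, and I’ve picked SHA-256; examiners often use MD5 or SHA-1 as well):

        import hashlib

        def hash_image(path, chunk_size=1 << 20):
            """SHA-256 of a disk image, read in chunks to bound memory use."""
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                while chunk := f.read(chunk_size):
                    digest.update(chunk)
            return digest.hexdigest()

        # Hash recorded at acquisition vs. hash recomputed before analysis:
        acquired = hash_image("suspect_drive.img")  # hypothetical file name
        verified = hash_image("suspect_drive.img")
        assert acquired == verified, "image changed; chain of custody broken"

    If the two hashes match, both sides can at least agree that the bits analyzed are the bits seized.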


    I’ll keep you updated. If you see news articles or find information online, please e-mail them to me.

    Wednesday, November 12, 2008

    Presentation for the Maine Association for Law and Innovation

    Yesterday I presented at Maine Law about my security vulnerability disclosure research. It was great to be back at Maine Law, but this time as a lecturer. I spent so many hours in the moot court room as a student that, just for a split second, I had a fleeting feeling of "OMG, am I going to get cold called on?" when I entered the room. Some things from law school never leave you.

    I have made some changes to the disclosure presentation I gave at Pumpcon a few weeks ago. For instance, Juliet (aka Victoria) said that I had too much stuff on my slides, so I tried to pare that down. She was totally right about that.

    Also, I changed the title from "How to Responsibly Disclose" to a title that didn't reflect the ethical ramifications of the word "responsible." Using the industry practice of what is called "responsible disclosure" is not always the most responsible way to disclose vulnerabilities. The more research I do, and the more well-known security researchers I discuss this topic with, the more I find that sometimes other types of disclosure (full or partial) are what's needed for better security. My legalese peers may not agree with me, but from the computer researcher's perspective, "responsible disclosure" does not always live up to the ethical implications of its name.

    Last, but not least, I added a lot more about disclosing physical/electronic security vulnerabilities. I did a lot of research into lock picking laws and industry practices. Really fascinating and it makes me want to do more lock picking should the opportunity present itself.

    However, are electronic biometric locks with cryptographic keys the future of the lock industry? I think they may be. That's good and bad--like all technology, isn't it? I've always been interested in the idea of faking biometric scans. For example, I know that a retina scan is really hard to fake, but mirrored or cloudy contact lenses can mess up the scan. So, if you get your initial scan (such as going through the international terminal in Frankfurt, Germany) with these contact lenses in, will the computer reject the scan, or will that baseline scan, albeit "fake," be "yours"? I'll have to do a post about biometric scans because I did a lot of related research when I was working on RFID technologies and legislation.

    Saturday, November 8, 2008

    Altering Airline Boarding Passes—Schneier and Soghoian

    One of the conversations we had at the November 2600 meeting was about Bruce Schneier’s alteration of airline boarding passes and using one to get through a TSA checkpoint. Schneier admits that it is illegal, and if done, there is a possibility of arrest. (Note: If you’re reading this and considering doing it, remember that you are not Bruce Schneier. I don’t truly think that the Feds would arrest him, but they would arrest you.)


    At the meeting, we discussed what those illegalities might be. To do so, we considered how fraud differs from a hoax or forgery. In short, fraud is using deception to unlawfully take property (usually money) or services from another. How do those theories apply to altering a boarding pass? Go to the link to see an altered boarding pass used by Jeffrey Goldberg; he even upgraded himself to 1st class for priority boarding. New York Senator Schumer was nervous about this exact scenario when he offered a bill that would treat these “federal criminals” named “Joe Terror” like a “…19 year old who makes a fake ID to buy a 6 pack of beer.” (Hmm...Joe Terror sounds a lot like Joe Six Pack.)


    Not a "Joe Terror" or "Joe Six Pack," a PhD student named Chris Soghoian wrote a program, accessible through his website, that would generate a fake boarding pass. What happened next is discussed in his blog: in short, the FBI smashed the glass on his front door, took his computer equipment, and taped a search warrant (issued at 2 AM) to his kitchen table. But how does the law address altering boarding passes? Consider this section of federal regulation addressing the falsification of airline tickets or boarding documents:

    From the Code of Federal Regulations, Title 49, Volume 8; October 1, 2004 revision [Page 302]:

    TITLE 49--TRANSPORTATION

    CHAPTER XII--TRANSPORTATION SECURITY ADMINISTRATION, DEPARTMENT OF HOMELAND SECURITY

    PART 1540--CIVIL AVIATION SECURITY: GENERAL RULES

    § 1540.5 Terms used in this subchapter.

    Sterile area means a portion of an airport, defined in the airport security program, that provides passengers access to boarding aircraft and to which the access generally is controlled by TSA, or by an aircraft operator under part 1544 of this chapter or a foreign air carrier under part 1546 of this chapter, through the screening of persons and property.


    Subpart B--Responsibilities of Passengers and Other Individuals and Persons

    Sec. 1540.103 Fraud and intentional falsification of records.

    No person may make, or cause to be made, any of the following:

    (a) Any fraudulent or intentionally false statement in any application for any security program, access medium, or identification medium, or any amendment thereto, under this subchapter.

    (b) Any fraudulent or intentionally false entry in any record or report that is kept, made, or used to show compliance with this subchapter, or exercise any privileges under this subchapter.

    (c) Any reproduction or alteration, for fraudulent purpose, of any report, record, security program, access medium, or identification medium issued under this subchapter.


    Below is a section of the U.S. Code that applies to altering a document regarding a “matter within the jurisdiction of the executive, legislative, or judicial branch of the Government”:



    United States Code
    Title 18. Crimes and Criminal Procedure
    Part I. Crimes
    Chapter 47. Fraud and False Statements

    18 U.S.C. § 1001
    (a) Except as otherwise provided in this section, whoever, in any matter within the jurisdiction of the executive, legislative, or judicial branch of the Government of the United States, knowingly and willfully--
    (1) falsifies, conceals, or covers up by any trick, scheme, or device a material fact;
    (2) makes any materially false, fictitious, or fraudulent statement or representation; or
    (3) makes or uses any false writing or document knowing the same to contain any materially false, fictitious, or fraudulent statement or entry;

    shall be fined under this title or imprisoned not more than 5 years, or both.


    Although these codes would answer our question about Bruce Schneier’s experiment with altered boarding passes, they do not exactly cover Chris Soghoian and his website that created boarding passes. Most people who saw it when it was up (I did) thought it was a parody. Here’s what Chris recently said about that experience: “In 2006, the FBI investigated me for some of my research into boarding pass security. While no charges were ever filed, it's reasonable to state that I have little affection for the DOJ computer crimes section.”


    In summary, altering boarding passes--for fraudulent purposes or not--is covered under these statutes. Beware if you’re not Bruce Schneier. And if you are Bruce Schneier or Chris Soghoian, thank you for your security research and for potentially “taking a hit for the team."

    Wednesday, November 5, 2008

    Pumpcon, Philadelphia, PA, October 25, 2008—PART TWO, Computer Software and Hardware Security Vulnerabilities


    The second part of my research asks: what are the differences between the electronic/physical security and computer software/hardware communities? If these communities have different ethical opinions regarding disclosure, why do they differ?


    At Pumpcon, a conference attendee from one of the U.S.’s largest electronic/physical security companies answered: “It’s because I’ll get my ‘arse’ fired if I talk about vulnerabilities and I’ll probably never work again.” He also made some comment about worrying about his personal safety post-disclosure.


    Another attendee, who works for a computer security firm, said, “Do it anonymously!” That’s harder to do with physical/electronic security vulnerabilities; however, disclosures in both communities are taken more seriously when there is proof of the vulnerability, right? It’s easy to post about a computer vulnerability to an online bug report site with your IP address obscured; it’s another thing to send photos of yourself clad in burglar attire (or not--see the picture above of one of the guys from MIT, taken from their Defcon presentation) breaking into something. Although the possibility of arrest is high, some are willing to take the risk. As John Benson, jur1st, calls it, “taking a hit for the team.”


    An example of the contrast between these two communities that I used in my Pumpcon presentation was Anatomy of a Subway Hack, a Defcon presentation by three MIT undergraduate students: Russell Ryan, Zach Anderson, and Alessandro Chiesa. These guys really took a hit for the team, with the feds, present in large numbers at Defcon, ready to arrest them in Las Vegas before their presentation.


    Does one group have much more to risk than the other? Is it much more risky (to the discloser and the vendor) to disclose how to beat electronic/physical security measures as opposed to computer software/hardware ones? If so, how do proponents of full or responsible disclosure do it?


    They do it at HOPE, Defcon, BlackHat, Pumpcon, Shmoocon, and other computer security hacker conferences. The risks disclosers face are evidenced by the all-too-frequent arrests at these conferences.


    One HOPE, BlackHat, and Pumpcon presenter had some important information to impart, and I learned it directly affected a company I know. Travis Goodspeed disclosed vulnerabilities in a Texas Instruments chip that is commonly used in biomedical devices and small consumer electronics. There are two debugging ports on this chip; if accessed, one could delete and replace the software on the chip. If your company was using this chip, would you want to know about this design flaw?


    Travis presented about this at other conferences before Pumpcon, but at this conference I was able to ask him how he addressed it with Texas Instruments. He said that they did talk with him about the security vulnerability, and about the fact that he has written viruses able to take advantage of it; TI was receptive to discussing it with him.


    When asked what TI could have done better to facilitate more bug reports, he said it would have been good for them to at least give him a contact person to e-mail, so he wouldn’t have to repeatedly go through the general information route when he wants to report vulnerabilities he discovers. And they should fix the flaws; as of the Pumpcon conference, they still exist.

    Tuesday, November 4, 2008

    Pumpcon, Philadelphia, PA, October 25, 2008—PART ONE, Physical and Electronic Security Vulnerabilities


    When I was growing up, one of the benefits of having a CIA dad was that I got to play with the cool stuff he brought home. As a result, I was trained to beat lie detectors, to mentally isolate physical stimuli (with training on a biofeedback machine), to make instant keys with a magical metal that melts within seconds in a spoon held over a lighter flame, and to break into the doors and windows of the house in which I grew up. He also had a very cool lock picking kit, and once, when I locked myself out of my car in a dark parking lot, my Dad showed up with the kit and had my car open in seconds. Growing up, I thought that was a skill most dads had and one every kid should be taught. Those useful skills ended up helping me in many situations.


    So why are the best lock picking techniques and tools kept secret, or even illegal to possess in D.C.? Is this another example of “security through obscurity”? I’ve spent some time in the lock picking villages at the HOPE and Defcon conferences. Why aren’t there more of these opportunities available to the general public, i.e., not only to locksmiths or computer security conference attendees?


    This was one of the issues I addressed in my presentation at Pumpcon, and I had a great time talking about it. Soon after Defcon, and inspired by the events surrounding the three undergrads from MIT, I began researching how to disclose security vulnerabilities. I was curious about the divide between electronic/physical security disclosures and those involving computer software and hardware: why isn’t each group equally interested in truthful discussions of flaws? Shouldn’t we, as consumers, know if something isn’t quite as secure as advertised--whether from the software designers who protect our home computers or the mechanical engineers who designed the locks on our houses?


    It seems as if hardware and physical/electronic security hacks take a long time to be exposed. For example, Marc Tobias presented at HOPE and Defcon in 2008 about how to pick Medeco key locks. Yet, according to my fed sources, Medeco lock vulnerabilities have been known for more than a decade. Why weren’t they publicly discussed? I’ve been told that it takes a highly skilled lock picker--or a locksmith--to successfully pick Medeco locks.


    But surely, there were skilled lock pickers out there, so why the silence? I found the following on a site that discusses Tobias’ book:


    “A detailed analysis is available together with a video demonstration that clearly shows the method of bypass. This publication has been restricted to locksmiths and the professional security community because of the simplicity of the technique and the potential security ramifications that could result from a public disclosure of the exact method. If you have security responsibility, you may contact the author for access to the restricted document. The password has been posted on ClearStar for security professionals.”


    I know that the locksmith community has a lot of power, but do they have the power to silence discussion about how to pick locks? Perhaps, especially considering that it’s illegal to own a lock picking kit in some states unless you’re a licensed locksmith. Is this a good way to make things more secure or, on the other hand, an example of legislation protecting jobs in an industry? This is how the Freemasons started: they protected the secret ingredients for making cement, making the formula a coveted secret guarded by those sworn into a brotherhood, which in turn kept the brothers in guaranteed jobs.


    For a modern example, someone in the crowd at Pumpcon told me that Joe Grand, aka Kingpin, gets death threats regarding his lock picking research. I’d like to find more information regarding this statement. (Denied by Kingpin--see comments.) If true, that’s a sobering example of the power of a brotherhood of some kind protecting their own. Is lock picking knowledge worth protecting through such extreme measures? Does it make us more secure?

    Will this attitude eventually spill over into the computer hardware and software security industry? I fear it might.


    Note: The picture above is of the back of Kevin Mitnick’s business card. Obviously, it contains lock picking tools. They will work. Funny, but I think his thumb print is still on the other side of the card! ; )