Boing Boing 

Circumvention Tools Hackfest in NYC before HOPE

James Losey sez,

The Open Internet Tools Project (OpenITP) is a collection of open source projects that help build a truly unfettered internet -- private, anonymous and resistant to control. In the week before HOPE in New York City, OpenITP has partnered with FreedomBox, InformSec and ISOC-NY to host a circumvention tools hackfest. OpenITP's James Vasile writes:

"We've got four days to plan, code and learn! If you want to hack on anti-censorship or anti-surveillance tools, bring your project, bring your skills and bring your friends. This event will be focused on writing code and solving design problems. We won't have any long presentations (there will be enough of those at HOPE), though we will have lightning talks and will give away a door prize or two."

Circumvention Tools Hackfest in NYC before HOPE

Malware author taunts security researchers with built-in chat


Security researchers from AVG were decompiling a trojan -- it had been originally posted to a Diablo III forum, masquerading as a how-to video -- when the malware's author popped up in a window on their screen. It turned out that the trojan had a built-in chat, as well as a screen-capture facility. The hacker who wrote the malware saw them working on defeating her or his virus and decided to tell them off for their audacity. Franklin Zhao and Jason Zhou, the AVG researchers, wrote up their experience:

The dialog is not from any software installed in our virtual machine. On the contrary, it’s an integrated function of the backdoor and the message is sent from the hacker who wrote the Trojan. Amazing, isn’t it? It seems that the hacker was online and he realized that we were debugging his baby...

We felt interested and continued to chat with him. He was really arrogant.

Chicken: I didn’t know you can see my screen.

Hacker: I would like to see your face, but what a pity you don’t have a camera.

He is telling the truth. This backdoor has powerful functions like monitoring victim’s screen, mouse controlling, viewing process and modules, and even camera controlling.

We then chatted with hacker for some time, pretending that we were green hands and would like to buy some Trojan from him. But this hacker was not so foolish to tell us all the truth. He then shut down our system remotely.

Have you ever chatted with a Hacker within a virus? (via JWZ)

Tor anonymity developers tell all

Runa from the Tor anonymity project sez, "Karen and I will be answering questions on Reddit today. Feel free to ask us anything you'd like relating to Tor and the Tor Project!"

Market for zero-day vulnerabilities incentivizes programmers to sabotage their own work

In this Forbes editorial, Bruce Schneier points out a really terrible second-order effect of governments and companies buying unpublished vulnerabilities from hackers and keeping them secret so they can use them for espionage and sabotage. As Schneier notes, this doesn't just make us all less secure (EFF calls it "security for the 1%") because there are so many unpatched flaws that might be exploited by crooks; it also creates an incentive for software engineers to deliberately introduce flaws into the software they're employed to write, and then sell those flaws to governments and slimy companies.

I’ve long argued that the process of finding vulnerabilities in software systems increases overall security. This is because the economics of vulnerability hunting favored disclosure. As long as the principal gain from finding a vulnerability was notoriety, publicly disclosing vulnerabilities was the only obvious path. In fact, it took years for our industry to move from a norm of full-disclosure — announcing the vulnerability publicly and damn the consequences — to something called “responsible disclosure”: giving the software vendor a head start in fixing the vulnerability. Changing economics is what made the change stick: instead of just hacker notoriety, a successful vulnerability finder could land some lucrative consulting gigs, and being a responsible security researcher helped. But regardless of the motivations, a disclosed vulnerability is one that — at least in most cases — is patched. And a patched vulnerability makes us all more secure.

This is why the new market for vulnerabilities is so dangerous; it results in vulnerabilities remaining secret and unpatched. That it’s even more lucrative than the public vulnerabilities market means that more hackers will choose this path. And unlike the previous reward of notoriety and consulting gigs, it gives software programmers within a company the incentive to deliberately create vulnerabilities in the products they’re working on — and then secretly sell them to some government agency.

No commercial vendors perform the level of code review that would be necessary to detect, and prove mal-intent for, this kind of sabotage.

The Vulnerabilities Market and the Future of Security (via Crypto-gram)

ToorCamp: Hack/Make under the stars


George writes,

ToorCamp, the American Hacker Camp, is back again this summer! Although there is no missile silo this time, the weather/environment should be a lot more pleasant on the beach in Washington state. The 5-day (August 8-12) open-air event is open to all hackers, makers, breakers, and shakers to build projects, exchange ideas with the brightest technology folk from around the world, toast a few marshmallows, and just geek out amongst the trees.

There are on-going talks, workshops (including things like hardware hacking, welding, penetration testing, brewing and others), contests, and art projects. [PRO-TIP: We're still accepting submissions if you have something you'd like to present.] And of course, there is plenty of outdoors -- stunning scenery, whale watching, surfing, birding, etc. The camp itself has everything you need: power, internet, food and fun.

We are encouraging attendees to set up a campsite with their friends/maker-space/group, and we'd like to offer all Boing Boing readers a discount code ('bboingrocks!' good until July 1st) for a Happy Mutants Campsite!

Bring a tent, make some friends, and learn a few things. Look forward to seeing everybody there!

PGP founder creates secure voice mobile app, bets people will pay for privacy

PGP creator Phil Zimmermann has launched Silent Circle, an encrypted phone-call app for Android and iOS. The service will likely cost $20/month, for which Zimmermann does not apologize: "This is not Facebook. Our customers are customers. They're not products. They're not part of the inventory" (from CNet).

Silent Circle's planned debut comes amid recent polls suggesting that Internet users remain concerned about online data collection (or at least are willing to tell pollsters so), with Facebook topping health insurers, banks, and even the federal government as today's No. 1 privacy threat. Yet even after a decade of startups that have tried to capitalize on these concerns, consumers spending their own money remain consistently difficult to persuade that paying for privacy is worth it.

Zimmermann hopes to overcome this reluctance by offering a set of services designed from the start to be simple to use: encrypted e-mail, encrypted phone calls, and encrypted instant messaging. (Encrypted SMS text messages are eventually planned too.)

Silent Circle | Worldwide Private Encrypted Communications (via O'Reilly Radar)

NSA whistleblower to keynote HOPE hacker conference in NYC

2600 Magazine's Emmanuel Goldstein writes, "Our second keynote speaker at this year's HOPE conference is someone who has been deep inside the National Security Agency. Former analyst William Binney became aware of an increased tendency at the massive center of surveillance to focus their attention on American citizens, something the NSA was never supposed to do. Binney did the right thing - he quit and told the world what he had learned. Such integrity is something we see often in the hacker world, usually kids standing up to authority and telling the world of their wrongdoings. This time, the stage is much bigger."

HOWTO securely hash passwords

In the wake of a series of very high-profile password leaks, Brian Krebs talks to security researcher Thomas H. Ptacek about the best practices for securing passwords. The trick isn't merely to hash with a good salt -- you must also use a slow, deliberately expensive password hash, so that an attacker who steals the database can't cheaply test huge numbers of guesses against it.

Ptacek: The difference between a cryptographic hash and a password storage hash is that a cryptographic hash is designed to be very, very fast. And it has to be because it’s designed to be used in things like IP-sec. On a packet-by-packet basis, every time a packet hits an Ethernet card, these are things that have to run fast enough to add no discernible latencies to traffic going through Internet routers and things like that. And so the core design goal for cryptographic hashes is to make them lightning fast.

Well, that’s the opposite of what you want with a password hash. You want a password hash to be very slow. The reason for that is a normal user logs in once or twice a day if that — maybe they mistype their password, and have to log in twice or whatever. But in most cases, there are very few interactions the normal user has with a web site with a password hash. Very little of the overhead in running a Web application comes from your password hashing. But if you think about what an attacker has to do, they have a file full of hashes, and they have to try zillions of password combinations against every one of those hashes. For them, if you make a password hash take longer, that’s murder on them.

So, if you use a modern password hash — even if you are hardware accelerated, even if you designed your own circuits to do password hashing, there are modern, secure password hashes that would take hundreds or thousands of years to test passwords on.

The problem is that you really need to make this design decision from the start -- it's hard to retrofit once you've got millions of users.
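
To make Ptacek's point concrete, here is a minimal sketch of slow, salted password hashing using the Python bcrypt library -- one reasonable choice alongside scrypt and PBKDF2. The function names and the work factor are illustrative, not a prescription:

    import bcrypt

    def hash_password(password: str) -> bytes:
        # gensalt() embeds a random salt and a work factor ("rounds") in the
        # output; each additional round doubles the cost of every guess.
        return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt(rounds=12))

    def check_password(password: str, stored_hash: bytes) -> bool:
        # checkpw re-derives the hash using the salt and rounds stored in it.
        return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

    stored = hash_password("correct horse battery staple")
    print(check_password("correct horse battery staple", stored))  # True
    print(check_password("letmein", stored))                       # False

A legitimate user pays this cost once or twice per login; an attacker with a stolen hash file pays it for every single guess, which is exactly the asymmetry Ptacek describes.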

How Companies Can Beef Up Password Security

Students assigned to cheat on exam use doctored Little Brother cover and many other clever methods


The IEEE's Computer and Reliability Societies recently published "Embracing the Kobayashi Maru," by James Caroland (US Navy/US Cyber Command) and Greg Conti (West Point), describing an exercise in which they assigned students to cheat on an exam -- either jointly or individually. The goal was to get students thinking about how to secure systems from adversaries who are willing to "cheat" to win. The article describes how the students all completed the exam (they all cheated successfully), which required them to provide the first 100 digits of pi with only 24 hours to prepare. The students used many ingenious techniques as cribs, but my heart was warmed to learn that one student printed a false back-cover for my novel Little Brother with pi 1-100 on it (Little Brother is one of the course readings, so many copies of it were already lying around the classroom).

James and Greg have supplied a link to a pre-pub of the paper (the original is paywalled), and sent along a video of the presentation they gave about the work at Shmoocon. The students' solutions are incredibly ingenious -- the audience is practically howling with laughter by the end of the presentation.

(Thanks, Ben!)

G4S: the scandal-embroiled "private security" behemoth that will provide 10,000 "security contractors" to London 2012

Laurie Penny takes a look at G4S, the scandal-embroiled "private security firm" (they're not technically mercenaries, because the "security" people who work for them are usually born in the same territories in which they operate) that is the world's second-largest private employer, after WalMart. The company is providing 10,000 "security contractors" to the London Olympics, like these people, who think that it's illegal to take pictures from public land. G4S has lots of juicy contracts around the world, like supplying security to private prisons in the West Bank where children are held. They're also the proud inventors of "carpet karaoke," a technique used at private asylum-seeker detention centres, which is a fancy way of saying "stuffing a deportee's face towards the floor to contain them."

What difference does it make if the men and women in uniform patrolling the world's streets and prison corridors are employed by nation states or private firms? It makes every difference. A for-profit company is not subject to the same processes of accountability and investigation as an army or police force which is meant, at least in theory, to serve the public. Impartial legality is still worth something as an assumed role of the state – and the notion of a private, for-profit police and security force poisons the very idea.

The state still has a legal monopoly on violence, but it is now prepared to auction that monopoly to anyone with a turnover of billions and a jolly branding strategy. The colossal surveillance and security operation turning London into a temporary fortress this summer is chilling enough without the knowledge that state powers are being outsourced to a company whose theme tune features the line: "The enemy prowls, wanting to attack, but we're on to the wall, we've got your back." If that made any sense at all, I doubt it would be more reassuring.

Laurie Penny: Don't listen to what G4S say. Look at what they do

Preliminary analysis of LinkedIn user passwords

As you've no doubt heard, a large tranche of hashed LinkedIn passwords has been leaked onto the net. There's no known way to turn the hash of a password back into the password itself, but you can make guesses about passwords, hash the guesses, and see if the hashed guess matches anything in the leaked database. Bunnie Huang has been making some educated guesses about the passwords, and he's reported on his findings.
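
The leaked hashes were unsalted SHA-1, which is what makes this guess-and-check approach practical. Here is a minimal sketch of the idea in Python; the "leaked" hashes below are fabricated so the example is self-contained:

    import hashlib

    # In practice these hex digests would be read from the leaked dump;
    # here we fabricate two entries for illustration.
    leaked_hashes = {
        hashlib.sha1(b"linkedinsucks").hexdigest(),
        hashlib.sha1(b"qwer1234").hexdigest(),
    }

    # Candidate passwords: intuition, wordlists, earlier leaks, and so on.
    guesses = ["obama2012", "linkedinsucks", "password1", "qwer1234"]

    for guess in guesses:
        if hashlib.sha1(guess.encode("utf-8")).hexdigest() in leaked_hashes:
            print("cracked:", guess)

A slow, salted password hash of the kind discussed above would make each of those guesses enormously more expensive.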

I thought it’d be fun to try to guess some passwords just based on intuition alone, using LeakedIn to check the guesses. Here’s some of the more entertaining passwords that are in the database: ‘obama2012’, ‘Obama2012’, ‘paladin’, ‘linkedinsucks’, ‘fuckyou’, ‘godsaveus’, ‘ihatemyjob’, ‘ihatejews’ (tsk tsk), ‘manson’, ‘starbucks’, ‘qwer1234’, ‘qwerty’, ‘aoeusnth’ (hello fellow dvorak user!), ‘bigtits’ (really?), ‘colbert’, ‘c0lbert’, ‘bieber’, ‘ilovejustin’, ‘50cent’, ‘john316’, ‘john3:16’, ‘John3:16’, ‘1cor13’, ‘psalm23’, ‘exodus20’, ‘isiah40’, ‘Matthew6:33’, ‘hebrews11’ (bible verses are quite popular passwords!).

Interestingly, there is no ‘romney2012′ or any variant thereof.

Leaked In

Tax-refund fraud: filing someone else's return to rip them off

In the New York Times, Lizette Alvarez reports on a "tsunami of fraud" in the form of tax-refund identity theft. Using only a little information, crooks file tax returns in their victims' names (the IRS helpfully corrects any mistakes they make in the particulars), then collect the victims' tax refunds:

The criminals, some of them former drug dealers, outwit the Internal Revenue Service by filing a return before the legitimate taxpayer files. Then the criminals receive the refund, sometimes by check but more often through a convenient but hard-to-trace prepaid debit card.

The government-approved cards, intended to help people who have no bank accounts, are widely available in many places, including tax preparation companies. Some of them are mailed, and the swindlers often provide addresses for vacant houses, even buying mailboxes for them, and then collect the refunds there.

...J. Russell George, the Treasury inspector general for tax administration, testified before Congress this month that the I.R.S. detected 940,000 fake returns for 2010 in which identity thieves would have received $6.5 billion in refunds. But Mr. George said the agency missed an additional 1.5 million returns with possibly fraudulent refunds worth more than $5.2 billion.

With Personal Data in Hand, Thieves File Early and Often (via Schneier)

Lockdown: free/open OS maker pays Microsoft ransom for the right to boot on users' computers

A quiet announcement from the Fedora Linux community signals a titanic shift in the way that the computer market will work from now on, and a major threat to free/open operating systems. Microsoft and several PC vendors have teamed up to ensure that only operating systems bearing Microsoft's cryptographic signature will be able to boot on their hardware, meaning that unless Microsoft has blessed your favorite flavor of GNU/Linux or BSD, you won't be able to just install it on your machine, or boot to it from a USB stick or CD to try it out. There is a work-around for some systems involving a finicky and highly technical override process, but all that means is that installing proprietary software is easy and installing free/open software is hard.

This is a major reversal. For many years now, free/open OSes have been by far the easiest to install on most hardware. For example, I have installed Ubuntu on a variety of machines by just sticking in a USB stick and turning them on. Because the OS and its apps are free, and because there are no finicky vendor relationships to manage, it Just Works. On some of those machines, installing a Windows OS fresh from a shrinkwrapped box was literally impossible -- you had to order a special manufacturer's version with all the right drivers to handle external CD drives or docking stations or what-have-you. And the free/open drivers also handled things like 3G USB adapters better than the official drivers (not least because they didn't insist on drawing a huge "WELCOME TO $SOME_STUPID_PHONE_COMPANY" box on the screen every time you connected to the Internet).

At issue is a new facility called UEFI Secure Boot, which lets a computer's firmware check an operating system's bootloader against a cryptographic signature before running it. In theory, this can be used to alert you if malicious software has modified your OS, putting you at risk of having your passwords harvested, your video and sound secretly captured, and your files plundered. But rather than simply alerting users to unsigned ("I have found an unknown operating system and I can't tell if it has dangerous software in it, continue? [Y/N]") or changed OSes ("Your computer has been modified since the last time it was turned on, and now has a version of Windows that can't be verified"), Microsoft and its partners have elected to require a very complex and intimidating process that -- by design or accident -- is certain to scare off most unsophisticated users.
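
As a small, hedged illustration of how this machinery is exposed on a Linux system that already boots under UEFI: the firmware publishes a SecureBoot variable that can be read through efivarfs. The GUID below is the standard EFI global-variable GUID, but whether the file exists at all depends on your firmware and on how the kernel mounts efivarfs:

    from pathlib import Path

    # The SecureBoot variable is one byte (0 or 1), preceded by four bytes of
    # attribute flags when read through efivarfs.
    SECUREBOOT_VAR = Path(
        "/sys/firmware/efi/efivars/"
        "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
    )

    def secure_boot_enabled():
        """Return True/False for the Secure Boot state, or None if unknown."""
        try:
            data = SECUREBOOT_VAR.read_bytes()
        except (FileNotFoundError, PermissionError):
            return None  # not a UEFI system, efivarfs not mounted, or no access
        return bool(data[4]) if len(data) >= 5 else None

    state = secure_boot_enabled()
    print({True: "Secure Boot is enabled",
           False: "Secure Boot is disabled",
           None: "Secure Boot state unavailable"}[state])

On a machine where that flag is enabled, booting an unsigned OS means either switching it off in the firmware setup or enrolling your own keys -- the "finicky and highly technical override process" described above.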

Fedora has opted to solve this problem by paying to receive Microsoft's blessing, so that UEFI-locked computers will boot Fedora without requiring any special steps. The payment is comparatively small ($99). But when you multiply $99 by all the different versions and flavors of free/open operating systems, it adds up to a substantial revenue stream for Microsoft, and a cost to, and drag upon, the free/open software world.

What's more, free/open OSes that don't pay the $99 Microsoft tax will not boot at all on Microsoft-certified ARM-based computers, because Microsoft has forbidden its partners from allowing any OS that Microsoft hasn't signed to boot, even if the user takes some affirmative step to install a competing system.

This is a tremor before an earthquake: the hardware vendors and the flagging proprietary software vendors of yesteryear are teaming up to limit competition from robust, elegant and free alternatives.

Here's Fedora's Matthew Garrett explaining their decision:

We've been working on this for months. This isn't an attractive solution, but it is a workable one. We came to the conclusion that every other approach was unworkable. The cause of free software isn't furthered by making it difficult or impossible for unskilled users to run Linux, and while this approach does have its downsides it does also avoid us ending up where we were in the 90s. Users will retain the freedom to run modified software and we wouldn't have accepted any solution that made that impossible.

But is this a compromise? Of course. There's already inequalities between Fedora and users - trademarks prevent the distribution of the Fedora artwork with modified distributions, and much of the Fedora infrastructure is licensed such that some people have more power than others. This adds to that inequality. It's not the ideal outcome for anyone, and I'm genuinely sorry that we weren't able to come up with a solution that was better. This isn't as bad as I feared it would be, but nor is it as good as I hoped it would be.

What about ARM?

Microsoft's certification requirements for ARM machines forbid vendors from offering the ability to disable secure boot or enrol user keys. While we could support secure boot in the same way as we plan to on x86, it would prevent users from running modified software unless they paid money for a signing key. We don't find that acceptable and so have no plans to support it.

Thankfully this shouldn't be anywhere near as much of a problem as it would be in the x86 world. Microsoft have far less influence over the ARM market, and the only machines affected by this will be the ones explicitly designed to support Windows. If you want to run Linux on ARM then there'll be no shortage of hardware available to you.

Implementing UEFI Secure Boot in Fedora (Thanks, Deborah!)

Safecracking


Ken Doyle, a professional safecracker who's been practicing his trade since 1978, explains the ins and outs of safecracking to McSweeney's Suzanne Yeagley:

Q: How often do people get locked in vaults?

A: More often than you’d think and bank PR departments would like.

...

Q: Do you ever look inside?

A: I NEVER look. It’s none of my business. Involving yourself in people’s private affairs can lead to being subpoenaed in a lawsuit or criminal trial. Besides, I’d prefer not knowing about a client’s drug stash, personal porn, or belly button lint collection.

When I’m done I gather my tools and walk to the truck to write my invoice. Sometimes I’m out of the room before they open it. I don’t want to be nearby if there is a booby trap.

Q: Why would there be a booby trap?

A: The safe owner intentionally uses trip mechanisms, explosives or tear gas devices to “deter” unauthorized entry into his safe. It’s pretty stupid because I have yet to see any signs warning a would-be culprit about the danger.

Over the years I’ve found several tear gas devices in safes and vaults I’ve opened. These devices were marketed with names like “BEAVER” and “BADGER.” There are safecrackers that collect them.

Interviews With People Who Have Interesting or Unusual Jobs (via Schneier)

(Image: It Is Not Often That I Find A Sealed Safe On The Footpath, a Creative Commons Attribution Share-Alike (2.0) image from infomatique's photostream)

Cyber-weapon Flame, "most complex malware ever," identified by Kaspersky Lab

The Moscow-based security firm credited with solving various mysteries around Stuxnet and Duqu today announced the discovery of Flame, a data-stealing virus said to have lurked on thousands of computers in the Mideast for as long as 5 years. A Kaspersky Lab spokesperson described it in a Reuters interview as "the most complex piece of malicious software discovered to date."

Adds Bruce Sterling, "Given that this has been out in the wild for a couple of years now, what’s five times bigger than 'Flame' and even less understood?"

Writing today at Wired News, Kim Zetter reports that Flame is believed to be "part of a well-coordinated, ongoing, state-run cyberespionage operation."

Kaspersky has a FAQ about Flame, here.

(Image: Kaspersky Labs)

Security researcher: I found secret reprogramming backdoors in Chinese microprocessors

Sergei Skorobogatov, a postdoc in the Security Group at the Computer Laboratory of the University of Cambridge, has written up claims that reprogrammable microchips from China contained secret backdoors that can be used to covertly insert code:

Claims were made by the intelligence agencies around the world, from MI5, NSA and IARPA, that silicon chips could be infected. We developed breakthrough silicon chip scanning technology to investigate these claims. We chose an American military chip that is highly secure with sophisticated encryption standard, manufactured in China. Our aim was to perform advanced code breaking and to see if there were any unexpected features on the chip. We scanned the silicon chip in an affordable time and found a previously unknown backdoor inserted by the manufacturer. This backdoor has a key, which we were able to extract. If you use this key you can disable the chip or reprogram it at will, even if locked by the user with their own key. This particular chip is prevalent in many systems from weapons, nuclear power plants to public transport. In other words, this backdoor access could be turned into an advanced Stuxnet weapon to attack potentially millions of systems. The scale and range of possible attacks has huge implications for National Security and public infrastructure.

Key features of our technology:

* scans silicon/hardware for backdoors, Trojans and unexpected behaviour
* low cost
* very fast result turnaround time
* high portability
* adaptable - scale up to include many types of chip

Further funding is needed for us to progress to testing further silicon chips and to develop better search algorithms which would allow us to detect possible spy systems or vulnerabilities in a greater range of systems.

Currently there is no economical or timely way of ascertaining if a manufacturer's specifications have been altered during the manufacturing process (99% of chips are manufactured in China), or indeed if the specifications themselves contain a deliberately inserted potential threat.

This block of text is undated, though it appears on a page whose last-modified date is reported as 14-05-2012. I couldn't find any further information on which chips were affected or the methodology used to discover the backdoors.

Hardware Assurance and its importance to National Security (via MeFi)

7-year-old's threatening note regarding home PC security policies


Redditor Surprisemailbox posted this image of a note left by a seven-year-old for her parents, regarding security policies at home: "If you put a pasword on that I will make your life a nitmare."

The day Poesy leaves me a comparable note, I will have validation that all my parenting was not in vain. (Of course, that's assuming she doesn't just shoulder-surf the password and leave me in a fool's paradise.)

My friends 7 year old sister left this note for her parents on their computer. (via Neatorama)