The BBC website is currently down, and, not to be outdone by the rest of the Internet, they're trying their own interpretation of a "fail pet", a "fail clown":

I don't know about you, but I think that thing is kind of creepy. Aren't fail pets supposed to make you feel warm and fuzzy while you're waiting for the site to come back? This one screams "make it stop!"... strange.

Thanks for the hint, nigelb!

Read more…

In the Euro 2012 group stage, we've seen an awful lot of ties (usually 1:1) so far, which raises the question: What are the tie-breaker rules in case two or more teams in the same group are tied for points at the end of the group stage?

A quick look at the competition rules, section 8.07, shows (emphasis mine):

If two or more teams are equal on points on completion of the group matches, the following criteria are applied, in the order given, to determine the rankings:

a) higher number of points obtained in the matches among the teams in question;
b) superior goal difference in the matches among the teams in question (if more than two teams finish equal on points);
c) higher number of goals scored in the matches among the teams in question (if more than two teams finish equal on points);
d) superior goal difference in all the group matches;
e) higher number of goals scored in all the group matches;
f) position in the UEFA national team coefficient ranking system (see Annex I, paragraph 1.2.2);
g) fair play conduct of the teams (final tournament);
h) drawing of lots.

Compare this to the 2010 FIFA World Cup tie-breaker rules and you'll see that they are quite different: FIFA puts the overall team performance in the group stage first, while UEFA cares more about how the teams in question compare with each other. That means two teams can be tied for points, and the one with a much higher overall goal difference can still draw the shorter straw if it lost the match against the other team in question.
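To make that difference concrete, here is a simplified Python sketch of the UEFA ordering. It covers only criteria a) through e); the re-application of criteria to still-tied subsets and criteria f) through h) are omitted, and the data structures are hypothetical:

```python
def points(goals_for, goals_against):
    """Points awarded for a single match result: win 3, draw 1, loss 0."""
    if goals_for > goals_against:
        return 3
    return 1 if goals_for == goals_against else 0

def rank_tied(tied, results, overall_gd, overall_goals):
    """Order tied teams by UEFA criteria a)-e), simplified.

    results maps (team, opponent) -> (goals_for, goals_against);
    overall_gd and overall_goals cover all group matches.
    """
    def key(team):
        h2h_pts = h2h_gd = h2h_goals = 0
        for opp in tied:
            if opp == team:
                continue
            gf, ga = results[(team, opp)]
            h2h_pts += points(gf, ga)
            h2h_gd += gf - ga
            h2h_goals += gf
        # a) head-to-head points, b) head-to-head goal difference,
        # c) head-to-head goals, d) overall goal difference, e) overall goals
        return (h2h_pts, h2h_gd, h2h_goals, overall_gd[team], overall_goals[team])
    return sorted(tied, key=key, reverse=True)

# B has the much better overall goal difference, but lost to A head-to-head:
order = rank_tied(
    ["A", "B"],
    {("A", "B"): (1, 0), ("B", "A"): (0, 1)},
    overall_gd={"A": 0, "B": 4},
    overall_goals={"A": 2, "B": 6},
)
# order == ["A", "B"]: the head-to-head result trumps overall goal difference
```

Under the FIFA ordering, criterion d) would come first and B would rank ahead; that is exactly the "shorter straw" scenario described above.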

Also, I applaud UEFA's choice to consider fair-play conduct as part of its tie-breaking rules, though putting it behind the "team coefficient ranking system" makes it look less than sincere. After all, the current rankings don't show a single pair of teams with the same coefficient, so the fair-play rule would never apply. To cynics, the UEFA rules might convey the message: "If you're ranked high enough, you don't need to care about fair play."

Read more…

I totally forgot to blog about this!

Remember how I curate a collection of fail pets across the Interwebs? Sean Rintel is a researcher at the University of Queensland in Australia and has put some thought into the UX implications of whimsical error messages, published in his article: The Evolution of Fail Pets: Strategic Whimsy and Brand Awareness in Error Messages in UX Magazine.

In his article, Rintel credits me with coining the term "fail pet".

Attentive readers may also notice that Mozilla's strategy of (rightly) attributing Adobe Flash's crashes to Flash itself by putting a "sad brick" in its place worked formidably: Rintel (like most users, I am sure) assumes this message comes from Adobe, not Mozilla:

Thanks, Sean, for the mention, and I hope you all enjoy his article.

Read more…

Note: This is a cross-post of an article I published on the Mozilla Webdev blog this week.

During the course of this week, a number of high-profile websites (like LinkedIn and have disclosed possible password leaks from their databases. The suspected leaks put huge amounts of important, private user data at risk.

What's common to both these cases is the weak security they employed to "safekeep" their users' login credentials. In the case of LinkedIn, an unsalted SHA-1 hash was allegedly used; in the case of, the technology was, allegedly, an even weaker unsalted MD5 hash.

Neither of the two technologies follows any sort of modern industry standard, and their use in this fashion, if confirmed, exhibits a gross disregard for the protection of user data. Let's take a look at the most obvious mistakes our protagonists made here, and then we'll discuss the password hashing standards that Mozilla web projects routinely apply in order to mitigate these risks. <!--more-->

A trivial no-no: Plain-text passwords

This one's easy: Nobody should store plain-text passwords in a database. If you do, and someone steals the data through any sort of security hole, they've got all your users' plain-text passwords. (That a bunch of companies still do this should make you scream and run the other way whenever you encounter it.) Our two protagonists above know that too, so they remembered that they had read something about hashing somewhere at some point. "Hey, this makes our passwords look different! I am sure it's secure! Let's do it!"

Poor: Straight hashing

Smart mathematicians came up with something called a hashing function or "one-way function" H: password -> H(password). MD5 and SHA-1, mentioned above, are examples of these. The idea is that you give this function an input (the password), and it gives you back a "hash value". It is easy to calculate this hash value when you have the original input, but prohibitively hard to do the opposite. So we calculate the hash values of all passwords, and only store those. If someone steals the database, they will only have the hashes, not the passwords. And because the passwords are hard or impossible to calculate from the hashes, the stolen data is useless.
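For illustration, here is the idea in Python using the standard hashlib module (SHA-1 appears here only because it's the algorithm in question, not as a recommendation):

```python
import hashlib

# Forward direction: easy and fast to compute.
digest = hashlib.sha1(b"cheesecake1").hexdigest()

# The output has a fixed length (40 hex characters for SHA-1),
# and the same input always produces the same hash value:
assert len(digest) == 40
assert digest == hashlib.sha1(b"cheesecake1").hexdigest()

# The reverse direction -- recovering "cheesecake1" given only `digest` --
# is what's meant to be prohibitively hard.
```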

"Great!" But wait, there's a catch. For starters, people pick poor passwords. Write this one in stone, as it'll be true as long as passwords exist. So a smart attacker can start with a copy of Merriam-Webster, throw in a few numbers here and there, calculate the hashes for all those words (remember, it's easy and fast) and start comparing those hashes against the database they just stole. Because your password was "cheesecake1", they just guessed it. Whoops! To add insult to injury, they just guessed everyone's password who also used the same phrase, because the hashes for the same password are the same for every user.

Worse yet, you can actually buy(!) precomputed lists of straight hashes (called Rainbow Tables) for alphanumeric passwords up to about 10 characters in length. Thought "FhTsfdl31a" was a safe password? Think again.

This attack is called an offline dictionary attack and is well-known to the security community.

Even passwords taste better with salt

The standard way to deal with this is by adding a per-user salt. That's a long, random string added to the password at hashing time: H: password -> H(password + salt). You then store salt and hash in the database, making the hash different for every user, even if they happen to use the same password. In addition, the smart attacker cannot pre-compute the hashes anymore, because they don't know your salt. So after stealing the data, they'll have to try every possible password for every possible user, using each user's personal salt value.
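A minimal sketch of per-user salting, sticking with SHA-1 since that's the hash under discussion (a slow hash is better still, as the next section explains):

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Return (salt, hash); generates a fresh random salt if none is given."""
    if salt is None:
        salt = os.urandom(16)  # long, random, per-user
    digest = hashlib.sha1(salt + password.encode()).hexdigest()
    return salt, digest

# Two users pick the same poor password...
salt_a, hash_a = hash_password("cheesecake1")
salt_b, hash_b = hash_password("cheesecake1")

# ...but their stored hashes differ, and an attacker can't precompute
# a lookup table without first knowing each user's salt:
assert salt_a != salt_b
assert hash_a != hash_b

# Verification at login time reuses the stored salt:
assert hash_password("cheesecake1", salt_a)[1] == hash_a
```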

Great! I mean it: if you use this method, you're already doing far better than our protagonists.

The 21st century: Slow hashes

But alas, there's another catch: Generic hash functions like MD5 and SHA-1 are built to be fast. And because computers keep getting faster, millions of hashes can be calculated very, very quickly, making a brute-force attack even on salted passwords more and more feasible.

So here's what we do at Mozilla: Our WebApp Security team performed some research and set forth a set of secure coding guidelines (they are public, go check them out, I'll wait). These guidelines suggest the use of HMAC + bcrypt as a reasonably secure password storage method.

The hashing function has two steps. First, the password is hashed with an algorithm called HMAC, together with a local salt: H: password -> HMAC(local_salt + password). The local salt is a random value that is stored only on the server, never in the database. Why is this good? If an attacker steals one of our password databases, they would need to also separately attack one of our web servers to get file access in order to discover this local salt value. If they don't manage to pull off two successful attacks, their stolen data is largely useless.

As a second step, this hashed value (or strengthened password, as some call it) is then hashed again with a slow hashing function called bcrypt. The key point here is slow. Unlike general-purpose hash functions, bcrypt intentionally takes a relatively long time to be calculated. Unless an attacker has millions of years to spend, they won't be able to try out a whole lot of passwords after they steal a password database. Plus, bcrypt hashes are also salted, so no two bcrypt hashes of the same password look the same.

So the whole function looks like: H: password -> bcrypt(HMAC(password, local_salt), bcrypt_salt).

We wrote a reference implementation for this for Django: django-sha2. Like all Mozilla projects, it is open source, and you are more than welcome to study, use, and contribute to it!

What about Mozilla Persona?

Funny you should mention it. Mozilla Persona (née BrowserID) is a new way for people to log in. Persona is the password specialist and takes the burden and risk of handling passwords away from sites altogether. Read more about Mozilla Persona.

So you think you're cool and can't be cracked? Challenge accepted!

Make no mistake: just like everybody else, we're not invincible at Mozilla. But because we actually take our users' data seriously, we take precautions like this to mitigate the effects of an attack, even in the unfortunate event of a successful security breach in one of our systems.

If you're responsible for user data, you should too.

If you'd like to discuss this post, please leave a comment at the Mozilla Webdev blog. Thanks!

Read more…

In my home network, I use IPv4 addresses out of the 10.x.y.z/8 private IP block. After AT&T U-Verse contacted me multiple times to make me reconfigure my network so they could establish a large-scale NAT and give me a private IP address rather than a public one (this might be material for a whole separate post), I reluctantly switched ISPs and now have Comcast. I did, however, keep AT&T for television. Now, U-Verse is an IPTV provider, so I had to put the two services (Internet and IPTV) onto the same wire, which, as it turned out, was not as easy as it sounds. <!--more-->

tl;dr: This is a "war story" more than a crisp tutorial. If you really just want to see the ebtables rules I ended up using, scroll all the way to the end.

IPTV uses IP Multicast, a technology that allows a single data stream to be sent to a number of devices at the same time. If your AT&T-provided router is the centerpiece of your network, this works well: The router is intelligent enough to determine which one or more receivers (and on what LAN port) want to receive the data stream, and it only sends data to that device (and on that wire).

Multicast, the way it is supposed to work: The source server (red) sending the same stream to multiple, but not all, receivers (green).

Turns out, my dd-wrt-powered Cisco E2000 router is--out of the box--not that intelligent and, like most consumer devices, will simply turn such multicast packets into broadcast packets. That means it takes the incoming data stream and delivers it to all attached ports and devices. On a wired network, that's sad, but not too big a deal: Other computers and devices will see these packets, determine they are not addressed to them, and drop the packets automatically.

Once your wifi becomes involved, this is a much bigger problem: The IPTV stream's unwanted packets easily saturate the wifi capacity and keep any wifi device from doing its job while it is busy discarding packets. This goes so far as to make it entirely impossible to even connect to the wireless network anymore. Besides: massive, bogus wireless traffic drains device batteries and fills up the (limited and shared) frequency spectrum for no useful reason.

Suddenly, everyone gets the (encrypted) data stream. Whoops.

One solution for this is to install only managed switches that support IGMP snooping and thus limit multicast traffic to the relevant ports. I wasn't too keen on replacing a bunch of hardware with really expensive new gear, though.

In comes ebtables, part of netfilter (the Linux kernel-level firewall package). First I wrote a simple rule intended to keep all multicast packets (no matter their source) from exiting on the wireless device (eth1, in this case).

ebtables -A FORWARD -o eth1 -d Multicast -j DROP

This works in principle, but has some ugly drawbacks:

  1. -d Multicast translates into a destination-address pattern that also covers (intentional) broadcast packets (i.e., every broadcast packet is also a multicast packet, but not vice versa). Broadcasts are important: they power DHCP, SMB networking, Bonjour, and more. With a rule like this, none of these services will work anymore on the wifi you were trying to protect.
  2. -o eth1 keeps us from flooding the wifi, but does nothing to keep the needless packets sent to wired devices in check. While we're in the business of filtering packets, we might as well take care of that too.

So let's create a new VLAN in the dd-wrt settings that contains only the incoming port (here: W) and the IPTV receiver's port (here: 1). We bridge it to the same network, because the incoming port is not only the source of IPTV but also our connection to the Internet, so the remaining ports still need to be able to reach it.

dd-wrt vlan settings

Then we tweak our filters:

ebtables -A FORWARD -d Broadcast -j ACCEPT
ebtables -A FORWARD -p ipv4 --ip-src ! -o ! vlan1 -d Multicast -j DROP

This first accepts all broadcast packets (which the firewall would do by default anyway, if it weren't for our multicast rule), then drops any other multicast packet whose output device is not vlan1 and whose source IP address is not local.

With this modified rule, we make sure that any internal applications can still function properly, while we tightly restrict where external multicast packets flow.

That was easy, wasn't it?

Some illustrations courtesy of Wikipedia.

Read more…

I've been making a bunch of memes lately, mostly for the fabulous Mozilla Memes tumblr site. Now, not all of the silly ideas I come up with are Mozilla-related, so naturally, I should publish my memes here for your general entertainment!

So today, I had the "sobering" realization that the ruby gems I bought from a trustworthy gentleman on the Internet recently* might not be as valuable as expected:

I knew it!

*) this may not be a factual statement.

Read more…

During the Mozilla Rapid Web Development group's work week in the Bay Area a few weeks ago, we gave a bunch of lightning talks.

In my talk, I look at two math problems from Project Euler. For each of them, I contrast an intuitive solution with one that is, arguably, faster, better, or lighter on memory. But is the latter actually a better solution, and an optimization worth spending time and effort on?

Check it out and let me know what you think!

Read more…

A few weeks ago on my Minecraft server of choice, I made a thing:

It's about 50 by 50 blocks. How did I make this, you ask? I took the actual Firefox logo, resized it to 50x50 pixels in Gimp, and reduced the colors pretty drastically. Then it just boils down to putting a grid over the image and turning the pixels into blocks. (Luckily, the Minecraft server is in Creative Mode, so I could freely pick blocks according to their colors, even though some of them are really rare when mined.)

Read more…

Oh noes! Yesterday I learned that I never posted the follow-up photos to my Limoncello experiment from a few months ago! Sorry about that. Here goes!

A few weeks after preparing the lemon zest and Everclear concoction for the steeping, I got it back out of the basement: Limoncello

Meanwhile, I had obtained and cleaned some small, reclosable lemonade bottles (after drinking their contents, of course): <!--more--> Limoncello

Straining the zest from the concentrated lemon extract reveals that it has transferred virtually all its color into the liquid and is now almost white. Fun! Limoncello Limoncello

Prepared some simple syrup (same amount as the lemon extract, i.e., about half a liter) in a pot and stirred it in. As you may have noticed, this changed the tint of the liquid a little: I used raw cane sugar, which unlike refined white sugar has a little bit of a color of its own. But hey! Trading color for flavor is fine by me. Limoncello

Stirred it good, funneled into the bottles! Limoncello

Finally, off into the freezer, and after a few hours... Prost!

Read more…

A recent blog post syndicated to Planet Mozilla (trying to recruit supporters for a petition against marriage equality in the UK) led to a veritable storm in the Mozilla community around a content policy for Planet, and finally it turned into a general discussion about a Code of Conduct for the entire Mozilla Community.

This proposal isn't new: A few months ago, it came up briefly around a panel discussion at Mozilla's internal All-Hands conference, where I asked for clarification on recent vague remarks from the CEO concerning anti-discrimination policies at Mozilla. Unsurprisingly (and rightly), the CEO reaffirmed that Mozilla would not tolerate illegal discrimination of any kind (paraphrase mine).

At the time, I concluded that there was no need for a written Code of Conduct, believing that the basic concept of treating each other with mutual respect was so universal and simple that it could (and should) be instilled in any preschooler, not to mention any adult. It does not matter what you personally believe about anyone else's gender, religion, sexual orientation, body shape, skin color, handicap, or funny accent. The instant you walk through Mozilla's virtual "doors", you have to exhibit professionalism and respect towards whomever you interact with. I expected this to be an obvious prerequisite for acceptance into the Mozilla community.

Apparently, this view is not shared by everyone. A small minority of community members seem to believe they don't (always) have to adhere to such standards. Unfortunately, they are supported by another group of people who misunderstand this as a question of freedom of speech. It isn't.

Still, let's not get carried away: Inflating this occurrence into an outright crisis would utterly disregard the contributions of the many individuals (me included) who take the mandate of mutual respect very seriously and have--regardless of their own background or even opinion on the topic at hand--been speaking up and demanding that the person in question adhere to such standards while acting within the Mozilla community. Not to mention the many community members who may be less vocal but exhibit flawless behavior towards their peers on a daily basis, as a matter of course.

In light of all this, I now concede that writing down a Code of Conduct would be helpful to Mozilla. It would perform the important function of reminding people of these basic standards and urging superiors and peers to enforce them. It would also serve as a valuable reference in case of confusion, or when new members are unsure what's expected of them.

Christie has listed some good examples of existing codes of conduct by other open source communities. I particularly like the Code of Conduct set forth by Ubuntu, because it does not make the need to act civilly dependent on any particular attribute of a person. Instead, it demands consideration, respect, and collaborative behavior from all community members and towards all community members equally.

I'd wish for a Mozilla Code of Conduct to do the same.

Read more…