After Apple and Microsoft (finally!) publicly announced that they are ready to pull the plug on Adobe Flash, the first makers of Flash web apps are starting to ditch it in favor of HTML5: as TechCrunch writes, Scribd, an online document hosting service, will focus its efforts on HTML5 from now on.

Scribd co-founder and chief technology officer Jared Friedman tells TechCrunch: “We are scrapping three years of Flash development and betting the company on HTML5 because we believe HTML5 is a dramatically better reading experience than Flash. Now any document can become a Web page.”

I am very pleased to hear that. Now that web standards are finally offering the kind of versatility modern web applications need, it is a fantastic development that companies are getting rid of the monster that is Flash. That's good for the user for so many reasons, and it's a great example of what HTML5 can really do.

Update: Ryan points out in the comments that Scribd has a demo document online of what this is going to look like. It's fantastic!

By the way: another company I would like to see getting rid of Flash (in fact, I never understood why they used it in the first place) is SlideShare. They are becoming a de facto standard for posting presentation slides online, but so far, their main UI is solidly in Flash's claws. :(

Read more…

On a growing number of projects at Mozilla, we use a tool called Hudson that runs a complete set of tests on the code with every check-in. The beauty of this is that if you accidentally break something, you (and everyone else) will know immediately, so you can fix it quickly. We also use a bunch of plugins with Hudson, one of which assigns points to every check-in: if all tests pass, you earn a positive number of points; if you broke something, you get a negative score.

An innocent little commit of mine gained me a whopping -100 points (yes, that is minus 100) today.

How did that happen? The build broke badly, not because I wrote a pile of horrendous code, or because I didn't test before committing. In fact, I've made it a habit to commit like this:

./manage.py test && git push origin master

This fun little one-liner will result in my code being pushed to the origin repository if and only if all tests pass.
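The one-liner relies on nothing more than the shell's && operator, which runs the right-hand command only when the left-hand one exits with status 0. A minimal sketch, with stand-in functions in place of the real test suite and git push:

```shell
# succeed/fail are stand-ins for "./manage.py test"; echo stands in for git push.
succeed() { return 0; }
fail() { return 1; }

succeed && echo "tests passed, pushing"           # the push runs
fail && echo "pushing" || echo "nothing pushed"   # the push is skipped
```

The || fallback at the end is optional; without it, a failing test run simply leaves the chain with a non-zero exit status and nothing gets pushed.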

So in my case, all tests passed locally, then broke horribly once the server ran them again. After a little research, it turned out that when I deleted a now-unneeded Python file, I did not remove its compiled cousin, the .pyc file, along with it. Sadly, this module was still imported somewhere else, and because Python still found the .pyc file locally, it did not mind the original .py file being gone, so all tests passed. On the server, however, with a completely clean environment, the file wasn't found, and dozens of tests failed (all of them with an ImportError).

What's the lesson? In the short term, I should wipe my .pyc files before running tests. One way to do that would be adding something like

find . -type f -name '*.pyc' | xargs rm

to my ever-growing commit one-liner, but a more general solution would be to perform this inside the test-running script. On the other hand, since that script is itself written in Python, some of the imports that could break have already happened by the time it runs.
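The cleanup can be seen end to end in a tiny sandbox (the directory and file names below are made up for the demonstration):

```shell
# Build a throwaway tree containing stale .pyc files.
tmp=$(mktemp -d)
mkdir -p "$tmp/app"
touch "$tmp/app/views.py" "$tmp/app/views.pyc" "$tmp/app/stale.pyc"

# Delete every compiled file below the tree. find's -delete flag lets find
# remove the files itself and copes with whitespace in file names.
find "$tmp" -type f -name '*.pyc' -delete

left=$(ls "$tmp/app")   # only views.py survives
rm -r "$tmp"
echo "$left"
```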

In general, run your tests in as clean an environment as possible. While any useful test framework will take care of your database having a consistent state for every test run, you also need to ensure that you start with a clean baseline of your code -- especially if Hudson, the merciless butler, will rub it in your face if you don't ;) .
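One way to approximate that clean slate locally (assuming the project lives in git) is git clean, which removes every file git does not track. A sketch in a scratch repository created just for the demonstration:

```shell
# Scratch repository for the demonstration.
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "print('hello')" > app.py
git add app.py
git -c user.name=demo -c user.email=demo@example.com commit -qm "init"

# A stale compiled file that git knows nothing about.
touch app.pyc

git clean -fdx    # remove all untracked (and, with -x, ignored) files
ls                # app.py remains, app.pyc is gone
```

Run `git clean -ndx` first if you want a dry run that only lists what would be deleted.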

Read more…

If you noticed an unexpected outage of my blog and all the other sites on this web server, I apologize. This morning, I was greeted by a dead lighttpd web server, and when I restarted it, it gave me this error message instead:

2010-03-05 10:23:01: (network.c.529) SSL: error:00000000:lib(0):func(0):reason(0)

Luckily, a little bit of googling showed that this is a bug in lighttpd 1.4.26's SSL interface that can be fixed with this little workaround until a new version is released:

cd /tmp
wget http://download.lighttpd.net/lighttpd/releases-1.4.x/lighttpd-1.4.26.tar.gz
tar xzf lighttpd-1.4.26.tar.gz
cd lighttpd-1.4.26

cd src
rm network.c
wget http://redmine.lighttpd.net/projects/lighttpd/repository/revisions/2716/raw/branches/lighttpd-1.4.x/src/network.c
cd ..
./configure && make install

As you can see, the server is happily back up and running. Thanks to sekuritatea for the fix.

Read more…

I know, I know. Technically, it's only a fail pet if a website uses a nice little creature on an error page to announce unplanned downtime of the service.

That makes this ASCII cow from Craigslist not really a fail pet, but I find it a nice enough idea to blog it anyway:

This little fellow shows up on the 404 error page (i.e., any page that does not exist on craigslist.org, such as this one). While it is just the output of a well-known UNIX command (cowsay), I like it a lot because it goes very well with the simplicity of craigslist itself, which is intentionally so different from all the shiny "Web 2.0" applications.

Thanks for the hint, Jabba!

Read more…

So, you have a bunch of .avi video files (from your cell phone, for example) that you'd like to combine into one (so you can upload the collection to YouTube)?

Here are two options on how to do this. The first one uses a tool from the transcode package:

avimerge -i one.avi two.avi three.avi -o output.avi

avimerge is appropriately named, and if it works, it works well. Sometimes, however, it produces out-of-sync audio, which is kind of lame if people are actually, you know, talking in your videos.

Second method to the rescue: mencoder is part of the MPlayer family and can also concatenate avi files:

mencoder -oac copy -ovc copy one.avi two.avi three.avi -o output.avi

Note: Both methods are lossless, as neither the video nor the audio stream is re-encoded in any way, but they also require all input files to use the same stream formats. If you shot the videos with the same device, though, that shouldn't be a problem.

Read more…

Another addition to my ever-growing fail pet collection. Today: Neatorama, a website collecting everything, well, neat.

Their "fail pet" is an octopus, the "Neatokraken":

Fail whale, make room: You've got company.

Read more…

A few days ago, a colleague of mine mentioned that the font I was using on my blog looked borderline ugly on Linux. Here's a screen shot:

As you can see, the uneven glyphs make it look goofy and certainly hard to read. The problem was that I used a font that seems to be present on many Mac and Windows computers, but was unavailable on my colleague's Linux box. His browser tried to substitute it with a different font -- with limited success.

So I decided to use a nifty little web feature called @font-face that allows me to define and embed my desired fonts into the website. Ideally, every browser on every platform will download the fonts I am using, and display my blog the way it is intended to look. The fonts I am using now are called Goudy Bookletter 1911 (for the headings) and Droid Serif (for the text).
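In CSS, the feature boils down to a rule like the following sketch (the font file path is a placeholder, not the actual file this blog serves):

```css
/* Declare the font and tell the browser where to download it from. */
@font-face {
  font-family: "Droid Serif";
  src: url("/fonts/DroidSerif.ttf") format("truetype");
}

/* Then use it like any other font, keeping sensible fallbacks. */
body {
  font-family: "Droid Serif", Georgia, serif;
}
```

The fallback fonts still matter: browsers that don't support @font-face, or that fail to download the file, fall back to the next name in the list.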

I hope you like the new fonts and find them pleasant to read. If you notice any problems, however, please let me know!

Thanks for the hint, Lars, and thanks to all commenters for providing valuable feedback!

Read more…

A while ago, when I was flying to Idaho with a layover in Salt Lake City, Utah, I was confronted with full-body scanners at an airport security checkpoint for the first time. It was a pilot test at the time, and there were signs saying I had the right to refuse the scanner. Appalled by the idea of doing a digital strip dance for the security officers, I refused, and while the security officer didn't appreciate the extra work, I only had to wait in line briefly, received a quick pat-down, and was sent on my way.

Full-body scanners have since received a lot of attention and have been introduced at many airports: some use them as the mandatory primary screening method, others allow opting out, and some use them only for secondary screening, that is, when the metal detector beeps or the like.

Today I am pleased to read that the Idaho House voted in favor of a bill restricting the use of such scanners in the state (the bill would forbid using them as the primary screening method in airports). The bill now moves to the Senate. While I may not agree with many views of American conservatives (given that I am European, probably not too shocking a statement), I agree with the assessment that full-body scanners amount to an unreasonable strip search of people who haven't given any indication that would warrant such treatment.

Now let's hope the law passes, and that other states, and perhaps countries, follow suit.

Thanks for the link, Jenny!

Read more…

Today, Mozilla is starting the public process on revising its signature code license, the Mozilla Public License or MPL. Mitchell Baker, chair of the board of the Mozilla Foundation and author of the original MPL 1.0, has more information about the process on her blog.

The discussion is happening on the website mpl.mozilla.org, which looks something like this:

I am happy about this for a number of reasons. For one, I made the website (the design is borrowed from mozilla.org), so I am naturally happy to see it become available to a wider audience.

But I also hope that the revision process itself will be successful. While the MPL has been a remarkable help in Mozilla desktop projects' success, it is unpleasant (to say the least) to use in web applications, for a number of reasons:

The hideous license block. The MPL is a file-based license: it allows any file in the project, even in the same directory, to be licensed differently. Therefore, each MPL-licensed code file must carry a comment block over 30 lines long at the top. For big code modules, that's fine. For web applications, whose files often have only a handful of lines each, this bloats the whole code base and makes files horribly unreadable. Sadly, the current license only allows an exception to that rule if including the block is impossible "due to [the file's] structure", which is essentially only the case if the file type does not allow comments.

The copyleft. This one is debatable, but it is a fact that some open-source communities (the Python community is one prominent example) do not appreciate strong copyleft provisions. While the MPL (unlike the GNU GPL) does not have a tendency to "taint" other code, it is not at all compatible with the BSD or MIT licenses' notion of "take it and do (almost) whatever you please with it". (As you may have noticed, the file-based nature of the MPL is both a curse and a blessing here.) I hope the revision process can make it clearer how the license applies to hosted applications (i.e., mostly web applications).

I am excited to see what the broad community discussion will bring to light over the next few months.

Read more…

Update: The author of pdftk, Sid Steward, left the following comment:

A new version of pdftk is available (1.43) that fixes many bugs. This release also features an installer [for] OS X 10.6. Please visit www.pdflabs.com to learn more and download.
This blog post will stick around for the time being, but I (the author of this blog) advise you to always run the latest version so that you can enjoy the latest bug fixes.

OS X Leopard users: Sorry, neither this version nor the installer offered on pdflabs.com works on OS X before 10.6. You might be able to compile from source though. Let us know if you are successful.


Due to my being a remote employee, I get to juggle PDF files quite a bit. A great tool for common PDF manipulations (changing page order, combining files, rotating pages, etc.) has proven to be pdftk. Sadly, a current version for Mac OS X is not available on their homepage. In addition, it is annoying (to say the least) to compile, which is why, last time I checked, none of the three third-party package managers I know of (MacPorts, Fink, and Homebrew) had it at all, or their versions were broken.

Now I wouldn't be a geek if that kept me from compiling it myself. I took some hints from anoved.net who was nice enough to also provide a compiled binary, but sadly did not include the shared libraries it relies on.

Instead, I made an installer package that installs pdftk itself as well as the handful of libraries it needs into /usr/local. Once you have run it, open Terminal.app and type pdftk; it should greet you as follows:

$ pdftk
SYNOPSIS
       pdftk <input PDF files | - | PROMPT>
            [input_pw <input PDF owner passwords | PROMPT>]
            [<operation> <operation arguments>]
            [output <output filename | - | PROMPT>]
            [encrypt_40bit | encrypt_128bit]
(...)
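For reference, here is what two of the everyday manipulations mentioned above look like with pdftk's cat operation. The file names are placeholders; the commands are wrapped in a function here only so the sketch loads without those files present:

```shell
# Placeholder file names; run the commands directly on your own PDFs.
pdftk_examples() {
  pdftk one.pdf two.pdf cat output combined.pdf     # combine two documents
  pdftk in.pdf cat 2-end 1 output reordered.pdf     # move page 1 to the end
}
echo "sketch loaded"
```

The same cat operation accepts arbitrary page ranges, which is how reordering works: you simply list the pages in the order you want them in the output.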

You can download the updated package here: pdftk1.41_OSX10.6.dmg

(MD5 hash: ea945c606b356305834edc651ddb893d)

I have only tested it on OS X 10.6.2; if you use it on older versions, please let me know in the comments whether it worked.

Read more…