As mentioned earlier, I dove a little into the world of non-relational databases for web applications. One of the more interesting ones seems to be MongoDB. By the way, a video of the presentation I attended is now online as well.

MongoDB not only seems to be "fully buzzword compatible" (the project describes itself as "a scalable, high-performance, open source, schema-free, document-oriented database"), it also looks like an interesting alternative storage backend for web applications, for various reasons I'd like to outline here.

Note that I haven't worked with MongoDB extensively, nor have any of the significant web applications I've worked on used non-relational databases yet. So you are very welcome to point out anything I got wrong in the comments.

First, some terminology: Schema-free and document-oriented essentially means that your data is stored as a loose collection of items in a bucket, not as rows in a table. Different items in the bucket can be uniform (in OOP terms, instances of the same class), but they needn't be. In MongoDB, if you access a non-existent object, it'll spring into existence as an empty object. Likewise for a non-existent attribute.
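
To illustrate, here is a minimal sketch using today's pymongo driver (the database, collection, and field names are all made up for illustration):

from pymongo import MongoClient

db = MongoClient()["myapp"]  # connects to localhost:27017 by default

# Two differently-shaped documents can live in the same collection:
db.items.insert_one({"type": "post", "title": "Hello", "tags": ["mongo"]})
db.items.insert_one({"type": "user", "name": "Fred"})  # no title, no tags

# A query on a field that some documents lack simply skips those documents:
for doc in db.items.find({"tags": "mongo"}):
    print(doc["title"])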

How can that help us? Web applications have a much faster development cycle than traditional applications (an observation reflected, for example, in the recent development changes on AMO). With every feature change, the related database changes have to be applied just as frequently, each time write-locking the site for up to several minutes, depending on how big the changes are. In a schema-free database, code changes can be rolled out smoothly and can start using new fields right away, on the affected items only. For example, in MongoDB, adding a nickname to the user profiles would be trivial, and every user object that never had a nickname before would be assumed to have an empty one by default. The tedious task of keeping the database schema in sync between development and deployment basically goes away entirely.
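
Continuing the pymongo sketch from above, the nickname example could look like this (a hedged sketch, not code from an actual application; user_id stands in for a real ObjectId):

# Set the new field on one user; no schema migration, no write lock:
db.users.update_one({"_id": user_id}, {"$set": {"nickname": "fred"}})

# Older user documents simply lack the field; treat missing as empty:
user = db.users.find_one({"_id": user_id})
nickname = user.get("nickname", "")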

In traditional databases, we have gotten accustomed to the so-called ACID properties: Atomicity, Consistency, Isolation, Durability. By relaxing these properties, unconventional databases can score performance-wise, because less locking and less database-level abstraction is needed. Some examples of ACID relaxations that I gathered about MongoDB are:

  • MongoDB does not have transactions, which affects both Atomicity and Isolation. This will let other threads observe intermediate changes while they happen, but in web applications that is often not a big deal.
  • MongoDB relies on eventual consistency, not strict consistency. That means, when a write occurs and the write command returns, we cannot be 100% sure that, from that moment on, all other processes will see only the updated data. They will only eventually see the changes. This affects caching, because we can't invalidate and re-fill our caches immediately, but again, in web applications it's often not a big deal if updates take a few seconds to propagate.
  • Durability is also relaxed in the interest of speed: As we all know, accessing RAM takes a few nanoseconds, while hitting the hard drive is easily many thousands of times (!) slower. Therefore, MongoDB won't make sure your data is on the hard drive immediately. As a result, you can lose data that you thought was already written if your server goes down in the window between the write and the actual flush to disk. Luckily, that doesn't happen too often. (A sketch of this trade-off follows right after this list.)
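
For illustration, here is how you could opt out of write acknowledgement entirely with today's pymongo driver. This is my own sketch of how you'd exercise the trade-off, not something from the presentation; the collection name is made up:

from pymongo import MongoClient, WriteConcern

db = MongoClient()["myapp"]

# w=0 means "fire and forget": the call returns without waiting for the
# server to acknowledge (let alone persist) the write.
fast_events = db.events.with_options(write_concern=WriteConcern(w=0))
fast_events.insert_one({"action": "pageview", "path": "/"})

If the server dies before the data reaches the disk, that write is simply gone: acceptable for a page-view log, not for account balances.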

As you see, if our application is not a banking web site and we are willing to part with some of the guarantees that traditional databases offer, we can use a database like MongoDB, which fits the way modern web applications are developed much more closely than regular RDBMSes do. Whether that's an option is something every project needs to decide on a case-by-case basis.

Read more…

There is a Belarusian version of this article provided by PC.


One of the talks I really enjoyed at the recent FOSDEM was Paul and Tristan's presentation on Hackability. (Tristan uploaded the English slides to slideshare, as well as the French ones.)

Essentially, it was a great promotion for keeping the Web (and Firefox, the tool we view it through) open, both legally and technically, its building blocks visible and interchangeable. If you can't open it, you don't own it.

As a result, this also means the "view source" function is not there to feed the user's idle curiosity; it is a vital and irreplaceable part of the Web. Likewise, a tool like Firebug does not exist to "break" other people's websites. Instead, it helps us use the web the way it was meant to be used.

Recently, a colleague of mine (I don't remember who, sorry) linked to a little website called patchculture.org that, in spite of its simple appearance, promotes exactly that: using the Web the way it was meant to be used, fixing and improving the Web on our way through other people's sites, and, better yet, sharing our changes with the people who own those sites. Their steps are easy: 1) install Firebug, 2) change a website, 3) email a patch to the owner.

Sounds easy (to geek ears, anyway), but it is harder than it looks. For starters, how do I get my changes out of Firebug? It's a concept we could call "diffability". If I have to write a book describing what I did to some website's DOM nodes and CSS rules, I am far less likely to fix someone else's website for them than if there were an easy way to export my changes. Granted: Even if Firebug let me export a unified diff, owners of non-trivial, framework-based web sites wouldn't be able to just go ahead and apply it to their codebase. However, diffs are readable by human engineers. Without needing a ton of words, the website owner could look at the changes I made and choose to apply them to their software in the appropriate spots.

Second, how do I make my changes stick? We Open Source developers are, of course, some of the more altruistically inclined citizens of the Web; still, if you are going to fix someone's website, you are likely to do so to lower your own annoyance level first, then everybody else's. Therefore, you want your changes to "stick", whether or not the website owner decides to accept and deploy them.

Thankfully, this is achievable, though it involves a little bit of a hassle. There are add-ons out there, most notably Stylish (for CSS-based changes) and Greasemonkey (for JS-based changes). These two were recently joined by Jetpack Page Mods. While Greasemonkey is a solid platform with tons of contributions, its biggest flaw is the lack of a solid standard library that takes the pain out of JavaScript, a problem Jetpack mitigates by shipping with jQuery included. In comparison, using jQuery with Greasemonkey is many things, none of which is "beautiful". If Greasemonkey wants to stay the technology of choice for "web hackers", it needs a standard library. Only then will it fill its place as a lightweight extension engine in the future (yes, in spite of its recent inclusion in Chrome). It would be a twisted situation if it became easier to write a full-blown (Jetpack-based) extension than to write a user script. It's the reason I am already writing small website changes as Jetpacks and not GM scripts, and I am not the only one. But because competition is good for business, on the Web as much as elsewhere, I hope the Greasemonkey guys stay on top of their game.

In summary:

  • Let's make and keep the Web open and hackable!
  • We can change web sites, but it's hard to share what we did. A great step towards more open hacking would be a diff engine in Firebug, even if it only exports pseudo-diffs, and even if the diffs can't be applied with one click unless you run a fully static website.
  • Finally, it's possible but hard to make changes stick. Greasemonkey is a strong contender in the field, but if they want to keep being the number one "hackability engine", they'll need to make writing scripts easier by adding a decent standard library. After all, it is not the 20th century anymore.

Read more…

I spent last weekend at FOSDEM 2010, the tenth installment of the "Free and Open Source Software Developers' European Meeting". It was my first time there, and it was great: a full-blown conference and meeting point for both big and small open source projects from all over Europe.

Let me outline some of the highlights:

  • As expected, the Mozilla presentations were well attended, and the Mozilla Europe team presented great HTML 5 features that'll make the future of the Web (and web developers' future) bright. Another presentation focused on the importance of Hackability for making the future of technology what we want, not what we are being fed.
  • Sunday I spent some time on the NoSQL track. It started off with a good presentation on what non-relational databases can do for you, and why they are not supposed to replace SQL. While NoSQL is a buzzword, there is real potential for faster, smoother applications in dropping the rigid framework that relational databases impose on us developers, in cases where its advantages are not needed.
  • Another NoSQL related presentation, Introduction to MongoDB, showed off the features of this particular, schema-free, document-oriented, database. I found it highly interesting for web applications and am looking forward to giving it a shot on an upcoming project.
  • Finally, two Facebook engineers explained what Open Source projects they have used and improved to scale their infrastructure to accommodate their enormous user base. What's impressive is that they have introduced improvements across almost all parts of the software stack. In order to serve pictures faster, for example, they wrote a file system that allows them to grab a file in a single read. Another interesting technology is HipHop, their PHP-to-C++ compiler. This ensures that they can hire PHP developers, yet have a ridiculously fast web application. That's probably as ugly as it sounds, but luckily not everybody has to do it ;)

On some of these issues, I am going to go into more detail in followup posts.

I also went to some presentations that affect my work on the Mozilla project slightly less:

  • One of the keynotes, Evil on the Internet, was as insightful as it was scary. Not only are the scams out there on the Internet getting smarter and harder to detect, it is also frightening how long some scam sites stay online when no-one feels responsible for taking them down.
  • Professor Andrew Tanenbaum showed off his MINIX microkernel, version 3, for which he recently received a significant research grant from the European Union. He would also like to see Firefox ported to MINIX. Anyone want to help him out? :)

All in all, FOSDEM 2010 was a great success, thanks to all the volunteers who made it happen!

Read more…

So you wrote a fancy little Django app and want to run it on a lighttpd webserver? There's plenty of documentation on this topic online, including official Django documentation.

Problem is, most of these sources do not mention how to use virtualenv, but the cool kids don't install their packages into the global site-packages directory. So I put some scripts together for your enjoyment.

I assume that you've put your Django app somewhere convenient, and that you have a virtualenv containing its packages (including Django itself).

1. manage.py

You want to set up this file so it adds the virtualenv's site-packages directory to Python's module search path: site.addsitedir('path/to/mysite-env/lib/python2.6/site-packages'). Note that you need to point directly to the site-packages dir inside the virtualenv, not just the main virtualenv dir. For obvious reasons, this line needs to come before the Django-provided from django... import, because you can't import Django modules if Python doesn't know where they are.
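
The top of manage.py could then look roughly like this (a sketch based on the stock Django 1.x manage.py; the virtualenv path is a placeholder):

import site
# Must run before any Django imports, or Python won't find the packages:
site.addsitedir('path/to/mysite-env/lib/python2.6/site-packages')

from django.core.management import execute_manager
try:
    import settings  # your project's settings.py
except ImportError:
    import sys
    sys.exit("Error: Can't find settings.py.")

if __name__ == "__main__":
    execute_manager(settings)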

2. settings.py

The lighttpd setup will result in mysite.fcgi showing up in all your URLs unless you set FORCE_SCRIPT_NAME correctly. If your Django app is supposed to live right at the root of your domain, for example, set it to the empty string.
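
For the root-of-the-domain case, that's a single line in settings.py:

# Keep /mysite.fcgi out of all generated URLs:
FORCE_SCRIPT_NAME = ""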

3. django-servers.sh

This is an init script (for Debian, but you can presumably modify it to work with most distros). Copy it to /etc/init.d, adjust the settings at the top (and possibly other places, depending on your setup), then start the Django fastcgi servers. Note that you need to have the flup package installed in your virtualenv.
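
At its core, such a script wraps a call along these lines (a sketch; host, port, and paths are placeholders, and the exact flags depend on your setup):

path/to/mysite-env/bin/python manage.py runfcgi method=threaded \
    host=127.0.0.1 port=3033 pidfile=/var/run/django-mysite.pid

The host and port are what lighttpd will connect to in the next step; runfcgi is the Django management command that needs flup installed.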

4. lighttpd-vhost.conf

Set up your lighttpd vhost pretty much like the Django documentation suggests. Match up the host and port with the settings from your init script. By using mod_alias for the media and admin media paths, you'll have lighttpd serve them directly instead of passing those requests on to Django as well.
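
The relevant part of the vhost could look roughly like this (a sketch along the lines of the Django docs' lighttpd example; host, port, and paths are placeholders):

fastcgi.server = (
    "/mysite.fcgi" => (
        "main" => (
            # must match the init script's settings:
            "host" => "127.0.0.1",
            "port" => 3033,
            "check-local" => "disable",
        )
    ),
)

alias.url = (
    "/media" => "/path/to/mysite/media/",
)

url.rewrite-once = (
    "^(/media.*)$" => "$1",         # served by lighttpd via mod_alias
    "^(/.*)$" => "/mysite.fcgi$1",  # everything else goes to Django
)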

That's it! You've deployed your first Django application on lighttpd. If you have any questions or suggestions, feel free to comment here or fork my code.

You can look at all the scripts together over on github or download them in a package.

Read more…

Over the recent weeks I've been getting frequent blog spam along the lines of:

Hi. I just noticed that your site looks like it has a few code problems at the very bottom of your site's page. I'm not sure if everybody is getting this same problem when browsing your blog? I am employing a totally different browser than most people, referred to as Opera, so that is what might be causing it? I just wanted to make sure you know. Thanks for posting some great postings and I'll try to return back with a completely different browser to check things out!

(emphasis: mine)

Not only does my blog display just fine in Opera (yes, I checked), I sometimes get even more bogus comments claiming that my blog looks horrible in Firefox, of all browsers. Dear spammers, now you're just making fools of yourselves.

The main thing identifying this kind of comment as spam (other than the bogus claim that my blog doesn't render correctly in non-Internet-Explorer browsers) is the URL these comments come with. Usually, they promise a "free" iPod, MacBook, car, house, airplane or ride to the moon (exaggeration: mine).

I wonder how many bloggers actually publish these, thinking it's well-meant advice. :(

Photo credit: "Spam" CC by-sa licensed by twicepics on flickr

Read more…

To keep track of my ever-growing to-do list, I am using a fabulous little application called "Things". And most of my work-related to-do items are bugs in Mozilla's bugzilla bug tracker.

So, me being a geek and all, I quite naturally wanted to integrate the two and wrote a little AppleScript that asks the user for a bugzilla.mozilla.org bug number, obtains its bug title, and makes a new to-do item for it in Things' Inbox folder.

The script is available as a gist on github. Click here to download it.

If you look at the code, you'll notice that I went ahead and embedded some Python code into the script to do the heavy lifting. The problem with AppleScript is not only that it has a hideous syntax, it also completely lacks a standard library for things like downloading websites and regex-parsing strings. Let's look at it a little more closely:

set bugtitle to do shell script "echo \"" & bugzilla_url & "\" | /usr/bin/env python -c \"
import re
import sys
import urllib2
bug = urllib2.urlopen(sys.stdin.read())
title = re.search('<title>([^<]+)</title>', bug.read()).group(1)
title = title.replace('&ndash;', '-')
print title
\""
  • set bugtitle to do shell script "" means: assign whatever this shell expression returns to the variable bugtitle. This way, we just need to print our final result to stdout and can keep using it in AppleScript.
  • echo \"" & bugzilla_url & "\" | /usr/bin/env python feeds some input data into the Python script through stdin. We read that a few lines later with sys.stdin.read(). Another method, especially for more than one input values, would be command-line parameters, all the way at the end of the Python block (after the source code).
  • Finally, in python -c \"mycode\" the -c marks an inline code block to be executed by the Python interpreter. Other languages, such as Perl, PHP, or Ruby, have similar operating modes, so you can use those as well.
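
For completeness, the command-line parameter variant could look like this (a sketch; the URL is just an example):

python -c "
import sys
print sys.argv[1]  # with python -c, extra arguments start at sys.argv[1]
" "https://bugzilla.mozilla.org/show_bug.cgi?id=12345"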

If you want to install the Things-Bugzilla AppleScript, make sure to download the entire Gist as it also contains an install script for your convenience.

Read more…

A few days ago techtarget published a short interview about the OSU Open Source Lab, where I worked while studying at OSU:

"Lance Albertson, architect and systems administrator at the Oregon State University Open Source Lab, uses a sys admin staff of 18-21-year-old undergrads to manage servers for some high-profile, open-source projects (Linux Master Kernel, Linux Foundation, Apache Software Foundation, and Drupal to name a few). In this Q&A, Albertson talks about the challenges of using young sys admins and the lab's plans to move from Cfengine to Puppet for systems management."

(via slashdot).

I must say, the work I've seen student sys admins do at the OSL is outstanding, and I've met some of the sharpest people I've ever worked with there. Glad to hear they are still going strong.

Thanks for the link, Justin!

Read more…

With the Subversion VCS, one way to import external modules or libraries into a code tree is by defining the svn:externals property of your repository. Subversion will then check out the specified revision (or the latest revision) of the other repository into your source tree when checking out your code.
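
For example, to pull an external library into a libs/ directory within your tree, you would set the property on the parent directory like so (the path and URL are made up):

svn propset svn:externals "libs http://svn.example.com/repos/libs" .
svn commit -m "add libs external"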

Submodules are basically the same thing in the "git" world.

And since git can talk to subversion repositories with git svn, we should be able to specify a third-party SVN repository as a submodule in our own git repository, right? Sadly the answer is currently: No.

Here is a workaround that I have been using to at least achieve a similar effect, while keeping both SVN and git happy. This assumes that you have a local git repository that is also available somewhere else "up-stream" (such as github), and you want to import an external SVN repository into your source tree.

1: Add a tracking branch for SVN

Add a section referring to your desired SVN repository to your .git/config file:

(...)
[svn-remote "product-details"]
    url = http://svn.mozilla.org/libs
    fetch = product-details:refs/remotes/product-details-svn

Note that in the fetch line, the part before the colon refers to the branch you want to check out of SVN (for example: trunk), and the part after it is the local location of the tracking branch, i.e. product-details-svn will be our remote branch name.

Now, fetch the remote data from SVN, specifying a revision range unless you want to check out the entire history of that repository:

git svn fetch product-details -r59506:HEAD

git will check out the remote branch.

2: Clone the tracking branch locally

Now we have a checked-out SVN tracking branch, but to use it as a submodule, we must make a real git repository from it -- a branch of our current repository keeps everything in one place and works just as well. So let's check out the tracking branch into a new branch:

git checkout -b product-details-git product-details-svn

As git branch can confirm, you'll now have (at least) two branches: master and product-details-git.

3: Import the branch as a submodule

Now let's make the new branch available upstream:

git push --all

After that's been pushed, we can import the new branch as a submodule where we want it in the tree:

git checkout master
git submodule add -b product-details-git ../reponame.git my/submodules/dir/product-details

Note that ../reponame.git refers to the up-stream repository's name, and -b ... defines the name of the branch we've created earlier. Git will check out your remote repository and point to the right branch automatically.

Don't forget to git commit and you're done!

Updating from SVN

Updating the "external" from SVN is unfortunately a tedious three-step process :(. First, fetch changes from SVN:

git svn fetch product-details

Second, merge these changes into your local git branch and push the changes up-stream:

git checkout product-details-git
git merge product-details-svn
git push origin HEAD

And finally, update the submodule and "pin it" at the newer revision:

git checkout master
cd my/submodules/dir/product-details
git pull origin product-details-git
cd ..
git add product-details
git commit -m 'updating product details'

Improvements?

This post is as much a set of instructions as it is a call for improvements. If you have an easier way to do this, or if you know how to speed up or simplify any of this, a comment to this post would be very much appreciated!

Read more…

The Firefox 3.6 Release Candidate 1 has been released and its new features are just fantastic! From the first-run page:

Go check it out.

Read more…

Here at Mozilla, a bunch of webdevs use git svn instead of plain Subversion to interact with our svn repositories -- mostly because of in-place branching, better merging, and all these things that make a dev's life happier.

As you probably know if you are reading this article, you push all local commits that SVN hasn't seen yet to the remote svn repository by typing git svn dcommit. This will take your last, say, five commits, push them to SVN, and mark them locally as committed. But what if you only want to dcommit some of your changes?

(If you don't need explanations, jump straight to the summary).

Step 1 (optional): Reorder commits

I am not going to go into a lot of detail on interactive rebasing (git rebase -i), but to start off, make sure your commits are ordered so that the ones you do want to commit to SVN come before the ones you do not want to push up-stream for now. For example: If, in the following history, you want to commit all changes but a023fea, you want to rebase your commits so that a023fea comes last:

In git rebase -i HEAD~4, change...

pick 07c26c5 some de L10n
pick a023fea adding free-text messages to localizer dashboards
pick 8597f47 adding featured collections as l10n category
pick 19f3df3 making existing localizer pages work with amo2009 layout
... to ...

pick 07c26c5 some de L10n
pick 8597f47 adding featured collections as l10n category
pick 19f3df3 making existing localizer pages work with amo2009 layout
pick a023fea adding free-text messages to localizer dashboards

Make sure to resolve any merging problems that might occur due to the reordering.

Step 2: Step in between commits

To only push the desired commits to svn, execute another git rebase -i and mark the last desired commit for editing (by changing pick to edit):

pick 07c26c5 some de L10n
pick 8597f47 adding featured collections as l10n category
edit 19f3df3 making existing localizer pages work with amo2009 layout
pick a023fea adding free-text messages to localizer dashboards

When exiting the editor, git will drop you off after the marked commit, but before the one you don't want, as a quick look at git log can tell you.

Step 3: dcommit desired changes

After making sure this is really what you want, just execute git svn dcommit as usual and watch git push all desired changes to SVN, while omitting the rest.

Step 4: Fast-forward to HEAD

When the dcommit is done, remember we are still in the middle of a rebase, so just run git rebase --continue to fast-forward to the HEAD of your branch. Again, a quick look at git log can reassure you that only the changes you wanted have been pushed to SVN.

Success!

Summary: Quick cheat sheet

Here's a quick cheat sheet for you (and me) to come back to in case you forget:

  • Reorder commits (git rebase -i HEAD~4) so that the undesired ones are after the ones you want to push
  • In your commit history, jump right after the last wanted commit by marking it for editing in git rebase -i
  • git svn dcommit
  • git rebase --continue

Read more…