Configure nginx to serve downloads for Redmine

You know, I’m a big fan of nginx. Lately I’ve been using Redmine as a project management tool too, and it’s really, really great - I can’t recommend it highly enough! Redmine is written in Ruby on Rails (Rails for short, of course), but setting it up is pretty much no sweat. Once it’s up and running, you’ll wonder how you ever lived without it.

Since the web server running Redmine - or Rails, to be precise; whatever it is: WEBrick, Mongrel, Thin, Passenger… - is not designed for static file handling, you’ll most likely end up using nginx as a proxy for the upstream Rails web server, which often listens on port 3000. Assuming you have a directory structure similar to mine, a typical nginx configuration may look like this:

server {
    listen       80;
    server_name  cool.redmine.com;
    error_log /var/www/redmine-2.0.1/log/error.log;
    access_log /var/www/redmine-2.0.1/log/access.log;
    location ~ ^/(themes|javascripts|stylesheets)/ {
        root /var/www/redmine-2.0.1/public;
    }
    # proxy all other requests to thin webserver
    location / {
        proxy_pass        http://127.0.0.1:3000;
    }
}

This is fairly straightforward: nginx is configured to listen on port 80. Any public request for static content (themes, javascripts, stylesheets) is served by nginx directly. All other requests are sent upstream to the Rails web server (Thin in this case) listening locally on port 3000. We’re good to go at this point.

“How about the downloads? Shouldn’t we let nginx handle the downloads too? Nginx rocks at it!” you ask. Good question indeed, but it’s not that simple. If we configured nginx that way, all the downloads would be open to the public through nginx, which is absolutely not what we want. We want the requests to be authorized by Redmine (Rails) first! So with this configuration, Rails will handle the file download requests, authorize them, and send the files on valid authorization, or an error message otherwise.

“OK so let it be. You said we’re good to go? Great, let’s just go then” you say.

Yes. But no, wait.

Indeed, with the configuration above, we’re good to go. But not *that* good. There’s a big problem lying there, and it’s about how Rails handles file downloads. To serve a file download, Rails first loads the whole file into memory (read: RAM) and only starts sending it chunk by chunk to the client once this loading is done. It sucks, if you ask me. Even worse: the consumed memory will NOT be released, though it may be reused. Now it sucks by a megaton! If you have a one-gigabyte file, say a Photoshop PSD, your whole server may be dead.

So now to sum up: First, we want all downloads to be authorized by Rails. Second, we want the file transfers to be done by nginx. Seems legit.

The question is, how?

As both an nginx and a Rails newbie, I dug through the whole internet for an answer. Finally, it came, in the form of - wait for it - nginx’s X-Accel headers. This will be the scenario:

  1. The client (Chrome, Firefox, IE, you name it) sends a request for a download
  2. nginx receives the request
  3. It then adds some indication, essentially saying “This guy asks for a download. Please authorize him. If OK, let me know where the file is, so that I can serve him.”
  4. It then passes the request upstream to Rails (Thin) as normal
  5. Rails authorizes the request successfully
  6. It then sends an internal request back to nginx with the file’s real location
  7. nginx happily serves the file to the lucky user
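
To make steps 5 and 6 concrete: on successful authorization, what travels back to nginx is an almost empty response carrying one magic header. It looks roughly like this (the file name and path here are made up for illustration):

HTTP/1.1 200 OK
Content-Type: image/png
Content-Disposition: attachment; filename="logo.png"
X-Accel-Redirect: /files/logo.png

nginx sees the X-Accel-Redirect header, drops the empty body, and serves the file from the matching internal location by itself.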

As simple as it sounds, the very few available tutorials really confused me - they were written by advanced users, whereas, again, I’m a newbie. After hours of trying, though, I finally made it. Here is the configuration that works for me:

In /var/www/redmine-2.0.1/config/environments/production.rb, right before the closing end, I added this line

config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect'

which instructs Rails to emit nginx’s X-Accel-Redirect header instead of streaming the file itself through its native send_file. And the above nginx configuration was replaced with this:

server {
    listen       80;
    server_name  cool.redmine.com;
    # same old same old
    error_log /var/www/redmine-2.0.1/log/error.log;
    access_log /var/www/redmine-2.0.1/log/access.log;
    location ~ ^/(themes|javascripts|stylesheets)/ {
        root /var/www/redmine-2.0.1/public;
    }
    # ! The following two blocks enable nginx to serve downloads instead of Rails !
    location /attachments {
        proxy_redirect    off;
        proxy_set_header  X-Sendfile-Type   X-Accel-Redirect;
        proxy_set_header  X-Accel-Mapping   /var/www/redmine-2.0.1/files=/files;
        proxy_pass        http://127.0.0.1:3000;
    }
    location /files {
        root /var/www/redmine-2.0.1/;
        internal;
    }
    # proxy all other requests to thin webserver
    location / {
        proxy_pass        http://127.0.0.1:3000;
    }
}

Two location directives were added: location /attachments and location /files. The first is for public requests (steps 1 and 2 in the scenario). The latter is for internal requests (step 6 in the scenario). We don’t want these internal requests to be reachable by the public, hence the internal keyword. Notice these two lines:

proxy_set_header  X-Sendfile-Type   X-Accel-Redirect;
proxy_set_header  X-Accel-Mapping   /var/www/redmine-2.0.1/files=/files;

The first tells Rails that nginx will be serving the file. The second is a mapping, which can be loosely translated as “If the file is located under /var/www/redmine-2.0.1/files, send me an internal request at the /files location.” In our case, the file is indeed at that location, so Rails will direct nginx to

location /files {
    root /var/www/redmine-2.0.1/;
    internal;
}

Here, nginx serves the file, beautifully. And that’s how I did it! Let me know if this works for you as well.

A new plugin, and hi… it’s been 2 years

So yes, it’s been 2 years since my last post. I’ve still been receiving quite a few comments on this blog, but I was too busy with other projects (sorry folks). As a result, all of my plugins are now out of date. Some of them may no longer work flawlessly with the newer versions of WordPress.

But now that I have just quit my 9-to-5 life, I think I’ll dedicate some time to this blog and the plugins again. So the first thing I’m doing is writing a new plugin, Lazy Moderator. In short, it populates WordPress comment notification emails with true one-click-to-moderate links - you don’t have to log in and confirm. You can read more about it here or here.

Next in the plan is a complete overhaul of the old plugins. Though I have the motivation, this is not a promise, so don’t bet on it.

Wish me luck!

How to configure nginx to run Kohana on Ubuntu

As a web developer I’ve been using Apache for a long, long time. Recently though, I’ve started to move away from Apache in favor of nginx (pronounced “engine-X”). It’s not that I really need its power; it’s just that I wanted to learn something new to break my box.

It’s fairly simple to set up and get nginx running with FastCGI and MySQL on Ubuntu - a very well-written tutorial can be found on HowtoForge, and the whole thing should take you less than 15 minutes. In this article, therefore, I will only write about how to configure nginx to actually run a Kohana-powered site, with virtual hosts, rewriting and such. If you’re not familiar with Kohana, take a look at my article here.

The prerequisites

  • I have my Kohana-powered site located under the /home/phoenixheart/www/my-kohana/ directory with proper permissions set (the owner being www-data, that is).
  • nginx has been set up properly and is listening on port 80, with the configuration directory being /etc/nginx/.
  • I want my site to be locally accessible via my-kohana.dev. Any requests to www.my-kohana.dev should be permanently redirected to my-kohana.dev - a setup also known as “force non-www”.
  • I want neat URL rewriting without “index.php”; for example, index.php?controller=product&function=get&id=1 should be rewritten into /product/get/1.
  • I also want all existing files and directories under the root directory to be accessible, except Kohana’s system directories system, application, and modules. Any attempt to access hidden files and directories (beginning with a dot, like .htaccess or .settings) should be disallowed as well.

All clear. So let’s do it!
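
Before the step-by-step part, here is a rough, untested sketch of the kind of server blocks those requirements imply, just to show where we’re heading. The paths match my setup above, but the FastCGI backend address (127.0.0.1:9000) is an assumption based on the HowtoForge setup, and the exact try_files line may vary with your Kohana version:

# force non-www: permanently redirect www.my-kohana.dev to the bare domain
server {
    listen       80;
    server_name  www.my-kohana.dev;
    rewrite ^ http://my-kohana.dev$request_uri? permanent;
}

server {
    listen       80;
    server_name  my-kohana.dev;
    root         /home/phoenixheart/www/my-kohana;
    index        index.php;

    # deny Kohana's internal directories and anything starting with a dot
    location ~ ^/(system|application|modules)/ { return 404; }
    location ~ /\. { deny all; }

    # serve existing files and directories directly;
    # everything else goes to index.php for Kohana to route
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
    }
}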
Read more »

Social Sketches - a free icon set released for Six Revisions

Today I’m so pleased to announce the release of Social Sketches, my hand-drawn icon set done exclusively for Six Revisions. Initially it was made for Referrer Detector on my just-started sketch project The Daily Faces, but then I decided to make it available for public use, hence the feature on Six Revisions yesterday.

Here is the preview of the set:

For more information and download, please head to Six Revisions’ post.

P.S. I have a plan to add some more icons into the set, so stay tuned ;)

New domain hack idea

Today, I happened to visit The Daily Monster. It’s a very cool site; I highly recommend you guys visit it.

This post is not about Stefan and his monsters, however, but about some domain hack ideas that I’ve just come up with today. In case you’re not familiar with the term, Wikipedia has a clear definition of a domain hack:

A domain hack (sometimes known as a domain name hack) is an unconventional domain name that combines domain levels, especially the top-level domain (TLD), to spell out the full “name” or title of the domain. Well-known examples include blo.gs, del.icio.us, and cr.yp.to.

So upon visiting The Daily Monster, I was quite surprised to see it didn’t have its own domain name. I was expecting something like thedailymonster.com or dailymonster.com or dailymonsters.com, but it turned out that the URL was a Typepad subdomain, not a standalone domain. I totally believe that content is king, but a good domain name in this case would be the crown. I was telling myself “Maybe the domains have been purchased by other guys” and I was right - none of them are available.

Then I thought that, based on this “one design a day” concept, I’d do a similar site of my own. “Daily face”, how about that? A drawing of a face each day. I used to sketch a lot, sometimes with a pencil, sometimes with the computer mouse, like this one:

One of my sketches

Read more »

WordPress: Thank that first time commentator!

Thumbnail credit: Premshree Pillai

To a website, comments are important - this you must agree. But not all visitors leave comments - in fact, very, very few do. Most of them care about the content only, and tend to leave (bounce from) the site right after getting the information they need (so sad a life, huh?).

Many tips have been introduced and used to encourage visitors to contribute to your site via comments. To my knowledge, and to name a few:

  • Use dofollow links in comments. By default, WordPress and other blogging systems mark links in comments with the rel="nofollow" attribute. This attribute tells search engines not to follow the links, which means the commenter’s site will not be able to share any Google juice with you. While effective in fighting spammers, this technique may somewhat disappoint real visitors. Plugins like Dofollow address this problem and remove the “nofollow” attribute from comment links.
  • Further promote the commenter’s blog (if any). CommentLuv is a plugin that “will visit the site of the comment author while they type their comment and retrieve a selection of their last blog posts, tweets or digg submissions which they can choose one from to include at the bottom of their comment when they click submit”.
  • Choose a (random) comment and give small prizes such as free ebooks, premium themes etc.
  • Explicitly ask the readers to give comments at the end of the article - “Please share your thoughts”, “What do you think?”, “What say you?” etc. etc.

Today I would like to mention another method to encourage commenting. Though this won’t likely attract more commenters, it may encourage existing ones to leave more comments and become more effective contributors.

The method is called “Thank first time commenters” and it works like this:

Read more »

How I sped up my Thica.net

Thumbnail credit: Amnemona

If you didn’t notice, I have another site called Thica.net - Vietnam poetry network, a WordPress (what else?) powered blog dedicated to poems in Vietnamese. The site is receiving about 60K views per month, 12x what it got when it started back in March 2008, and I’m rather happy about it.

About one month ago, Thica.net started to become very slow and tended to produce strange problems. More often than not it threw a 503 error just when I attempted to add a new post, or a 404 Page Not Found for a page that I knew was there, such as the admin panel, the plugin section etc. After a deep look inside, I decided that my site was too bloated and that it was time to optimize things to speed it up. Admittedly, the result is nowhere near perfect, but it satisfies my needs. So I think I’ll share it with you here.

1. Eliminate unused plugins

Plugins (original image by smackfu)

Being a developer, I’m a big fan of plugins and addons. My Firefox has about 30 addons, ranging from Adblock Plus to UltraSurf (I’m living in a communist country, FYI) and YSlow. Similarly, Thica.net had like 50 plugins, active and inactive alike. So you know, plugins power up WordPress in many ways, but on the downside they slow it down because of all the added functions, hooks, data and so on. Some plugins are even terribly written (like one random post plugin which gets ALL posts from the database and uses a PHP loop to pick 5 random ones - WTH) and may cause serious problems: slowness, security holes, or even crashing your site.

Read more »

Code Snippet 3 - Create post slugs

If you’re used to WordPress, you must have noticed that a blog usually doesn’t use the default permalink structure (like http://site.com/?p=43, where 43 is the post ID stored in the database). Instead, almost all blog owners tend to use the built-in options form to set the permalinks to something like http://site.com/a-great-post and leave the rest to Apache’s mod_rewrite to handle. In this case, a-great-post is called a post slug, or a slug for short. According to the WordPress Codex:

A slug is a few words that describe a post or a page. Slugs are usually a URL friendly version of the post title (which has been automatically generated by WordPress), but a slug can be anything you like. Slugs are meant to be used with permalinks as they help describe what the content at the URL is.

In case you are wondering, slugs play a really, really important part in SEO. This is due to the fact that search engines like Google analyze a URL, and if it is relevant to the page’s content, the page’s ranking may be boosted. Just as with us humans, ?p=43 doesn’t tell anything, but how-to-create-post-slugs surely does.

So how is a slug generated?

Read more »

Here we go - CDN Rewrites

Right after Free CDN was released, I got a request to enhance the plugin to support commercial Content Delivery Networks - you know, those big guys like Akamai, Limelight, EdgeCast, Velocix etc. The implementation is not too complicated: specify an origin host, and rewrite it into a destination host. That origin is of course usually http://www.a-busy-site.com, and the destination is something a Content Delivery Network would provide you with: http://static.a-busy-site.com, or http://images.a-busy-site.com, or http://a-static-host.com etc. This way, all static content will be served from that CDN host.

So, instead of developing the enhancement as a new feature for Free CDN, I decided to create a new plugin called CDN Rewrites.

Read more »

First version of Free CDN WP plugin released!

Office life has its advantages and disadvantages. On one hand, it keeps me so exhausted and leaves me so little free time for other hobbies - I’m talking about my books, my December Flower guitar self-training, my photography stuff etc. On the other hand, it does improve my knowledge and skills with all those work requirements.

I’m rather lucky to be working as an R&D guy at my current company, and thus got a (legal) chance to (legally) spare a lot of time for (sometimes illegal) new and cool stuff. Among them is the CDN, a solution that distributes (mostly static) content across a network and lets end users access copies from the cloud instead of the central server itself, thus reducing bottleneck problems during peak hours. To enterprise websites like those of Microsoft, Yahoo, Amazon, eBay etc., this is vital, as the number of concurrent visitors and downloads very frequently exceeds millions. Some of them build their own CDNs, while others hire third-party services to handle the load to save time and money. The best known among these third-party services are probably Akamai and Limelight, though there is a vast array of them, naturally. For instance, Windows 7 downloads (~2GB each!) were served through the Akamai network, while the live internet broadcast of Barack Obama’s inaugural speech was done with help from Limelight.

Read more »