Force SSL for your site with Varnish and Nginx

Hello!

For those of you who depend on Varnish to offer robust caching and scaling potential to your web stack, hearing about Google’s prioritization (albeit arguably small, for now) of sites that force SSL may give you pause about how to implement it.

Varnish currently doesn’t have the ability to handle SSL certificates or encrypt requests itself. It may never actually gain this ability, because its focus is caching content, and it does a very good job of that, I might add.

So if Varnish can’t handle the SSL traffic directly, how would you go about implementing this with Nginx?

Well, Nginx has the ability to proxy traffic. This is one of the many reasons why some admins choose to pair Varnish with Nginx. Nginx can do reverse proxying and header manipulation out of the box, without custom modules. Combine that with the lightweight, production-tested scalability of Nginx over Apache and the reasons are simple. We’re not interested in that debate here, just a simple implementation.

Nginx Configuration

With Nginx, you will need to add an SSL listener to handle the SSL traffic and assign your certificate to it. The actual traffic is then proxied to the (already set up) non-HTTPS listener (Varnish).
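
Here is a minimal sketch of what that server block could look like. The server name, certificate paths, and the assumption that Varnish listens on port 80 of localhost are all placeholders to adjust for your own setup :

    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        location / {
            # Preserve the original request details for Varnish and the backend
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # Tell Varnish this request already arrived over SSL
            proxy_set_header X-Forwarded-Proto https;
            # Hand the request to the plain-HTTP listener (Varnish)
            proxy_pass http://127.0.0.1:80;
        }
    }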

The one thing to note before going further is the proxy_set_header X-Forwarded-Proto line near the end of the configuration. That line is important because it allows you to avoid an infinite redirect loop: the request is proxied to Varnish, Varnish redirects non-SSL to SSL, and it lands right back at Nginx to be proxied again. You’ll notice that pretty quickly because your site will ultimately go down 🙁

What nginx is doing is defining a custom HTTP header and assigning a value of “https” to it :
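
In the sketch above, that is this line :

    proxy_set_header X-Forwarded-Proto https;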

So the rest of the Nginx configuration can remain the same (the configuration that Varnish ultimately forwards requests to in order to cache them).

Varnish

What you’re going to need in your Varnish configuration is a minor adjustment :
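
Something along these lines in vcl_recv should do it. This is a sketch written against Varnish 3-style VCL, and the x-Redir-Url header name is just an illustrative choice :

    sub vcl_recv {
        # If the request didn't come through the Nginx SSL proxy,
        # hand it off to an error handler that issues the redirect
        if (req.http.X-Forwarded-Proto !~ "(?i)https") {
            set req.http.x-Redir-Url = "https://" + req.http.host + req.url;
            error 750 "Moved Temporarily";
        }
    }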

What the above snippet is doing is simply checking whether the header “X-Forwarded-Proto” (that Nginx just set) exists and whether its value equals “https” (case insensitive). If the header is not present or doesn’t match, it sets a redirect to force the SSL connection, which is handled by the Nginx SSL proxy configuration above. It’s also important to note that we are not just doing a clean-break redirect; we are still appending the originating request URI in order to make it a smooth transition and potentially not break any previously ranked links/URLs.

The last thing to note is the error 750 handler that performs the redirect in Varnish :
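
Again as a sketch, matching the vcl_recv snippet above :

    sub vcl_error {
        if (obj.status == 750) {
            # Send the browser to the SSL URL we built in vcl_recv
            set obj.http.Location = req.http.x-Redir-Url;
            set obj.status = 302;
            set obj.response = "Moved Temporarily";
            return (deliver);
        }
    }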

You can see that we’re using a 302 temporary redirect instead of a permanent 301 redirect. This is your decision, though browsers tend to be stubborn in their own internal caching of 301 redirects, so 302 is good for testing.

After restarting Varnish and Nginx you should be able to quickly confirm that non-SSL traffic is no longer allowed. Not only can you enjoy the (marginal) SEO “bump”, you are also contributing to the HTTPS Everywhere movement, which is an important cause!

Amazon S3 Backup script with encryption

With the advent of cloud computing, there have been several advances in commercial cloud offerings, most notably Amazon’s EC2 compute platform as well as their S3 storage platform.

Backing up to Amazon S3 has become a popular way for many organizations to achieve true offsite backup capabilities.

The fast data transfer speeds as well as the low cost of storage per gigabyte make it an attractive option.

There are several free software solutions that offer the ability to connect to S3 and transfer files. The one that shows the most promise is s3sync.

There are already a few guides that show you how to implement s3sync on your system.

The good thing is that this can be implemented on Windows, Linux, and FreeBSD, among other operating systems.

We have written a simple script that utilizes the s3sync program in a scheduled offsite backup scenario. Find our script below, and modify it as you wish. Hopefully it will help you get your data safely offsite 😉
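
The following is a minimal sketch of that kind of script. The directory paths, bucket name, s3sync install location and credential values are all placeholders that you will need to adapt :

    #!/bin/bash
    # Scheduled offsite backup to Amazon S3 using s3sync

    # s3sync reads the AWS credentials from the environment
    export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"
    export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"
    export SSL_CERT_DIR="/etc/ssl/certs"   # needed when using the --ssl flag

    BACKUP_SRC="/etc /var/www /home"
    BACKUP_DIR="/backups"
    BUCKET="mybucket"
    PREFIX="offsite/$(hostname)"
    DATE=$(date +%Y-%m-%d)

    # Create one dated tarball per source directory
    for DIR in $BACKUP_SRC; do
        NAME=$(echo "$DIR" | tr '/' '_')
        tar -czf "$BACKUP_DIR/${NAME}-${DATE}.tar.gz" "$DIR"
    done

    # Sync the archive directory up to S3 over SSL
    cd /usr/local/s3sync || exit 1
    ruby s3sync.rb -r --ssl "$BACKUP_DIR/" "$BUCKET:$PREFIX"

Dropping a script like this into cron (a nightly entry, for example) is what makes it a true scheduled offsite backup.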

Now if your data happens to be sensitive (most data usually is), encrypting it in transit (with the --ssl flag) is often not enough.

As an alternative, you can encrypt the actual file before it is sent to S3. This would be incorporated into the tar command in the above script.
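
A sketch of that line, using gpg symmetric encryption (the passphrase file path is a placeholder, and the variables come from the script above) :

    # Encrypt the archive with gpg before it ever touches the backup directory
    tar -czf - "$DIR" | gpg --batch --symmetric --cipher-algo AES256 \
        --passphrase-file /root/.backup-passphrase \
        -o "$BACKUP_DIR/${NAME}-${DATE}.tar.gz.gpg"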

As an alternative to gpg, you could utilize openssl to encrypt the data.
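
That variant of the line might look like this (same placeholder paths) :

    # Encrypt the archive with openssl instead of gpg
    tar -czf - "$DIR" | openssl enc -aes-256-cbc -salt \
        -pass file:/root/.backup-passphrase \
        -out "$BACKUP_DIR/${NAME}-${DATE}.tar.gz.enc"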

Hopefully this has been helpful!