Use AWS CLI to automate the removal and addition of instances in your ELB

Hello!

Sometimes it's necessary to automate the removal and addition of instances in your Elastic Load Balancer (ELB), perhaps for auto scaling or for deploying updates to your web application. Either way, there are many tools at the disposal of the systems administrator to automate this process. Below we share some simple steps, as well as some (very) simple scripts, to make it that much easier to manipulate the instances that are receiving live traffic via the ELB.

Install AWS CLI

This is pretty straightforward (and obvious). Amazon themselves provide a great guide to installing the AWS CLI toolset on your Linux distribution. Below I'll provide the shorthand for setting up the AWS CLI on CentOS/RedHat or Ubuntu/Debian systems.

CentOS/RedHat
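A sketch of the install on CentOS/RedHat, assuming you go via pip (the awscli package on PyPI is the standard distribution; python-pip may come from EPEL depending on your release):

```shell
# Install pip, then the AWS CLI from PyPI
sudo yum install -y python-pip
sudo pip install awscli
```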

Ubuntu/Debian
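On Ubuntu/Debian the toolset is packaged in the default repositories, so something like this should do:

```shell
# Install the AWS CLI from the distribution repositories
sudo apt-get update
sudo apt-get install -y awscli
```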

So simple, right? There are other ways to install the toolset, such as via Python's pip or by building directly from source. After installing it, you will want to configure it with access credentials in order to authenticate against your AWS account:
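The configuration step is interactive; the key, secret and region values below are obviously placeholders for your own:

```shell
aws configure
# AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
# AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Default region name [None]: us-east-1
# Default output format [None]: json
```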

Before you do that, it might be a good idea to create a new IAM user with restricted access.

Create IAM user in AWS Security Console to access only your ELB

Restricting access for your IAM user is a best practice. It ensures that the access you delegate never goes beyond what was originally intended, and it mitigates the damage a malicious user could do should they gain access to the credentials.

What you would want to do is create a group first, with the following two policies attached: AmazonEC2ReadOnlyAccess (a pre-made policy that you can search for and attach directly), and a custom policy detailed below.
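The custom policy might look something like this; the account ID is a placeholder, and "your-elb-name" and the region should be swapped for your own values:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "elasticloadbalancing:*",
            "Resource": "arn:aws:elasticloadbalancing:us-east-1:YOUR_ACCOUNT_ID:loadbalancer/your-elb-name"
        }
    ]
}
```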

The policy above will allow the users in the IAM group to access only the specified ELB (where "your-elb-name" is specified). If you are in a different region, you would also want to change us-east-1 to whichever region you're in.

Once the policy is attached to the group, you simply need to create the user, add them to the group you created, and create the access key/secret credentials to use with the aws configure command.

The purpose of the script, for us, was to have a copy on each instance so that we could run it locally and automatically take that instance out of the pool. This means we ran the aws configure command on each instance the script runs on. If you are using a centralized server (e.g. Jenkins, Ansible, Puppet), your script may look different. Perhaps it would parse the instances currently active in the ELB, then iterate through each one, taking it out of the ELB, running the update (or whatever you need to do) and putting it back before moving on to the next.

Bash script to automate adding and removing servers to an ELB

This bash script is dead simple. We simply grab the instance ID of the server the script is running on, then read user input to determine whether the request is to add or remove the instance in question from the ELB. The script can definitely be improved further, for example with an error check that verifies at least one other active instance remains in the ELB before removing this one (to avoid outages).
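A minimal sketch of such a script, assuming the classic ELB API and the EC2 instance metadata service for the instance ID ("your-elb-name" is a placeholder); the aws command is built in a function and echoed, which makes it easy to dry-run and inspect:

```shell
#!/bin/bash
# Placeholder ELB name; replace with your load balancer's name.
ELB_NAME="your-elb-name"

# Build the aws elb command for "add" or "remove" and a given
# instance ID; echoing it rather than running it directly makes
# the script easy to dry-run.
build_elb_cmd() {
  local action="$1" instance_id="$2"
  case "$action" in
    add)
      echo "aws elb register-instances-with-load-balancer --load-balancer-name $ELB_NAME --instances $instance_id" ;;
    remove)
      echo "aws elb deregister-instances-from-load-balancer --load-balancer-name $ELB_NAME --instances $instance_id" ;;
    *)
      echo "usage: $0 {add|remove}" >&2; return 1 ;;
  esac
}

# On a real instance, fetch the ID from the metadata service and run it:
# INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# eval "$(build_elb_cmd "$1" "$INSTANCE_ID")"
```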

Pretty straightforward! Again, better error checking, and perhaps parsing the output of the aws commands, would add better checks and balances to this kind of manipulation of your ELBs. For that level of checking and parsing, it might be worth exploring Python instead of Bash.

Use Varnish and Nginx to follow, hide and cache 301 / 302 redirects

Hello!

Varnish is a robust, open source and stable caching solution that has been employed on many different high traffic environments with a significant amount of success.

One of the things we have come across, specifically in environments such as Amazon Web Services, is that websites tend to spread their web stack across multiple services. For example, static media such as JS, CSS and image files may be hosted on Amazon S3 storage. This requires either implementing additional CNAMEs for your domain (i.e. static.yourdomain.com) that point to the S3 URL, or having your CMS redirect requests for static media to the S3 backend.

Remember that with S3 you have to generate the static files and copy them over to S3, so these URLs may need to be generated and maintained by the CMS, often with a redirect (301 or 302) that rewrites the URL to the S3 backend destination.

When Varnish is caching a website and it comes across a request that the backend response ("beresp") rewrites into a 301/302 redirect, Varnish will typically just cache the 301/302 redirect itself, saving the minuscule amount of processing power needed to process the request and send the rewrite. Some may argue that saving is simply negligible!

Wouldn’t it be nice to actually cache the content after the redirect happens? There are two ways one could go about doing this.

Cache 301 / 302 Redirects with Varnish 4

In simpler setups, Varnish can process the rewrite/redirect itself: change the URL and issue a return (restart) to restart the request with the new URL. This means Varnish processes the rewrite and re-requests the new URL so that the final content is ultimately cached. In vcl_deliver you can add the following:
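A minimal sketch for Varnish 4, assuming the backend issues absolute redirect URLs (the regsub strips the scheme and host so the restarted request stays on the same backend; adjust if your redirects are relative):

```vcl
sub vcl_deliver {
    if (resp.status == 301 || resp.status == 302) {
        # Rewrite the request URL to the redirect target and restart,
        # so Varnish fetches and caches the final content instead of
        # caching the redirect itself. Consider capping req.restarts
        # to guard against redirect loops.
        set req.url = regsub(resp.http.Location, "^https?://[^/]+", "");
        return (restart);
    }
}
```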

The above should work for you if, let's say, you are using Varnish in front of a simple apache/nginx server that processes the request itself. If Varnish is sending traffic to another proxy (i.e. nginx + proxy_pass), then the above directive may not work for you. Why would one proxy traffic from Varnish to another proxy like nginx? Perhaps in a scenario where you want to do some fancy redirection of traffic plus DNS resolution. Still confused?

Let's say Varnish is at the edge of a network, caching a complicated website, and requests behind Varnish need to go to another load balancer (i.e. an Amazon ELB). ELB endpoints never have a static IP address, and Varnish (as of v4) cannot resolve hostnames on a per-request basis, so you would need to proxy the request to Nginx, which handles the reverse proxy over to the ELB, which in turn load balances the backend fetch to the CMS.

If your scenario sounds more like the aforementioned one, then you could try following the 301/302 redirect with nginx instead of varnish.

Cache 301 / 302 Redirects with Nginx

Nginx and Varnish seem to go hand in hand. They're great together! In this scenario you are using Varnish as your edge cache and sending all backend requests to an nginx proxy_pass directive. In order for a redirect to be followed before any response is sent to Varnish (and ultimately the end-user), you can tell Nginx to save the redirect location and re-request it, returning the final response to Varnish so it can simply cache that!
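A sketch of the nginx side, assuming "backend-server.com" as the upstream (as in the discussion below) and a public resolver, which proxy_pass needs when its target is a variable; substitute your own values:

```nginx
location / {
    proxy_pass http://backend-server.com;
    # Hand 301/302/307 responses to the named location below
    # instead of returning them upstream to Varnish.
    proxy_intercept_errors on;
    error_page 301 302 307 = @handle_redirects;
}

location @handle_redirects {
    # Save the Location header of the redirect and proxy to it,
    # so Varnish only ever sees (and caches) the final response.
    set $saved_redirect_location '$upstream_http_location';
    proxy_pass $saved_redirect_location;
    resolver 8.8.8.8;  # required for variable proxy_pass targets
}
```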

You can see that the proxy_pass directive is configured normally. In the event of a 301, 302 or 307, the response is handed to the @handle_redirects location directive, which simply proxy passes to the $saved_redirect_location as if it were the backend server. This means that even if the redirect target is not defined in your Varnish configuration as a valid hostname (i.e. a random S3 URL), Varnish will still cache the response, thinking it came from backend-server.com.

Hopefully this will help someone out there!

Auto updating Atomicorp Mod Security Rules

Hello!

If any of you use mod_security as a web application firewall, you might have enlisted the services of Atomicorp for regularly updating your mod_security ruleset with signatures to protect against constantly changing threats to web applications in general.

One of the initial challenges, in a managed hosting environment, was to implement a system that utilizes the Atomicorp mod_security rules and update them regularly on an automated schedule.

When you subscribe to their service, they provide access credentials in order to pull the rules. You then need to integrate the rule files into your mod_security implementation and gracefully restart apache or nginx to ensure all the updated rules are loaded.

We developed a very simple python script, intended to run as a cron scheduled task, to accomplish this. We thought we would share it here in case anyone else finds it useful for the same purpose. The script could easily be modified to download rules from any similar service. It was written for nginx, but can be adapted for apache.

Find the code below. Enjoy!
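A hypothetical sketch of such a cron-driven updater; the download URL, credentials and rule directory below are placeholders, so substitute the values and rule location Atomicorp supplies with your subscription, and adapt reload_nginx() if you run apache:

```python
#!/usr/bin/env python3
"""Sketch of a cron-driven mod_security rule updater.

RULES_URL, RULES_DIR and the credentials are placeholders, not
Atomicorp's real endpoints; use the values from your subscription.
"""
import subprocess
import tarfile
import urllib.request

RULES_URL = "https://rules.example.com/modsec-rules-latest.tar.gz"  # placeholder
RULES_DIR = "/etc/nginx/modsec/rules"  # assumed rule directory
USERNAME = "your-username"             # subscription credentials
PASSWORD = "your-password"


def fetch_rules(url, username, password, dest="/tmp/modsec-rules.tar.gz"):
    """Download the rules tarball using HTTP basic auth."""
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, url, username, password)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
    with opener.open(url) as resp, open(dest, "wb") as out:
        out.write(resp.read())
    return dest


def install_rules(tarball, rules_dir):
    """Unpack the downloaded tarball into the mod_security rule directory."""
    with tarfile.open(tarball) as tar:
        tar.extractall(rules_dir)


def reload_nginx():
    """Gracefully reload nginx so the updated rules are loaded."""
    subprocess.check_call(["nginx", "-s", "reload"])


# From cron you would chain the three steps, e.g.:
# install_rules(fetch_rules(RULES_URL, USERNAME, PASSWORD), RULES_DIR)
# reload_nginx()
```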

Automated Amazon EBS snapshot backup script with 7 day retention

Hello there!

We have recently been implementing several different backup strategies for properties that reside on the Amazon cloud platform.

These strategies include scripts that incorporate s3sync and s3fs for offsite or redundant “limitless” backup storage. One of the more recent strategies we have implemented for several clients is an automated Amazon EBS volume snapshot script that keeps only a 7-day retention on all snapshot backups.

The script itself is fairly straightforward, but it took several dry runs to fine-tune it so that it would reliably create the snapshots and, more importantly, clear out snapshots older than 7 days.

You can see the for loop for deleting older snapshots. This is done by parsing snapshot dates, converting each date to a pure numeric value and comparing that value to a “7 days ago” date variable.

Take a look at the script below; hopefully it will be useful to you! There could be more error checking, but that should be fairly easy to add.
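A sketch under those assumptions; the volume list is a placeholder, GNU date is assumed for the "-7 days" arithmetic, and the date comparison is factored into small functions so it can be sanity-checked on its own:

```shell
#!/bin/bash
# Placeholder volume list; replace with your EBS volume IDs.
VOLUMES="vol-0abc123"
RETENTION_DAYS=7

# Convert a YYYY-MM-DD date into a pure numeric value (YYYYMMDD).
date_to_num() { echo "$1" | tr -d '-'; }

# Succeed if the first date is strictly older than the second.
is_older_than() {
  [ "$(date_to_num "$1")" -lt "$(date_to_num "$2")" ]
}

snapshot_and_prune() {
  local cutoff
  cutoff=$(date -d "-$RETENTION_DAYS days" +%Y-%m-%d)  # GNU date assumed
  for vol in $VOLUMES; do
    # Create today's snapshot.
    aws ec2 create-snapshot --volume-id "$vol" \
      --description "daily-backup-$(date +%Y-%m-%d)"
    # Walk existing snapshots and delete any older than the cutoff.
    aws ec2 describe-snapshots --filters "Name=volume-id,Values=$vol" \
      --query 'Snapshots[].[SnapshotId,StartTime]' --output text |
    while read -r snap_id start_time; do
      if is_older_than "${start_time%%T*}" "$cutoff"; then
        aws ec2 delete-snapshot --snapshot-id "$snap_id"
      fi
    done
  done
}

# Run from cron, e.g.: 0 2 * * * /usr/local/bin/ebs-snapshot.sh
# (call snapshot_and_prune at the end of the script when deploying)
```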

Managed VPS hosting services from Star Dot Hosting

Click here for a free quote for managed vps hosting!

Hello,

I thought I’d update our blog to let everyone know that we offer our extensive managed services catalog on our VPS infrastructure!

If you are looking for high quality managed services from systems administrators with extensive experience, then contact our sales department for a consultation / quotation!

Find below just a SAMPLE of some of the managed services we provide:

Support Services
– 24/7/365 Technical Support (Ticketing system)
– 24/7/365 Alert monitoring on-call rotation pager system
– Online Customer Service Center

Linux Administration
– Installs, reinstalls and updates
– Troubleshooting and configuration
– Apache configuration, optimization and setup
– Linux x64 installation, optimization and maintenance
– Security optimizations
– Proactive and reactive maintenance

Managed MySQL and MSSQL
– MySQL and MS SQL first time install service
– MySQL maintenance and troubleshooting

24/7/365 System Availability
– Availability monitoring (Ping and Synthetic Transactions)
– Performance monitoring of key server parameters
– Graphing and trending

Security Services
– Routine Security Patching
– Routine Security scanning and auditing
– Penetration testing, SQL Injection
– Quarterly security audit reporting
– Dedicated hardware firewalls for all clients

Hardware Maintenance
– Hardware failure alerting and monitoring

Offsite Backups
– Remote off-site backups per 50gb nightly

Click here for a free quote for managed vps hosting!