Monday, November 15, 2010

Using the Mighty IPTables to Prevent an HTTP(s) DoS Attack

Using this procedure, the kernel netfilter will deny (and log to /var/log/messages) packets to ports 80 and 443 from hosts that exceed 20 new connections in 5 seconds. IPTables will DROP those packets for 5 seconds, then let the host back in. This has the benefit of not blocking legitimate traffic outright, only throttling it to a reasonable rate.
So, let's get started. Install iptables:

yum -y install iptables

The ipt_recent module, by default, only timestamps and tracks the last 20 packets per source address, which isn't very many. This means that, out of the box, using --hitcount 21 or higher will error out. You can raise the limit by adding the following to /etc/modprobe.d/modprobe.conf (or /etc/modprobe.conf on older releases):

options ipt_recent ip_pkt_list_tot=50

Then, reload the ipt_recent kernel module:
rmmod ipt_recent
modprobe ipt_recent
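To confirm the new value took effect, most kernels expose the loaded module's parameters under /sys (this assumes ipt_recent is loaded):

cat /sys/module/ipt_recent/parameters/ip_pkt_list_tot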


Next, create the script that will add the rules (vi /tmp/limit.sh):

# Create a LOGDROP chain to log and drop packets
iptables -N LOGDROP
iptables -A LOGDROP -j LOG
iptables -A LOGDROP -j DROP

# Track new connections to port 80 per source IP; drop (and log) any source
# that hits 20 new connections within 5 seconds
iptables -A INPUT -p tcp -m tcp --dport 80 -m state --state NEW -m recent --set --name "limit-http" --rsource
iptables -A INPUT -p tcp -m tcp --dport 80 -m state --state NEW -m recent --update --seconds 5 --hitcount 20 --name "limit-http" --rsource -j LOGDROP
iptables -A INPUT -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT

# Same thing for port 443
iptables -A INPUT -p tcp -m tcp --dport 443 -m state --state NEW -m recent --set --name "limit-https" --rsource
iptables -A INPUT -p tcp -m tcp --dport 443 -m state --state NEW -m recent --update --seconds 5 --hitcount 20 --name "limit-https" --rsource -j LOGDROP
iptables -A INPUT -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT


Execute that script:
sh /tmp/limit.sh


Now, we want these rules to be applied on each reboot. Only do this if you have nothing in /etc/sysconfig/iptables - which most EC2 instances don't even have; if you do have existing rules in there, just bake the above into that file by hand instead. On CentOS/RHEL derivatives, do:

iptables-save > /etc/sysconfig/iptables
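The iptables init script reads that file at boot, so make sure the service is enabled too:

chkconfig iptables on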

To check that the rules survive a restart of the service, do:

service iptables stop
service iptables start
service iptables status

Helpful NOTES:

You can see what the packet filter is doing, in real time, like this:

watch iptables -nvx -L

Also, you can use Apache's benchmarking program, ab, to trip the filter for testing purposes, like this:

ab -n 1000 -c 5 http://IP/index.html

Where 1000 is the total number of requests and 5 is the number of concurrent requests.


So, to test it out, point that at your web server and tail /var/log/messages, and you'll see that packets from the client running ApacheBench start getting dropped.
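You can also peek at the tracking table directly; the ipt_recent module exposes each named list under /proc (on newer kernels that ship xt_recent, the path is /proc/net/xt_recent instead):

cat /proc/net/ipt_recent/limit-http

Each line shows a tracked source address along with its recent hit timestamps.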


You're welcome! :)

Monday, November 8, 2010

Running OpenVAS Security Scanner: Ubuntu 10.10

Nessus is now a for-pay product. I think you can get a free home license, but if you want to scan your infrastructure at work, they no likey. A project called OpenVAS is a fork of the Nessus project that's all open source and free.

I want to continually scan some systems and generate reports I can diff to see if any jokers have added services or changed rules that expose network services I don't want exposed. So, I'm installing OpenVAS on a VM instance that I'm going to use to scan my infrastructure. I'm not going to use the GUI client because I want this to be scripted. The way OpenVAS works is, you have an OpenVAS server, which you connect to with clients and tell it what to do. So, you could install the server on a system in a data center or EC2 or whatever, then run the client from your desktop and have it do your bidding.

In this case, my client is the command-line client, which is going to run on the same system as the server.

To install on Ubuntu 10.10, simply do:

# Update your distro
apt-get update && apt-get dist-upgrade

# Install openvas server and client software + security plugins
apt-get install openvas-server openvas-client \
   openvas-plugins-base openvas-plugins-dfsg

# Update the vuln. database
openvas-nvt-sync

Add a user that you're going to use to log in from the client:
openvas-adduser

Here, you'll add a user/pass combination.

When prompted to add a 'rule', I allow my user to do everything. The rules allow/disallow scanning of hosts; if you want, you can let bob scan 192.168.0.0/24 or whatever. I want my user to scan everything, so when prompted, simply enter:

default accept
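For the record, the rules use a simple accept/deny syntax; a hypothetical rule set that only lets a user scan one subnet would look something like:

accept 192.168.0.0/24
default deny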




Now, fire up the server. Note that the first time you run it, it loads all those checks into memory, so it takes a LONG time for the server to actually start.

/etc/init.d/openvas-server start
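The server is up once it's listening on its port (9390 by default), which you can check with:

netstat -tlnp | grep 9390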

Now, you can start scanning. Create a file with IPs and/or hostnames that your client will feed to the server to scan. Something like this:

192.168.1.5
www.mydomain.com
dns.mydomain.com
10.1.19.0/24

etc.


The server listens on port 9390 by default, so you'll want to tell your client to connect there. Once you have the file created, you can kick off your scan like this:

OpenVAS-Client -q 127.0.0.1 9390 admin scanme.txt -T html \
     ~/Desktop/openvas-output-`date +%F`.html 

You'll be prompted to accept the SSL certificate; go ahead, it's automagically created by the package when it's installed. Then, open that file in a browser when the scan is done and start going through it. Be warned, scanning is very hostile, so you should really only scan your own systems... and those of your enemies.
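Since the whole point is to scan continually and diff the reports, the same command can go into cron. A rough sketch, assuming a hypothetical /etc/cron.d/openvas-scan with the targets file at /root/scanme.txt and reports landing in /root/reports - you'll also need to sort out supplying the password non-interactively, which I'm not covering here (the % in date is escaped because cron treats % specially):

# Weekly scan, Sundays at 02:00
0 2 * * 0 root OpenVAS-Client -q 127.0.0.1 9390 admin /root/scanme.txt -T html /root/reports/openvas-`date +\%F`.html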

Tuesday, November 2, 2010

Deploying a Set of HAProxy Servers as EC2 Instances

If you want high availability in EC2, one option is to deploy a couple of load balancers in front of a bunch of application servers, all in the same security group. I really like HAProxy. It's fast, and very configurable. I remember we got a couple of F5's back in the day and I wanna say that was like 60k for two. Which is a lot if you're a startup. Not that you can deploy them in EC2 but it's just kinda cool how all this stuff trickles on down to the cloud.

I think it makes complete sense. Anyway, here's how I did it recently. Since you can't get a floating IP in EC2, or more than one IP per instance, I set up DNS round robin for the proxy addresses. So, in this case we have two dedicated EC2 instances running CentOS. Each one will have haproxy installed and configured. So, once you get them spun up, you'll want to get an elastic IP for each, then configure DNS to point to both:

www.domain.com { 1.2.3.4, 1.2.3.5 }

Each IP is the public address of one of your HAProxy servers. To test, you can just set up a dummy hostname, like test.domain.com, pointing to those IPs and do the cutover when you're sure you're happy with the setup.
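A quick way to confirm the round robin records are in place (assuming test.domain.com is the dummy hostname from above):

dig +short test.domain.com

You should see both elastic IPs come back.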

So, next, log in to each instance and run:

yum -y install haproxy

HAProxy supports two modes, tcp and http. You can't terminate SSL in http mode, so this deployment uses tcp mode. HTTP mode has some really cool and interesting features with HAProxy's recent ACL additions. Google around for HAProxy and ACLs. You can get super granular about which app server handles which kind of traffic, or where to direct certain kinds of requests... for example: go here for SSL, here for dynamic content, and here for static HTML and images. It's really pretty cool and new in 1.3, I think. Maybe I'll try all that out someday.
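Just to illustrate the idea, here's a rough, hypothetical http-mode snippet (the backend names and paths are made up, and it's not part of the tcp-mode config below):

# http mode: route requests by URL path using ACLs
frontend web
   bind *:80
   mode http
   acl is_static path_beg /images /css /js
   use_backend static_servers if is_static
   default_backend app_servers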

Next, edit /etc/haproxy/haproxy.cfg and enter the following:
# HAProxy Configuration

global
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     20000
    user        haproxy
    group       haproxy
    daemon

defaults
    mode        tcp
    log         127.0.0.1       local0
    log         127.0.0.1       local1 notice
    option      dontlognull
    option      redispatch
    balance     roundrobin
    timeout connect 2000 # give up after 2 seconds if a connection to a backend can't be established
    timeout client 300000
    timeout server 300000
    maxconn     60000
    retries     3

# This is what we're listening on.
frontend haproxy
   bind *:443
   mode tcp
   maxconn 20480
   default_backend app_servers

# This is who we send requests to
backend app_servers
   mode tcp
   server app1 10.19.127.30:443
   server app2 10.19.127.49:443
   server app3 10.19.127.21:443
   server app4 10.19.127.19:443




So, in this example we have 4 app servers. I feel like it's simple and self-explanatory enough that you can just get in there, edit, and test it out. Both HAProxy instances have the exact same configuration file - assuming you've deployed everything to the same security group.
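Before firing it up, you can have HAProxy validate the file for you; the -c flag just checks the configuration and exits:

haproxy -c -f /etc/haproxy/haproxy.cfg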

The only other thing I did was add a snippet to /etc/syslog-ng/syslog-ng.conf so that all of HAProxy's messages get logged via syslog:

source s_udp {
       udp(ip(127.0.0.1) port(514));
};
destination d_haproxy { file("/var/log/haproxy"); };
filter f_local0 { facility(local0); };
log { source(s_udp); filter(f_local0); destination(d_haproxy); };


Now, just fire it all up:

service haproxy start
chkconfig haproxy on
service syslog-ng restart


HAProxy has a stats interface which I haven't enabled here. If I do, I'll edit the above with the stats config.
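For reference, a minimal stats block looks something like this (the port, URI, and credentials here are hypothetical - pick your own):

# Stats web interface
listen stats *:8080
   mode http
   stats enable
   stats uri /haproxy-stats
   stats auth admin:changeme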