Sunday, December 29, 2013

CentOS SSHFS Howto

I have a need to remotely mount a file system over the Internet. I'd like to do so without setting up a VPN tunnel and handling all the encryption myself. So, I gave sshfs a shot and it turns out it's super easy. Here's how on CentOS:
# Install EPEL
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm

# Install sshfs
yum -y install fuse-sshfs fuse

# Load FUSE Kernel Module
modprobe fuse

# Mount remote file system.
mkdir /mnt/sshfs/
sshfs root@my.server.com:/export/opt/ /mnt/sshfs/

Now if you want to get fancy, you can push your ssh key and auto mount on boot, like this:
# Push identity
ssh-copy-id root@my.server.com

# Edit /etc/fstab and add:
root@my.server.com:/export/opt/ /mnt/sshfs fuse.sshfs defaults,noauto,user 0 0

# Mount it up
mount /mnt/sshfs/
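If the mount needs to survive reboots and flaky links, a few extra fstab options help. This is the same example line with standard sshfs/fuse mount options added; the identity file path is an assumption - point it at wherever your key actually lives:

```
root@my.server.com:/export/opt/ /mnt/sshfs fuse.sshfs defaults,_netdev,reconnect,IdentityFile=/root/.ssh/id_rsa 0 0
```

Here _netdev delays the mount until networking is up, and reconnect re-establishes dropped SSH sessions instead of leaving a dead mount point.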

Thursday, December 26, 2013

Using IPTables to Blackhole Large Sets of IPs

I found a host where I was getting a bunch of POSTs in my apache server log files which looked to be malicious. I wanted to just block all IPs that were trying to POST to my web server, since I don't have anything but static content on it. So, I came up with this little one-liner:
grep POST /var/log/apache2/*log* | perl -lane 'print $1 if (/^.*?:(.*?)\s/)' | sort -u | perl -lane 'system "iptables -A INPUT -s $F[0] -j DROP"'
This is useful for any group of IPs you wish to black-hole.
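Broken into steps, the same idea looks like this. The sample log lines below are made up; a real run greps /var/log/apache2/*log* instead. Note that with a single file, grep doesn't prefix the filename, so a plain awk '{print $1}' grabs the client IP:

```shell
# Hypothetical sample lines standing in for the real apache logs.
log=$(mktemp)
cat > "$log" <<'EOF'
10.0.0.5 - - [26/Dec/2013:10:00:00 +0000] "POST /wp-login.php HTTP/1.1" 404 12
10.0.0.5 - - [26/Dec/2013:10:00:01 +0000] "POST /xmlrpc.php HTTP/1.1" 404 12
192.0.2.9 - - [26/Dec/2013:10:00:02 +0000] "GET /index.html HTTP/1.1" 200 512
198.51.100.7 - - [26/Dec/2013:10:00:03 +0000] "POST /admin HTTP/1.1" 404 12
EOF

# Step 1: collect each POSTing client IP exactly once.
ips=$(grep POST "$log" | awk '{print $1}' | sort -u)
echo "$ips"

# Step 2 (needs root, so left commented out): drop each one.
# for ip in $ips; do iptables -A INPUT -s "$ip" -j DROP; done
```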

Tuesday, November 19, 2013

Creating an LVM Volume on EBS with XFS

Well, that's a mouthful.. I wanted to add some storage to a VM, but wanted to be able to add more later if I got to that point. So, I attached a 100G EBS volume, and here's how I formatted it: EBS as the physical volume, then LVM, formatted as XFS. You can attach another EBS volume to this VM later and add it to the volume group, striped, for increased capacity. Anyway, here's how it's done:
rpm -q xfsprogs &> /dev/null || yum -y install xfsprogs
mkdir /export
pvcreate /dev/vdb
vgcreate VolGroup_EBS /dev/vdb
lvcreate -I 2M -l 100%FREE -n export VolGroup_EBS
mkfs.xfs /dev/mapper/VolGroup_EBS-export
mount /dev/mapper/VolGroup_EBS-export /export/
Finally, add this to /etc/fstab to get it mounted when your system boots:
/dev/mapper/VolGroup_EBS-export /export  xfs defaults 1 1
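The "add more later" part mentioned above would look roughly like this. This is a sketch only, assuming the second EBS volume shows up as /dev/vdc (device names vary by hypervisor); XFS can be grown while mounted:

```
pvcreate /dev/vdc
vgextend VolGroup_EBS /dev/vdc
lvextend -l +100%FREE /dev/mapper/VolGroup_EBS-export
xfs_growfs /export
```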

Friday, October 4, 2013

Multi Hop SSH SOCKS Proxy

From a corporate network I had a need to jump to one system, and through it to another, in order to have an open web proxy into another 'internal' network - in this case a lab network where I had to hit an OpenStack Horizon dashboard. It's a simple enough concept and I was sure SSH could do it, but I had some trouble figuring out how. The scenario looks like this:
my macbook -> ssh server <- INTERNET -> second-ssh-server -> browse internal network
Note that the second ssh server was running sshd on port 22000, you probably don't need that. The command I came up with to accomplish this was:
ssh -t -t -v -L9999:localhost:9932 root@ssh-server ssh -t -D 9932 root@second-ssh-server -p 22000
A whole blog-post for one command? Yes, it was that cool!
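For what it's worth, newer OpenSSH (7.3 and later) can express the same thing with ProxyJump, skipping the nested -L/-D dance. An equivalent, using the same hostnames from above, as a ~/.ssh/config fragment:

```
Host second-ssh-server
    Port 22000
    User root
    ProxyJump root@ssh-server
    DynamicForward 9999
```

After that, a plain `ssh second-ssh-server` gives you a SOCKS proxy on localhost:9999, tunneled through both hops.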

Monday, June 24, 2013

Find the Fastest Mirror in Bash

I needed a script that would determine the fastest mirror in a list of addresses. In this example I'm using the mighty 'www.google.com' and port 80, but it's obviously configurable. I also needed millisecond resolution (computed from the nanosecond clock) rather than whole seconds, because all the results (for google) came back in under a second. It should really be a bash function, but I'll leave that to you, fine reader. You're welcome!

mirror='www.google.com'
port='80'
iplist=()

for ip in $(dig $mirror A | perl -lane 'print $1 if (/A\s+(.*?)$/)')
do
   # Take a starting timestamp (epoch in milliseconds).
   a=$(($(date +'%s * 1000 + %-N / 1000000')))
   nc -w 2 -z $ip $port &> /dev/null
   b=$(($(date +'%s * 1000 + %-N / 1000000')))
   msec=$((b - a))
   iplist+=("$msec $ip")
done

# Grab the fastest one..
fastest=$(for a in "${iplist[@]}"; do echo "$a"; done | sort -n | head -1 | awk '{print $2}')

echo $fastest

Thursday, July 5, 2012

sshwatch using geoiptool.com

I've written a script called 'sshwatch' that will process the default /var/log/secure file and associate the folks that have logged in with a geoip lookup (using geoiptool.com). The idea is that you should be wary of people logging in from countries you don't expect. It will only look up each IP address once, so if you have multiple logins from the same IP, we only do a single geoip lookup. If you put this script in /etc/cron.daily/sshwatch, you'll get an e-mail each night about who's been on your box. I think it only works on RHEL and CentOS right now. Enjoy:
#!/usr/bin/perl
# Send a little report of who's been logging in and from where.
# Joey 

use strict;
my %userblob;
$|++;

for (`cat /var/log/secure*`) {

   if (/Accepted/) { # Somebody logged in..

      next unless (/for (.*?) from (.*?) /);
      my ( $user, $ip ) = ( $1, $2 );
      $userblob{$ip}->{'IP'} = $ip;

      unless ( $userblob{$ip}->{'COUNTRY'} ) {
         $userblob{$ip}->{'COUNTRY'} = get_country($ip);
      }

      my $seen;
      for ( @{$userblob{$ip}->{'USER'}} ) {
         $seen ++ if ( $_ eq $user );
      }
      push @{$userblob{$ip}->{'USER'}}, $user unless ($seen)
   }
}

my @mail;
while ( my ($ip, $ref) = each %userblob ) {
   push @mail, "$ip :: $ref->{'COUNTRY'} :: @{$ref->{'USER'}}\n";
}

send_mail(@mail);

sub get_country {

   my $ip = shift;
   print "Looking up: $ip: ";
   my $url = "http://www.geoiptool.com/en/?IP=$ip";
   my $data = `GET "$url"`;
   my $country;
   $country = $1 if ( $data =~ /Country:.*\n.*\> (.*?)<\/a/m );
   print $country ? "$country\n" : "unknown\n";

   return $country if ($country);
   return undef;
}

sub send_mail {

   my @body = @_;

   open  MAIL, "|/usr/sbin/sendmail.postfix -t";
   print MAIL "to: your.e-mail\@domain.com\n"
            . "from: your.mama\@domain.com\n"
            . "Subject: SSHWatch Report\n\n";

   map { print MAIL "$_" }@body;
   close(MAIL);
}


Friday, November 25, 2011

Linux: Splitting a Large File into Small Files

Recently I was trying to transfer a large ISO file across a horribly unstable VPN. The transfer would fail at varying percentages of completion. So, I figured I'd best split the file into 10MB chunks, rsync those over, and stitch it back together.

That way if it failed 90% of the way through, I wouldn't have to resend all the data, just that last 10%. The way I managed to do this, was to:

split --bytes=10m file.iso file_part 

What happens now is, you have a bunch of 10MB files called

file_partaa
file_partab
file_partac
file_partad
...

So, just rsync all those files to the destination:

rsync -e ssh -a --progress file_part* user@destination.host.com:

When that completes, login to the remote host and put them back together:

cat file_part* > orig_file.iso


Done and done.
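Before trusting the reassembled ISO, it's worth comparing checksums on both ends. Here's the whole split/stitch round trip as a self-contained demonstration, using a small throwaway file in place of a real ISO (GNU coreutils assumed):

```shell
cd "$(mktemp -d)"

# Make a ~1MB test file standing in for the ISO.
head -c 1048576 /dev/urandom > file.iso

# Split into 100KB chunks: file_partaa, file_partab, ...
split --bytes=100k file.iso file_part

# (this is where you'd rsync the file_part* chunks across)

# Stitch back together and verify nothing was lost.
cat file_part* > orig_file.iso
md5sum file.iso orig_file.iso
cmp file.iso orig_file.iso && echo "checksums match"
```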