Docker Swarm: keepalived

Over the last couple of months I decided to finally, really learn Docker (and eventually Kubernetes). I started by converting my home network services to Docker via docker-compose, then quickly figured out that Docker Swarm is where I needed to be, since that is where I could get some decent fault tolerance. I got that running, but experienced some instability with Docker Swarm mode where, for some reason, on some nodes I couldn’t reach my services through the swarm’s routing mesh at certain times. This posed a problem mainly because the port forwarding on my router was pointing to a single IP. I needed to “fix” that.

This is where I figured I needed some sort of HA, and keepalived seemed to be the perfect solution. With it I would be able to have 2 nodes share a VIP, and if one goes down it fails over to the other. However, unlike most other configs I found on the web, I didn’t want just an IP failover, because when my server had its issues I could still connect to it via SSH, but the services I was publishing from Docker were not available. Because of that, I wanted something to monitor the service, not just the IP.

The following config will set up keepalived (master & slave) on 2 nodes in the cluster, monitoring the HTTP port on each node using netcat (nc). If the check fails, the VIP fails over to the other node. Before I share the actual configs I have to give credit to the many sites from which I was able to sew bits and pieces together to make this possible. This article is hopefully a single place where a fully working solution can reside.

Firstly, I created a private image (I might publish it if I feel it could help others). My base image is from https://github.com/angelnu/docker-keepalived. However, there were a few things in that image that didn’t work for me. One, it was built for ARM while I was working with a standard server, not a Raspberry Pi. Two, I wanted to use netcat to monitor TCP ports and that was not in their image. Based on that, I copied their Dockerfile and modified it to look like this:

ARG BASE=alpine
FROM $BASE

RUN echo "http://nl.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories && \
apk --update -t add keepalived iproute2 grep bash tcpdump sed perl netcat-openbsd && \
rm -f /var/cache/apk/* /tmp/*

COPY run.sh /run.sh
COPY keepalived.conf /etc/keepalived/keepalived.conf

ENTRYPOINT [ "/run.sh" ]

I removed the lines relating to the ARM architecture and added the “netcat-openbsd” package. With that out of the way, we can now build our own custom image to be used in our containers.
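For completeness, building and pushing the resulting image is just the usual docker build/push pair; a quick sketch, using the image name and local registry referenced in the compose file below:

docker build -t repo-01:5000/keepha:latest .
docker push repo-01:5000/keepha:latest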

Now, using Docker Swarm mode, we create 2 services: one restricted to the manager node and the other to a different node (I thought it best that it run on a worker). To accomplish that you’ll need the following files.

dkr-01:~$ cat keepalived/docker-compose.yml
version: "3.7"
services:
  keepalived-master:
    image: repo-01:5000/keepha:latest
    cap_add:
      - NET_ADMIN
      - NET_RAW
      - NET_BROADCAST
    volumes:
      - ./keepalived-master:/etc/keepalived/
    networks:
      - host
    deploy:
      replicas: 1
      placement:
        constraints: [node.role == manager]

  keepalived-slave:
    image: repo-01:5000/keepha:latest
    cap_add:
      - NET_ADMIN
      - NET_RAW
      - NET_BROADCAST
    volumes:
      - ./keepalived-slave:/etc/keepalived/
    networks:
      - host
    deploy:
      replicas: 1
      placement:
        constraints: [node.role == worker]

networks:
  host:
    external: true

The important piece in the block above, which seems to be missing/not needed when implementing keepalived on the base OS instead of in Docker, is the “cap_add” section. These are Linux-level capabilities which Docker doesn’t normally grant to containers. (Reference) I almost gave up trying to get this service running in a dockerized fashion until I made “one last try” and added that block. It seems that without this section the service can’t assign the VIP to the interface.
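One quick way to confirm the capabilities are doing their job is to check that the VIP actually shows up on the active node (interface and address taken from the configs below):

ip addr show eth0 | grep 192.168.14.24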

The other thing I added, as mentioned before, was the port check in the form of a “script” embedded in the config files on each node.

dkr-01:~$ cat keepalived/keepalived-master/keepalived.conf
vrrp_script keepalived_check {
      script "nc -zvw1 node-01 80"
      interval 5
      timeout 5
      rise 3
      fall 3
}

vrrp_instance VI_1 {
      state MASTER
      interface eth0
      virtual_router_id 51
      priority 200
      advert_int 1
      authentication {
         auth_type PASS
         auth_pass 12345
      }
      virtual_ipaddress {
         192.168.14.24
      }
      track_script {
         keepalived_check
      }
}

dkr-01:~$ cat keepalived/keepalived-slave/keepalived.conf
vrrp_script keepalived_check {
      script "nc -zvw1 node-02 80"
      interval 5
      timeout 5
      rise 3
      fall 3
}

vrrp_instance VI_1 {
      state MASTER
      interface eth0
      virtual_router_id 51
      priority 100
      advert_int 1
      authentication {
         auth_type PASS
         auth_pass 12345
      }
      virtual_ipaddress {
         192.168.14.24
      }
      track_script {
         keepalived_check
      }
}
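With the image, the compose file and both config folders in place, bringing it up is a normal stack deploy; a minimal sketch, assuming everything lives in the keepalived folder shown above:

cd ~/keepalived
docker stack deploy -c docker-compose.yml keepalived
docker service ls | grep keepalived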

LetsEncrypt for new vHost

So, I have a VM with Apache2 configured as a reverse proxy for a couple of services which I run internally and want to be able to access while I am out and about. This Apache setup is convenient for a number of reasons; for example, some of these services do not have a very robust security setup, since they might be just hobby projects that “nerds” play around with. Apache, on the other hand, is more of an enterprise-grade product which has been tested and proven to a far greater degree.

Given that you can get a couple of free domains for DDNS and free certificates from LetsEncrypt, I have been applying SSL certificates to all my exposed services. However, I always end up scratching my head for a minute when it comes to setting up LetsEncrypt cleanly on a new vHost because I can never remember the commands or steps needed.

I think the steps below are the most straightforward way to get an SSL cert added to a new vHost. (If you want to do it from scratch, check this article.)

1. Set up the vHost as normal in your sites-available folder but don’t enable it. (You can also set the proper cert paths while you are at it)

2. Run sudo certbot --apache -d <site.example.com> to start the certificate generation process.

3. It should say it was unable to locate the matching vHost and ask you to select from the ones already enabled. Instead, press ‘c’ to cancel, since you have not enabled your site yet.

4. After that it will say “No vhost selected” and display the path of the new certs, as can be seen in the screen capture below.

5. You can now ensure that the cert paths in your vHost match the location given in the previous step, then enable the site with sudo a2ensite <config-file> and reload Apache. (A quick sketch of these last steps is below.)
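A rough sketch of those final steps, assuming the hypothetical hostname site.example.com from step 2 (certbot normally places the new certs under /etc/letsencrypt/live/<domain>/):

sudo ls /etc/letsencrypt/live/site.example.com/
sudo a2ensite site.example.com.conf
sudo service apache2 reload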

That’s it! You should now be able to visit your site with that satisfying padlock in the address bar.

Snags

I ran into an issue with renewing one of my certificates. When I tried to renew manually, it showed this error:

Failed to renew certificate [URL] with error: Problem binding to port 80: Could not bind to IPv4 or IPv6

After some digging, what I understood is that the cert was originally installed using the standalone method, and when the renewal tries to do the same it gets “blocked” because Apache2 is already using that port. To get around this I could have stopped the service or, the option I chose, run the renewal command with “--apache”:

certbot renew --apache 
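If you want to sanity-check the change without using up a real renewal, certbot has a dry-run mode that should work here too:

sudo certbot renew --apache --dry-run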

Debian Time Machine Server

As usual, a little background for perspective. This is the second time I have had to do this, and to get it working I had to go scouring the Internet a little bit. That involved taking bits and pieces from a few sites, then keeping my fingers crossed that it would work in the end.

There are two Macs in my household. I configured this at first so that both would dump their backups into one folder. This worked well, but I noticed a few weird things from time to time, and I also remembered reading where some people suggested keeping them separated because of issues that, I guess, could cause the things I was noticing. Because of this and a few other cosmetic factors, I decided to move the service to a different server and assign each computer its own folder (I used separate LVM volumes so that they have no effect on each other and I have the flexibility to grow them).
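A rough sketch of carving out one of those per-Mac volumes might look like this (the volume group, size and volume name here are placeholders, not my actual values):

sudo lvcreate -L 250G -n tm_macbook vg0
sudo mkfs.ext4 /dev/vg0/tm_macbook
sudo mkdir -p /mnt/timemachine
sudo mount /dev/vg0/tm_macbook /mnt/timemachine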

Installing Packages

Newer releases of OS X require Netatalk 2.2.x+. However, Debian 6.0 (Squeeze) comes with 2.1, which won’t work with Mac OS X 10.8 “Mountain Lion”. If you are still running Debian 6.0 you can get Netatalk 2.2 from Debian 7.0 (Wheezy) by doing the following as root.
First add the following line to /etc/apt/sources.list:

deb http://http.debian.net/debian wheezy main contrib non-free

Then run the following commands:

 aptitude update
 aptitude install netatalk avahi-daemon avahi-utils

You can revert the changes to /etc/apt/sources.list now and run “aptitude update” again. Obviously if you were already on Wheezy you won’t have to worry about this.

Setting up Netatalk
Let’s do some configs…

Change your /etc/netatalk/AppleVolumes.default file to export the Time Machine volume.

Look for the following line:

 #:DEFAULT: options:upriv,usedots 

And change it to something like this, removing the hash sign so the line takes effect:

 :DEFAULT: cnidscheme:dbd options:upriv,usedots 

At the end of the file you’ll find a line that reads:

~/                     "Home Directory"

Add something like this below it:

/mnt/timemachine  "Time Machine"  allow:username cnidscheme:dbd volsizelimit:250000 options:usedots,upriv,tm
 
  • /mnt/timemachine is your backup folder.
  • “Time Machine” is a random label to identify your Time Machine volume.

The rest of the line contains various parameters that allow the Mac to “play nice” with this server as a Time Machine target. It’s important to include the tm option at the end of the options list so that Netatalk enables the special Time Machine behavior. You can also add fancy options to restrict access to users logging in with specific accounts, but I have decided to keep it simple, at least for this round 😉
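Since each Mac gets its own folder, the second volume is just another line in the same file; for example (the second path, label and username here are placeholders):

/mnt/timemachine2  "Time Machine 2"  allow:username2 cnidscheme:dbd volsizelimit:250000 options:usedots,upriv,tm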

The next config file is /etc/netatalk/afpd.conf. Comment out the last line like this:

# - -tcp -noddp -uamlist uams_dhx.so,uams_dhx2.so -nosavepassword

…and add this:

- -tcp -noddp -uamlist uams_dhx.so,uams_dhx2_passwd.so -nosavepassword

I guess you could also just replace it but I like an easy rollback path, just in case.

I am not sure if this command is actually needed for it to work, but it didn’t cause it not to work 🙂

touch /mnt/timemachine/.com.apple.timemachine.supported

Restart netatalk for the new configuration to take effect:

sudo service netatalk restart

For an additional layer of security I decided to create a dedicated user account that only has write access to the backup folder. Time Machine will ask for this information on initial setup.


 sudo useradd -s /bin/false timemachine
 sudo passwd timemachine
 sudo chown -R timemachine:timemachine /mnt/timemachine
 

This takes care of the server side.

Client Setup
Now configure your OS X installation so it will accept unsupported network volumes as Time Machine destinations. Open the Terminal app and execute the following command:

defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
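On more recent versions of OS X you can reportedly also point Time Machine at the share from the terminal rather than through System Preferences; something along these lines (the server name and password are placeholders):

sudo tmutil setdestination "afp://timemachine:yourpassword@server.local/Time Machine"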

PS.
Older articles refer to creating a /etc/avahi/services/afpd.service file. With netatalk 2.2, this file is redundant: you do not need to create it.

SNMP with CygWin

I love CygWin. It helps me combine the best of two worlds: the ease of use and convenience of Windows and the command-line functionality of Linux. (I know this statement is blasphemy in some circles.) The challenge, though, is that sometimes it is difficult to have all the tools you need without first connecting to a Linux box, and sometimes that is not possible.

This is one example. I needed snmpwalk but it was not installed on the Linux boxes at my disposal. This is how I got it working in CygWin.

Installing Cygwin

The first requirement is to have Cygwin installed. Cygwin can be downloaded from the following Web site:

http://www.cygwin.com

You will find a simple “Setup” program. Simply follow the instructions in the setup program to complete the install. If space is not a concern, I suggest you install the entire suite of tools, but there are certain packages which are particularly relevant for this setup. These include:

  • make
  • gcc
  • openssl

Basically you want to look for and install the packages necessary to compile/build binaries. If you run the “./configure” command (while installing a package from source) and it complains about missing libraries or packages, you can rerun this process and add them, then run “./configure” again.

Installing Net-SNMP

Net-SNMP is quite a simple install as well. Once you have Cygwin installed, you simply need to download the Net-SNMP source and execute a few commands from a bash shell and you are done.

Net-SNMP can be downloaded by following the link below:

http://www.net-snmp.org

You should download it to a directory such as c:\cygwin\src\net-snmp-5.x.x. Once it is downloaded to that directory, open a bash shell and follow the steps below:

tar xvfz /src/net-snmp-5.x.x.tar.gz

cd /src/net-snmp-5.x.x
./configure
make
make install

If you haven’t done so already, you need to specify the following system environment variables:

MIBS=ALL
MIBSDIR=C:/PHP/MIBS
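If you would rather not touch the Windows system variables, exporting the same values from your Cygwin bash profile should work as well; for example:

echo 'export MIBS=ALL' >> ~/.bashrc
echo 'export MIBSDIR=C:/PHP/MIBS' >> ~/.bashrc
source ~/.bashrc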

Test your work:
     snmpwalk -v 2c -c public [ipAddr] system

RaspBerry Pi – 1st Encounter

Back Story
I have wanted one of these boards to replace my Apple TV2 running XBMC for a long while now, but as everybody knows the first version sold out almost immediately and the average price for one on the after-market is sometimes 3-4 times what Element14 sells it for. They announced the second version and I decided to try to get my hands on one before it hit the after-market. Long story short, after about a month of waiting on order fulfillment, I have it.
What to do?
At my office a couple of the guys have these configured as HTPCs (XBMC of course). A couple have positive feedback while others are so-so. The number one complaint I have heard is the speed, and digging further, I find a couple of them attribute it to slow SD cards. So I was worried that I might have to purchase another SD card even though I already had one lying around. Of course I decided to try what I have on hand first, and I am happy with the results so far.
Here is what I have:
Hardware
Software
 Configuration
  • Installed with python script (install.py) from the site with a downloaded image
  • Install NTP
  • sudo dpkg-reconfigure tzdata for America/Montreal
  • Inside XBMC (Settings > Appearance > International) set location to Canada/Montreal
  • I use NFS shares, so I add those with the appropriate scrapers and allow it to scan the library
Conclusion
This is not my final setup, but I think it gets me back to where I was before and I am okay with that. The main thing I wanted to test here is the performance of the SD card with RaspBMC and that seems to be working out. I don’t know if a Class 4 card would be too slow or if the doubling of the RAM from the previous model (512MB vs 256MB) is making all the difference here. I don’t have a Class 4 card to test the difference, so all I know is this setup works…..for now 😀

HTC Evo 3D – Charging Issues

It’s not like my expertise, or lack thereof, will influence the decisions others make, but since I made it public that, based on my experience, the Evo 3D might be flawed, it is only fitting that I come back here and withdraw the statement.

The original statement was made in a Google+ post where I basically made it known that people should think twice about buying HTC phones. This was basically the anger talking after I found my phone refusing to charge after I had allowed the battery to run completely dry. I called Rogers and got a replacement, but I was reluctant to make the exchange because, even though it was just ~$30, it would mean I would have to spend that again to redo the carrier unlock on the phone and get a proper screen protector.

So, I did some work on the phone over the last weekend and it turned out that ClockWorkMod 5.0.2.0, which I had had installed since forever (at least 4-5 months), has a bug which causes the phone not to charge when the battery is totally dead. So the short version of the story is that I downgraded to 4.0.1.4 and it is all good now. But this also showed that there might be other issues with version 5 of CWM. I say this because, when I was running it, I couldn’t install the OTA updates for my phone, and I thought it was the rooting that prevented me from installing these patches. But once I downgraded CWM it worked like a charm.

I learnt a lot from this experience though:

  • How to relock the bootloader and reflash the factory ROM (Downloading the ROM was the hardest part)
  • How to set up ADB on Ubuntu 11.10 (I have always used it from Windows)
  • Most importantly, when a forum thread is 60+ pages long, try to read and understand all of it before trying the things recommended. Something might be working perfectly on page 5, but someone on page 40 is saying you had better watch out for “this” really big nasty bug which will bite you in the ass 5 months down the road.

Moving Ringtones To Internal Memory – HTC EVO 3D

I just started my dance with “Android” a couple of weeks ago on the HTC Evo 3D, and it has been a learning experience, from typing on the on-screen keyboard to rooting. As is the purpose of this blog, I try to post my learning experiences for later reference.

Ever since I got the phone I noticed that whenever I mounted the microSD card on a computer, the ringtones would go back to the default/stock tones. Basically what was happening is that when the memory is mounted on the computer it is not accessible to the phone, therefore no tones, and it reverts to the settings it came with. This can be a bit annoying, so I set out to do something about it. In this quest I discovered that the internal memory (/system to be exact) is mounted as read-only, therefore writing to that partition (read: placing files there) is not possible. So I couldn’t simply copy files onto it even though the phone was rooted.

I want this post to be as clear as possible so that the next person needing to do this won’t have to go hunting around the web for bits and pieces like I had to. In order to do that, though, and remain on topic, I will have to do a second post about how to get the adb stuff working. It took me a couple of weeks, on and off, to get it, but the crazy part is that I had it working from day 1 and didn’t even know. So I will put that together in another post, as I said before.

Overview
I have pre-edited MP3s that I use as ringtones stored in my music folder on the microSD, and what we will need to do is copy those over to the internal memory.
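I’ll cover the adb side properly in that follow-up post, but just to give an idea of the shape of it, the copy boils down to remounting /system read-write and pushing the files over (the ringtone path here is the standard Android location, though it may differ per ROM):

adb remount
adb push MyRingtone.mp3 /system/media/audio/ringtones/
adb reboot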

Backup PostgreSQL Schemas

I have been handed the responsibility to manage the backups of a PostgreSQL database. Easy enough you would say, but there is a catch.

  1. The owner wants backups to be kept of each individual schema within the database
  2. I have never managed a PostgreSQL DB before

So, not being one to back down from such a challenge, I searched high and low all over the Internet, but either the Internet community isn’t interested in doing this or I am not looking in the right places. So here, with the help of fragments of scripts from around the web, I have worked out a solution which accomplishes the task at hand. It is all bottled up in the script below, which I think is self-explanatory with the aid of a generous helping of comments scattered throughout the file.

#!/bin/bash
#Description:
#This script will create a compressed backup of the Genesis Postgres Db and store it in a predefined folder.
#Backups that are older than 30 days will also be removed automatically

####### Make and change to directory where backups will be saved #######
BASE_DIR="/path/to/backup/folder"
YMD=$(date "+%Y-%m-%d")
DIR="$BASE_DIR/$YMD"
mkdir -p $DIR
cd $DIR

####### Full Postgres Backup #######
sudo -u postgres pg_dumpall | gzip -c > All_Db.sql.gz

####### Individual Schema Backups #######
# 1. Select individual schemas within the database and pipe the results into sed which does Step 2
# 2. Clean up the output from the SQL above
#     - Get rid of the empty spaces at the beginning of each line
#     - Remove the head and tail info from the file(Title, labels, etc)
for schema in $(sudo -u dbOwnerUN psql -d DBName -c "SELECT schemata.schema_name FROM information_schema.schemata;"|sed 's/^[ \t]*//;s/[ \t]*$//;1,2d;N;$!P;$!D;$d');
do
sudo -u dbOwnerUN pg_dump -Ft -n "$schema" DBName | gzip -c > "$schema".sql.gz
done

####### Delete backup files older than 30 days #######
OLD=$(find $BASE_DIR -type d -mtime +30)
if [ -n "$OLD" ] ; then
echo deleting old backup files: $OLD
echo $OLD | xargs rm -rfv
fi
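If you want this to run on a schedule, a root cron entry is the obvious route; a sketch (the script path and time are placeholders):

# crontab -e (as root), then add:
0 2 * * * /path/to/pg_schema_backup.sh >> /var/log/pg_backup.log 2>&1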

As I said, this is my first attempt at anything like this, so it may not be the best or easiest way of accomplishing it. (I am just sharing what I know and recording it for my own reference.) So please post comments if you have a better solution.

….Questions are also welcomed

Putty – Save Passwords

Ever since I started working on multiple servers on a regular basis I have been looking for a way to have PuTTY store the passwords for the target machines. Now I think I have what can be deemed the closest thing to that (without using SSH keys or 3rd-party programs): login sessions which are seeded with the relevant passwords through batch scripts.
First, create and save a PuTTY session with username@IPAddr and whatever other details are required from within PuTTY.
Next, open Notepad++ (or your preferred text editor) and create a windows batch (*.bat) file with following lines:
     cd c:\Program Files\Putty\
     putty -load "saved session name" -pw "password"
and save it as “session name.bat”.
*** On some systems, especially 64-bit OS’s, PuTTY may be installed in “Program Files (x86)” instead of “Program Files”, so make the necessary adjustments to the lines above.

Now, just by double-clicking on this batch file, you will be automatically logged on to the server without being prompted for a password. The downside is an unencrypted password sitting on your computer. You could explore using a bat-to-exe compiler, which would help mitigate some of the risk, at least from a casual prowler. ^_^

vsFTPd – Install and Configure

In this post I will go over how to install and configure one of my favorite FTP servers, vsFTPd. It’s a Linux application known within IT circles as being feature-rich, fast and secure, so I have adopted it as my tool of choice when the ‘job’ requires an FTP service.

Installing vsFTPd

I am using Ubuntu Linux, therefore my installation command is as follows:

 sudo apt-get install vsftpd

 

Configuration

To configure vsftpd to authenticate system users and allow them to upload files edit
/etc/vsftpd.conf:

local_enable=YES

write_enable=YES

Now when system users log in to FTP they will start in their home directories, where they can download, upload, create directories, etc. By default, anonymous users are not allowed to upload files to the FTP server. To allow that, you should uncomment the following line, set it to YES, and restart vsftpd:

anon_upload_enable=YES

The configuration file contains many parameters. Information about each parameter is available in the configuration file itself; alternatively, you can refer to the man page, man 5 vsftpd.conf, for details on each parameter. There are also options in /etc/vsftpd.conf to help make vsftpd more secure. For example, users can be limited to their home directories by uncommenting:

chroot_local_user=YES

Restart the service

sudo service vsftpd restart
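A quick way to sanity-check the setup is to connect with a command-line client as one of your system users and try an upload; roughly:

ftp localhost
# log in with a system account, then at the ftp> prompt try:
# put testfile.txt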

Related reading:
Creating Dummy FTP Users