Tech Talk

Sharing learning, opinions and knowledge


HTTP git server on nginx

The Git project recently launched a smart HTTP server, which enables one to use git remotely over HTTP instead of the default SSH protocol. I can foresee HTTP becoming the primary protocol for interacting with remote git servers in the future. The current documentation of git-http-backend only gives instructions for Apache.

However, I wanted to set up git over HTTPS using nginx and haproxy (for SSL offloading). Here are the working code snippets that enabled me to set up a remote git server over HTTPS on Ubuntu 14.04 LTS.

Install nginx, git, fcgiwrap

sudo apt-get install nginx git fcgiwrap

Add the following lines to /etc/nginx/fastcgi_params

ubuntu@gitserver:~$ tail -2 /etc/nginx/fastcgi_params
# Pass authenticated username to CGI app
fastcgi_param REMOTE_USER $remote_user;

Create a new file in nginx configuration for git server

ubuntu@gitserver:~$ cat /etc/nginx/sites-available/git 
server {
    listen  80;
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/passwd;

    location ~ (/.*) {
        fastcgi_pass  unix:/var/run/fcgiwrap.socket;
        fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
        fastcgi_param GIT_HTTP_EXPORT_ALL "";
        fastcgi_param GIT_PROJECT_ROOT    /srv/git;
        fastcgi_param PATH_INFO           $1;
        include       fastcgi_params;
    }
}

Create a symlink

sudo ln -s /etc/nginx/sites-available/git /etc/nginx/sites-enabled/git

This enables a git server with restricted, authenticated access via nginx. Nginx supports authentication using htpasswd-style files, including entries hashed with Linux crypt. I personally use 5,000 rounds of SHA-512 with a random salt (the default Linux shadow password implementation).
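As a sketch (the username alice, the password, and the file location are placeholders; openssl passwd -6 needs OpenSSL 1.1.1 or newer), a SHA-512 crypt entry for the auth_basic_user_file can be generated like this:

```shell
cd "$(mktemp -d)"
# $6$ marks the SHA-512 crypt scheme; openssl picks a random salt,
# and the default round count is the 5,000 mentioned above
hash=$(openssl passwd -6 's3cret')
printf 'alice:%s\n' "$hash" > passwd
# the resulting file can be pointed to by auth_basic_user_file
```

Because the salt is random, running the same command twice yields two different hashes for the same password.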

If you want to go the extra mile and move this HTTP-based git setup to HTTPS, we can set up haproxy to do SSL offloading. A working haproxy config file (using haproxy 1.5.4):

ubuntu@gitserver:~$ cat conf.haproxy.txt
global
    log   local0
    tune.ssl.default-dh-param 2048
    maxconn 4096
    user nobody
    group nogroup
    pidfile /var/run/
    stats socket /var/run/socket-haproxy level admin

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option httpclose
    option redispatch
    timeout client 30s
    timeout server 30s
    timeout connect 1s
    timeout http-keep-alive 60s
    timeout http-request 15s
    stats enable
    stats refresh 10s
    stats uri /stats

frontend https
    bind :443 ssl crt ssl.pem
    default_backend server

backend server
    server 1

The above SSL configuration got me an A on SSL Labs, with support for everything except IE 6 on XP, IE 8 on XP, and Java 6.

The ssl.pem used above looks like

private key file goes here. Keep this secret.
DH parameters go here. Regenerate DH parameters regularly for forward secrecy,
using "openssl dhparam -outform pem -out dh2048.pem 2048".
signed cert goes here.
chained cert goes here. This depends on who signed your CSR.
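A sketch of assembling such a bundle from throwaway self-signed material (the names and the tiny 512-bit DH size are purely to keep the example fast; use 2048-bit DH parameters as above, and your real CA-signed cert and chain):

```shell
cd "$(mktemp -d)"
# throwaway key and self-signed cert standing in for the real CA-signed pair
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
    -subj '/CN=gitserver.example' -days 1 2>/dev/null
# 512 bits only so the sketch finishes quickly; use 2048 in production
openssl dhparam -out dh.pem 512 2>/dev/null
# concatenate in the order described above (key, DH params, cert, chain)
cat key.pem dh.pem cert.pem > ssl.pem
```

haproxy reads the whole bundle from the single file handed to the crt keyword.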

Filed under haproxy nginx git git-http-backend


Encrypting linux root disk in EC2/Cloud

I posted a blog earlier on the topic of encrypting a disk. That solution works great for secondary linux volumes. However, it doesn't extend to the root device. This blog is about encrypting the root disk in an AWS/Cloud environment (or any environment where console access is not available to enter a password).

One of the foremost requirements of encryption is protection of the encryption key. The key can't be embedded into the boot disk, and since we don't have console access, it can't be typed in during the boot process either. Solutions exist like embedding dropbear (a small ssh server) into the boot process, where someone is required to log in via ssh and enter the password while the server boots. However, that's a push model where a human or a process has to be aware that the server is waiting for a password. Such a solution is not feasible for someone managing a large server fleet.

The solution mentioned here loads the encryption key into server memory only and doesn't store it anywhere on disk. There are 2 proposed solutions for 2 different use cases. The first solution is for volatile servers. As part of this solution, the server generates a random encryption key and encrypts the root device using that key. The key exists only in server memory and is never persisted. A reboot will make the server lose its memory and will cause loss of data. This solution is powerful but only usable for servers which don't persist data.


The second solution builds on top of the first and requires a central keyserver which stores the keys. The central keyserver is queried at boot and provides the key the cloud server needs to mount its encrypted root volume.


The solution presented has some complexity, as the linux boot process can't do HTTPS. So the design protects the key in transit using authenticated encryption.


Before getting into the technical details, you might want to get familiar with the building an AMI from scratch blog post. Following are the technical code snippets used in Solution 1.

# 150M for boot, rest for root device
sudo sfdisk -uM /dev/xvdf << EOF
,150
,
EOF

sudo mkfs.ext4 /dev/xvdf1
sudo mkfs.ext4 /dev/xvdg
sudo mount /dev/xvdg /mnt
sudo mkdir /mnt/boot
sudo mount /dev/xvdf1 /mnt/boot

# Install Ubuntu 14.04 into /mnt
sudo debootstrap --arch=amd64 trusty /mnt
sudo chroot /mnt
cat << EOF > /etc/fstab
/dev/mapper/encfs     /        ext4    defaults            0   1
/dev/xvda1    /boot    ext4    defaults,noatime    0   0
EOF
apt-get -y install openssh-server cryptsetup-bin
apt-get -y install linux-image-virtual
apt-get -y purge grub2 grub-pc grub-common
cd /boot
kernel=$(ls -1 vmlinuz-* | head -1)
initrd=$(ls -1 initrd.img-* | head -1)
mkdir /boot/grub
cat << EOF > /boot/grub/menu.lst
timeout 0
title Linux
root (hd0,0)
kernel /$kernel root=/dev/mapper/encfs ro console=hvc0
initrd /$initrd
EOF

# add kernel modules required by cryptsetup
cat << EOF >> /etc/initramfs-tools/modules
dm_mod
dm_crypt
EOF

cat << EOF > /etc/initramfs-tools/hooks/crypt
#!/bin/sh
PREREQ=""
prereqs() {
    echo "\$PREREQ"
}

case \$1 in
prereqs)
    prereqs
    exit 0
    ;;
esac

if [ ! -x /sbin/cryptsetup ]; then
    exit 0
fi

. /usr/share/initramfs-tools/hook-functions

copy_exec /sbin/cryptsetup /sbin
copy_exec /usr/bin/head /bin
EOF

chmod +x /etc/initramfs-tools/hooks/crypt

cat << EOF > /etc/initramfs-tools/scripts/local-top/crypt
#!/bin/sh
PREREQ=""
prereqs() {
    echo "\$PREREQ"
}

case \$1 in
prereqs)
    prereqs
    exit 0
    ;;
esac

# create a random 256 bit key and use it for encryption
head -c32 /dev/urandom | cryptsetup -c aes-cbc-essiv:sha256 --key-file - create encfs /dev/xvda2
# copy base OS install from unencrypted to encrypted volume
dd if=/dev/xvdb of=/dev/mapper/encfs bs=1024k
EOF

chmod +x /etc/initramfs-tools/scripts/local-top/crypt

update-initramfs -k all -c

ec2reg -O KEY -W SECRET --kernel aki-919dcaf8 -a x86_64 -n encrypting-ami --root-device-name /dev/xvda -b "/dev/xvda=SDA_SNAPSHOT" -b "/dev/xvdb=BASE_OS_SNAPSHOT"

Following are the code snippets which extend Solution 1 to Solution 2

openssl genrsa -out server-sk.pem 2048
# ami-sk is placed in OS install
openssl genrsa -out ami-sk.pem 2048
# server-pk is placed in OS install
openssl rsa -in server-sk.pem -pubout -out server-pk.pem
openssl rsa -in ami-sk.pem -pubout -out ami-pk.pem

# Key generation on keyserver end
head -c32 /dev/urandom > message
# encrypt the key using RSA. Note that the message has to be smaller than the 2048-bit modulus (minus padding overhead) for this
openssl rsautl -encrypt -inkey ami-pk.pem -pubin -in message > cipher
# using encrypt-then-MAC to provide Authenticated Encryption
sha256sum -b cipher | head -c64 > hash
# signing the hash using server secret key
openssl rsautl -sign -inkey server-sk.pem -in hash > signature

# key verification on instance end
signedHash=$(echo $signature | base64 -d | openssl rsautl -verify -inkey /etc/server-pk.pem -pubin)
foundHash=$(echo $cipher | base64 -d | sha256sum -b | head -c64)
# decrypt only after checking that signedHash matches foundHash
message=$(echo $cipher | base64 -d | openssl rsautl -decrypt -inkey /etc/ami-sk.pem)
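The base64/rsautl pipeline is easy to get subtly wrong, so here is a self-contained sketch of the whole flow, run in a scratch directory with freshly generated throwaway keys (in the real setup the key pairs live on separate machines, and the file names are illustrative):

```shell
set -e
cd "$(mktemp -d)"

# throwaway key pairs (normally created once, out of band)
openssl genrsa -out ami-sk.pem 2048 2>/dev/null
openssl rsa -in ami-sk.pem -pubout -out ami-pk.pem 2>/dev/null
openssl genrsa -out server-sk.pem 2048 2>/dev/null
openssl rsa -in server-sk.pem -pubout -out server-pk.pem 2>/dev/null

# keyserver side: encrypt the key, hash the ciphertext, sign the hash
head -c32 /dev/urandom > message
openssl rsautl -encrypt -inkey ami-pk.pem -pubin -in message > cipher 2>/dev/null
sha256sum -b cipher | head -c64 > hash
openssl rsautl -sign -inkey server-sk.pem -in hash > signature 2>/dev/null

# instance side: verify the MAC before touching the ciphertext
signedHash=$(openssl rsautl -verify -inkey server-pk.pem -pubin -in signature 2>/dev/null)
foundHash=$(sha256sum -b cipher | head -c64)
[ "$signedHash" = "$foundHash" ]
openssl rsautl -decrypt -inkey ami-sk.pem -in cipher > recovered 2>/dev/null
cmp -s message recovered && echo "key recovered and verified"
```

The instance refuses to decrypt unless the signature over the ciphertext hash checks out, which is exactly the encrypt-then-MAC property the design relies on.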

# Some changes to initramfs scripts to pack openssl & base64
cat << EOF > /etc/initramfs-tools/hooks/crypt
#!/bin/sh
PREREQ=""
prereqs() {
    echo "\$PREREQ"
}

case \$1 in
prereqs)
    prereqs
    exit 0
    ;;
esac

if [ ! -x /sbin/cryptsetup ]; then
    exit 0
fi

. /usr/share/initramfs-tools/hook-functions

copy_exec /sbin/cryptsetup /sbin
copy_exec /usr/bin/base64 /bin
copy_exec /usr/bin/head /bin
copy_exec /usr/bin/openssl /bin
copy_exec /usr/bin/sha256sum /bin
copy_exec /root/ami-sk.pem /etc
copy_exec /root/server-pk.pem /etc
EOF

Filed under encryption cloud computing


Path to Cloud Computing

Cloud Computing promises many attractive things to companies. It's the buzzword of the decade and attracts considerable investment in the hope of future returns. However, there is a lot of chaos in the industry, where every vendor is promising a turnkey cloud solution. The pot of gold lies at the intersection of 3 roads

  • Software defined network
  • Infrastructure provisioning service
  • Resilient Application design

To realize the benefits of cloud computing, a company has to invest in all 3 areas. Most large companies have opened their coffers to vendors who promise instant cloud. Organizations often fail to realize that cloud computing is nothing but a resilient distributed architecture supported by agile infrastructure and networking. The benefits can't be realized without (re)building applications in a resilient fashion, yet organizations keep looking for turnkey solutions that can turn rigid application architectures into cloud applications, a magic trick that is not possible.

Small companies/startups, on the other hand, start with a blank slate when it comes to application design. They can buy an infrastructure provisioning service and a software defined network as a package when they use an off-premise cloud. Small companies are able to (re)design their software using the best practices suggested by providers like AWS and reap the benefits of Cloud Computing.


For an organization trying to build on-premise Cloud computing, here are some milestones along the way

Software defined network

  • Dynamic Firewall - Application servers will change often for a dynamic distributed application. The firewall needs to adapt to allow new servers and remove old servers.
  • Dynamic DNS - DNS updates are a critical part of certain application failovers, such as moving traffic to another DC when needed.
  • Floating IP - Floating IP allows a dynamic application to heal itself in situations where traffic can’t be load balanced.

Infrastructure Provisioning service

  • Virtualization - VmWare, Xen, LXC (Linux Containers). This is just one of the milestones to cloud and not an end goal. Vendors would love to convince you otherwise.
  • Load Balancer - Ability to scale application without adding more IPs to DNS records. (since DNS updates are slow to propagate)

Resilient Application Design

  • Horizontally Scalable - Make the application scalable while reducing bottlenecks as they are identified.
  • Reconfigure - Adapt to the dynamic nature of cloud. Application should reconfigure itself based on health of other servers. Try Apache Zookeeper.
  • Dynamic LB pool - Health checks to weed out bad application servers. Add/Remove servers to match load
  • Configure - Configure new servers to install packages/deploy application before adding them to LB pool. Try Chef/Puppet.

Filed under cloud cloud computing


Create AMI/Image from scratch for EC2/Xen

This blog captures the steps required to create an image from scratch which can be used on the Xen virtualization platform with the PvGrub boot manager. The latter half of the blog also highlights the steps to convert this image into an EC2 AMI, which can then be used to boot an EC2 instance.

The instructions below have been tested on Ubuntu 13.10 and build a Ubuntu 13.10 image/AMI.

1) Create image file and mount it

# creating 1GB image here
# image size should be bigger than disk required for AMI
# size of root device is chosen when launching instance and not here
dd if=/dev/zero of=linux.img bs=1M count=1024
sudo losetup /dev/loop0 linux.img
sudo mkfs.ext4 /dev/loop0
sudo mount /dev/loop0 /mnt
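The bs/count arithmetic can be sanity-checked at a miniature scale without touching loop devices (4 MB here, purely illustrative; the real image uses count=1024):

```shell
cd "$(mktemp -d)"
# 4 blocks of 1 MiB = 4194304 bytes
dd if=/dev/zero of=linux.img bs=1M count=4 2>/dev/null
```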

2) Install base system

sudo apt-get -y install debootstrap
# Installing 64-bit Ubuntu saucy (13.10) here
# Modify for your use case
sudo debootstrap --arch=amd64 saucy /mnt

3) Chroot into new installed system to configure it

sudo chroot /mnt

4) Configure basic system

mount none /proc -t proc
mount none /sys -t sysfs
# Adding root mount point
cat << EOF > /etc/fstab
/dev/xvda1   /       ext4    defaults        0   1
EOF
# Adding saucy specific apt sources here
cat << EOF > /etc/apt/sources.list
deb saucy main
deb-src saucy main
deb saucy-updates main
deb-src saucy-updates main
deb saucy-security main
deb-src saucy-security main
EOF
# setting eth0 for dhcp
cat << EOF >> /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
EOF
apt-get update
# installing ssh-server for headless setup
apt-get -y install openssh-server

5) Setup kernel and system to boot - This was the trickiest part of the setup to learn/find out/get right. PvGrub essentially reads the /boot/grub/menu.lst file and finds the kernel and initrd information there. There is no need to create a boot record or boot sectors or anything similar when booting in Xen with the PvGrub manager. However, one does have control over which kernel will be running, and a kernel needs to be installed.

# install linux kernel
# don't install grub on any disk/device when prompted
# commands require manual prompt, haven't scripted next step
apt-get -y install linux-image-virtual
# remove grub which was installed during kernel install
# choose to remove grub from /boot/grub when prompted
# commands requires manual prompt, haven't scripted next step
apt-get -y purge grub2 grub-pc grub-common
# install grub-legacy-ec2 which is NOT a ec2 specific package
# this package creates /boot/grub/menu.lst file
# this package applies to all PvGrub guests, even outside ec2
apt-get -y install grub-legacy-ec2

6) Do custom configuration

# set up root password
# or create users with sudo access

7) unmount filesystems & loop devices

# exit out of chroot /mnt
sudo umount /mnt/sys
sudo umount /mnt/proc
# if umount fails, you might have to force it by umount -l
# umount will fail if any daemon processes were started in chroot
sudo umount /mnt
# losetup will fail while daemon processes are still running
# kill any chroot daemon processes if needed
sudo losetup -d /dev/loop0

Your new linux.img file is ready for use. The next set of commands are specific to converting this image file into an AMI which can be used on EC2.

a) Transfer the image to a running EC2 instance

b) Mount a new EBS volume on the EC2 instance. Size of the EBS volume should be greater than size of image file chosen at step (1).

c) Copy image file contents into EBS volume

# assuming that the empty EBS drive is on /dev/xvdf
sudo dd if=linux.img of=/dev/xvdf bs=1M

All the steps listed below can also be done in AWS console

d) Create a snapshot of the EBS volume

ec2addsnap -O AWS_KEY -W AWS_SECRET -d `date +"%Y%m%d%H%M%Z"` 

e) Register a new AMI from the snapshot

# Find the latest PvGrub kernel offered by Amazon
# hd0 and hd00 are the same thing
# choose 32-bit kernel or 64 bit kernel based on image
# kernel-id looks like "aki-919dcaf8" (1.04-x86_64)
ec2-describe-images -O AWS_KEY -W AWS_SECRET -o amazon --filter "name=pv-grub-*.gz"
# registering a 64 bit AMI here
ec2reg -O AWS_KEY -W AWS_SECRET --kernel KERNEL_ID -a x86_64 -n `date +"%Y%m%d%H%M%Z"` -b "/dev/sda1=SNAPSHOT_ID"

f) Test your new AMI by starting an instance with it


Filed under xen pvgrub ami image virtualization ec2 ebs


Secure data at rest with encryption

Encryption is critical to protecting data. Data should be encrypted both at rest and in transit. Data in transit can be encrypted using TLS/HTTPS. This blog talks about storing data securely using encryption. The solution listed below uses dm-crypt kernel module.

1) First, we will randomize the data on the disk to ensure that someone who gets access to the actual disk is unable to determine the length/size of the encrypted data. The disk sectors can be randomized by overwriting them with random data. This step is important and shouldn't be overlooked: without randomization of the device blocks, someone can assert that the disk is encrypted. Once the disk sectors have been randomized, you get plausibly deniable encryption.

Your disk might have been initialized with a specific pattern before you got it (say all zeros). Once you start using such a disk, the boundary between written data and untouched space is easy to see.

However, if you randomize the disk space before storing data, no one will be able to tell the difference between encrypted/filled space and randomized/empty space.


dd if=/dev/urandom of=/dev/sdb bs=1M

2) The first decision related to encryption is the selection of the algorithm and the key length. For this setup, we will use AES with a 256 bit key.

In the next step, we generate a 256 bit (32 byte) key from /dev/random. We are using /dev/random instead of /dev/urandom for more security. However, on headless servers random data generation can be slow, and hence the next command may take some time.

head -c 32 /dev/random > 256bitKey

The key generated here should be kept secure, safe and away from the computer where the encrypted disk is mounted. If this key is lost, there is no way to recover the data stored on the encrypted disk.
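A quick sanity check that the generated key really is 256 bits and is locked down (this sketch reads /dev/urandom only so it never blocks; the post's command uses /dev/random):

```shell
cd "$(mktemp -d)"
# 32 bytes = 256 bits; urandom here purely so the example never stalls
head -c 32 /dev/urandom > 256bitKey
chmod 600 256bitKey   # keep the key unreadable by other users
```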

3) We will use the generated key to create a mapped device such that data is encrypted when stored on /dev/sdb but decrypted when viewed through /dev/mapper/secretFs

sudo cryptsetup -c aes-cbc-essiv:sha256 --key-file 256bitKey create secretFs /dev/sdb

The mapped device can be used like any disk device and can be used directly or as part of LVM etc.

4) Let’s format the device and create an ext4 filesystem on it

sudo mkfs.ext4 /dev/mapper/secretFs

5) The formatted disk can now be mounted so that the applications can write data to the disk

sudo mount /dev/mapper/secretFs /mnt/secure


Here are the consolidated steps to prepare a new disk, set up encryption and mount it

dd if=/dev/urandom of=/dev/sdb bs=1M
head -c 32 /dev/random > 256bitKey
sudo cryptsetup -c aes-cbc-essiv:sha256 --key-file 256bitKey create secretFs /dev/sdb
sudo mkfs.ext4 /dev/mapper/secretFs
sudo mount /dev/mapper/secretFs /mnt/secure

If you reboot the server or unmount the disk, the disk can be re-mounted using the following steps

sudo cryptsetup -c aes-cbc-essiv:sha256 --key-file 256bitKey create secretFs /dev/sdb
sudo mount /dev/mapper/secretFs /mnt/secure

Filed under encryption


SSH Host Identification and Verification

When connecting to an SSH server, you have probably come across a prompt asking you to confirm the host fingerprint before connecting. Many users have picked up the bad habit of saying yes without understanding the implications of the decision. This blog highlights how an ssh setup can be strengthened to include host identification (similar to HTTPS certificate signing). The benefit of such a setup is that the client can be confident it is connecting to the right server and not becoming the victim of a MITM (man in the middle) attack.

Often when connecting to an SSH server, you would see a prompt like the following

user@client:~$ ssh user@remoteserver
The authenticity of host 'remoteserver (' can't be established.
ECDSA key fingerprint is dd:30:96:8a:46:78:76:0a:53:7d:9d:0d:23:d6:89:ce.
Are you sure you want to continue connecting (yes/no)?

The presence of the prompt indicates that the client can't confirm the authenticity of the host. This problem can be solved by creating a certificate authority which signs the host keys (similar to the CA concept in HTTPS). A certificate authority can be created for ssh using the following

user@ca-server:~$ ssh-keygen -f ca

The step above will create 2 files, ca and ca.pub. The contents of ca are private and should be kept secret. The contents of ca.pub are public and can be distributed freely.
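For a non-interactive sketch of the same step (the empty passphrase via -N '' is only to keep the example scriptable; a production CA key should of course be passphrase-protected):

```shell
cd "$(mktemp -d)"
# create the CA key pair without prompting; produces ./ca and ./ca.pub
ssh-keygen -q -N '' -f ca
ls ca ca.pub
```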

As part of creating a new server, each server should have a unique set of host identification keys. The keys are usually generated automatically as part of the install. If not, they can be generated using the following steps.

root@ssh-server:~$ ssh-keygen -t rsa -q -N '' -f /etc/ssh/ssh_host_rsa_key
root@ssh-server:~$ ssh-keygen -t dsa -q -N '' -f /etc/ssh/ssh_host_dsa_key
root@ssh-server:~$ ssh-keygen -t ecdsa -q -N '' -f /etc/ssh/ssh_host_ecdsa_key

The above step generates private/public key pairs for the rsa, dsa and ecdsa algorithms. The public host keys can be copied over to the ca-server for signing (the ca-server is where the ca private key is stored securely). The CA can sign the host keys with its private key using the following command

user@ca-server:~$ ssh-keygen -s ca -I "remoteserver" -h -n remoteserver,remoteserver.domain /etc/ssh/ssh_host_rsa_key.pub /etc/ssh/ssh_host_dsa_key.pub /etc/ssh/ssh_host_ecdsa_key.pub

The command signs the host keys so they can represent the server as remoteserver or remoteserver.domain. The certificate will not match if it is presented for any other hostname. The above command will generate the corresponding ssh_host_*-cert.pub files, which should be copied back to the ssh server. Further, the sshd configuration (usually /etc/ssh/sshd_config) should be modified to present the HostCertificates to clients during the ssh handshake. The SSH configuration file on the ssh server looks like the following

user@ssh-server:~$ cat /etc/ssh/sshd_config
# ssh daemon configuration
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
# new lines added
HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub
HostCertificate /etc/ssh/ssh_host_dsa_key-cert.pub
HostCertificate /etc/ssh/ssh_host_ecdsa_key-cert.pub
# further SSH configuration follows ...

Now the ssh server has its keys signed by the CA. In the case of HTTPS, there are certain root certs which the browser trusts by default; there are no root certs in the case of SSH, so the SSH client needs to be explicitly configured to trust the CA. All ssh clients on a machine can be configured to trust the CA by putting the CA public key in the SSH known hosts configuration file (/etc/ssh/ssh_known_hosts)

user@client:~$ cat /etc/ssh/ssh_known_hosts
@cert-authority * ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCm+Bdq1eYGvddoWPRmJ43id7MioLeyRlOPNIJeuScHGMQro6jUYU4JyKx9dpKlQrZmn+hZeDxgx4fbxAQFKfdfgaLrFX3N06dR8uAFk7g+oimNJITWnaUgOuHGJrGEKIpNUqeLboOXm5aaYkiCH1ixx4r8hVIT4J+OM66oUZZmYTwWmxkxjj2Cu+Iuil7rpNzhjz9IVEzJrQA0KdpnfGQqv2KuaAhCCq6reZMoutE60HBX1Cww7Y3O26psp2AnL+xV5BzfhWYEdt98+Bz+WR/3Mt2u3NSv/ABwHZD3qseRFcWXnJGj9PbUAWAO6klMDqk9ok1nlmT0FjLbNk/R/gfh

Once done with all the steps above, you should be able to ssh from client machine to any server without facing the ssh host identification warning. The client trusts the CA and trusts the cert presented by the host when the cert is signed by CA and the cert name matches the hostname client is trying to connect to.

As with any CA, SSH also has a provision to revoke host keys when needed. However, there is no provision for a central revocation list (as in HTTPS); revocation information needs to be present on every client machine. The following snippet shows how to revoke the public key of a server on a client.

user@client:~$ cat /etc/ssh/ssh_known_hosts
@cert-authority * ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCm+Bdq1eYGvddoWPRmJ43id7MioLeyRlOPNIJeuScHGMQro6jUYU4JyKx9dpKlQrZmn+hZeDxgx4fbxAQFKfdfgaLrFX3N06dR8uAFk7g+oimNJITWnaUgOuHGJrGEKIpNUqeLboOXm5aaYkiCH1ixx4r8hVIT4J+OM66oUZZmYTwWmxkxjj2Cu+Iuil7rpNzhjz9IVEzJrQA0KdpnfGQqv2KuaAhCCq6reZMoutE60HBX1Cww7Y3O26psp2AnL+xV5BzfhWYEdt98+Bz+WR/3Mt2u3NSv/ABwHZD3qseRFcWXnJGj9PbUAWAO6klMDqk9ok1nlmT0FjLbNk/R/gfh
@revoked * ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHCGEppabLQm/8J8OXzp6VNRAXX/7hXcvsLXD5apKxVT8VY9B8rB6o/1Iyw9qXuRi5k5cPfF29mNEm1XVYz9znU=

Filed under ssh sshd_config ssh_host_key


Secure SSH with Multi Factor authentication

Logging into a machine is essentially about proving your identity to the server. The human logging in needs to prove to the server that he/she is the holder of the account the session is trying to access.

Identity in general can be proved using the following methods

  • Something you have (like hardware token device, access card, public/private keys)
  • Something you know (like password, social security number, birth date)
  • Something you are (physically defines you, like fingerprint)

Most of the linux servers I deal with are headless and don’t really have a bio-metric device attached to them to assert something you are. So, I will be focusing on the first 2 assertions.

Multi factor authentication essentially means asking the user to verify his/her identity using two different methods, like something you have and something you know.

The rest of the blog focuses on enabling various security mechanisms in a Linux server

a) Public/Private Key Authentication

Public key authentication falls under something you have, but it has a weakness: something you have should be tamper proof and hard to copy, while private key files can be copied, often without the user ever knowing the key was taken. Still, keys offer better protection than passwords against brute-force attacks, since we typically use 2048-bit keys.

A private key can be generated easily using the following linux command

ssh-keygen -f file.rsa -t rsa -b 2048 -q -N ''

The command will produce a 2048 bit private RSA key and store that in a text file (file.rsa here). The private key can be potentially protected with a passphrase by either specifying it at command (-N ‘<passphrase>’) or dropping -N parameter and specifying the passphrase as input in the tty session. The file created above is the private key and must be kept secure. The corresponding public key can be generated using

ssh-keygen -y -f file.rsa

The output is the public key, which can be distributed freely. The public key is added/appended to the ~<user>/.ssh/authorized_keys file on the server. This directs the SSH server to allow the person in possession of the private key to log in as <user>.
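Putting the above together in a scratch directory (the file names are placeholders; on a real server the second command appends to ~<user>/.ssh/authorized_keys):

```shell
cd "$(mktemp -d)"
# 2048-bit RSA private key with no passphrase, as in the command above
ssh-keygen -f file.rsa -t rsa -b 2048 -q -N ''
# derive the public key from the private key and authorize it for login
ssh-keygen -y -f file.rsa >> authorized_keys
```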

Public Key Authentication can be enabled in a linux server by ensuring that the following is present in /etc/ssh/sshd_config

PubkeyAuthentication yes

b) Password Authentication

Passwords are the oldest security mechanism. However, people choose simple passwords. Even a random 8-character case-sensitive alphanumeric (a-zA-Z0-9) password has only about 48 bits of entropy, which isn't too hard (about a week) for a botnet to crack. And most of us don't use random passwords; easy-to-remember passwords have much less entropy.
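The entropy figure can be checked with a one-liner: an alphabet of 62 symbols carries log2(62) ≈ 5.95 bits per character, so 8 characters give just under 48 bits:

```shell
# bits of entropy in an 8-character random a-zA-Z0-9 password
awk 'BEGIN { printf "%.1f\n", 8 * log(62) / log(2) }'   # prints 47.6
```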

Standard password can be enabled in linux server by making sure that sshd config (/etc/ssh/sshd_config) contains the following

PasswordAuthentication yes

c) TOTP Authentication

A better something you have example is a TOTP hardware device. A phone running a TOTP app (like Google Authenticator) is not 100% tamper proof: on a rooted Android device, or when the underlying OS has security bugs, another application might be able to steal the secret. A dedicated hardware key is more tamper resistant and is the better choice, but a phone running a TOTP app is also good.

The Time based One Time Password (TOTP) algorithm generates a rolling password every X seconds (default 30). The password is based on 2 factors: a secret key, known to both server and user, and the time. Given that time is a factor, it is critical that both server and client use synchronized clocks.
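A toy illustration of the time factor (the timestamps are arbitrary examples): the moving counter is just Unix time divided into X-second steps, so both sides derive the same password only while their clocks agree to within one step:

```shell
step=30
c1=$(( 59 / step ))   # timestamp 59s falls in window 1
c2=$(( 89 / step ))   # 30 seconds later: window 2, hence a new password
echo "$c1 $c2"        # prints: 1 2
```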

While TOTP is a good something you have example, it has some flaws when it comes to a company wide deployment
a) The algorithm uses a shared secret. This means that the server knows the secret and hence it must be guarded on the server. Anyone who gains root access to the server can find out the secret for all the other users on the system.
b) The one time password is good for X seconds (default 30). This makes a man in the middle (MITM) attack easy. Anyone who intercepts or sees this password can use it for the next 30 seconds.

These flaws can be fixed by having a central TOTP authentication server which all the servers contact for TOTP validation. The central server can prevent the MITM attack by allowing only one login per verification code. This, however, limits the user to logging into a single server per X seconds.

TOTP authentication can be enabled on a linux server by running the following commands

# install pam module developed by google which enables TOTP validation
apt-get install libpam-google-authenticator
# run the google-authenticator command to create a new secret
google-authenticator
# copy the QR code or secret to a smartphone running a TOTP app
# (like Google Authenticator)

Further, ssh pam configuration (/etc/pam.d/sshd) should be modified to add the following line

auth required pam_google_authenticator.so

This will require the user to enter a TOTP token (something you have) along with a password (something you know).

Enabling combination of these methods

Openssh 6.2 added a new feature where a user can be required to pass multiple validations before a successful login. Here are some scenarios with their configuration

1) PublicKey + TOTP

It's not a desirable configuration, as both public key and TOTP fall under something you have, so this is not multi factor authentication. Imagine a user storing both of these on a phone and losing the phone. This configuration can be enabled by doing the following

Add the following line to sshd config (/etc/ssh/sshd_config)

AuthenticationMethods publickey,keyboard-interactive

The configuration above requires user to pass public key check and keyboard-interactive check. keyboard-interactive check passes the control to pam module. SSH pam configuration file should have the following changes

# Require TOTP
auth required pam_google_authenticator.so

Default pam ssh configuration requires a password. That can be disabled by removing/commenting out the line indicated below

# @include common-auth

2) PublicKey + Password

This is a good configuration since it mixes something you have and something you know. This can be enabled by adding either of the following to sshd config

AuthenticationMethods publickey,password

or

AuthenticationMethods publickey,keyboard-interactive

The password authentication method is handled by sshd itself, while keyboard-interactive is handled by pam. In a default setting they both appear the same to the user, but they differ under the surface: keyboard-interactive enables the more complex mechanisms possible via pam modules. If the intention is to use just a password along with the public key, it's preferable to use "AuthenticationMethods publickey,password"

3) PublicKey + Password + TOTP

All 3 authentication methods listed here can be enabled by adding the following line to sshd config file

AuthenticationMethods publickey,keyboard-interactive

Further, the pam ssh module should be modified to require a totp code as mentioned in (c) above.

Filed under sshd_config sshd totp security


Running a process inside a network namespace

I have been reading about and playing with Linux containers. I previously covered cgroups, which enable process-group level cpu/memory allocation. This blog entry is about my understanding of network namespaces, and about running a process inside an isolated network namespace, which lets us apply filters to that specific process's traffic.

Network namespaces allow Linux to clone the network stack and make the new stack available to a limited set of processes. This is used primarily with Linux containers, such that each container has an entirely separate network stack. There are multiple options for adding network interfaces to a newly created network namespace, veth pairs being the most common.

Of the available options, I haven't been able to find much about venet. I think venet is part of the OpenVZ kernel changes and is not available in the mainline linux kernel.

A new network namespace can be created using the following command

ip netns add myspace

Now, we will create a new pair of veth network interfaces. veth interfaces come in pairs and act like a pipe: each packet sent to veth0 shows up at veth1, and each packet sent to veth1 shows up at veth0.

ip link add veth0 type veth peer name veth1

Now, let’s move veth1 to our newly created namespace

ip link set veth1 netns myspace

Now, we will bring up veth0 (in the original namespace) and assign an IP address and netmask to it. The 192.168.42.0/24 subnet used throughout is illustrative.

ifconfig veth0 192.168.42.1 netmask 255.255.255.0 up

Assigning an IP address and netmask to veth1 (inside the myspace namespace)

ip netns exec myspace ifconfig veth1 192.168.42.2 netmask 255.255.255.0 up

The command above is worth a second look. Its format is ip netns exec myspace <command>. The <command> executed here runs in the myspace network namespace, and will only see the interfaces and the route table configured inside that namespace.

Setting up the gateway for the myspace namespace (veth0’s address in the parent namespace)

ip netns exec myspace route add default gw 192.168.42.1

Now, we have a namespace myspace whose network packets leave through veth1 and arrive in the host at the veth0 interface. From here, we have multiple options to connect the namespace to the outside world (via eth0).

  • Bridging
  • NAT

I will be covering NAT setup here. NAT can be enabled by running the following commands

# Enable kernel to forward packets from one interface to another (veth0 <-> eth0)
echo 1 > /proc/sys/net/ipv4/ip_forward
# Each packet coming via 192.168.42.* address space should be sent
# via eth0 after changing the source ip to eth0 ip.
iptables -t nat -A POSTROUTING -s 192.168.42.0/24 -o eth0 -j MASQUERADE

Now, our network namespace is ready for use. A process can be run inside the network namespace by using the following

ip netns exec myspace <command>

The command will run inside the network namespace. To run it there as an unprivileged user, use the following

# first sudo is optional and only needed if running this command as non-root
sudo ip netns exec myspace sudo -u <user> <command>

This way, all the traffic generated by the process stays inside the network namespace myspace. That traffic can then be inspected, accepted, or dropped in the parent namespace using the FORWARD chain of the filter table.
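The steps above can be combined into one script. This is a sketch, not a hardened tool: the subnet 192.168.42.0/24 and the uplink interface eth0 are illustrative and should be adjusted to your setup. Commands are only printed unless APPLY=1 is set (applying them requires root).

```shell
#!/bin/sh
# Sketch: create a network namespace with NAT to the outside world.
NS=myspace
SUBNET=192.168.42
UPLINK=eth0

# Print each command unless APPLY=1; lets you review before running as root.
run() {
    if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi
}

run ip netns add "$NS"
run ip link add veth0 type veth peer name veth1
run ip link set veth1 netns "$NS"
run ifconfig veth0 "$SUBNET.1" netmask 255.255.255.0 up
run ip netns exec "$NS" ifconfig veth1 "$SUBNET.2" netmask 255.255.255.0 up
run ip netns exec "$NS" route add default gw "$SUBNET.1"
run sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
run iptables -t nat -A POSTROUTING -s "$SUBNET.0/24" -o "$UPLINK" -j MASQUERADE
```

Run it once without APPLY to see the commands, then apply with sudo APPLY=1 sh setup-ns.sh.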

Filed under os virtualization network namespaces

0 notes &

Is your application Cloud Ready?

Cloud computing delivers a quick-to-market development platform which application teams can use to translate business requirements into applications faster than ever. However, an application needs certain properties to benefit from such a platform.

  • Cloud enabled Architecture - The application should be able to distribute workload across multiple workers, scale out by adding more resources, have no single point of failure, and recover from common infrastructure failures (disk failure, datacenter failure, network degradation, database failure). This makes the application resilient and able to scale out when required.
  • Automated Configuration Management - The server configuration required to host the application should be automated to the last mile. This enables the infrastructure provider to patch the operating system and move applications across shared servers reliably.
  • Automated deployment - The application should provide an automated deployment workflow for each release. Ideally, each deployment should have an automated rollback workflow defined too.
  • Support shutdown, start, moving - The application should be able to take in commands/signals/notifications when the application is required to shut down, start on the existing server or move to a new server. This enables the infrastructure provider to reduce servers, add servers, move servers when required.
  • Automated Build - The application should work with automated build tools and should support automated builds on check-in, nightly, etc. This gives developers a quick turnaround to test their changes and keeps the build process clean.
  • Automated QA - The application should have automated test cases which are run after every build. Regression results should be reported to the developer making the changes so that either the code or the test cases can be fixed. This increases the speed of development while keeping quality high, and avoids last-minute QA surprises at the end of the sprint.
  • Support health checks - Health checks are important to make sure that the service is working as intended. Health checks can be at machine level, at load balancer, at cluster level and even at a datacenter level. Health checks are required to make decisions for self healing infrastructure.
  • Metrics & Monitoring - You can’t improve what you don’t measure. Whether it’s uptime, average response time, average failure rate, or average page-load time, metrics should be measured and observed consistently. Obsession with metrics will help us deliver better products with each sprint/release. Metrics should be monitored to catch problems that need either automated self-healing or manual fixing.
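As a tiny illustration of a machine-level health check (the /health path and URLs here are my assumptions, not from the post), a script like this can be wired into a load balancer probe or a cron-driven self-healing agent:

```shell
#!/bin/sh
# Hypothetical health check: poll an HTTP endpoint, report healthy/unhealthy.
check_health() {
    url="$1"
    # -f: treat HTTP errors as failures; --max-time: don't hang on a dead host
    if curl -fsS --max-time 2 "$url" >/dev/null 2>&1; then
        echo "healthy"
    else
        echo "unhealthy"
    fi
}

# Prints healthy or unhealthy depending on whether the service responds.
check_health "http://127.0.0.1:8080/health"
```

The same function can be reused at the load-balancer, cluster, or datacenter level by pointing it at the appropriate aggregate endpoint.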

Filed under cloud computing