Provisioning massively cross-compiled binaries to Raspberry Pi (arm) using Vagrant, VirtualBox, Ansible and Python

If you are involved in an IoT or mobile application provisioning project, you probably need to build a mechanism to distribute your application binaries to all devices in stock and to all rolled-out devices.

With this Proof-of-Concept I will show you how to build an app binary provisioning system for your custom platform (in this case Raspberry Pi, with its ARM processor) quickly, avoiding unnecessary tasks, while also providing an ARM cross-compiling platform.

To implement this I will use Vagrant to create an Ubuntu VM that mounts the Raspbian OS image internally, ready to be used for ARM cross-compiling. There is a special part of this blog post that explains how to set up an NFS mount to provide remote booting for all Raspberry Pis connected to the same network.

I provide a new GitHub repository with all the updated scripts required for this PoC. You can download it from here:
I would like to mention that this work is based on https://github.com/twobitcircus/rpi-build-and-boot, where I've created a Vagrantfile for VirtualBox, tweaked the Ansible playbook and documented the process I followed to make it work successfully in my environment (VirtualBox instead of Parallels, and booting from NFS).

Requirements:

I'm using Mac OS X (El Capitan, Version 10.11.3) with the following tools:

  • VirtualBox 5.0.16
  • Vagrant 1.8.1
  • Ansible 2.0.1.0 (installed via Pip)
  • Python 2.7.11
  • Raspberry Pi 2 Model B
  • Raspbian OS (2015-09-24-raspbian-jessie.img)
  • openFrameworks for cross-compiling (http://openframeworks.cc)

Why Ansible instead of other configuration management tools?

Why Ansible (http://docs.ansible.com/ansible/intro_installation.html) instead of other configuration management tools such as Puppet, Chef, etc.? Because Ansible is simple and agentless: you can use it with just a simple SSH connection, and nothing special needs to be installed on the host. Also, it is written in Python and, as you have seen in my previous post, I use Python intensively; it is becoming my favorite programming language. You can install Ansible using the same Python installation tools and, obviously, you can import ansible from your Python scripts.
Installing Ansible on Mac OS X (El Capitan, Version 10.11.3) is easy; just follow these steps:

$ sudo easy_install pip
$ sudo pip install ansible --quiet

// upgrading Ansible and Pip
$ sudo pip install ansible --upgrade
$ sudo pip install --upgrade pip
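
Since Ansible is a Python package, you can also drive it from your own scripts. Below is a tiny sketch (the file names hosts and playbook.yml are placeholders) that verifies the pip-installed Ansible is importable and then launches a playbook through the standard CLI:

# check_ansible.py: minimal sketch combining the Python import and the CLI.
import subprocess
import ansible

print("Ansible %s is importable" % ansible.__version__)

# Equivalent to typing `ansible-playbook -i hosts playbook.yml` in a terminal.
subprocess.check_call(["ansible-playbook", "-i", "hosts", "playbook.yml"])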

Preparing the Raspberry Pi

1. Copy RPi image to SD

Identify the disk (not partition) of your SD card, unmount and copy the image there:

$ diskutil list
$ diskutil unmountDisk /dev/disk2
$ cd /Users/Chilcano/Downloads/@isos_vms/raspberrypi-imgs
$ sudo dd bs=1m if=2015-09-24-raspbian-jessie.img of=/dev/rdisk2

2. Connect the Raspberry Pi directly to your Host (MAC OS X)

Using an Ethernet cable, connect your Raspberry Pi to your host; in my case the host is a Mac OS X and I'm going to share my WiFi network connection.
Then, by enabling Internet Sharing on the "Thunderbolt Ethernet" interface, an IP address will be assigned to the Raspberry Pi, the Raspberry Pi will get Internet/network access, and the Mac OS X will be able to connect to the Raspberry Pi via SSH.
All of that is possible without a hub, switch, router, screen or keyboard. This will be useful, because we are going to install new software on the Raspberry Pi.
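
Before opening an SSH session you can confirm from the host which IP address Internet Sharing assigned; here is a one-line Python sketch (it assumes mDNS name resolution works on the host, as it does on Mac OS X):

# resolve_pi.py: print the IP address behind the Pi's default mDNS name.
import socket

print(socket.gethostbyname("raspberrypi.local"))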

After connecting your Raspberry Pi to your Mac OS X, power it on by connecting a USB cable. Then, on your Mac OS X, open a Terminal and issue the SSH command, but first remove any previously stored SSH host keys.

Note that the default hostname of any Raspberry Pi is raspberrypi, reachable via mDNS as raspberrypi.local.

// cleaning existing keys
$ ssh-keygen -R raspberrypi.local

// connect to RPi using `raspberry` as default password
$ ssh pi@raspberrypi.local

Once connected, verify the assigned IP address and the shared Internet connection. Now check your connectivity:

pi@raspberrypi:~ $ ping www.docker.com
PING www.docker.com (104.239.220.248) 56(84) bytes of data.
64 bytes from 104.239.220.248: icmp_seq=1 ttl=49 time=212 ms
64 bytes from 104.239.220.248: icmp_seq=2 ttl=49 time=214 ms
^C
--- www.docker.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 6970ms
rtt min/avg/max/mdev = 207.205/213.294/217.893/3.513 ms

3. Configure your RPi

Boot your RPi and open a shell. Then enter:

pi@raspberrypi:~ $ sudo raspi-config

In the raspi-config menu, select Option 1 "Expand Filesystem", change the keyboard layout, etc., and reboot.

Only if the mirrordirector.raspbian.org mirror is not available, comment out the http://mirrordirector.raspbian.org/raspbian/ repository and add a newer one.

pi@raspberrypi ~ $ sudo nano /etc/apt/sources.list

#deb http://mirrordirector.raspbian.org/raspbian/ jessie main contrib non-free rpi
deb http://ftp.cica.es/mirrors/Linux/raspbian/raspbian/ jessie main contrib non-free rpi

# Uncomment line below then 'apt-get update' to enable 'apt-get source'
#deb-src http://archive.raspbian.org/raspbian/ jessie main contrib non-free rpi

4. Install OpenFrameworks tools and dependencies into Raspberry Pi

Download and unzip openFrameworks on the RPi under /opt.

pi@raspberrypi:~ $ cd /opt
pi@raspberrypi:/opt $ sudo wget http://openframeworks.cc/versions/v0.9.0/of_v0.9.0_linuxarmv7l_release.tar.gz
pi@raspberrypi:/opt $ sudo tar -zxf of_v0.9.0_linuxarmv7l_release.tar.gz
pi@raspberrypi:/opt $ sudo rm of_v0.9.0_linuxarmv7l_release.tar.gz

Now install the dependencies required for cross-compiling by running install_dependencies.sh.

pi@raspberrypi:~ $ sudo /opt/of_v0.9.0_linuxarmv7l_release/scripts/linux/debian/install_dependencies.sh

Now compile oF, then compile and execute an oF example.

// compiling oF
pi@raspberrypi:~ $ sudo make Release -C /opt/of_v0.9.0_linuxarmv7l_release/libs/openFrameworksCompiled/project
...
se/libs/openFrameworksCompiled/lib/linuxarmv7l/obj/Release/libs/openFrameworks/math/ofMatrix4x4.o /opt/of_v0.9.0_linuxarmv7l_release/libs/openFrameworksCompiled/lib/linuxarmv7l/obj/Release/libs/openFrameworks/math/ofQuaternion.o /opt/of_v0.9.0_linuxarmv7l_release/libs/openFrameworksCompiled/lib/linuxarmv7l/obj/Release/libs/openFrameworks/math/ofVec2f.o
HOST_OS=Linux
HOST_ARCH=armv7l
checking pkg-config libraries:   cairo zlib gstreamer-app-1.0 gstreamer-1.0 gstreamer-video-1.0 gstreamer-base-1.0 libudev freetype2 fontconfig sndfile openal openssl libpulse-simple alsa gtk+-3.0
Done!
make: Leaving directory '/opt/of_v0.9.0_linuxarmv7l_release/libs/openFrameworksCompiled/project'

// executing an example
pi@raspberrypi:~ $ sudo make -C /opt/of_v0.9.0_linuxarmv7l_release/apps/myApps/emptyExample
pi@raspberrypi:~ $ cd /opt/of_v0.9.0_linuxarmv7l_release/apps/myApps/emptyExample
pi@raspberrypi /opt/of_v0.9.0_linuxarmv7l_release/apps/myApps/emptyExample $ bin/emptyExample

5. Make a new image file from the existing, updated Raspberry Pi

Remove the SD card from the Raspberry Pi, insert it into your host (in my case a Mac OS X) and use dd to make a new image file.

$ diskutil list
$ diskutil unmountDisk /dev/disk2
$ sudo dd bs=1m if=/dev/rdisk2 of=2015-09-24-raspbian-jessie-of2.img

15279+0 records in
15279+0 records out
16021192704 bytes transferred in 381.968084 secs (41943799 bytes/sec)

Very important:

  • The 2015-09-24-raspbian-jessie-of2.img image will be shared and afterwards mounted from the guest VM; for that, set the owner and permissions of 2015-09-24-raspbian-jessie-of2.img as shown below:
$ sudo chmod +x 2015-09-24-raspbian-jessie-of2.img
$ sudo chown Chilcano 2015-09-24-raspbian-jessie-of2.img

$ ls -la
total 110439056
drwxr-xr-x  33 Chilcano  staff         1122 Apr 11 19:12 ./
drwxr-xr-x  35 Chilcano  staff         1190 Mar 23 19:26 ../
-rwxr-xr-x   1 Chilcano  staff  16021192704 Apr 11 17:29 2015-09-24-raspbian-jessie-of1.img*
-rwxr-xr-x   1 Chilcano  staff  16021192704 Apr 11 19:19 2015-09-24-raspbian-jessie-of2.img*
-rwxr-xr-x   1 Chilcano  staff   4325376000 Apr 11 17:02 2015-09-24-raspbian-jessie.img*
-rwxr-xr-x   1 Chilcano  staff  16021192704 Mar 31 12:31 2016-03-18-raspbian-jessie-of1.img*
-rwxr-xr-x   1 Chilcano  staff   4033871872 Apr  5 16:31 2016-03-18-raspbian-jessie.img*
...

Building the Vagrant box

1. In your Mac OS X, clone the rpi-build-and-boot GitHub repository

$ git clone https://github.com/twobitcircus/rpi-build-and-boot
$ cd rpi-build-and-boot

Copy or move the new RPi image created above into the rpi-build-and-boot folder.

$ mv /Users/Chilcano/Downloads/@isos_vms/raspberrypi-imgs/2015-09-24-raspbian-jessie-of2.img .

2. Install Vagrant and the vbguest plugin on Mac OS X

$ wget https://releases.hashicorp.com/vagrant/1.8.1/vagrant_1.8.1.dmg
$ vagrant plugin install vagrant-vbguest

3. Create a new Vagrantfile, with VirtualBox as the provider, in the same rpi-build-and-boot folder

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  # https://atlas.hashicorp.com/ubuntu/boxes/trusty64 [Official Ubuntu Server 14.04 LTS (Trusty Tahr) builds]
  config.vm.box = "ubuntu/trusty64"
  config.vm.provider "virtualbox" do |vb|
    config.vbguest.auto_update = true
    vb.customize ["modifyvm", :id, "--memory", "6144"]
    vb.customize ["modifyvm", :id, "--cpus", "4"]    
  end
  # If you want to use this system to netboot Raspberry Pi, then uncomment this line
  #config.vm.network "public_network", bridge: "en4: mac-eth0", ip: "10.0.0.1"
  config.vm.network "public_network", bridge: "ask", ip: "10.0.0.1"
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end

4. Getting the boot and root partition offsets for loop mounting in Vagrant

Using ./tool.py offsets you will get the offsets of the boot and root partitions; once you have them, copy the output of this tool to the top of playbook.yml.
To run tool.py on Mac OS X, you will need Python configured.

$ ./tool.py offsets 2015-09-24-raspbian-jessie-of2.img

    image: 2015-09-24-raspbian-jessie-of2.img
    offset_boot: 4194304
    offset_root: 62914560

The idea of loop-mounting the RPi image is to create the full directory and file structure of a Raspberry Pi distribution under a mount point in the Vagrant box. This structure is required to do cross-compiling and to move/copy new binaries and ARM cross-compiled binaries.
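
For the curious, those offsets are simply the partition start sectors multiplied by the sector size (512 bytes). Below is a sketch of what tool.py offsets computes (my own illustration, not the actual tool), reading the MBR partition table of the image:

# offsets.py: print the byte offsets of the partitions inside a disk image.
import struct
import sys

SECTOR_SIZE = 512

with open(sys.argv[1], "rb") as img:
    mbr = img.read(SECTOR_SIZE)

# Four primary partition entries of 16 bytes each start at offset 0x1BE;
# the little-endian 32-bit start LBA sits at byte 8 of each entry.
for i in range(4):
    start_lba = struct.unpack_from("<I", mbr, 0x1BE + i * 16 + 8)[0]
    if start_lba:
        print("partition %d byte offset: %d" % (i + 1, start_lba * SECTOR_SIZE))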

5. Mounting Raspberry Pi image and booting from Vagrant using NFS

Using ./tool.py netboot image.img /dev/rdiskX [--ip=10.0.0.Y] you will copy just the boot partition onto a new, tiny SD card.
This new SD card with a fresh boot partition makes it possible to boot from the network: the RPi will download the root partition from Vagrant. In fact, Vagrant will share the custom RPi image (2015-09-24-raspbian-jessie-of2.img) via NFS with any Raspberry Pi connected to the same network that has a pre-loaded boot partition.

The idea behind this is to provision a custom RPi image massively, avoiding wasting time copying and creating an SD card for each Raspberry Pi. This method is also useful to provision software, configuration and packages or, in my case, to deliver cross-compiled software for ARM architectures massively.

$ diskutil list

// a new SD on disk3 will be used
$ diskutil unmountDisk /dev/disk3

$ ./tool.py netboot 2015-09-24-raspbian-jessie-of2.img /dev/rdisk3

2015-09-24-raspbian-jessie-of2.img /dev/rdisk3 10.0.0.101
The following partitions will be destroyed
/dev/disk3 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *4.0 GB     disk3
   1:             Windows_FAT_32 boot                    58.7 MB    disk3s1
   2:                      Linux                         3.9 GB     disk3s2

are you sure? y
OK
Unmount of all volumes on disk3 was successful
sudo dd if=2015-09-24-raspbian-jessie-of2.img of=/dev/rdisk3 bs=62914560 count=1
Password:
1+0 records in
1+0 records out
62914560 bytes transferred in 6.846875 secs (9188799 bytes/sec)
Disk /dev/rdisk3 ejected

Note that tool.py netboot automatically assigns the RPi 10.0.0.101 as its IP address, and 8.8.8.8 and 8.8.4.4 as DNS servers, on eth0.
You can check or modify these values beforehand by editing the cmdline.txt file placed in the RPi boot partition. You can edit it from a running Raspberry Pi or from a mounted partition, as the sketch below shows.
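
For example, here is a hypothetical helper that patches the ip= field that tool.py netboot writes into cmdline.txt (the field format is client:server:gateway:netmask:hostname:device:autoconf:dns1:dns2, as in the playbook shown later):

# patch_cmdline_ip.py: hypothetical helper to change the static IP injected
# into cmdline.txt; run it against a mounted boot partition.
import re

CMDLINE = "/opt/raspberrypi/boot/cmdline.txt"   # path once mounted in the Vagrant box
NEW_IP = "ip=10.0.0.102:10.0.0.1:10.0.0.1:255.255.255.0:rpi:eth0:off:8.8.4.4:8.8.8.8"

with open(CMDLINE) as f:
    cmdline = f.read()

with open(CMDLINE, "w") as f:
    f.write(re.sub(r"ip=\S+", NEW_IP, cmdline))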

6. Download and unzip oF (openFrameworks) into the rpi-build-and-boot folder

If you forgot to copy openFrameworks to your RPi, you can do it now: using the Ansible playbook.yml, oF will be copied to your RPi.

$ cd rpi-build-and-boot
$ wget http://openframeworks.cc/versions/v0.9.0/of_v0.9.0_linuxarmv7l_release.tar.gz
$ tar -zxf of_v0.9.0_linuxarmv7l_release.tar.gz

7. Update the Ansible playbook.yml

I had to tweak the playbook.yml to avoid warnings, add DNS to cmdline.txt and add iptables rules so the RPi gets Internet access through the host's shared NIC. Here is the updated Ansible playbook.yml:

---
- hosts: all
  remote_user: vagrant
  become: yes
  become_method: sudo
  vars:
    of_version: of_v0.9.0_linuxarmv7l_release
    raspbian_image: 2015-09-24-raspbian-jessie-of2.img
    offset_boot: 4194304
    offset_root: 62914560
  tasks:
    - apt: upgrade=dist update_cache=yes
    - file: path=/opt/raspberrypi state=directory

    - apt: name=nfs-kernel-server
    - lineinfile: dest=/etc/exports line="/opt/raspberrypi/root 10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)"

    - lineinfile: dest=/etc/cron.d/opt_raspberrypi_root line="* * * * * root /bin/mount /opt/raspberrypi/root" create=yes

    - service: name=nfs-kernel-server state=restarted

    - apt: name=build-essential
    - apt: name=pkg-config
    - apt: name=git
    - apt: name=python-pip
    - apt: name=python-dev
    - apt: name=unzip
    - apt: name=gawk
    - apt: name=libudev-dev

    - apt: name=sshpass

    - pip: name=ansible
    - pip: name=paramiko
    - pip: name=PyYAML
    - pip: name=jinja2
    - pip: name=httplib2

    - apt: name=tinyproxy
    - lineinfile: dest="/etc/tinyproxy.conf" line="Allow 10.0.0.0/8"
    - service: name=tinyproxy state=restarted

    - file: path=/opt/raspberrypi/boot state=directory
    - file: path=/opt/raspberrypi/root state=directory

    - mount: src="/vagrant/{{raspbian_image}}" name="/opt/raspberrypi/boot" fstype="auto"  opts="loop,offset={{offset_boot}},noauto" state="mounted"
    - mount: src="/vagrant/{{raspbian_image}}" name="/opt/raspberrypi/root" fstype="auto"  opts="loop,offset={{offset_root}},noauto" state="mounted"
    - lineinfile: dest=/etc/rc.local line="mount /opt/raspberrypi/root" insertbefore="exit 0"
    - lineinfile: dest=/etc/rc.local line="mount /opt/raspberrypi/boot" insertbefore="exit 0"

    # the rpi is unbootable unless it is told not to mount the root filesystem from the card! also adds dns to cmdline.txt and the iptables filter.
    - lineinfile: dest=/opt/raspberrypi/root/etc/fstab regexp="^\/dev\/mmcblk0p2" state="absent"
    - replace: dest=/opt/raspberrypi/boot/cmdline.txt regexp="rootwait$" replace="dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 elevator=deadline root=/dev/nfs rootfstype=nfs nfsroot=10.0.0.1:/opt/raspberrypi/root,udp,vers=3 rw fsck.repair=no rootwait ip=10.0.0.101:10.0.0.1:10.0.0.1:255.255.255.0:rpi:eth0:off:8.8.4.4:8.8.8.8 smsc95xx.turbo_mode=N" backup="no"

    # build helpies
    - file: path=/opt/RPI_BUILD_ROOT state=directory
    - file: src=/opt/raspberrypi/root/etc dest=/opt/RPI_BUILD_ROOT/etc state=link
    - file: src=/opt/raspberrypi/root/lib dest=/opt/RPI_BUILD_ROOT/lib state=link
    - file: src=/opt/raspberrypi/root/opt dest=/opt/RPI_BUILD_ROOT/opt state=link
    - command: rsync -avz /opt/raspberrypi/root/usr/ /opt/RPI_BUILD_ROOT/usr

    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libanl.so.1           dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libanl.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libBrokenLocale.so.1  dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libBrokenLocale.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libcidn.so.1          dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libcidn.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libcrypt.so.1         dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libcrypt.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libdbus-1.so.3.8.13   dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libdbus-1.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libdl.so.2            dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libdl.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libexpat.so.1.6.0     dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libexpat.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libglib-2.0.so.0      dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libglib-2.0.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/liblzma.so.5.0.0      dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/liblzma.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libm.so.6             dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libm.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libnsl.so.1           dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libnsl.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libnss_compat.so.2    dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libnss_compat.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libnss_dns.so.2       dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libnss_dns.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libnss_files.so.2     dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libnss_files.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libnss_hesiod.so.2    dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libnss_hesiod.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libnss_nisplus.so.2   dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libnss_nisplus.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libnss_nis.so.2       dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libnss_nis.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libpcre.so.3          dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libpcre.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libpng12.so.0         dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libpng12.so.0 state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libresolv.so.2        dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libresolv.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libthread_db.so.1     dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libthread_db.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libusb-0.1.so.4       dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libusb-0.1.so.4 state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libusb-1.0.so.0.1.0   dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libusb-1.0.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libutil.so.1          dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libutil.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libz.so.1.2.8         dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libz.so state=link
    - file: src=/opt/raspberrypi/root/lib/arm-linux-gnueabihf/libudev.so.1.5.0      dest=/opt/raspberrypi/root/usr/lib/arm-linux-gnueabihf/libudev.so state=link

    - file: path=/tmp/CROSS_BUILD_TOOLS state=directory
    - copy: src=build_cross_gcc.sh dest=/tmp/CROSS_BUILD_TOOLS/build_cross_gcc.sh mode=0744
    - shell: /tmp/CROSS_BUILD_TOOLS/build_cross_gcc.sh chdir=/tmp/CROSS_BUILD_TOOLS creates=/opt/cross/bin/arm-linux-gnueabihf-g++

    - lineinfile: dest="/home/vagrant/.profile" line="export GST_VERSION=1.0"
    - lineinfile: dest="/home/vagrant/.profile" line="export RPI_ROOT=/opt/raspberrypi/root"
    #######- lineinfile: dest="/home/vagrant/.profile" line="export RPI_BUILD_ROOT=/opt/RPI_BUILD_ROOT"
    - lineinfile: dest="/home/vagrant/.profile" line="export TOOLCHAIN_ROOT=/opt/cross/bin"
    - lineinfile: dest="/home/vagrant/.profile" line="export PLATFORM_OS=Linux"
    - lineinfile: dest="/home/vagrant/.profile" line="export PLATFORM_ARCH=armv7l"
    - lineinfile: dest="/home/vagrant/.profile" line="export PKG_CONFIG_PATH=$RPI_ROOT/usr/lib/arm-linux-gnueabihf/pkgconfig:$RPI_ROOT/usr/share/pkgconfig:$RPI_ROOT/usr/lib/pkgconfig"

    - unarchive: src={{of_version}}.tar.gz dest=/opt/raspberrypi/root/opt creates=/opt/raspberrypi/root/opt/{{of_version}}
    - file: src={{of_version}} dest=/opt/raspberrypi/root/opt/openframeworks state=link
    - file: src=/opt/raspberrypi/root/opt/openframeworks dest=/opt/openframeworks state=link
    - command: chown -R vagrant /opt/raspberrypi/root/opt/{{of_version}}

    # forwarding traffic from eth0 (internet) to eth1 (rpi connection) with iptables
    - replace: dest=/etc/sysctl.conf regexp="^#net.ipv4.ip_forward=1$" replace="net.ipv4.ip_forward=1"
    - shell: /bin/echo 1 > /proc/sys/net/ipv4/ip_forward
    - command: iptables -A FORWARD -o eth0 -i eth1 -m conntrack --ctstate NEW -j ACCEPT
    - command: iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    - command: iptables -A POSTROUTING -t nat -j MASQUERADE
    - shell: /sbin/iptables-save | /usr/bin/tee /etc/iptables.backup
    - service: name=ufw state=restarted
  handlers:

8. Create the Vagrant box

$ cd rpi-build-and-boot
$ vagrant up --provider virtualbox

Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'ubuntu/trusty64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'ubuntu/trusty64' is up to date...
==> default: A newer version of the box 'ubuntu/trusty64' is available! You currently
==> default: have version '20160311.0.0'. The latest is version '20160406.0.0'. Run
==> default: `vagrant box update` to update.
==> default: Setting the name of the VM: rpi-build-and-boot_default_1460455393206_79951
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
    default: Adapter 2: bridged
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
...
TASK [service] *****************************************************************
changed: [default]

PLAY RECAP *********************************************************************
default                    : ok=82   changed=76   unreachable=0    failed=0

… let's have coffee ;)

After that, restart the newly created Vagrant box.

$ vagrant halt
==> default: Attempting graceful shutdown of VM...

$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'ubuntu/trusty64' is up to date...
==> default: A newer version of the box 'ubuntu/trusty64' is available! You currently
==> default: have version '20160311.0.0'. The latest is version '20160406.0.0'. Run
==> default: `vagrant box update` to update.
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Specific bridge 'ask' not found. You may be asked to specify
==> default: which network to bridge to.
==> default: Available bridged network interfaces:
1) en0: Wi-Fi (AirPort)
2) en1: Thunderbolt 1
3) en2: Thunderbolt 2
4) p2p0
5) awdl0
6) en4: mac-eth0
==> default: When choosing an interface, it is usually the one that is
==> default: being used to connect to the internet.
    default: Which interface should the network bridge to? 6
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
    default: Adapter 2: bridged
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
==> default: Machine booted and ready!
GuestAdditions 5.0.16 running --- OK.
==> default: Checking for guest additions in VM...
==> default: Configuring and enabling network interfaces...
==> default: Mounting shared folders...
    default: /vagrant => /Users/Chilcano/1github-repo/rpi-build-and-boot
==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: flag to force provisioning. Provisioners marked to run always will still run.

Connect your Raspberry Pi (with the SD card holding the copied boot partition) to your host PC using an Ethernet cable (in my case a Mac OS X), wait a few seconds and check whether the Raspberry Pi has booted from the root partition shared via NFS from the Vagrant box.

$ ping raspberrypi.local
PING raspberrypi.local (10.0.0.101): 56 data bytes
64 bytes from 10.0.0.101: icmp_seq=0 ttl=64 time=0.386 ms
64 bytes from 10.0.0.101: icmp_seq=1 ttl=64 time=0.471 ms
^C
--- raspberrypi.local ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.386/0.428/0.471/0.042 ms

Chilcano@Pisc0 : ~/1github-repo/rpi-build-and-boot
$ ping 10.0.0.101
PING 10.0.0.101 (10.0.0.101): 56 data bytes
64 bytes from 10.0.0.101: icmp_seq=0 ttl=64 time=0.450 ms
64 bytes from 10.0.0.101: icmp_seq=1 ttl=64 time=0.591 ms
^C
--- 10.0.0.101 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.450/0.520/0.591/0.071 ms 

And check that the Raspberry Pi is reachable, this time from inside the Vagrant box.

$ vagrant ssh
Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-85-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Tue Apr 12 10:55:29 UTC 2016

  System load:  0.07               Processes:           129
  Usage of /:   11.8% of 39.34GB   Users logged in:     0
  Memory usage: 2%                 IP address for eth0: 10.0.2.15
  Swap usage:   0%                 IP address for eth1: 10.0.0.1

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud


Last login: Tue Apr 12 10:27:47 2016 from 10.0.2.2

vagrant@vagrant-ubuntu-trusty-64:~$ ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:c9:24:d6
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fec9:24d6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:665 errors:0 dropped:0 overruns:0 frame:0
          TX packets:427 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:67162 (67.1 KB)  TX bytes:54225 (54.2 KB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:b3:e9:a4
          inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:feb3:e9a4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:29474 errors:0 dropped:0 overruns:0 frame:0
          TX packets:60947 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5247033 (5.2 MB)  TX bytes:70887820 (70.8 MB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vagrant@vagrant-ubuntu-trusty-64:~$ ping 10.0.0.101
PING 10.0.0.101 (10.0.0.101) 56(84) bytes of data.
64 bytes from 10.0.0.101: icmp_seq=1 ttl=64 time=0.536 ms
64 bytes from 10.0.0.101: icmp_seq=2 ttl=64 time=0.745 ms
64 bytes from 10.0.0.101: icmp_seq=3 ttl=64 time=0.910 ms
^C
--- 10.0.0.101 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.536/0.730/0.910/0.154 ms
vagrant@vagrant-ubuntu-trusty-64:~$ ping google.com
PING google.com (216.58.211.206) 56(84) bytes of data.
64 bytes from mad01s25-in-f14.1e100.net (216.58.211.206): icmp_seq=1 ttl=63 time=14.1 ms
64 bytes from mad01s25-in-f14.1e100.net (216.58.211.206): icmp_seq=2 ttl=63 time=13.5 ms
64 bytes from mad01s25-in-f14.1e100.net (216.58.211.206): icmp_seq=3 ttl=63 time=13.9 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2008ms
rtt min/avg/max/mdev = 13.521/13.883/14.137/0.296 ms

9. Check if ARM cross-compiling works in the VirtualBox guest

Check that the cross-compiling variables have been defined.

vagrant@vagrant-ubuntu-trusty-64:~$ cat /home/vagrant/.profile

...
export GST_VERSION=1.0
export RPI_ROOT=/opt/raspberrypi/root
export TOOLCHAIN_ROOT=/opt/cross/bin
export PLATFORM_OS=Linux
export PLATFORM_ARCH=armv7l
export PKG_CONFIG_PATH=$RPI_ROOT/usr/lib/arm-linux-gnueabihf/pkgconfig:$RPI_ROOT/usr/share/pkgconfig:$RPI_ROOT/usr/lib/pkgconfig
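
A small Python sketch (a hypothetical helper, using the paths from the playbook) can double-check the environment in one go:

# check_cross_env.py: sanity-check the cross-compiling environment inside the box.
import os

for var in ("GST_VERSION", "RPI_ROOT", "TOOLCHAIN_ROOT", "PLATFORM_OS", "PLATFORM_ARCH"):
    print("%s=%s" % (var, os.environ.get(var, "<missing>")))

gxx = os.path.join(os.environ.get("TOOLCHAIN_ROOT", "/opt/cross/bin"),
                   "arm-linux-gnueabihf-g++")
print("cross g++ present: %s" % os.path.exists(gxx))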

Check that the RPi image partitions have been mounted.

vagrant@vagrant-ubuntu-trusty-64:~$ ll /opt/raspberrypi/boot/
vagrant@vagrant-ubuntu-trusty-64:~$ ll /opt/raspberrypi/root/

And check that oF works by compiling an example.

$ make -C /opt/openframeworks/apps/myApps/emptyExample

Conclusions

  • As you have seen above, using Vagrant, Ansible and Python you can easily build a provisioning system for massive delivery of binaries/packages to Raspberry Pis or mobile devices.
  • You could also replace the openFrameworks toolkit (http://openframeworks.cc) used here for ARM cross-compiling with another similar tool if you have a different target device; to do that, just modify the related part of the Ansible playbook.

Finally, in the next blog post, I will explain how to cross-compile the Kismet tool (https://www.kismetwireless.net/download.shtml) from source for Raspberry Pi (ARM).

I hope you have enjoyed it.
See you soon.

References:

  1. Loop-mounting partitions from a disk image:
  2. Ansible documentation:
  3. TCPDump cross-compiling for Android:
  4. ARM Cross Compiling with Mac OS X:
  5. Pre-built environment for Raspberry Pi cross-compiling and NFS booting:
  6. How to Build a GCC Cross-Compiler:
  7. A Vagrant plugin to keep your VirtualBox Guest Additions up to date:
  8. openFrameworks – an open source C++ toolkit:
  9. Vboxvfs lacks support for symbolic / hard links
  10. Cross compiler for OF 0.9.0/Jessie/arm6/RPi1
  11. How to cross compile an application for OpenWRT
  12. Cross-Compiling or Building Android tcpdump?

PaaS or micro-PaaS for Microservices? – a simple technology review

"How do you eat an elephant? One bite at a time." This phrase makes sense, and everybody understands it, but sometimes it is forgotten.
Happily, some technology companies have managed to internalize this phrase in their processes and products.

PaaS for Microservices

Some examples are shown in the image above.

Many people and many companies make a big mistake when they focus entirely on big goals. If you have a big goal, you will probably spend a lot of time and effort on achieving it.

Well, in this blog post I will explain how this simple concept is being applied to PaaS (Platform as a Service) today, and I will mention some open source projects!

1. Key concepts

  • Agile
    • You aren't avoiding the big goal; you are solving the problem step by step. And to do so, you need to be organized, forget obsolete methodologies and not waste time.
    • Pre-shaved Yaks (https://www.flickr.com/photos/zimki/243779431/in/photostream).
    • In other words, automate everything you can, organize small teams to create small and independent products, etc.
  • K-I-S-S (Keep It simple, stupid)
    • The road is long and difficult (the learning curve is steep), so make it easy and enjoyable; and if a hard stretch is unavoidable, try to automate it.
  • Don’t reinvent the wheel
    • Scrum, Kanban, TDD, Unix, Linux, etc.: all of these are things that worked before and work now. Please, use them.
  • Free as in beer
    • Free and open source

2. What is PaaS (Platform as a Service)?

PaaS definition - Wikipedia

Wikipedia mentions that Zimki was the first PaaS, released in 2006. Zimki was an end-to-end JavaScript web application development and utility computing platform that removed all the repetitive tasks encountered when creating web applications and web services. After Zimki, others were born:

  • Google App Engine
  • WSO2 Stratos (Apache Stratos)
  • Redhat’s OpenShift
  • Salesforce's Heroku
  • Jelastic
  • Etc.

I ask myself: are they really suitable for creating microservices today? In my opinion, yes, they are, but only after some heavy lifting and re-designing.
There is good news here, because the main software companies are working on exactly that, making their platforms lighter, more agile and more versatile; some are focused on the cloud, others on-premise, on containers, or on RAD (rapid application development). Just check out OpenShift, Cloud Foundry, etc.

3. What’s out there?

Well, after searching the Internet, here is a first version of my PaaS list.

PaaS

  1. Zato
    • https://zato.io
    • Open-source ESB, SOA, REST, APIs and cloud integrations in Python.
  2. Flynn
    • https://flynn.io
    • Runs anything that can run on Linux, not just stateless web apps. Includes built-in database appliances (just PostgreSQL right now) and handles TCP traffic as well as HTTP and HTTPS.
    • Supports Java, Go, Node.js, PHP, Python and Ruby as languages.
  3. Deis
    • http://deis.io
    • Open source PaaS that makes it easy to deploy and manage applications on your own servers. Deis builds upon Docker and CoreOS to provide a lightweight PaaS with a Heroku-inspired workflow.
    • Deis can deploy any language or framework using Docker. If you don’t use Docker, Deis also includes Heroku buildpacks for Ruby, Python, Node.js, Java, Clojure, Scala, Play, PHP, Perl, Dart and Go.
  4. Tsuru
  5. Nanobox
    • https://desktop.nanobox.io
    • https://github.com/nanobox-io/nanobox
    • Nanobox allows you to stop configuring environments and just code. It guarantees that any project you start will work the same for anyone else collaborating on the project. When it’s time to launch the project, you’ll know that your production app will work, because it already works on nanobox.
    • Nanobox detects your app type and automatically configures the environment and installs everything your app needs to run (more than 15 programming languages and frameworks).
  6. Otto
    • https://ottoproject.io
    • Development and Deployment Made Easy (successor to Vagrant).
    • Otto knows how to develop and deploy any application on any cloud platform, all controlled with a single consistent workflow to maximize the productivity of you and your team.
  7. Rack
    • https://convox.com
    • https://github.com/convox/rack
    • A Convox Rack is a private Platform-as-a-Service (PaaS). It gives you a place to deploy your web applications and mobile backends without having to worry about managing servers, writing complex deployment recipes, or monitoring process uptime. We call this platform a “Rack”.
  8. Empire
    • http://engineering.remind.com
    • https://github.com/remind101/empire
    • Empire is a control layer on top of Amazon EC2 Container Service (ECS) that provides a Heroku like workflow. It conforms to a subset of the Heroku Platform API, which means you can use the same tools and processes that you use with Heroku, but with all the power of EC2 and Docker.
  9. Dokku
    • http://dokku.viewdocs.io/dokku
    • The smallest PaaS implementation you’ve ever seen.
    • Powered by Docker, you can install Dokku on any hardware. Use it on inexpensive cloud providers.
    • Dokku by default does not provide any datastores such as MySQL or PostgreSQL. You will need to install plugins to handle that.
  10. Gondor
    • https://gondor.io
    • Managed Python hosting with command-line deployment and support for PostgreSQL, Redis, Celery, Elasticsearch and more.
  11. AppFog
    • https://www.ctl.io/appfog
    • AppFog, CenturyLink’s Platform-as-a-Service (PaaS) based on Cloud Foundry, enables developers to focus on writing great cloud-based applications without having to worry about managing the underlying infrastructure.

Microservices frameworks

  1. Dropwizard
    • http://www.dropwizard.io
    • Dropwizard pulls together stable, mature libraries from the Java ecosystem into a simple, light-weight package that lets you focus on getting things done.
    • Dropwizard has out-of-the-box support for sophisticated configuration, application metrics, logging, operational tools, and much more, allowing you and your team to ship a production-quality web service in the shortest time possible.
  2. Ratpack
    • https://ratpack.io
    • Ratpack is a set of Java libraries for building modern HTTP applications. It provides just enough for writing practical, high performance, apps. It is built on Java 8, Netty and reactive principles.
  3. Spark
    • http://sparkjava.com
    • Spark – A micro framework for creating web applications in Java 8 with minimal effort
  4. Vertx
    • http://vertx.io
    • Vert.x is event driven and non blocking. This means your app can handle a lot of concurrency using a small number of kernel threads. Vert.x lets your app scale with minimal hardware.
    • You can use Vert.x with multiple languages including Java, JavaScript, Groovy, Ruby, and Ceylon.
  5. Seneca
    • http://senecajs.org
    • Seneca is a microservices toolkit for Node.js. It helps you write clean, organized code that you can scale and deploy at any time.
  6. Kong
    • https://getkong.org
    • Kong is a scalable, open source API Layer (also known as an API Gateway, or API Middleware). Kong runs in front of any RESTful API and is extended through Plugins, which provide extra functionalities and services beyond the core platform.
    • Kong is built on top of reliable technologies like NGINX and Apache Cassandra, and provides you with an easy to use RESTful API to operate and configure the system.
  7. Unirest
    • http://unirest.io
    • Unirest is a set of lightweight HTTP libraries available in multiple languages (Node.js, Ruby, PHP, Java, Python, Objective-C, .Net).

4. Conclusions

  • As you can see, the trend is to provide a set of tools that make application development easier, on-premise and/or on-cloud. The idea behind them is to remove all the repetitive tasks encountered when creating web applications and web services (aspects related to infrastructure and operations, from setting up servers to scaling, configuration, security and backups); this is where the Pre-Shaved Yaks concept introduced above comes in.
  • On the other side, there are custom PaaS built from existing lightweight frameworks using Docker containers; see the references below. This confirms that, right now, there are mature tools and frameworks ready to be used in the construction of these platforms.

5. References


Running multi-container (WSO2 BAM & MAC Address Lookup) Docker Application using Docker Compose

In my four previous blog posts I explained each part of this Proof-of-Concept; they are:

  1. Analysing Wireless traffic in real time with WSO2 BAM, Apache Cassandra, Complex Event Processor (CEP Siddhi), Apache Thrift and Python:
  2. A Python Microservice in a Docker Container (MAC Address Manufacturer Lookup):

Now, in this blog post, I'm going to explain how to run the two Docker containers, WSO2 BAM and MAC Address Manufacturer Lookup, together using Docker Compose.

// clone 2 repositories
$ git clone https://github.com/chilcano/docker-wso2bam-kismet-poc.git
$ cd docker-wso2bam-kismet-poc
$ git clone https://github.com/chilcano/wso2bam-wifi-thrift-cassandra-poc.git

// run docker compose
$ docker-compose up -d

Starting dockerwso2bamkismetpoc_mac-manuf_1
Starting dockerwso2bamkismetpoc_wso2bam-dashboard-kismet_1

Below is a diagram explaining this.

802.11 traffic capture PoC - Docker Compose

Now, if you want to run everything together in a few minutes, just run the Docker Compose YAML file.
For a deeper explanation, follow the instructions in the README.md (https://github.com/chilcano/docker-wso2bam-kismet-poc).

If everything is OK, you will get a huge amount of data (WiFi traffic) stored in Apache Cassandra and a simple dashboard showing the captured MAC addresses and the manufacturers of the wireless devices (PCs, mobiles, WiFi access points, tablets, etc.) around your Raspberry Pi.
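
If you prefer to verify the stack from a script instead of opening the dashboard, a rough sketch like the one below can poll the MAC manuf microservice until it answers (it assumes the container publishes port 5443, as in my previous post; adjust the URL to your docker-compose.yml):

# wait_for_manuf.py: poll the mac-manuf REST API until the container is up.
import time
import requests

URL = "https://localhost:5443/chilcano/api/manuf/00-50-ca-fe-ca-fe"

for attempt in range(30):
    try:
        resp = requests.get(URL, verify=False)  # self-signed 'adhoc' certificate
        print("mac-manuf is up (HTTP %d)" % resp.status_code)
        break
    except requests.exceptions.ConnectionError:
        time.sleep(2)
else:
    print("mac-manuf did not come up in time")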

Visualising 802.11 captured traffic with the MAC Address Manufacturer

I hope you find these blog posts useful.
Bye.


A MAC Address Manufacturer DB and RESTful Python Microservice in a Docker Container

A MAC address, also called a physical address, is a unique identifier assigned to every network interface for communications on the physical network segment. In other words, you can identify the manufacturer of your device through its physical address.

There are different tools on the Internet that allow you to identify the manufacturer from the MAC address. Since in my three previous posts I explained how to capture wireless traffic and all its MAC addresses, in this post I will explain how to implement a Docker container exposing a REST API to get the manufacturer for a captured MAC address.

As everything should be lightweight, minimalist, easy to use and self-contained, I'm going to use the following:

  • Python as a lightweight and powerful programming language.
  • Flask (http://flask.pocoo.org) is a microframework for Python based on Werkzeug and Jinja 2. I will use Flask to implement a mini-web application.
  • SQLAlchemy (http://www.sqlalchemy.org/) is a Python SQL toolkit and ORM.
  • SQLite3 (https://www.sqlite.org) is a software library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine.
  • pyOpenSSL library to work with X.509 certificates. Required to start the embedded Webserver on HTTPS (TLS).
  • CORS extension for Flask (https://flask-cors.readthedocs.org) useful to solve cross-domain Ajax request issues.

This Docker container provides a microservice (REST API) for MAC address manufacturer resolution. It is part of the "Everything generates Data: Capturing WIFI Anonymous Traffic using Raspberry Pi and WSO2 BAM" blog series (Part I, Part II & Part III), but you can use it independently as part of another set of Docker containers.

This Docker container will work in this scenario, as shown below:

The MAC Address Manufacturer Lookup Docker Container

Then, let’s do it.

I. Preparing the Python development environment in Mac OSX

Follow this guide to set up your Python development environment on your Mac OSX: https://github.com/chilcano/how-tos/blob/master/Preparing-Python-Dev-Env-Mac-OSX.md

II. Creating a MAC Address Manufacturer DB

Several MAC address lookup tools exist on the Internet; in fact, the OUI prefixes used to identify MAC addresses are publicly available.

But in this case I am going to use the MAC address list from Wireshark (https://www.wireshark.org/tools/oui-lookup.html).
Wireshark is a popular network protocol analyzer, a.k.a. network sniffer, and it uses this MAC address list internally to identify the manufacturer of a NIC.
I'm going to download it and build a REST API around it. Below are the steps.

1) Downloading the Wireshark MAC Addresses Manufacturer file and loading it into a DB

Using the Python script below, I download the Wireshark MAC address list into a file and compute its hash. The idea is to parse the file and load it into a minimalist DB.
I will use a SQLite database with a single table into which all the information is loaded. The table structure will be:

mac             String      # The original MAC Address
manuf           String      # The original Manufacturer name
manuf_desc      String      # The Manufacturer description, if exists.

Here is the Python script used to do that: mac_manuf_wireshark_file.py
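
For reference, here is a rough sketch of what such a loader can look like; the download URL, the tab-separated layout of the manuf file (prefix, short name, optional description) and the import of the MacAddressManuf model shown later in this post are my assumptions, not the actual script:

# load_manuf_db.py: hypothetical equivalent of mac_manuf_wireshark_file.py.
import urllib2
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from mac_manuf_table_def import Base, MacAddressManuf

MANUF_URL = "https://www.wireshark.org/download/automated/data/manuf"  # assumed location

engine = create_engine("sqlite:///manuf/mac_address_manuf.db")
Base.metadata.create_all(engine)        # create the MacAddressManuf table if missing
session = sessionmaker(bind=engine)()

for line in urllib2.urlopen(MANUF_URL):
    line = line.strip()
    if not line or line.startswith("#"):    # skip comments and blank lines
        continue
    parts = line.split("\t")
    if len(parts) < 2:
        continue
    row = MacAddressManuf(parts[1], parts[2] if len(parts) > 2 else "")
    row.mac = parts[0]                      # e.g. "00:50:CA"
    session.merge(row)                      # insert-or-update on the primary key
session.commit()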

III. Exposing the MAC Address Manufacturer DB as a REST API

After creating the database, the next step is to expose the data through a simple REST API. The idea is to make a GET call to the API with a MAC address and get the manufacturer as the response.

1) Defining the API

The best way to define a REST API and its contract is using the Swagger language (http://swagger.io). The idea is to create documentation for the REST API explaining what resources are available or exposed, to write request and response samples, etc.
In this scenario I'm going to define the API in a simple way, and I'm going to use JSON for the request and response.
The API definition is below.

POST    /chilcano/api/manuf                 # Add a new Manufacturer
PUT     /chilcano/api/manuf                 # Update an existing Manufacturer
GET     /chilcano/api/manuf/{macAddress}    # Find Manufacturer by MAC Address

In this Proof-of-Concept I will implement only the GET resource for the API.

2) Implementing the REST API

I have created two Python scripts to implement the REST API.
The first one (mac_manuf_table_def.py) is just a model of the MacAddressManuf table.

#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# file name: mac_manuf_table_def.py
#

from sqlalchemy import create_engine, ForeignKey
from sqlalchemy import Column, Date, Integer, String
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('sqlite:///manuf/mac_address_manuf.db', echo=True)
Base = declarative_base()

#
# Model for 'MacAddressManuf':
# used for API Rest to get access to data from DB
#
class MacAddressManuf(Base):
    """"""
    __tablename__ = "MacAddressManuf"

    mac = Column(String, primary_key=True)
    manuf = Column(String)
    manuf_desc = Column(String)

    def __init__(self, manuf, manuf_desc):
        """"""
        self.manuf = manuf
        self.manuf_desc = manuf_desc

And the second Python script (mac_manuf_api_rest.py) implements the REST API; you can review it below.

#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# file name: mac_manuf_api_rest.py
#

import os, re
from flask import Flask, jsonify
from flask.ext.cors import CORS
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from mac_manuf_table_def import MacAddressManuf

ROOT_DIR = "manuf"
FINAL_MANUF_DB_FILENAME = "mac_address_manuf.db"
HTTPS_ENABLED = "true"

engine = create_engine("sqlite:///" + os.path.join(ROOT_DIR, FINAL_MANUF_DB_FILENAME))
Session = sessionmaker(bind=engine)

app = Flask(__name__)
cors = CORS(app, resources={r"/chilcano/api/*": {"origins": "*"}})

# 
# API Rest:
#   i.e. curl -i http://localhost:5000/chilcano/api/manuf/00:50:5a:e5:6e:cf
#   i.e. curl -ik https://localhost:5443/chilcano/api/manuf/00:50:5a:e5:6e:cf
#
@app.route("/chilcano/api/manuf/<string:macAddress>", methods=["GET"])
def get_manuf(macAddress):
    try:
        if re.search(r'^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})$', macAddress.strip(), re.I).group():
            # expected MAC formats : a1-b2-c3-p4-q5-r6, a1:b2:c3:p4:q5:r6, A1:B2:C3:P4:Q5:R6, A1-B2-C3-P4-Q5-R6
            mac1 = macAddress[:2] + ":" + macAddress[3:5] + ":" + macAddress[6:8]
            mac2 = macAddress[:2] + "-" + macAddress[3:5] + "-" + macAddress[6:8]
            mac3 = mac1.upper()
            mac4 = mac2.upper()
            session = Session()
            result = session.query(MacAddressManuf).filter(MacAddressManuf.mac.in_([mac1, mac2, mac3, mac4])).first()
            try:
                return jsonify(mac=result.mac, manuf=result.manuf, manuf_desc=result.manuf_desc)
            except:
                return jsonify(error="The MAC Address '" + macAddress + "' does not exist"), 404
        else:
            return jsonify(mac=macAddress, manuf="Unknown", manuf_desc="Unknown"), 404
    except:
        return jsonify(error="The MAC Address '" + macAddress + "' is malformed"), 400

if __name__ == "__main__":
    if HTTPS_ENABLED == "true":
        # 'adhoc' means auto-generate the certificate and keypair
        app.run(host="0.0.0.0", port=5443, ssl_context="adhoc", threaded=True, debug=True)
    else:
        app.run(host="0.0.0.0", port=5000, threaded=True, debug=True)

This second Python script performs the following tasks:

  • Calls the Model (mac_manuf_table_def.py).
  • Connects to SQLite Database and creates a Session.
  • Runs a query by using macAddress as parameter.
  • And creates a JSON response with the query’s result.

3) Running and Testing the REST API

We can use the Flask built-in HTTP server just for testing and debugging. To run the above Python web application (REST API), just execute the Python script. Note that I actually have three versions (py-1.0, py-1.1 and py-latest).

Chilcano@Pisc0 : ~/1github-repo/docker-mac-address-manuf-lookup/python/1.0
$ python mac_manuf_api_rest.py
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger pin code: 258-876-642

Chilcano@Pisc0 : ~/1github-repo/docker-mac-address-manuf-lookup/python/1.1
$ python mac_manuf_api_rest.py
 * Running on https://0.0.0.0:5443/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger pin code: 258-876-642

Chilcano@Pisc0 : ~/1github-repo/docker-mac-address-manuf-lookup/python/latest
$ python mac_manuf_api_rest.py
 * Running on https://0.0.0.0:5443/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger pin code: 258-876-642

Now, from another terminal, call the REST API using curl. I'm going to use only the py-latest version:

$ curl -ik https://127.0.0.1:5443/chilcano/api/manuf/00-50:Ca-Fe-Ca-Fe
HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 93
Server: Werkzeug/0.11.4 Python/2.7.11
Date: Thu, 03 Mar 2016 17:37:45 GMT

{
  "mac": "00:50:CA",
  "manuf": "NetToNet",
  "manuf_desc": "# NET TO NET TECHNOLOGIES"
}

$ curl -ik https://127.0.0.1:5443/chilcano/api/manuf/11-50:Ca-Fe-Ca-Fe
HTTP/1.0 404 NOT FOUND
Content-Type: application/json
Content-Length: 67
Server: Werkzeug/0.11.4 Python/2.7.11
Date: Thu, 03 Mar 2016 17:38:49 GMT

{
  "error": "The MAC Address '11-50:Ca-Fe-Ca-Fe' does not exist"
}

$ curl -ik https://127.0.0.1:5443/chilcano/api/manuf/00-50:Ca-Fe-Ca-Fe---
HTTP/1.0 400 BAD REQUEST
Content-Type: application/json
Content-Length: 68
Server: Werkzeug/0.11.4 Python/2.7.11
Date: Thu, 03 Mar 2016 17:39:23 GMT

{
  "error": "The MAC Address '00-50:Ca-Fe-Ca-Fe---' is malformed"
}
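
The same checks can be made from Python. Here is a small sketch using the requests package (not part of the original scripts); verify=False is needed because of the auto-generated 'adhoc' certificate:

# test_api.py: the Python equivalent of the curl calls above.
import requests

resp = requests.get("https://127.0.0.1:5443/chilcano/api/manuf/00-50:Ca-Fe-Ca-Fe",
                    verify=False)
print(resp.status_code)   # 200 on success, 404/400 for unknown/malformed MACs
print(resp.json())        # {u'mac': u'00:50:CA', u'manuf': u'NetToNet', ...}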

But if you want to run it in production, the Flask documentation (http://flask.pocoo.org/docs/0.10/deploying/wsgi-standalone) recommends these standalone WSGI containers (a Gunicorn sketch follows the list):

  • Gunicorn
  • Tornado
  • Gevent
  • Twisted Web
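
As an example, here is a minimal sketch that embeds the Flask app in Gunicorn using its documented custom-application pattern (a hypothetical launcher, assuming gunicorn is installed; it is not part of the repository):

# run_gunicorn.py: start the mac-manuf Flask app under Gunicorn.
import gunicorn.app.base
from mac_manuf_api_rest import app

class StandaloneApplication(gunicorn.app.base.BaseApplication):
    """Embed Gunicorn so the service starts with `python run_gunicorn.py`."""
    def __init__(self, wsgi_app, options=None):
        self.options = options or {}
        self.application = wsgi_app
        super(StandaloneApplication, self).__init__()

    def load_config(self):
        for key, value in self.options.items():
            self.cfg.set(key, value)   # push our options into Gunicorn's config

    def load(self):
        return self.application

if __name__ == "__main__":
    # Plain HTTP here; terminate TLS in front, or pass certfile/keyfile options.
    StandaloneApplication(app, {"bind": "0.0.0.0:5000", "workers": 4}).run()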

IV. Putting everything in a Docker Container

1) The Dockerfile

The latest version of the MAC Address Manufacturer Lookup Docker container is py-latest (aka Docker MAC Manuf) and has the following Dockerfile:

# Dockerfile to MAC Address Manufacturer Lookup container.

FROM python:2.7

MAINTAINER Roger CARHUATOCTO <chilcano at intix dot info>

RUN pip install --upgrade pip
RUN pip install unicodecsv
RUN pip install Flask
RUN pip install sqlalchemy
RUN pip install pyOpenSSL
RUN pip install -U flask-cors

# Allocate the 5000/5443 to run a HTTP/HTTPS server
EXPOSE 5000 5443

COPY mac_manuf_wireshark_file.py /
COPY mac_manuf_table_def.py /
COPY mac_manuf_api_rest.py /

RUN python mac_manuf_wireshark_file.py
CMD python mac_manuf_api_rest.py

2) Using the Docker Container

Clone the Github repository and build it.

$ git clone https://github.com/chilcano/docker-mac-address-manuf-lookup.git
$ cd docker-mac-address-manuf-lookup
$ docker build --rm -t chilcano/mac-manuf:py-latest python/latest/.

Or Pull from Docker Hub.

$ docker login
$ docker pull chilcano/mac-manuf-lookup:py-latest
$ docker images
REPOSITORY                  TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
chilcano/mac-manuf-lookup   py-latest           19d33a4f3ec1        16 minutes ago      714.8 MB

Run and check the container.

$ docker run -dt --name=mac-manuf-py-latest -p 5443:5443/tcp chilcano/mac-manuf-lookup:py-latest

$ docker ps
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS              PORTS                              NAMES
4b0bb0b5b518        chilcano/mac-manuf-lookup:py-latest   "/bin/sh -c 'python m"   2 minutes ago       Up 2 minutes        5000/tcp, 0.0.0.0:5443->5443/tcp   mac-manuf-py-latest

Get shell access to the container to check that the SQLite DB exists.

$ docker exec -ti mac-manuf-py-latest bash

Getting the Docker Machine IP Address.

$ docker-machine ls
NAME           ACTIVE   DRIVER       STATE     URL                         SWARM   ERRORS
default        *        virtualbox   Running   tcp://192.168.99.100:2376
machine-dev    -        virtualbox   Stopped
machine-test   -        virtualbox   Stopped

Testing/calling the microservice (REST API).

$ curl -i http://192.168.99.100:5000/chilcano/api/manuf/00-50:Ca-ca-fe-ca
HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 93
Server: Werkzeug/0.11.4 Python/2.7.11
Date: Sat, 20 Feb 2016 09:01:38 GMT

{
  "mac": "00:50:CA",
  "manuf": "NetToNet",
  "manuf_desc": "# NET TO NET TECHNOLOGIES"
}

If the embedded server was started on HTTPS, you could test it as shown below.

$ curl -ik https://192.168.99.100:5443/chilcano/api/manuf/00-50:Ca-ca-fe-ca
HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 93
Server: Werkzeug/0.11.4 Python/2.7.11
Date: Mon, 29 Feb 2016 15:58:21 GMT

{
  "mac": "00:50:CA",
  "manuf": "NetToNet",
  "manuf_desc": "# NET TO NET TECHNOLOGIES"
}


V. And now what? How to use the MAC Manuf Docker container with the WSO2 BAM Docker container

Visualizing Captured WIFI Traffic in Realtime from the WSO2 BAM Dashboard

As you can see in the image above, when capturing WiFi traffic the information is shown in the WSO2 BAM dashboard, but not the MAC address manufacturer.
This is where our Docker MAC Manuf container is useful, because it provides the manufacturer information via a RESTful microservice. The idea, then, is to configure the WSO2 BAM dashboard (the prepared Kismet toolbox) to point to the Docker MAC Manuf RESTful microservice; in other words, WSO2 BAM will call the Docker MAC Manuf microservice to get the manufacturer information.

In the next blog post I will explain how to connect the MAC Address Manufacturer Docker container with the WSO2 BAM Docker container by using Docker Compose for a minimal orchestration.

VI. Conclusions

With Python and a few modules (such as Flask, SQLAlchemy, CORS, pyOpenSSL, ...) you can quickly create any kind of application (business applications, web applications, mobile applications, microservices, ...). Developing this (micro)service and putting it into a Docker container was a smooth experience: it was possible to reuse the older scripts that automated some tasks while at the same time implementing a modern, layered web application as a microservice, all in a few lines of code.

See you soon.


Wardriving with WIFI Pineapple Nano in Mobile World Congress 2016 at Barcelona

WIFI Pineapple Nano is a nice tiny device for wireless security auditing. It runs embedded OpenWrt as its OS, with two wireless NICs pre-configured and a lot of security tools pre-installed, ready to perform a wireless security audit. For further details, check the Hak5 page; I encourage you to buy one (https://www.wifipineapple.com)!

Objectives

The idea is to do a quick wardriving session around the Mobile World Congress in Barcelona to check whether the attendees are aware of the information their mobile devices leak.

At the end, after wardriving, you will get the following files:

  • xyz.alert
  • xyz.gpsxml
  • xyz.netxml
  • xyz.pcapdump

With the above files you will be able to identify the manufacturer (or model) of each device, the approximate geo-position and route followed, and other information related to signal quality.
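
As a taste of what you can do with those files, below is a minimal sketch (mine, not part of the Pineapple tooling) that extracts the BSSIDs from a Kismet .netxml capture using only the Python standard library; those MAC addresses can then be resolved against a manufacturer DB or API like the one from my previous posts:

# bssids_from_netxml.py: list the BSSIDs found in a Kismet .netxml file.
import sys
import xml.etree.ElementTree as ET

tree = ET.parse(sys.argv[1])              # e.g. xyz.netxml
for net in tree.getroot().iter("wireless-network"):
    bssid = net.findtext("BSSID")
    if bssid:
        print(bssid)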

The arsenal

The software and devices I’ve used are the following:

Configuration

Obviously, I initialized my Pineapple Nano and updated its firmware beforehand. If you haven't done so yet, I recommend the following guide: https://www.wifipineapple.com/pages/setup

The next steps assume you have already initialized the Pineapple Nano.

1) Connect to WIFI Pineapple and prepare everything

Using a USB cable, connect the Pineapple Nano to your PC; in my case I'm using a VirtualBox VM with Kali Linux.
Then, from a Kali Linux (the best Linux distro for security auditing) terminal, get the wp6.sh script and connect to the Pineapple Nano.
The wp6.sh script can be downloaded from here: https://github.com/hak5darren/wp6

After that, open your browser in your Kali Linux and connect to http://172.16.42.1:1741

From the Pineapple Web Admin console, insert the SD card and format it.
After that, verify that the SD card was formatted successfully.

root@Pineapple:~# df -h
Filesystem                Size      Used Available Use% Mounted on
rootfs                    2.3M    900.0K      1.4M  39% /
/dev/root                12.5M     12.5M         0 100% /rom
tmpfs                    29.9M      3.6M     26.3M  12% /tmp
/dev/mtdblock3            2.3M    900.0K      1.4M  39% /overlay
overlayfs:/overlay        2.3M    900.0K      1.4M  39% /
tmpfs                   512.0K         0    512.0K   0% /dev
/dev/sdcard/sd1           6.2G     14.6M      5.9G   0% /sd

Now, open another Kali Linux terminal, SSH into the Pineapple Nano and update the packages.

root@Pineapple:~# opkg update

2) Install Kismet in the Pineapple Nano

The Pineapple Nano doesn't have enough internal space to install much, so I recommend installing new applications and packages on the SD card.

root@Pineapple:~# opkg list | grep kismet
kismet-client - 2013-03-R1b-1 - An 802.11 layer2 wireless network detector, sniffer, and intrusion detection system. This package contains the kismet text interface client.
kismet-drone - 2013-03-R1b-1 - An 802.11 layer2 wireless network detector, sniffer, and intrusion detection system. This package contains the kismet remote sniffing.and monitoring drone.
kismet-server - 2013-03-R1b-1 - An 802.11 layer2 wireless network detector, sniffer, and intrusion detection system. This package contains the kismet server.

root@Pineapple:~# opkg --dest sd install kismet-server

root@Pineapple:~# opkg --dest sd install kismet-client

3) Sharing the Android's GPS with the WIFI Pineapple Nano

To do this, we need to connect our Android mobile to the Pineapple's USB 2.0 host port and share the GPS signal from Android by using the ShareGPS app. First, let's install ADB (Android Debug Bridge) on the Pineapple.

root@Pineapple:~# opkg --dest sd install adb

Installing adb (android.5.0.2_r1-1) to sd...
Downloading https://www.wifipineapple.com/nano/packages/adb_android.5.0.2_r1-1_ar71xx.ipk.
Configuring adb.

Now, from your Pineapple SSH terminal, start the ADB service and check whether your Android mobile is recognized by the Pineapple Nano. Before that, enable USB Debugging and USB Tethering on your mobile.

My Android Mobile is recognized with ID 2a47:0004.

root@Pineapple:~# lsusb
Bus 001 Device 009: ID 2a47:0004
Bus 001 Device 004: ID 05e3:0745 Genesys Logic, Inc.
Bus 001 Device 003: ID 0cf3:9271 Atheros Communications, Inc. AR9271 802.11n
Bus 001 Device 002: ID 058f:6254 Alcor Micro Corp. USB Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Start the ADB service on the Pineapple Nano.

root@Pineapple:~# adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached
AB010682    unauthorized

Your Android mobile will prompt you to accept the connection from the Pineapple. Accept it and run ADB again. You should see the following:

root@Pineapple:~# adb devices
List of devices attached
AB010682    device

Now, install the ShareGPS app on your Android from the Google Play Store. After that, start the app and share the GPS signal.


Finally, you are ready to use the shared Android GPS signal. Let's check it.

root@Pineapple:~# adb forward tcp:50000 tcp:50000

root@Pineapple:~# telnet localhost 50000

$GPGGA,181216.000,4138.5572,N,00221.8326,E,1,5,1.47,202.2,M,51.1,M,,*54
$GPGSA,A,3,19,25,24,15,12,,,,,,,,1.70,1.47,0.85*05
$GPGSV,3,1,10,12,65,031,34.4,25,61,286,31.7,24,54,116,14.2,14,43,295,*66
$GPGSV,3,2,10,29,26,197,25.8,02,21,102,26.9,19,17,041,13.4,06,16,061,17.8*7C
$GPGSV,3,3,10,15,09,174,20.0,31,03,303,*6A
$GPRMC,181216.000,A,4138.5572,N,00221.8326,E,0.000,104.39,250216,,,A*5B
$GPVTG,104.39,T,,M,0.000,N,0.000,K,A*32
$GPACCURACY,4.6*0A
$GPGGA,181217.000,4138.5572,N,00221.8326,E,1,5,1.47,202.2,M,51.1,M,,*55
$GPGSA,A,3,19,25,24,15,12,,,,,,,,1.70,1.47,0.85*05
$GPGSV,3,1,10,12,65,031,34.8,25,61,286,31.6,24,54,116,14.2,14,43,295,*6B
$GPGSV,3,2,10,29,26,197,23.7,02,21,102,26.9,19,16,041,13.4,06,16,061,17.8*74
$GPGSV,3,3,10,15,09,174,19.6,31,03,303,*66
$GPRMC,181217.000,A,4138.5572,N,00221.8326,E,0.000,104.39,250216,,,A*5A
$GPVTG,104.39,T,,M,0.000,N,0.000,K,A*32
$GPACCURACY,4.5*09
$GPGGA,181218.000,4138.5572,N,00221.8326,E,1,5,1.47,202.2,M,51.1,M,,*5A
$GPGSA,A,3,19,25,24,15,12,,,,,,,,1.70,1.47,0.85*05
$GPGSV,3,1,10,12,65,031,34.8,25,61,286,31.6,24,54,116,14.2,14,43,295,*6B
$GPGSV,3,2,10,29,26,197,21.6,02,21,102,26.4,19,16,041,13.4,06,16,061,13.8*7E
$GPGSV,3,3,10,15,09,174,19.3,31,03,303,*63
$GPRMC,181218.000,A,4138.5572,N,00221.8326,E,0.000,104.39,250216,,,A*55
$GPVTG,104.39,T,,M,0.000,N,0.000,K,A*32
^C

After running telnet localhost 50000 you will see "Connected" (in green) instead of "Listening" in the ShareGPS app. That verifies the Pineapple is connected to the Android's GPS; you will also see NMEA sentences in your terminal demonstrating that everything works.
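By the way, that stream is plain NMEA over TCP, so you don't strictly need GPSd to read it. A minimal Python sketch (assuming the adb forward above is active) that decodes the $GPGGA sentences:

import socket

def nmea_to_deg(value, hemi):
    # Convert NMEA ddmm.mmmm (or dddmm.mmmm) into signed decimal degrees
    dot = value.index('.')
    degrees = float(value[:dot - 2])
    minutes = float(value[dot - 2:])
    dec = degrees + minutes / 60.0
    return -dec if hemi in ('S', 'W') else dec

s = socket.create_connection(('localhost', 50000))
buf = ''
while True:
    buf += s.recv(4096)
    while '\n' in buf:
        line, buf = buf.split('\n', 1)
        if line.startswith('$GPGGA'):
            f = line.split(',')
            if f[2] and f[4]:   # only print when there is a fix
                print 'lat=%.6f lon=%.6f alt=%sm' % (
                    nmea_to_deg(f[2], f[3]), nmea_to_deg(f[4], f[5]), f[9])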

4) Configuration of Kismet: wlan1 in monitor mode

The WIFI Pineapple Nano has 2 Wireless NICs: wlan0 and wlan1.

root@Pineapple:~# iwconfig
lo        no wireless extensions.

usb0      no wireless extensions.

wlan1     IEEE 802.11bgn  ESSID:off/any
          Mode:Managed  Access Point: Not-Associated   Tx-Power=20 dBm
          RTS thr:off   Fragment thr:off
          Encryption key:off
          Power Management:off

wlan0-1   IEEE 802.11bgn  Mode:Master  Tx-Power=17 dBm
          RTS thr:off   Fragment thr:off
          Power Management:off

wlan0     IEEE 802.11bgn  Mode:Master  Tx-Power=17 dBm
          RTS thr:off   Fragment thr:off
          Power Management:off

eth0      no wireless extensions.

br-lan    no wireless extensions.

I’m going to use wlan1 to capture the 802.11 traffic; in fact, wlan1 will be configured in monitor mode as shown below:

root@Pineapple:~# ifconfig wlan1 down
root@Pineapple:~# iwconfig wlan1 mode monitor
root@Pineapple:~# ifconfig wlan1 up

Check the wireless interfaces again. The wlan1 NIC should now be in monitor mode.

root@Pineapple:~# iwconfig wlan1
wlan1     IEEE 802.11bgn  Mode:Monitor  Frequency:2.412 GHz  Tx-Power=20 dBm
          RTS thr:off   Fragment thr:off
          Power Management:off

5) Configuration of Kismet: MAC Address Manufacturer

Kismet doesn’t ship with the MAC Address Manufacturer database by default, so I have to download it from the Wireshark web portal and copy it to /sd.

root@Pineapple:~# wget -O /sd/manuf http://anonsvn.wireshark.org/wireshark/trunk/manuf

root@Pineapple:~# ln -s /sd/manuf /etc/manuf
root@Pineapple:~# ln -s /sd/manuf /sd/etc/manuf
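
That manuf file is a plain tab-separated list: OUI prefix, short vendor name, and the full vendor name as a trailing comment. A minimal Python sketch of a lookup against it (full 3-byte prefixes only; the file also contains longer masked prefixes, which this sketch ignores):

# Minimal sketch: look up a MAC's OUI in the Wireshark 'manuf' file.
# Expected line format: 00:50:CA<TAB>NetToNet<TAB># NET TO NET TECHNOLOGIES

def load_manuf(path='/sd/manuf'):
    table = {}
    for line in open(path):
        if line.startswith('#') or not line.strip():
            continue
        parts = line.rstrip('\n').split('\t')
        if len(parts) >= 2:
            table[parts[0].upper()] = parts[1]
    return table

ouis = load_manuf()
print ouis.get('00:50:CA:CA:FE:CA'[:8], 'Unknown')   # -> NetToNet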

6) Configuration of Kismet: Installing and configuring GPSd

GPSd is a service daemon that monitors one or more GPSes or AIS receivers attached to a host computer through serial or USB ports, making all data on the location/course/velocity of the sensors available to be queried on TCP port 2947 of the host computer. (http://www.catb.org/gpsd)

GPSd is not available in the current OpenWRT repository (Chaos Calmer 15.05) used by the WIFI Pineapple Nano. No problem, we can install all the GPSd packages from an older OpenWRT release.
The packages to be installed are:

  • libgps_3.7-1_ar71xx.ipk
  • libgpsd_3.7-1_ar71xx.ipk
  • gpsd_3.7-1_ar71xx.ipk
  • gpsd-clients_3.7-1_ar71xx.ipk

Download and install them.

root@Pineapple:~# cd /sd
root@Pineapple:/sd# wget https://downloads.openwrt.org/attitude_adjustment/12.09/ar71xx/generic/packages/libgps_3.7-1_ar71xx.ipk
root@Pineapple:/sd# wget https://downloads.openwrt.org/attitude_adjustment/12.09/ar71xx/generic/packages/libgpsd_3.7-1_ar71xx.ipk
root@Pineapple:/sd# wget https://downloads.openwrt.org/attitude_adjustment/12.09/ar71xx/generic/packages/gpsd_3.7-1_ar71xx.ipk 
root@Pineapple:/sd# wget https://downloads.openwrt.org/attitude_adjustment/12.09/ar71xx/generic/packages/gpsd-clients_3.7-1_ar71xx.ipk

root@Pineapple:/sd# opkg --dest sd install libgps_3.7-1_ar71xx.ipk
root@Pineapple:/sd# opkg --dest sd install libgpsd_3.7-1_ar71xx.ipk
root@Pineapple:/sd# opkg --dest sd install gpsd_3.7-1_ar71xx.ipk 
root@Pineapple:/sd# opkg --dest sd install gpsd-clients_3.7-1_ar71xx.ipk

Just make sure that the gpsd service does not start on boot.
You can edit /etc/default/gpsd, set everything to false, and/or run service gpsd stop.

root@Pineapple:~# nano /etc/default/gpsd
# Default settings for the gpsd init script and the hotplug wrapper.

# Start the gpsd daemon automatically at boot time
START_DAEMON="false"

# Use USB hotplugging to add new USB devices automatically to the daemon
USBAUTO="false"

# Devices gpsd should collect to at boot time.
# They need to be read/writeable, either by user gpsd or the group dialout.
DEVICES=""

# Other options you want to pass to gpsd
GPSD_OPTIONS=""

Now, start the GPSd service, in debug (foreground) mode:

root@Pineapple:~# gpsd -F /var/run/gpsd.sock -N tcp://localhost:50000

Or run it as daemon.

root@Pineapple:~# gpsd -F /var/run/gpsd.sock tcp://localhost:50000

Where:

  • -F /var/run/gpsd.sock is the control socket file used by GPSd.
  • tcp://localhost:50000 is the shared Android GPS signal listening on a TCP port.
  • -N means don’t daemonize; useful for debugging.

And to check if GPSd is working, just run this:

root@Pineapple:~# cgps
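
If you prefer to check it from Python instead of cgps, GPSd speaks a JSON protocol on its default TCP port 2947; below is a minimal sketch that waits for the first TPV (time-position-velocity) report:

import json
import socket

s = socket.create_connection(('localhost', 2947))
# Ask gpsd to start streaming JSON reports
s.sendall('?WATCH={"enable":true,"json":true}\n')
buf = ''
while True:
    buf += s.recv(4096)
    while '\n' in buf:
        line, buf = buf.split('\n', 1)
        report = json.loads(line)
        if report.get('class') == 'TPV' and 'lat' in report:
            print 'lat=%s lon=%s' % (report['lat'], report['lon'])
            raise SystemExit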

7) Starting Kismet for the first time

Now, we are ready to start Kismet.
Kismet is a client/server application, so first I will start the Kismet server and then the Kismet client.

root@Pineapple:~# kismet_server -p /sd/.kismet -c wlan1 --daemonize

Where:

  • -p /sd/.kismet is the folder where the Kismet server will place the log files.
  • -c wlan1 is the NIC in monitor mode to be used.
  • Kismet has GPSd configured properly by default; nothing to do here.
  • Kismet will look for the MAC Address Manufacturer file at /etc/manuf.

In another terminal, run the following:

root@Pineapple:~# kismet_client

You should see the Kismet UI showing the wireless networks and connected clients identified, and reading the GPS coordinates in real time.

8) Running Kismet the next time

Now, if you want to avoid running all the above commands one by one, you can create a shell script with all the required commands.
Let's create a simple, quick-and-dirty shell script.

root@Pineapple:~# nano /root/run_wardriving.sh

#!/bin/bash

echo "==> Setting wlan1 in monitor mode"
ifconfig wlan1 down
iwconfig wlan1 mode monitor
ifconfig wlan1 up

echo "==> Enabling ADB Forwarding to tcp:50000"
adb forward tcp:50000 tcp:50000

echo "==> Refreshing NTP server"
killall ntpd
ntpd > /dev/null 2>&1 &
sleep 3

echo "==> Starting GPSD with tcp://localhost:50000"
service gpsd stop
killall gpsd
gpsd -F /var/run/gpsd.sock tcp://localhost:50000
sleep 3

echo "==> Starting Kismet server in background"
killall kismet_server
kismet_server -p /sd/.kismet -c wlan1 --daemonize

And give it execution privileges.

root@Pineapple:~# chmod +x /root/run_wardriving.sh

The run_wardriving.sh script will be useful when starting Kismet the next time, because you will have to do it from your Android mobile and not from your PC or Kali Linux.
You will need something like this shell script to start wardriving quickly from your Android mobile.
For that, you will also need the JuiceSSH app on your Android mobile to connect to the WIFI Pineapple Nano and execute the run_wardriving.sh script. ;)

Some results

Below some screenshots taken from my Android mobile and Google Earth with the WIFI Networks placed.

1) Kismet in action

Kismet in action: 693 wireless networks identified. Oops, a rogue AP?

2) Creating Wireless Recon Maps with Google Earth and GISKismet

GISKismet is a wireless recon visualization tool to represent data gathered using Kismet in a flexible manner. GISKismet stores the information in a database so that the user can generate graphs using SQL. GISKismet currently uses SQLite for the database and GoogleEarth / KML files for graphing. (http://git.kali.org/gitweb/?p=packages/giskismet.git)

Conclusions

  • The WIFI Pineapple has tons of security tools installed, but no tools to perform wardriving. For that, Kismet and GPS support were installed and configured. In a following blog post I will probably explain how to create a Pineapple module for wardriving.
  • Turn off your mobile phone's WIFI. Modern mobiles and smartphones (Android, iOS, Windows, …) have a wireless network interface and are constantly trying to connect to wireless networks; to do that, all mobiles send probe packets asking for access points. Kismet and other tools take advantage of this by capturing those packets. If you want to avoid it, just turn off your wireless network interface and avoid connecting to unknown wireless access points.
  • The tip of the iceberg. The packets your mobile phone emits and Kismet captures contain only basic information and do not represent a risk by themselves. This phase is called security “data gathering” or “reconnaissance”. The problem comes later: there are other tools that enable a targeted attack to steal important information.
  • The business behind IoT and Big Data. Marketing companies and telecoms are already monetizing the tracking information of mobile phones, taking advantage of the radio/cellular signal to track them all the time. With this information a shopping company can identify your pattern of behavior, which stores you visited, what mobile phone model you have, etc. Just search Google for “phone mobile wireless anonymous tracking” and you will become aware of the industry behind it and who is making money.
  • There isn't security awareness at the Mobile World Congress. Large conferences with crowds of people (and thousands of mobile phones) are a breeding ground for scammers and thieves who take advantage of weaknesses or defects in the organization, the devices or the apps. So, my friend, be careful with that.
Posted in Big Data, IoT, Linux, Security

Everything generates data: Capturing WIFI anonymous traffic using Raspberry Pi and WSO2 BAM (Part III)

After configuring the Raspberry Pi in WIFI/802.11 monitor mode (first blog post) and configuring it to send the captured 802.11 traffic to the WSO2 BAM and Apache Thrift listener (second blog post), I will now explain how to create a simple dashboard showing the WIFI traffic captured in real time.

Architecture IoT/BigData – Visualizing WIFI traffic in realtime from a WSO2 BAM Dashboard

Well, to make it easier, I created a Github repository (wso2bam-wifi-thrift-cassandra-poc) where I copied the scripts required for this third blog post.
I encourage you to download it and follow the instructions below.

This repository (wso2bam-wifi-thrift-cassandra-poc) contains

  • A toolbox to view incoming Kismet traffic (802.11) in realtime valid for WSO2 BAM 2.5.0.
  • A set of definitions to create the Execution Plan (CEP Siddhi), the Input and Output Stream definitions (Apache Thrift), and the Formatters.

Considerations

  • I’ve used WSO2 BAM 2.5.0 (standard configuration without changes and with offset 0)
  • I’ve used a Raspberry Pi as the agent sending the captured 802.11 traffic to WSO2 BAM by using Apache Thrift.
  • I’ve used a Python Thrift and Kismet script to send the captured traffic.

How to use

1) Send Kismet traffic to WSO2 BAM using Apache Thrift listener

2) Deploy the WSO2 BAM Kismet toolbox

  • Deploy the kismet_wifi_realtime_traffic.tbox in WSO2 BAM.
  • Check if WSO2 BAM toolbox was deployed successfully.

Kismet Real Time Toolbox for WSO2 BAM

3) Deploy the set of Stream and Execution Plan definitions

Copy the set of definitions that create the Execution Plan (CEP Siddhi), the Input and Output Stream definitions (Apache Thrift), and the Formatters to WSO2 BAM manually.
All files and directories to be copied are under wso2bam-wifi-thrift-cassandra-poc/wso2bam_defns/ and have to be copied to /.

Structure of file definitions and directories: Input/Output Stream, Execution Plan and Formatters for WSO2 BAM

Two Output Streams deployed into WSO2 BAM

4) Visualizing Kismet (802.11) traffic in WSO2 BAM Dashboard

If everything is OK, you can see the incoming traffic in real time; to do that, you have to use the previously installed/deployed WSO2 BAM toolbox.
Log in to the WSO2 BAM Dashboard and select the Kismet WIFI Realtime Monitoring graphic. You should see the following.

Visualizing Captured Kismet Traffic in Realtime from WSO2 BAM Dashboard

That’s all.
In the next blog post I will explain how to implement a Microservice to get the Manufacturer for each captured MAC address.

Regards.

Posted in BAM, Big Data, IoT, Security

Everything generates data: Capturing WIFI anonymous traffic using Raspberry Pi and WSO2 BAM (Part II)

After configuring the Raspberry Pi to capture WIFI/802.11 traffic (first blog post), we have to store this traffic in a database (NoSQL and/or RDBMS), because the idea is to process the stored data in real time and/or in batch.

Capturing this type of traffic (WIFI/802.11) is difficult for the following reasons:

  • Kismet captures 802.11 layer-2 wireless network traffic (network IP blocks such as TCP, UDP, ARP, and DHCP packets) which has to be decoded.
  • The traffic should be captured and stored in real time, so we have to use a protocol optimized for fast, low-latency capture.
  • The library implementing that protocol should have a low memory footprint, because Kismet will run on a Raspberry Pi.
  • The protocol should be developer-friendly on both sides (the Raspberry Pi side and the WSO2 BAM – Apache Cassandra side).

Well, in this second blog post I will explain how to solve the above difficulties.

Architecture IoT/BigData – Storing WIFI traffic in Apache Cassandra (WSO2 BAM and Apache Thrift)

I.- Looking for the Streaming and/or Communication Protocol

There are many libraries and streaming protocols out there to solve the above issues, but if you are looking for an open source, lightweight, low-memory-footprint and developer-friendly protocol/library, there are only a few. They are:

1) Elastic Logstash (https://www.elastic.co/products/logstash)

Logstash is a set of tools to collect heterogeneous types of data, and it's meant to be used with Elasticsearch. It requires Java, and for this reason it is too heavy to run on a Raspberry Pi; the best choice is to use only Logstash Forwarder.
Logstash Forwarder (a.k.a. lumberjack) is the protocol used to ship, parse and collect streams or log-events when using ELK.
Logstash Forwarder can be downloaded and compiled using the Go compiler on your Raspberry Pi; for further information you can use this link.

2) Elastic Filebeat (https://github.com/elastic/beats/tree/master/filebeat)

Filebeat is a lightweight, open source shipper for log file data. As the next-generation Logstash Forwarder, Filebeat tails logs and quickly sends this information to Logstash for further parsing and enrichment or to Elasticsearch for centralized storage and analysis.

Installing and configuring Filebeat is easy; you can use it with Logstash to perform additional processing on the collected data. Filebeat replaces Logstash Forwarder.

3) Apache Flume (https://flume.apache.org)

Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple extensible data model that allows for online analytic application.

Apache Flume uses Java and requires a lot of memory and CPU resources.

4) Mozilla Heka (https://github.com/mozilla-services/heka)

Heka is an open source stream processing software system developed by Mozilla. Heka is a “Swiss Army Knife” type tool for data processing, useful for a wide variety of different tasks, such as:

  • Loading and parsing log files from a file system.
  • Accepting statsd type metrics data for aggregation and forwarding to upstream time series data stores such as graphite or InfluxDB.
  • Launching external processes to gather operational data from the local system.
  • Performing real time analysis, graphing, and anomaly detection on any data flowing through the Heka pipeline.
  • Shipping data from one location to another via the use of an external transport (such as AMQP) or directly (via TCP).
  • Delivering processed data to one or more persistent data stores.

Mozilla Heka is very similar to Logstash Forwarder (both are written in Go), but Heka can process log-events in real time and is also able to graph this data directly, which are great advantages. These graphs are updated in real time, as the data flows through Heka, without the latency of data-store-driven graphs.

5) Fluentd (https://github.com/fluent/fluentd)

Fluentd is similar to Logstash in that there are inputs and outputs for a large variety of sources and destinations. Some of its design tenets are easy installation and a small footprint. It doesn't provide any storage tier itself, but allows you to easily configure where your logs should be collected.

6) Apache Thrift (https://thrift.apache.org)

Thrift is an interface definition language and binary communication protocol that is used to define and create services for numerous languages. It is used as a remote procedure call (RPC) framework and was developed at Facebook for “scalable cross-language services development”. It combines a software stack with a code generation engine to build services that work efficiently to a varying degree and seamlessly between C#, C++ (on POSIX-compliant systems), Cappuccino, Cocoa, Delphi, Erlang, Go, Haskell, Java, Node.js, OCaml, Perl, PHP, Python, Ruby and Smalltalk. Although developed at Facebook, it is now an open source project in the Apache Software Foundation.

Facebook Scribe is a project that uses the Thrift protocol; it is a server for aggregating log data streamed in real time from a large number of servers.

In this Proof-of-Concept I will use Apache Thrift for these reasons:

  • Apache Thrift is embedded in WSO2 BAM 2.5.0.
  • WSO2 BAM 2.5.0 is a very important component because it also embeds Apache Cassandra to persist the data streams/log-events. You don't need to do anything; all captured log-events will be stored automatically in Apache Cassandra.
  • There are lightweight Python libraries implementing the Apache Thrift protocol; this Thrift Python Client is suitable for use on a Raspberry Pi and publishes events into WSO2 BAM (Apache Cassandra).
  • And finally, there is a Python client library specific to Kismet. This Python Kismet Client reads the traffic captured by Kismet.

II.- Installing, configuring and running Python Kismet Client and Python Thrift library

I cloned the above repositories (Thrift Python Client and Python Kismet Client).

$ mkdir kismet_to_wso2bam
$ cd kismet_to_wso2bam

// Install svn client, It's useful to download a folder from a Github repo
$ sudo apt-get install subversion

Replace tree/master with trunk in the URL and check out the folder.

// List files and subfolders
$ svn ls https://github.com/chilcano/iot-server-appliances/trunk/Arduino%20Robot/PC_Clients/PythonRobotController/DirectPublishClient/BAMPythonPublisher
.gitignore
BAMPublisher.py
Publisher.py
PythonClient.py
README.md
gen-py/
thrift/

// Download files and subfolder
$ svn checkout https://github.com/chilcano/iot-server-appliances/trunk/Arduino%20Robot/PC_Clients/PythonRobotController/DirectPublishClient/BAMPythonPublisher

Now, download the kismetclient repository.

$ git clone https://github.com/chilcano/kismetclient
Cloning into 'kismetclient'...
remote: Counting objects: 100, done.
remote: Total 100 (delta 0), reused 0 (delta 0), pack-reused 100
Receiving objects: 100% (100/100), 15.84 KiB, done.
Resolving deltas: 100% (57/57), done.

2.1) Creating a custom Python script to send the Kismet captured traffic to WSO2 BAM 2.5.0

Under the kismet_to_wso2bam folder, create this Python script (sendTrafficFromKismetToWSO2BAM.py).

#!/usr/bin/env python
"""
Python script to send 802.11 traffic captured for Kismet to WSO2 BAM 2.5.0.

Author:  Chilcano
Date:    2015/12/31
Version: 1.0

Requires:
- Python Thrift Client (https://github.com/chilcano/iot-server-appliances/tree/master/Arduino%20Robot/PC_Clients/PythonRobotController/DirectPublishClient/BAMPythonPublisher)
- Python Kismet Client (https://github.com/chilcano/kismetclient)
- Place the 'sendTrafficFromKismetToWSO2BAM.py' in same level of 'BAMPythonPublisher' and 'kismetclient' folders.

Run:
$ python sendTrafficFromKismetToWSO2BAM.py
"""

import sys
sys.path.append('kismetclient')
sys.path.append('BAMPythonPublisher')
sys.path.append('BAMPythonPublisher/gen-py')
sys.path.append('BAMPythonPublisher/thrift')

from kismetclient import Client as KismetClient
from kismetclient import handlers
from Publisher import *
from pprint import pprint

import logging
import time

log = logging.getLogger('kismetclient')
log.addHandler(logging.StreamHandler())
log.setLevel(logging.DEBUG)

# Kismet server
address = ('127.0.0.1', 2501)
k = KismetClient(address)
##k.register_handler('TRACKINFO', handlers.print_fields)

# BAM/CEP/Thrift Server
cep_ip = '192.168.1.43' # IP address of the server
cep_port = 7713         # Thrift listen port of the server
cep_username = 'admin'  # username
cep_password = 'admin'  # password

# Initialize publisher with ip and port of server
publisher = Publisher()
publisher.init(cep_ip, cep_port)

# Connect to server with username and password
publisher.connect(cep_username, cep_password)

# Define the Input Stream
streamDefinition = "{ 'name':'rpi_kismet_stream_in', 'version':'1.0.0', 'nickName': 'rpi_k_in', 'description': '802.11 passive packet capture', 'tags': ['RPi 2 Model B', 'Kismet', 'Thrift'], 'metaData':[ {'name':'ipAdd','type':'STRING'},{'name':'deviceType','type':'STRING'},{'name':'owner','type':'STRING'}, {'name':'bssid','type':'STRING'}], 'payloadData':[ {'name':'macAddress','type':'STRING'}, {'name':'type','type':'STRING'}, {'name':'llcpackets','type':'STRING'}, {'name':'datapackets','type':'STRING'}, {'name':'cryptpackets','type':'STRING'},{'name':'signal_dbm','type':'STRING'}, {'name':'bestlat','type':'STRING'}, {'name':'bestlon','type':'STRING'}, {'name':'bestalt','type':'STRING'}, {'name':'channel','type':'STRING'}, {'name':'datasize','type':'STRING'}, {'name':'newpackets','type':'STRING'}] }";
publisher.defineStream(streamDefinition)

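# Handler for Kismet's 'CLIENT' protocol sentences: the first list below is
# the metadata and the second the payload; both must match streamDefinition.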
def handle_client(client, bssid, mac, lasttime, type, llcpackets, datapackets, cryptpackets, signal_dbm, bestlat, bestlon, bestalt, channel, datasize, newpackets):
  publisher.publish(['rpi_chicha', 'RPi 2 Model B', 'chilcano.io', int(lasttime)], [bssid, mac, type, llcpackets, datapackets, cryptpackets, signal_dbm, bestlat, bestlon, bestalt, channel, datasize, newpackets])

k.register_handler('CLIENT', handle_client)

try:
    while True:
        k.listen()
except KeyboardInterrupt:
    pprint(k.protocols)
    publisher.disconnect()
    log.info('Exiting...')

At the end, the structure of files and directories will be as shown below:

$ ll
total 20
drwxr-xr-x  4 pi pi 4096 Feb  3 14:39 ./
drwxr-xr-x 11 pi pi 4096 Feb  3 12:10 ../
drwxr-xr-x  5 pi pi 4096 Feb  3 12:14 BAMPythonPublisher/
drwxr-xr-x  4 pi pi 4096 Feb  3 12:11 kismetclient/
-rw-r--r--  1 pi pi 2552 Feb  3 12:14 sendTrafficFromKismetToWSO2BAM.py

$ tree -L 3
.
├── BAMPythonPublisher
│   ├── BAMPublisher.py
│   ├── gen-py
│   │   ├── Data
│   │   ├── Exception
│   │   ├── __init__.py
│   │   ├── ThriftEventTransmissionService
│   │   └── ThriftSecureEventTransmissionService
│   ├── Publisher.py
│   ├── Publisher.pyc
│   ├── PythonClient.py
│   ├── README.md
│   └── thrift
│       ├── __init__.py
│       ├── __init__.pyc
│       ├── protocol
│       ├── server
│       ├── Thrift.py
│       ├── Thrift.pyc
│       ├── transport
│       ├── TSCons.py
│       ├── TSerialization.py
│       └── TTornado.py
├── kismetclient
│   ├── kismetclient
│   │   ├── client.py
│   │   ├── client.pyc
│   │   ├── exceptions.py
│   │   ├── exceptions.pyc
│   │   ├── handlers.py
│   │   ├── handlers.pyc
│   │   ├── __init__.py
│   │   ├── __init__.pyc
│   │   ├── utils.py
│   │   └── utils.pyc
│   ├── LICENSE
│   ├── README.md
│   ├── runclient.py
│   └── setup.py
└── sendTrafficFromKismetToWSO2BAM.py

12 directories, 28 files

Notes:

  • You have to update sendTrafficFromKismetToWSO2BAM.py with the IP address, username, password and ports where WSO2 BAM is running.
  • The above Python script reads the captured traffic and first defines the structure of the data to be sent to WSO2 BAM (Apache Thrift). You can modify that data structure by adding or removing 802.11 fields, as sketched below.
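For example, to also ship one extra field (say maxrate, assuming your Kismet build exposes it in the CLIENT protocol), you would add {'name':'maxrate','type':'STRING'} to the payloadData section of streamDefinition and extend the handler accordingly. A hypothetical sketch:

# Hypothetical: handler extended with one extra Kismet field ('maxrate')
def handle_client(client, bssid, mac, lasttime, type, llcpackets, datapackets,
                  cryptpackets, signal_dbm, bestlat, bestlon, bestalt,
                  channel, datasize, newpackets, maxrate):
  publisher.publish(
      ['rpi_chicha', 'RPi 2 Model B', 'chilcano.io', int(lasttime)],
      [bssid, mac, type, llcpackets, datapackets, cryptpackets, signal_dbm,
       bestlat, bestlon, bestalt, channel, datasize, newpackets, maxrate])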

2.2) Install and configure WSO2 BAM to receive the Kismet traffic

Before you run sendTrafficFromKismetToWSO2BAM.py, WSO2 BAM 2.5.0 should be running and the Thrift listener port should be open.
The standard Thrift listener port is 7711; in my case I have an offset of +2.

I recommend using my Docker container, created to get a fully functional WSO2 BAM 2.5.0 ready for use in this PoC with Kismet.
To do that, open a new terminal on your Host PC and execute the following commands:

Initialize the Docker environment.

$ docker-machine ls

$ docker-machine start default

$ eval "$(docker-machine env default)"

$ docker login

Download the WSO2 BAM Docker image from Docker Hub.

$ docker pull chilcano/wso2-bam:2.5.0

2.5.0: Pulling from chilcano/wso2-bam
9acb471e45a5: Pull complete
...
e12995f4907c: Pull complete
77e4386b8b45: Pull complete
Digest: sha256:64e40ea4ea6b89c7e1b08edeb43e31467196a11c9fe755c0026403780f9e24e1
Status: Downloaded newer image for chilcano/wso2-bam:2.5.0

$ docker images
REPOSITORY              TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
chilcano/netcat         jessie              302d06d998e6        5 days ago          135.1 MB
chilcano/rtail-server   latest              cb313f9e2546        7 days ago          674.2 MB
ubuntu                  wily                d8a164f81acc        7 days ago          134.4 MB
ubuntu                  vivid               99639e3e70c8        7 days ago          131.3 MB
debian                  jessie              7a01cc5f27b1        8 days ago          125.1 MB
node                    0.12.9              d09c6f7639f7        13 days ago         637.1 MB
ubuntu                  trusty              6cc0fc2a5ee3        2 weeks ago         187.9 MB
ubuntu                  precise             6b4adea2c00e        2 weeks ago         137.5 MB
sebp/elk                latest              96f071b7a8e2        3 weeks ago         980.8 MB
chilcano/wso2-bam       2.5.0               77e4386b8b45        7 weeks ago         1.65 GB
chilcano/wso2-dss       3.2.1               acd92f55f678        7 weeks ago         1.383 GB
chilcano/wiremock       latest              a3e4764483b9        7 weeks ago         597.3 MB
java                    openjdk-7           e93dd201a77e        8 weeks ago         589.7 MB

$ docker run -d -t --name=wso2bam-kismet -p 9445:9443 -p 7713:7711 chilcano/wso2-bam:2.5.0
fc9fb8368e7f4f24b01bc33f90122776b4c10d63d0e849073474a485700b6266

$ docker ps
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS                                                                                     NAMES
fc9fb8368e7f        chilcano/wso2-bam:2.5.0   "/bin/sh -c 'sh ./wso"   9 seconds ago       Up 8 seconds        7611/tcp, 9160/tcp, 9763/tcp, 21000/tcp, 0.0.0.0:7713->7711/tcp, 0.0.0.0:9445->9443/tcp   wso2bam-kismet

The 9445 port is for the WSO2 Carbon Admin Console and the 7713 port is the Thrift listener port.
Now, let’s verify that WSO2 BAM is running in the Docker container.

$ docker exec -ti wso2bam-kismet bash

root@fc9fb8368e7f:/opt/wso2bam02a/bin# tail -f ../repository/logs/wso2carbon.log
TID: [0] [BAM] [2016-02-03 16:38:10,482]  INFO {org.wso2.carbon.ntask.core.service.impl.TaskServiceImpl} -  Task service starting in STANDALONE mode... {org.wso2.carbon.ntask.core.service.impl.TaskServiceImpl}
TID: [0] [BAM] [2016-02-03 16:38:10,664]  INFO {org.apache.cassandra.net.OutboundTcpConnection} -  Handshaking version with localhost/127.0.0.1 {org.apache.cassandra.net.OutboundTcpConnection}
TID: [0] [BAM] [2016-02-03 16:38:10,672]  INFO {org.apache.cassandra.net.OutboundTcpConnection} -  Handshaking version with localhost/127.0.0.1 {org.apache.cassandra.net.OutboundTcpConnection}
TID: [0] [BAM] [2016-02-03 16:38:11,127]  INFO {org.wso2.carbon.ntask.core.impl.AbstractQuartzTaskManager} -  Task scheduled: [-1234][BAM_NOTIFICATION_DISPATCHER_TASK][NOTIFIER] {org.wso2.carbon.ntask.core.impl.AbstractQuartzTaskManager}
TID: [0] [BAM] [2016-02-03 16:38:11,232]  INFO {org.wso2.carbon.core.init.JMXServerManager} -  JMX Service URL  : service:jmx:rmi://localhost:11111/jndi/rmi://localhost:9999/jmxrmi {org.wso2.carbon.core.init.JMXServerManager}
TID: [0] [BAM] [2016-02-03 16:38:11,246]  INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} -  Server           :  WSO2BAM02A-2.5.0 {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent}
TID: [0] [BAM] [2016-02-03 16:38:11,247]  INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} -  WSO2 Carbon started in 41 sec {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent}
TID: [0] [BAM] [2016-02-03 16:38:14,044]  INFO {org.wso2.carbon.dashboard.common.oauth.GSOAuthModule} -  Using random key for OAuth client-side state encryption {org.wso2.carbon.dashboard.common.oauth.GSOAuthModule}
TID: [0] [BAM] [2016-02-03 16:38:14,714]  INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} -  Mgt Console URL  : https://172.17.0.2:9443/carbon/ {org.wso2.carbon.ui.internal.CarbonUIServiceComponent}
TID: [0] [BAM] [2016-02-03 16:38:14,714]  INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} -  Gadget Server Default Context : http://172.17.0.2:9763/portal {org.wso2.carbon.ui.internal.CarbonUIServiceComponent}

2.3) Remote access from a different network (i.e. the Raspberry Pi) to the WSO2 BAM Docker container

If you want to access WSO2 BAM from a web browser, use this URL: https://192.168.99.100:9445/carbon/admin; if you want to connect to the embedded Thrift listener, use the IP address 192.168.99.100 and port 7713.
That is valid if you are on the same Host PC, but how do you get remote access, for example from the above Raspberry Pi, to the WSO2 BAM Docker container?
To do that, follow this explanation (Remote access to Docker with TLS). As mentioned there, there are 3 choices; since I'm running the Docker daemon on Mac OS X, the easiest way to expose the Docker container to the Raspberry Pi's network is port forwarding or SSH tunneling using docker-machine.

In other words, follow these commands in your Host PC (Mac OS X):

$ docker -v
Docker version 1.9.1, build a34a1d5

As WSO2 BAM exposes the 9445 and 7713 ports, I will open/forward both.

$ docker-machine ssh default -f -N -L 192.168.1.43:7713:localhost:7713

// Optional
$ docker-machine ssh default -f -N -L 192.168.1.43:9445:localhost:9445

Where:

  • '-f' requests SSH to go to background just before command execution.
  • '-N' allows empty command (useful here to forward ports only).
  • The user/password for boot2docker is docker/tcuser.

You can also do the same using the ssh command:

$ ssh docker@$(docker-machine ip default) -f -N -L 192.168.1.43:7713:localhost:7713

Now, from the Raspberry Pi, check if WSO2 BAM is reachable.

$ nc -vzw 3 192.168.1.43 7713
Connection to 192.168.1.43 7713 port [tcp/*] succeeded!

// Optional
$ nc -vzw 3 192.168.1.43 9445
Connection to 192.168.1.43 9445 port [tcp/*] succeeded!

Or check it by using curl.

$ curl -Ivsk https://192.168.1.43:9445/carbon/admin/login.jsp -o /dev/null

...
< HTTP/1.1 200 OK
< Set-Cookie: JSESSIONID=601A0F02DCCB47B2685686A7042BBD8F; Path=/; Secure; HttpOnly
< X-FRAME-OPTIONS: DENY
< Content-Type: text/html;charset=UTF-8
< Content-Language: en
< Transfer-Encoding: chunked
< Vary: Accept-Encoding
< Date: Thu, 04 Feb 2016 12:14:09 GMT
< Server: WSO2 Carbon Server
<
* Connection #0 to host 192.168.1.43 left intact
* Closing connection #0
* SSLv3, TLS alert, Client hello (1):
} [data not shown]

2.4) Running the custom Python script to send the captured traffic by Kismet to WSO2 BAM

Make sure that Python is installed; install it if it's not.

$ python
Python 2.7.3 (default, Mar 18 2014, 05:13:23)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> quit()

After that, run the Python script. Obviously, Kismet should already be running.

$ cd kismet_to_wso2bam/

$ python sendTrafficFromKismetToWSO2BAM.py

*KISMET: ['0.0.0', '1454495618', 'rpi-chicha', 'pcapdump,netxml,nettxt,gpsxml,alert', '1000']
Server: 0.0.0 1454495618 rpi-chicha pcapdump,netxml,nettxt,gpsxml,alert 1000
*PROTOCOLS: ['KISMET,ERROR,ACK,PROTOCOLS,CAPABILITY,TERMINATE,TIME,PACKET,STATUS,PLUGIN,SOURCE,ALERT,COMMON,TRACKINFO,WEPKEY,STRING,GPS,BSSID,SSID,CLIENT,BSSIDSRC,CLISRC,NETTAG,CLITAG,REMOVE,CHANNEL,INFO,BATTERY,CRITFAIL']
!1 CAPABILITY KISMET
!2 CAPABILITY ERROR
!3 CAPABILITY ACK
...

On the WSO2 BAM side you will see the log events below, where the Raspberry Pi (Kismet) connects successfully to WSO2 BAM (the Thrift listener).

...
TID: [0] [BAM] [2016-02-04 12:27:40,542]  INFO {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil} -  'admin@carbon.super [-1234]' logged in at [2016-02-04 12:27:40,542+0000] {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil}
TID: [0] [BAM] [2016-02-04 12:29:20,334]  INFO {org.wso2.carbon.databridge.core.DataBridge} -  user admin connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [0] [BAM] [2016-02-04 12:29:20,416]  INFO {org.wso2.carbon.databridge.streamdefn.registry.datastore.RegistryStreamDefinitionStore} -  Stream definition added to registry successfully : rpi_kismet_stream_in:1.0.0 {org.wso2.carbon.databridge.streamdefn.registry.datastore.RegistryStreamDefinitionStore}
TID: [0] [BAM] [2016-02-04 12:29:20,670]  INFO {org.wso2.carbon.databridge.persistence.cassandra.datastore.ClusterFactory} -  Initializing Event cluster {org.wso2.carbon.databridge.persistence.cassandra.datastore.ClusterFactory}
TID: [0] [BAM] [2016-02-04 12:29:20,877]  INFO {org.wso2.carbon.databridge.persistence.cassandra.datastore.ClusterFactory} -  Initializing Event Index cluster {org.wso2.carbon.databridge.persistence.cassandra.datastore.ClusterFactory}

III.- Exploring the 802.11 captured traffic stored in Apache Cassandra (WSO2 BAM)

Remember, the WSO2 BAM 2.5.0 Docker container is running locally with an internal Docker Machine IP address (192.168.99.100), and it is also reachable through a public IP address, the Host's (192.168.1.43), because the internal ports were forwarded. In brief, WSO2 BAM answers at both addresses.

Then, let's explore the 802.11 traffic stored in Apache Cassandra.
Below is a set of images taken while browsing the Apache Cassandra instance embedded in WSO2 BAM.

01 / WSO2 BAM / Apache Cassandra – Key Spaces

02 / WSO2 BAM / Apache Cassandra – Event KS information

03 / WSO2 BAM / Apache Cassandra – Event KS information

04 / WSO2 BAM / Apache Cassandra – Connecting to explore KS

05 / WSO2 BAM / Apache Cassandra – List of Key Spaces

06 / WSO2 BAM / Apache Cassandra – Exploring the Kismet data

07 / WSO2 BAM / Apache Cassandra – Exploring the Kismet data
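
If you prefer a scripted look instead of screenshots, the era-appropriate pycassa library (a Thrift-based Python client for Cassandra 1.x) can walk the same data. The sketch below makes a few assumptions: that the Cassandra Thrift port 9160 has been exposed/forwarded from the container, that the events live in the EVENT_KS keyspace shown above, and that the column family is named after the rpi_kismet_stream_in stream.

import pycassa

# Assumptions: port 9160 reachable, keyspace EVENT_KS, column family
# named after the 'rpi_kismet_stream_in' stream
pool = pycassa.ConnectionPool(
    'EVENT_KS', server_list=['192.168.99.100:9160'],
    credentials={'username': 'admin', 'password': 'admin'})
cf = pycassa.ColumnFamily(pool, 'rpi_kismet_stream_in')

# Print the first 5 stored events (row key -> column dict)
for key, columns in cf.get_range(row_count=5):
    print key, columns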

In the next blog post (Part III) I will explain how to create a simple dashboard showing the WIFI traffic captured in real time.
See you soon.

Posted in BAM, Big Data, IoT, Security