Running multi-container (WSO2 BAM & MAC Address Lookup) Docker Application using Docker Compose

In my four previous blog posts I explained each part of this Proof-of-Concept. They are:

  1. Analysing Wireless traffic in real time with WSO2 BAM, Apache Cassandra, Complex Event Processor (CEP Siddhi), Apache Thrift and Python:
  2. A Python Microservice in a Docker Container (MAC Address Manufacturer Lookup):

Now, in this blog post I’m going to explain how to run two Docker Containers, the WSO2 BAM and the MAC Address Manufacturer Lookup containers, by using Docker Compose.

// clone 2 repositories
$ git clone https://github.com/chilcano/docker-wso2bam-kismet-poc.git
$ cd docker-wso2bam-kismet-poc
$ git clone https://github.com/chilcano/wso2bam-wifi-thrift-cassandra-poc.git

// run docker compose
$ docker-compose up -d

Starting dockerwso2bamkismetpoc_mac-manuf_1
Starting dockerwso2bamkismetpoc_wso2bam-dashboard-kismet_1

Below is a diagram explaining this.

802.11 traffic capture PoC - Docker Compose

Now, if you want to run everything together in a few minutes, just run the Docker Compose YAML file.
For a deeper explanation, follow the instructions in the README.md (https://github.com/chilcano/docker-wso2bam-kismet-poc).

If everything is OK, you will get a huge amount of data (WIFI traffic) stored in Apache Cassandra and a simple Dashboard showing the captured MAC Addresses and Manufacturers of the wireless devices (PCs, mobiles, WIFI Access Points, tablets, etc.) around your Raspberry Pi.

Visualising 802.11 captured traffic with the MAC Address Manufacturer

I hope you find these blog posts useful.
Bye.

Posted in BAM, IoT, Linux, Microservices, Security, SOA

A MAC Address Manufacturer DB and RESTful Python Microservice in a Docker Container

A MAC address, also called a physical address, is a unique identifier assigned to every network interface for communications on the physical network segment. In other words, you can identify the manufacturer of your device through its physical address.

There are different tools on the Internet that allow you to identify the manufacturer from a MAC Address. In my three previous posts I wrote about how to capture wireless traffic and all the MAC Addresses in it; now, in this post, I will explain how to implement a Docker container exposing a REST API to get the Manufacturer of a captured MAC Address.
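To make the idea concrete, here is a tiny Python illustration of the lookup: the key is just the first three octets of the MAC address (the OUI). The prefix-to-vendor table below is a made-up sample, not the real registry.

```python
def oui_prefix(mac):
    """Return the OUI (first 3 octets) of a MAC address, normalized to upper case."""
    octets = mac.replace("-", ":").split(":")
    return ":".join(o.upper() for o in octets[:3])

# Hypothetical sample of the registry; the real data comes from the IEEE / Wireshark list.
SAMPLE_OUI_DB = {"00:50:CA": "NetToNet"}

print(oui_prefix("00-50-ca-fe-ca-fe"))                                   # 00:50:CA
print(SAMPLE_OUI_DB.get(oui_prefix("00:50:ca:fe:ca:fe"), "Unknown"))     # NetToNet
```

The microservice described below does essentially this, backed by a real database instead of a dict.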

As everything should be lightweight, minimalist, easy to use and self-contained, I’m going to use the following:

  • Python as lightweight and powerful programming language.
  • Flask (http://flask.pocoo.org) is a microframework for Python based on Werkzeug and Jinja 2. I will use Flask to implement a mini-web application.
  • SQLAlchemy (http://www.sqlalchemy.org/) is a Python SQL toolkit and ORM.
  • SQLite3 (https://www.sqlite.org) is a software library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine.
  • pyOpenSSL library to work with X.509 certificates. Required to start the embedded Webserver on HTTPS (TLS).
  • CORS extension for Flask (https://flask-cors.readthedocs.org) useful to solve cross-domain Ajax request issues.

This Docker container provides a Microservice (REST API) for MAC Address Manufacturer resolution. It is part of the “Everything generates Data: Capturing WIFI Anonymous Traffic using Raspberry Pi and WSO2 BAM” blog series (Part I, Part II & Part III), but you can use it independently alongside other sets of Docker containers.

This Docker container will work in the scenario shown below:

The MAC Address Manufacturer Lookup Docker Container

Then, let’s do it.

I. Preparing the Python development environment in Mac OSX

Follow this guide to setup your Python Development Environment in your Mac OSX: https://github.com/chilcano/how-tos/blob/master/Preparing-Python-Dev-Env-Mac-OSX.md

II. Creating a MAC Address Manufacturer DB

There are several MAC Address lookup tools on the Internet; in fact, the OUI prefixes used to identify manufacturers are publicly available.

But in this case I am going to use the MAC Address list from Wireshark (https://www.wireshark.org/tools/oui-lookup.html).
Wireshark is a popular network protocol analyzer, a.k.a. network sniffer, and it uses this MAC Address list internally to identify the Manufacturer of a NIC.
I’m going to download it and build a REST API on top of it. Below are the steps.

1) Downloading the Wireshark MAC Addresses Manufacturer file and loading into a DB

Using the Python script below I will download the Wireshark MAC Address list into a file and get its hash. The idea is to parse the file and load it into a minimalist DB.
I will use an SQLite database with a single table where all the information will be loaded. The table structure will be:

mac             String      # The original MAC Address
manuf           String      # The original Manufacturer name
manuf_desc      String      # The Manufacturer description, if it exists.

Here is the Python script used to do that: mac_manuf_wireshark_file.py
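The script itself lives in the repository; as a rough, hypothetical sketch of its core logic (the exact column handling of the Wireshark manuf file may differ), it boils down to parsing tab-separated lines and inserting them into SQLite:

```python
import sqlite3

def parse_manuf_line(line):
    """Parse one line of the Wireshark 'manuf' file into (mac, manuf, manuf_desc).

    Lines are roughly tab-separated: <prefix> <short name> [# description].
    Returns None for comments and blank lines.
    """
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    parts = [p for p in line.split("\t") if p]
    if len(parts) < 2:
        return None
    mac, manuf = parts[0], parts[1]
    manuf_desc = parts[2] if len(parts) > 2 else ""
    return mac, manuf, manuf_desc

def load_manuf(lines, db_path=":memory:"):
    """Load parsed rows into the MacAddressManuf table of an SQLite DB."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS MacAddressManuf "
                 "(mac TEXT PRIMARY KEY, manuf TEXT, manuf_desc TEXT)")
    rows = (r for r in map(parse_manuf_line, lines) if r)
    conn.executemany("INSERT OR REPLACE INTO MacAddressManuf VALUES (?, ?, ?)", rows)
    conn.commit()
    return conn

# Tiny hand-made sample, just to show the shape of the data.
sample = ["# comment", "", "00:50:CA\tNetToNet\t# NET TO NET TECHNOLOGIES"]
db = load_manuf(sample)
row = db.execute("SELECT manuf FROM MacAddressManuf WHERE mac = '00:50:CA'").fetchone()
print(row[0])  # NetToNet
```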

III. Exposing the MAC Address Manufacturer DB as a REST API

After creating the database, the next step is to expose the data through a simple REST API. The idea is to make a GET call to the API with a MAC Address and get the Manufacturer as the response.

1) Defining the API

The best way to define a REST API and its contract is to use the Swagger language (http://swagger.io). The idea is to create documentation for the API, explaining which resources are available or exposed, writing request and response samples, etc.
In this scenario I’m going to define the API in a simple way, and I’m going to use JSON for the requests and responses.
The API definition is below.

POST    /chilcano/api/manuf                 # Add a new Manufacturer
PUT     /chilcano/api/manuf                 # Update an existing Manufacturer
GET     /chilcano/api/manuf/{macAddress}    # Find Manufacturer by MAC Address

In this Proof-of-Concept I will implement only the GET resource of the API.

2) Implementing the REST API

I have created two Python scripts to implement the REST API.
The first one (mac_manuf_table_def.py) is just the Model of the MacAddressManuf table.

#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# file name: mac_manuf_table_def.py
#

from sqlalchemy import create_engine, ForeignKey
from sqlalchemy import Column, Date, Integer, String
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('sqlite:///manuf/mac_address_manuf.db', echo=True)
Base = declarative_base()

#
# Model for 'MacAddressManuf':
# used for API Rest to get access to data from DB
#
class MacAddressManuf(Base):
    """"""
    __tablename__ = "MacAddressManuf"

    mac = Column(String, primary_key=True)
    manuf = Column(String)
    manuf_desc = Column(String)

    def __init__(self, manuf, manuf_desc):
        """"""
        self.manuf = manuf
        self.manuf_desc = manuf_desc

And the second Python script (mac_manuf_api_rest.py) implements the REST API itself.

#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# file name: mac_manuf_api_rest.py
#

import os, re
from flask import Flask, jsonify
from flask_cors import CORS  # the old "flask.ext.cors" import path is deprecated
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from mac_manuf_table_def import MacAddressManuf

ROOT_DIR = "manuf"
FINAL_MANUF_DB_FILENAME = "mac_address_manuf.db"
HTTPS_ENABLED = "true"

engine = create_engine("sqlite:///" + os.path.join(ROOT_DIR, FINAL_MANUF_DB_FILENAME))
Session = sessionmaker(bind=engine)

app = Flask(__name__)
cors = CORS(app, resources={r"/chilcano/api/*": {"origins": "*"}})

# 
# API Rest:
#   i.e. curl -i http://localhost:5000/chilcano/api/manuf/00:50:5a:e5:6e:cf
#   i.e. curl -ik https://localhost:5443/chilcano/api/manuf/00:50:5a:e5:6e:cf
#
@app.route("/chilcano/api/manuf/<string:macAddress>", methods=["GET"])
def get_manuf(macAddress):
    # expected MAC formats: a1-b2-c3-p4-q5-r6, a1:b2:c3:p4:q5:r6, A1:B2:C3:P4:Q5:R6, A1-B2-C3-P4-Q5-R6
    if not re.match(r'^([0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}$', macAddress.strip()):
        return jsonify(error="The MAC Address '" + macAddress + "' is malformed"), 400
    # Only the OUI prefix (first 3 octets) is stored in the DB, so build the
    # four possible spellings of the prefix and look for any of them.
    mac1 = macAddress[:2] + ":" + macAddress[3:5] + ":" + macAddress[6:8]
    mac2 = macAddress[:2] + "-" + macAddress[3:5] + "-" + macAddress[6:8]
    mac3 = mac1.upper()
    mac4 = mac2.upper()
    session = Session()
    result = session.query(MacAddressManuf).filter(MacAddressManuf.mac.in_([mac1, mac2, mac3, mac4])).first()
    if result is None:
        return jsonify(error="The MAC Address '" + macAddress + "' does not exist"), 404
    return jsonify(mac=result.mac, manuf=result.manuf, manuf_desc=result.manuf_desc)

if __name__ == "__main__":
    if HTTPS_ENABLED == "true":
        # 'adhoc' means auto-generate the certificate and keypair
        app.run(host="0.0.0.0", port=5443, ssl_context="adhoc", threaded=True, debug=True)
    else:
        app.run(host="0.0.0.0", port=5000, threaded=True, debug=True)

This second Python script performs the following tasks:

  • Calls the Model (mac_manuf_table_def.py).
  • Connects to SQLite Database and creates a Session.
  • Runs a query by using macAddress as parameter.
  • And creates a JSON response with the query’s result.

3) Running and Testing the API Rest

We can use the Flask built-in HTTP server just for testing and debugging. To run the above Python web application (REST API), just execute the Python script. Note that I actually have three versions (py-1.0, py-1.1 and py-latest).

Chilcano@Pisc0 : ~/1github-repo/docker-mac-address-manuf-lookup/python/1.0
$ python mac_manuf_api_rest.py
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger pin code: 258-876-642

Chilcano@Pisc0 : ~/1github-repo/docker-mac-address-manuf-lookup/python/1.1
$ python mac_manuf_api_rest.py
 * Running on https://0.0.0.0:5443/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger pin code: 258-876-642

Chilcano@Pisc0 : ~/1github-repo/docker-mac-address-manuf-lookup/python/latest
$ python mac_manuf_api_rest.py
 * Running on https://0.0.0.0:5443/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger pin code: 258-876-642

Now, from another terminal, call the REST API using curl. I’m going to use only the py-latest version:

$ curl -ik https://127.0.0.1:5443/chilcano/api/manuf/00-50:Ca-Fe-Ca-Fe
HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 93
Server: Werkzeug/0.11.4 Python/2.7.11
Date: Thu, 03 Mar 2016 17:37:45 GMT

{
  "mac": "00:50:CA",
  "manuf": "NetToNet",
  "manuf_desc": "# NET TO NET TECHNOLOGIES"
}

$ curl -ik https://127.0.0.1:5443/chilcano/api/manuf/11-50:Ca-Fe-Ca-Fe
HTTP/1.0 404 NOT FOUND
Content-Type: application/json
Content-Length: 67
Server: Werkzeug/0.11.4 Python/2.7.11
Date: Thu, 03 Mar 2016 17:38:49 GMT

{
  "error": "The MAC Address '11-50:Ca-Fe-Ca-Fe' does not exist"
}

$ curl -ik https://127.0.0.1:5443/chilcano/api/manuf/00-50:Ca-Fe-Ca-Fe---
HTTP/1.0 400 BAD REQUEST
Content-Type: application/json
Content-Length: 68
Server: Werkzeug/0.11.4 Python/2.7.11
Date: Thu, 03 Mar 2016 17:39:23 GMT

{
  "error": "The MAC Address '00-50:Ca-Fe-Ca-Fe---' is malformed"
}

But if you want to run it in production, the Flask documentation (http://flask.pocoo.org/docs/0.10/deploying/wsgi-standalone) recommends these HTTP servers (standalone WSGI containers):

  • Gunicorn
  • Tornado
  • Gevent
  • Twisted Web
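With Gunicorn, for example, the server settings can live in a small Python config file. The file name and values below are purely illustrative, not taken from the repository:

```python
# gunicorn.conf.py -- hypothetical production settings for the Flask app
bind = "0.0.0.0:5000"   # listen address; put TLS termination in front (e.g. a reverse proxy)
workers = 4             # a common rule of thumb is 2 * CPU cores + 1
```

It would then be started with something like `gunicorn -c gunicorn.conf.py mac_manuf_api_rest:app`, instead of the built-in debug server.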

IV. Putting everything in a Docker Container

1) The Dockerfile

The latest version of the MAC Address Manufacturer Lookup Docker container is py-latest (aka Docker MAC Manuf) and has the following Dockerfile:

# Dockerfile to MAC Address Manufacturer Lookup container.

FROM python:2.7

MAINTAINER Roger CARHUATOCTO <chilcano at intix dot info>

RUN pip install --upgrade pip
RUN pip install unicodecsv
RUN pip install Flask
RUN pip install sqlalchemy
RUN pip install pyOpenSSL
RUN pip install -U flask-cors

# Expose 5000/5443 to run an HTTP/HTTPS server
EXPOSE 5000 5443

COPY mac_manuf_wireshark_file.py /
COPY mac_manuf_table_def.py /
COPY mac_manuf_api_rest.py /

RUN python mac_manuf_wireshark_file.py
CMD python mac_manuf_api_rest.py

2) Using the Docker Container

Clone the Github repository and build it.

$ git clone https://github.com/chilcano/docker-mac-address-manuf-lookup.git
$ cd docker-mac-address-manuf-lookup
$ docker build --rm -t chilcano/mac-manuf:py-latest python/latest/.

Or pull it from Docker Hub.

$ docker login
$ docker pull chilcano/mac-manuf-lookup:py-latest
$ docker images
REPOSITORY                  TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
chilcano/mac-manuf-lookup   py-latest           19d33a4f3ec1        16 minutes ago      714.8 MB

Run and check the container.

$ docker run -dt --name=mac-manuf-py-latest -p 5443:5443/tcp chilcano/mac-manuf-lookup:py-latest

$ docker ps
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS              PORTS                              NAMES
4b0bb0b5b518        chilcano/mac-manuf-lookup:py-latest   "/bin/sh -c 'python m"   2 minutes ago       Up 2 minutes        5000/tcp, 0.0.0.0:5443->5443/tcp   mac-manuf-py-latest

Getting shell access to the container to check that the SQLite DB exists.

$ docker exec -ti mac-manuf-py-latest bash

Getting the Docker Machine IP Address.

$ docker-machine ls
NAME           ACTIVE   DRIVER       STATE     URL                         SWARM   ERRORS
default        *        virtualbox   Running   tcp://192.168.99.100:2376
machine-dev    -        virtualbox   Stopped
machine-test   -        virtualbox   Stopped

Testing/Calling the Microservice (API Rest).

$ curl -i http://192.168.99.100:5000/chilcano/api/manuf/00-50:Ca-ca-fe-ca
HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 93
Server: Werkzeug/0.11.4 Python/2.7.11
Date: Sat, 20 Feb 2016 09:01:38 GMT

{
  "mac": "00:50:CA",
  "manuf": "NetToNet",
  "manuf_desc": "# NET TO NET TECHNOLOGIES"
}

If the embedded server was started on HTTPS, you could test it as shown below.

$ curl -ik https://192.168.99.100:5443/chilcano/api/manuf/00-50:Ca-ca-fe-ca
HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 93
Server: Werkzeug/0.11.4 Python/2.7.11
Date: Mon, 29 Feb 2016 15:58:21 GMT

{
  "mac": "00:50:CA",
  "manuf": "NetToNet",
  "manuf_desc": "# NET TO NET TECHNOLOGIES"
}

 

V. And now what? How to use the MAC Manuf Docker container with the WSO2 BAM Docker container

Visualizing Captured WIFI Traffic in Realtime from WSO2 BAM Dashboard
Visualizing Captured WIFI Traffic in Realtime

As you can see in the above image, when capturing WIFI traffic the information is shown in the WSO2 BAM Dashboard, but not the MAC Address Manufacturer.
In this scenario, our Docker MAC Manuf container is useful because it provides the Manufacturer information via a RESTful Microservice. The idea is to configure the WSO2 BAM Dashboard (the prepared Kismet Toolbox) to point to the Docker MAC Manuf RESTful Microservice. In other words, WSO2 BAM will call the Docker MAC Manuf Microservice to get the Manufacturer information.

In the next blog post I will explain how to connect the MAC Address Manufacturer Docker container to the WSO2 BAM Docker container by using Docker Compose to do a minimal orchestration.

VI. Conclusions

With Python and a few modules (such as Flask, SQLAlchemy, CORS, pyOpenSSL, …) you can quickly create any kind of application (business applications, web applications, mobile back ends, microservices, …). Developing this (micro)service and putting it into a Docker container was a smooth experience. It was possible to reuse older scripts to automate some tasks while at the same time implementing a modern layered web application as a microservice, all in a few lines of code.

See you soon.


Posted in Big Data, Microservices, Security, SOA

Wardriving with WIFI Pineapple Nano in Mobile World Congress 2016 at Barcelona

WIFI Pineapple Nano is a nice tiny device for wireless security auditing. It runs embedded OpenWrt as its OS, with two pre-configured wireless NICs and a lot of pre-installed security tools, ready to perform a wireless security audit. For further details, check the Hak5 page, and I encourage you to buy one (https://www.wifipineapple.com)!

Objectives

The idea is to do some quick wardriving around the Mobile World Congress in Barcelona to check whether attendees are aware of the information their mobile devices leak.


The arsenal at Mobile World Congress 2016 Barcelona – Wardriving with WIFI Pineapple Nano, Android & Kismet

At the end, after wardriving, you will get the following files:

  • xyz.alert
  • xyz.gpsxml
  • xyz.netxml
  • xyz.pcapdump

With the above files you can identify the Manufacturer (or model) of each device, its approximate geo-position and the route it followed, and other information related to signal quality.
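For instance, the .netxml file can be mined with a few lines of Python. This is a sketch assuming the usual Kismet newcore layout, where each <wireless-network> element carries a <BSSID> child; tag names may vary between Kismet versions:

```python
import xml.etree.ElementTree as ET

def list_bssids(netxml_text):
    """Extract the BSSID of every wireless network in a Kismet .netxml document."""
    root = ET.fromstring(netxml_text)
    return [n.findtext("BSSID") for n in root.iter("wireless-network")]

# Tiny hand-made sample, just to show the shape of the data.
sample = """<detection-run>
  <wireless-network number="1" type="infrastructure">
    <BSSID>00:50:CA:FE:CA:FE</BSSID>
  </wireless-network>
</detection-run>"""

print(list_bssids(sample))  # ['00:50:CA:FE:CA:FE']
```

Each extracted BSSID can then be fed to a MAC Address Manufacturer lookup like the one from the previous post.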

The arsenal

The software and devices I’ve used are the following:

Configuration

Obviously, I previously initialized my Pineapple Nano and updated the firmware. But if you haven’t done so yet, I recommend this guide: https://www.wifipineapple.com/pages/setup

The next steps assume you have already initialized your Pineapple Nano.

1) Connect to WIFI Pineapple and prepare everything

Using a USB cable, connect the Pineapple Nano to your PC; in my case I’m using a VirtualBox VM with Kali Linux (the best Linux distro for security auditing).
Then, from a Kali Linux terminal, get the wp6.sh script and connect to the Pineapple Nano.
The wp6.sh script can be downloaded from here: https://github.com/hak5darren/wp6

After that, open your browser in your Kali Linux and connect to http://172.16.42.1:1741

From the Pineapple Web Admin console, insert the SD card and format it.
After that, verify that the SD card was formatted successfully.

root@Pineapple:~# df -h
Filesystem Size Used Available Use% Mounted on
rootfs 2.3M 900.0K 1.4M 39% /
/dev/root 12.5M 12.5M 0 100% /rom
tmpfs 29.9M 3.6M 26.3M 12% /tmp
/dev/mtdblock3 2.3M 900.0K 1.4M 39% /overlay
overlayfs:/overlay 2.3M 900.0K 1.4M 39% /
tmpfs 512.0K 0 512.0K 0% /dev
/dev/sdcard/sd1 6.2G 14.6M 5.9G 0% /sd

Now, open another Kali Linux terminal, get SSH access to the Pineapple Nano and update the packages.

root@Pineapple:~# opkg update

2) Install Kismet in the Pineapple Nano

The Pineapple Nano does not have enough internal space to install things, so I recommend installing new applications or packages on the SD card.

root@Pineapple:~# opkg list | grep kismet
kismet-client - 2013-03-R1b-1 - An 802.11 layer2 wireless network detector, sniffer, and intrusion detection system. This package contains the kismet text interface client.
kismet-drone - 2013-03-R1b-1 - An 802.11 layer2 wireless network detector, sniffer, and intrusion detection system. This package contains the kismet remote sniffing.and monitoring drone.
kismet-server - 2013-03-R1b-1 - An 802.11 layer2 wireless network detector, sniffer, and intrusion detection system. This package contains the kismet server.

root@Pineapple:~# opkg --dest sd install kismet-server

root@Pineapple:~# opkg --dest sd install kismet-client

3) Sharing the Android's GPS with the WIFI Pineapple Nano

To do this, we need to connect our Android mobile to the Pineapple USB 2.0 host port and share the GPS signal from Android by using the ShareGPS app. But first, let's install ADB (Android Debug Bridge) on the Pineapple.

root@Pineapple:~# opkg --dest sd install adb

Installing adb (android.5.0.2_r1-1) to sd...
Downloading https://www.wifipineapple.com/nano/packages/adb_android.5.0.2_r1-1_ar71xx.ipk.
Configuring adb.

Now, from your Pineapple SSH terminal, start the ADB service and check whether your Android mobile is recognized by the Pineapple Nano. Before that, enable USB Debugging and USB Tethering on your mobile.

My Android Mobile is recognized with ID 2a47:0004.

root@Pineapple:~# lsusb
Bus 001 Device 009: ID 2a47:0004
Bus 001 Device 004: ID 05e3:0745 Genesys Logic, Inc.
Bus 001 Device 003: ID 0cf3:9271 Atheros Communications, Inc. AR9271 802.11n
Bus 001 Device 002: ID 058f:6254 Alcor Micro Corp. USB Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Start the ADB service on the Pineapple Nano.

root@Pineapple:~# adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached
AB010682 unauthorized

Your Android mobile will prompt you to accept the connection from the Pineapple. Accept it and run ADB again. You should see the following:

root@Pineapple:~# adb devices
List of devices attached
AB010682 device

Now, on your Android, install the ShareGPS app from the Google Play Store. After that, start the app and share the GPS signal.


Finally, you are ready to use the shared Android GPS signal. Let's check it.

root@Pineapple:~# adb forward tcp:50000 tcp:50000

root@Pineapple:~# telnet localhost 50000

$GPGGA,181216.000,4138.5572,N,00221.8326,E,1,5,1.47,202.2,M,51.1,M,,*54
$GPGSA,A,3,19,25,24,15,12,,,,,,,,1.70,1.47,0.85*05
$GPGSV,3,1,10,12,65,031,34.4,25,61,286,31.7,24,54,116,14.2,14,43,295,*66
$GPGSV,3,2,10,29,26,197,25.8,02,21,102,26.9,19,17,041,13.4,06,16,061,17.8*7C
$GPGSV,3,3,10,15,09,174,20.0,31,03,303,*6A
$GPRMC,181216.000,A,4138.5572,N,00221.8326,E,0.000,104.39,250216,,,A*5B
$GPVTG,104.39,T,,M,0.000,N,0.000,K,A*32
$GPACCURACY,4.6*0A
$GPGGA,181217.000,4138.5572,N,00221.8326,E,1,5,1.47,202.2,M,51.1,M,,*55
$GPGSA,A,3,19,25,24,15,12,,,,,,,,1.70,1.47,0.85*05
$GPGSV,3,1,10,12,65,031,34.8,25,61,286,31.6,24,54,116,14.2,14,43,295,*6B
$GPGSV,3,2,10,29,26,197,23.7,02,21,102,26.9,19,16,041,13.4,06,16,061,17.8*74
$GPGSV,3,3,10,15,09,174,19.6,31,03,303,*66
$GPRMC,181217.000,A,4138.5572,N,00221.8326,E,0.000,104.39,250216,,,A*5A
$GPVTG,104.39,T,,M,0.000,N,0.000,K,A*32
$GPACCURACY,4.5*09
$GPGGA,181218.000,4138.5572,N,00221.8326,E,1,5,1.47,202.2,M,51.1,M,,*5A
$GPGSA,A,3,19,25,24,15,12,,,,,,,,1.70,1.47,0.85*05
$GPGSV,3,1,10,12,65,031,34.8,25,61,286,31.6,24,54,116,14.2,14,43,295,*6B
$GPGSV,3,2,10,29,26,197,21.6,02,21,102,26.4,19,16,041,13.4,06,16,061,13.8*7E
$GPGSV,3,3,10,15,09,174,19.3,31,03,303,*63
$GPRMC,181218.000,A,4138.5572,N,00221.8326,E,0.000,104.39,250216,,,A*55
$GPVTG,104.39,T,,M,0.000,N,0.000,K,A*32
^C

After running telnet localhost 50000 you will see “Connected” (in green) instead of “Listening” in the ShareGPS app. That verifies the Pineapple is connected to the Android’s GPS; you will also see NMEA sentences in your terminal demonstrating that everything works.
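The $GPGGA sentences above are plain NMEA, and decoding one into decimal degrees takes only a few lines of Python. This is a sketch; real code should also validate the trailing checksum:

```python
def nmea_to_decimal(value, hemisphere):
    """Convert an NMEA ddmm.mmmm (or dddmm.mmmm) coordinate to decimal degrees."""
    degrees = int(float(value) / 100)          # integer degree part
    minutes = float(value) - degrees * 100     # remaining minutes
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal

def parse_gpgga(sentence):
    """Extract (latitude, longitude) from a $GPGGA sentence."""
    fields = sentence.split(",")
    # fields[2]/[3] are latitude and N/S; fields[4]/[5] are longitude and E/W
    return (nmea_to_decimal(fields[2], fields[3]),
            nmea_to_decimal(fields[4], fields[5]))

lat, lon = parse_gpgga("$GPGGA,181216.000,4138.5572,N,00221.8326,E,1,5,1.47,202.2,M,51.1,M,,*54")
print(round(lat, 5), round(lon, 5))  # 41.64262 2.36388
```

The sample sentence is the first one from the telnet transcript above, which places the capture in Barcelona.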

4) Configuration of Kismet: wlan1 in monitor mode

The WIFI Pineapple Nano has 2 Wireless NICs: wlan0 and wlan1.

root@Pineapple:~# iwconfig
lo no wireless extensions.

usb0 no wireless extensions.

wlan1 IEEE 802.11bgn ESSID:off/any
Mode:Managed Access Point: Not-Associated Tx-Power=20 dBm
RTS thr:off Fragment thr:off
Encryption key:off
Power Management:off

wlan0-1 IEEE 802.11bgn Mode:Master Tx-Power=17 dBm
RTS thr:off Fragment thr:off
Power Management:off

wlan0 IEEE 802.11bgn Mode:Master Tx-Power=17 dBm
RTS thr:off Fragment thr:off
Power Management:off

eth0 no wireless extensions.

br-lan no wireless extensions.

I’m going to use wlan1 to capture the 802.11 traffic; to do so, wlan1 has to be configured in monitor mode, as shown below:

root@Pineapple:~# ifconfig wlan1 down
root@Pineapple:~# iwconfig wlan1 mode monitor
root@Pineapple:~# ifconfig wlan1 up

Check again the wireless interfaces. The wlan1 NIC should be in monitor mode.

root@Pineapple:~# iwconfig wlan1
wlan1 IEEE 802.11bgn Mode:Monitor Frequency:2.412 GHz Tx-Power=20 dBm
RTS thr:off Fragment thr:off
Power Management:off

5) Configuration of Kismet: MAC Address Manufacturer

Kismet does not ship with the MAC Address Manufacturer database by default, so I have to download it from the Wireshark web portal and copy it to /sd.

root@Pineapple:~# wget -O /sd/manuf http://anonsvn.wireshark.org/wireshark/trunk/manuf

root@Pineapple:~# ln -s /sd/manuf /etc/manuf
root@Pineapple:~# ln -s /sd/manuf /sd/etc/manuf

6) Configuration of Kismet: Installing and configuring GPSd

GPSd is a service daemon that monitors one or more GPSes or AIS receivers attached to a host computer through serial or USB ports, making all data on the location/course/velocity of the sensors available to be queried on TCP port 2947 of the host computer. (http://www.catb.org/gpsd)

GPSd is not available in the current OpenWRT repository (Chaos Calmer 15.05) used by the WIFI Pineapple Nano. No problem: we can install all the GPSd packages from an older version of OpenWRT.
The packages to be installed are:

  • libgps_3.7-1_ar71xx.ipk
  • libgpsd_3.7-1_ar71xx.ipk
  • gpsd_3.7-1_ar71xx.ipk
  • gpsd-clients_3.7-1_ar71xx.ipk

Download and install them.

root@Pineapple:~# cd /sd
root@Pineapple:/sd# wget https://downloads.openwrt.org/attitude_adjustment/12.09/ar71xx/generic/packages/libgps_3.7-1_ar71xx.ipk
root@Pineapple:/sd# wget https://downloads.openwrt.org/attitude_adjustment/12.09/ar71xx/generic/packages/libgpsd_3.7-1_ar71xx.ipk
root@Pineapple:/sd# wget https://downloads.openwrt.org/attitude_adjustment/12.09/ar71xx/generic/packages/gpsd_3.7-1_ar71xx.ipk
root@Pineapple:/sd# wget https://downloads.openwrt.org/attitude_adjustment/12.09/ar71xx/generic/packages/gpsd-clients_3.7-1_ar71xx.ipk

root@Pineapple:/sd# opkg --dest sd install libgps_3.7-1_ar71xx.ipk
root@Pineapple:/sd# opkg --dest sd install libgpsd_3.7-1_ar71xx.ipk
root@Pineapple:/sd# opkg --dest sd install gpsd_3.7-1_ar71xx.ipk
root@Pineapple:/sd# opkg --dest sd install gpsd-clients_3.7-1_ar71xx.ipk

Just make sure that the gpsd service does not start on boot.
You can edit /etc/default/gpsd, set everything to false, and/or run service gpsd stop.

root@Pineapple:~# nano /etc/default/gpsd
# Default settings for the gpsd init script and the hotplug wrapper.

# Start the gpsd daemon automatically at boot time
START_DAEMON="false"

# Use USB hotplugging to add new USB devices automatically to the daemon
USBAUTO="false"

# Devices gpsd should collect to at boot time.
# They need to be read/writeable, either by user gpsd or the group dialout.
DEVICES=""

# Other options you want to pass to gpsd
GPSD_OPTIONS=""

Now, start the GPSd service. In debug mode:

root@Pineapple:~# gpsd -F /var/run/gpsd.sock -N tcp://localhost:50000

Or run it as a daemon.

root@Pineapple:~# gpsd -F /var/run/gpsd.sock tcp://localhost:50000

Where:

  • -F /var/run/gpsd.sock is the control socket file used by GPSd.
  • tcp://localhost:50000 is the shared Android GPS signal listening on a TCP port.
  • -N means don’t daemonize, which is useful for debugging.

And to check if GPSd is working, just run this:

root@Pineapple:~# cgps

7) Starting Kismet for the first time

Now, we are ready to start Kismet.
Kismet is a client/server application: first I will start the Kismet server, and then the Kismet client.

root@Pineapple:~# kismet_server -p /sd/.kismet -c wlan1 --daemonize

Where:

  • -p /sd/.kismet is the folder where the Kismet server will place the log files.
  • -c wlan1 is the NIC in monitor mode to be used.
  • Kismet has the GPSd integration configured properly by default; nothing to do here.
  • Kismet will look for the MAC Address Manufacturer file at /etc/manuf.

In another terminal, run the following:

root@Pineapple:~# kismet_client

You should see the Kismet UI showing the identified wireless networks and connected clients, and reading the GPS coordinates in real time.

8) Running Kismet the next time

Now, if you want to avoid running all the above commands one by one, you can create a shell script with all the required commands.
Let's create a simple and dirty shell script.

root@Pineapple:~# nano /root/run_wardriving.sh

#!/bin/bash

echo "==> Setting wlan1 in monitor mode"
ifconfig wlan1 down
iwconfig wlan1 mode monitor
ifconfig wlan1 up

echo "==> Enabling ADB Forwarding to tcp:50000"
adb forward tcp:50000 tcp:50000

echo "==> Refreshing NTP server"
killall ntpd
ntpd > /dev/null 2>&1 &
sleep 3

echo "==> Starting GPSD with tcp://localhost:50000"
service gpsd stop
killall gpsd
gpsd -F /var/run/gpsd.sock tcp://localhost:50000
sleep 3

echo "==> Starting Kismet server in background"
killall kismet_server
kismet_server -p /sd/.kismet -c wlan1 --daemonize

And give it execution privileges.

root@Pineapple:~# chmod +x /root/run_wardriving.sh

The run_wardriving.sh script will be useful the next time you start Kismet, because you will have to do it from your Android mobile and not from your PC or Kali Linux.
You will need something like this shell script to start wardriving quickly from your Android mobile.
For that, you will also need the JuiceSSH app on your Android mobile to connect to the WIFI Pineapple Nano and execute the run_wardriving.sh script. ;)

Some results

Below are some screenshots taken from my Android mobile, plus Google Earth with the WIFI networks plotted.

1) Kismet in action

Kismet in action: 693 wireless networks identified. Oops! A rogue AP?

2) Creating Wireless Recon Maps with Google Earth and GISKismet

GISKismet is a wireless recon visualization tool to represent data gathered using Kismet in a flexible manner. GISKismet stores the information in a database so that the user can generate graphs using SQL. GISKismet currently uses SQLite for the database and GoogleEarth / KML files for graphing. (http://git.kali.org/gitweb/?p=packages/giskismet.git)

Conclusions

  • The WIFI Pineapple has tons of security tools installed, but no tools to perform wardriving. For that, Kismet and GPSd were installed and configured. In a following blog post I will probably explain how to create a Pineapple module for wardriving.
  • Turn off your mobile phone. New mobiles and smartphones (Android, iOS, Windows, …) have a wireless network interface and are constantly trying to connect to wireless networks; to do that, all mobiles send packets asking for Access Points. Kismet and other tools take advantage of this by capturing those packets. If you want to avoid that, just turn off your wireless network interface and avoid connecting to unknown wireless Access Points.
  • The tip of the iceberg. The packets that your mobile phone emits and Kismet captures contain only basic information and do not represent any risk to you by themselves. This phase is called security “data gathering” or “reconnaissance”. The problem comes later: there are other tools that allow a targeted attack to steal important information.
  • The business behind IoT and Big Data. Marketing companies and telecoms are already monetizing mobile-phone tracking information. They take advantage of radio/cellular signals to track you all the time. With this information a shopping company can identify your patterns of behaviour, which stores you visited, which mobile phone model you have, etc. Just search Google for “phone mobile wireless anonymous tracking” and you will become aware of the industry behind it and who is making money.
  • There isn’t security awareness at the Mobile World Congress. Large conferences where there are crowds of people (and thousands of mobile phones) are a breeding ground for scammers and thieves who take advantage of weaknesses or defects in the organization, the devices or the apps. So, my friend, be careful with that.
Posted in Big Data, IoT, Linux, Security

Everything generates data: Capturing WIFI anonymous traffic using Raspberry Pi and WSO2 BAM (Part III)

After configuring the Raspberry Pi in WIFI/802.11 monitor mode (first blog post) and configuring it to send the captured 802.11 traffic to WSO2 BAM through the Apache Thrift listener (second blog post), now I will explain how to create a simple Dashboard showing the WIFI traffic captured in real time.

Architecture IoT/BigData – Visualizing WIFI traffic in realtime from a WSO2 BAM Dashboard
Architecture IoT/BigData – Visualizing WIFI traffic in realtime from a WSO2 BAM Dashboard

Well, to make it easier, I created a Github repository (wso2bam-wifi-thrift-cassandra-poc) where I copied a number of scripts required for this third blog post.
I encourage you to download and follow the instructions below.

This repository (wso2bam-wifi-thrift-cassandra-poc) contains:

  • A toolbox to view incoming Kismet traffic (802.11) in realtime, valid for WSO2 BAM 2.5.0.
  • A set of definitions to create the Execution Plan (CEP Siddhi), the Input and Output Stream Definitions (Apache Thrift), and the Formatters.

Considerations

  • I’ve used WSO2 BAM 2.5.0 (standard configuration without changes and with offset 0)
  • I’ve used a Raspberry Pi as an agent to send the captured 802.11 traffic to WSO2 BAM by using Apache Thrift.
  • I’ve used a Python Thrift and Kismet script to send the captured traffic.

How to use

1) Send Kismet traffic to WSO2 BAM using Apache Thrift listener

2) Deploy the WSO2 BAM Kismet toolbox

  • Deploy the kismet_wifi_realtime_traffic.tbox in WSO2 BAM.
  • Check if WSO2 BAM toolbox was deployed successfully.

Kismet Real Time Toolbox for WSO2 BAM

3) Deploy the set of Stream and Execution Plan definitions

Copy the set of definitions that create the Execution Plan (CEP Siddhi), the Input and Output Stream Definitions (Apache Thrift), and the Formatters to WSO2 BAM manually.
All files and directories to be copied are under wso2bam-wifi-thrift-cassandra-poc/wso2bam_defns/ and have to be copied to /.
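If you prefer to script that manual copy, a minimal helper is sketched below. It simply mirrors the definitions tree onto a destination directory; the paths are placeholders you must adapt to your WSO2 BAM installation.

```python
import shutil
from pathlib import Path

def deploy_definitions(src_dir, bam_home):
    """Copy every file under src_dir into bam_home, preserving the
    relative directory structure (mirrors a manual recursive copy)."""
    src = Path(src_dir)
    dst = Path(bam_home)
    copied = []
    for f in src.rglob("*"):
        if f.is_file():
            target = dst / f.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)
            copied.append(target)
    return copied
```

For example, `deploy_definitions('wso2bam-wifi-thrift-cassandra-poc/wso2bam_defns', '/opt/wso2bam')` would mirror the definitions into a hypothetical WSO2 BAM home directory.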

Structure of file definitions and directories
Input/Output Stream, Execution Plan and Formatters for WSO2 BAM

Two Output Streams deployed into WSO2 BAM
Input/Output Stream, Execution Plan and Formatters for WSO2 BAM

4) Visualizing Kismet (802.11) traffic in WSO2 BAM Dashboard

If everything is OK, you can see the incoming traffic in realtime; to do that, use the previously installed/deployed WSO2 BAM toolbox.
Then log in to the WSO2 BAM Dashboard and select the Kismet WIFI Realtime Monitoring graphic. You should see the following.

Visualizing Captured Kismet Traffic in Realtime from WSO2 BAM Dashboard
Visualizing Captured Kismet Traffic in Realtime

That’s all.
In the next blogpost I will explain how to implement a Microservice to get the Manufacturer for each MAC address captured.

Regards.

Tagged with: , , , , , , ,
Posted in BAM, Big Data, IoT, Security

Everything generates data: Capturing WIFI anonymous traffic using Raspberry Pi and WSO2 BAM (Part II)

After configuring the Raspberry Pi to capture WIFI/802.11 traffic (first blog post), we have to store this traffic in a database (NoSQL or RDBMS), because the idea is to process the stored data in real time and/or in batch.

Capturing this type of traffic (WIFI/802.11) is difficult for the following reasons:

  • Kismet captures 802.11 layer-2 wireless network traffic (TCP, UDP, ARP and DHCP packets), which has to be decoded.
  • The traffic has to be captured and stored in real time, so we need a protocol optimized for fast, low-latency capture.
  • The library implementing that protocol must have a low memory footprint, because Kismet will run on a Raspberry Pi.
  • The protocol must be developer-friendly on both sides (the Raspberry Pi side and the WSO2 BAM / Apache Cassandra side).

Well, in this second blog post I will explain how to solve the difficulties above.

Architecture IoT/BigData – Storing WIFI traffic in Apache Cassandra (WSO2 BAM and Apache Thrift)
Architecture IoT/BigData – Storing WIFI traffic in Apache Cassandra (WSO2 BAM and Apache Thrift)

I.- Looking for the Streaming and/or Communication Protocol

There are many libraries and streaming protocols out there that address the above issues, but if you are looking for a protocol/library that is open source, lightweight, with a low memory footprint and developer friendly, there are only a few. They are:

1) Elastic Logstash (https://www.elastic.co/products/logstash)

Logstash is a set of tools to collect heterogeneous types of data and is meant to be used with Elasticsearch. It requires Java, and for this reason it is too heavy to run on a Raspberry Pi; the best choice is to use only Logstash Forwarder.
Logstash Forwarder (a.k.a. lumberjack) is the protocol used to ship, parse and collect streams or log events when using the ELK stack.
Logstash Forwarder can be downloaded and compiled using the Go compiler on your Raspberry Pi; for further information you can use this link.

2) Elastic Filebeat (https://github.com/elastic/beats/tree/master/filebeat)

Filebeat is a lightweight, open source shipper for log file data. As the next-generation Logstash Forwarder, Filebeat tails logs and quickly sends this information to Logstash for further parsing and enrichment, or to Elasticsearch for centralized storage and analysis.

Installing and configuring Filebeat is easy, and you can use it with Logstash to perform additional processing on the collected data; Filebeat replaces Logstash Forwarder.

3) Apache Flume (https://flume.apache.org)

Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant, with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple, extensible data model that allows for online analytic applications.

Apache Flume uses Java and requires significant memory and CPU resources.

4) Mozilla Heka (https://github.com/mozilla-services/heka)

Heka is an open source stream processing software system developed by Mozilla. Heka is a “Swiss Army Knife” type tool for data
processing, useful for a wide variety of different tasks, such as:

  • Loading and parsing log files from a file system.
  • Accepting statsd type metrics data for aggregation and forwarding to upstream time series data stores such as graphite or InfluxDB.
  • Launching external processes to gather operational data from the local system.
  • Performing real time analysis, graphing, and anomaly detection on any data flowing through the Heka pipeline.
  • Shipping data from one location to another via the use of an external transport (such as AMQP) or directly (via TCP).
  • Delivering processed data to one or more persistent data stores.

Mozilla Heka is very similar to Logstash Forwarder; both are written in Go, but Heka can process log events in real time and is also able to graph this data directly, which are great advantages. These graphs are updated in real time, as the data flows through Heka, without the latency of data-store-driven graphs.

5) Fluentd (https://github.com/fluent/fluentd)

Fluentd is similar to Logstash in that there are inputs and outputs for a large variety of sources and destinations. Some of its design tenets are easy installation and a small footprint. It doesn't provide any storage tier itself, but allows you to easily configure where your logs should be collected.

6) Apache Thrift (https://thrift.apache.org)

Thrift is an interface definition language and binary communication protocol that is used to define and create services for numerous languages. It is used as a remote procedure call (RPC) framework and was developed at Facebook for "scalable cross-language services development". It combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C#, C++ (on POSIX-compliant systems), Cappuccino, Cocoa, Delphi, Erlang, Go, Haskell, Java, Node.js, OCaml, Perl, PHP, Python, Ruby and Smalltalk. Although developed at Facebook, it is now an open source project of the Apache Software Foundation.

Facebook Scribe is a project that uses the Thrift protocol; it is a server for aggregating log data streamed in real time from a large number of servers.

In this Proof-of-Concept I will use Apache Thrift for these reasons:

  • Apache Thrift is embedded in WSO2 BAM 2.5.0.
  • WSO2 BAM 2.5.0 is a very important component because it also embeds Apache Cassandra to persist the data streams/log events. You don't need to do anything: all captured log events are stored automatically in Apache Cassandra.
  • There are lightweight Python libraries implementing the Apache Thrift protocol; this Thrift Python Client is suitable for running on a Raspberry Pi and publishing events into WSO2 BAM (Apache Cassandra).
  • And finally, there is a Python client library specific to Kismet. This Python Kismet Client reads the traffic captured by Kismet.

II.- Installing, configuring and running Python Kismet Client and Python Thrift library

I cloned the two repositories below (Thrift Python Client and Python Kismet Client).

$ mkdir kismet_to_wso2bam
$ cd kismet_to_wso2bam

// Install the svn client; it's useful for downloading a single folder from a GitHub repo
$ sudo apt-get install subversion

Replace tree/master with trunk in the URL and check out the folder.

// List files and subfolders
$ svn ls https://github.com/chilcano/iot-server-appliances/trunk/Arduino%20Robot/PC_Clients/PythonRobotController/DirectPublishClient/BAMPythonPublisher
.gitignore
BAMPublisher.py
Publisher.py
PythonClient.py
README.md
gen-py/
thrift/

// Download files and subfolder
$ svn checkout https://github.com/chilcano/iot-server-appliances/trunk/Arduino%20Robot/PC_Clients/PythonRobotController/DirectPublishClient/BAMPythonPublisher
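The tree/master → trunk rewrite can be expressed as a one-line helper (my own illustration, not part of any of the repositories), which is handy if you script several of these checkouts:

```python
def github_tree_to_svn(url):
    """Rewrite a GitHub 'tree/master' web URL into the form accepted
    by an svn client (GitHub exposes repositories over svn)."""
    return url.replace("/tree/master", "/trunk")
```

For instance, it turns a `.../tree/master/some/folder` browsing URL into the `.../trunk/some/folder` URL used in the svn commands above.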

Now, download the kismetclient repository.

$ git clone https://github.com/chilcano/kismetclient
Cloning into 'kismetclient'...
remote: Counting objects: 100, done.
remote: Total 100 (delta 0), reused 0 (delta 0), pack-reused 100
Receiving objects: 100% (100/100), 15.84 KiB, done.
Resolving deltas: 100% (57/57), done.

2.1) Creating a custom Python script to send the Kismet captured traffic to WSO2 BAM 2.5.0

Under the kismet_to_wso2bam folder, create this Python script (sendTrafficFromKismetToWSO2BAM.py).

#!/usr/bin/env python
"""
Python script to send 802.11 traffic captured by Kismet to WSO2 BAM 2.5.0.

Author:  Chilcano
Date:    2015/12/31
Version: 1.0

Requires:
- Python Thrift Client (https://github.com/chilcano/iot-server-appliances/tree/master/Arduino%20Robot/PC_Clients/PythonRobotController/DirectPublishClient/BAMPythonPublisher)
- Python Kismet Client (https://github.com/chilcano/kismetclient)
- Place the 'sendTrafficFromKismetToWSO2BAM.py' in same level of 'BAMPythonPublisher' and 'kismetclient' folders.

Run:
$ python sendTrafficFromKismetToWSO2BAM.py
"""

import sys
sys.path.append('kismetclient')
sys.path.append('BAMPythonPublisher')
sys.path.append('BAMPythonPublisher/gen-py')
sys.path.append('BAMPythonPublisher/thrift')

from kismetclient import Client as KismetClient
from kismetclient import handlers
from Publisher import *
from pprint import pprint

import logging
import time

log = logging.getLogger('kismetclient')
log.addHandler(logging.StreamHandler())
log.setLevel(logging.DEBUG)

# Kismet server
address = ('127.0.0.1', 2501)
k = KismetClient(address)
##k.register_handler('TRACKINFO', handlers.print_fields)

# BAM/CEP/Thrift Server
cep_ip = '192.168.1.43' # IP address of the server
cep_port = 7713         # Thrift listen port of the server
cep_username = 'admin'  # username
cep_password = 'admin' # password

# Initialize publisher with ip and port of server
publisher = Publisher()
publisher.init(cep_ip, cep_port)

# Connect to server with username and password
publisher.connect(cep_username, cep_password)

# Define the Input Stream
streamDefinition = "{ 'name':'rpi_kismet_stream_in', 'version':'1.0.0', 'nickName': 'rpi_k_in', 'description': '802.11 passive packet capture', 'tags': ['RPi 2 Model B', 'Kismet', 'Thrift'], 'metaData':[ {'name':'ipAdd','type':'STRING'},{'name':'deviceType','type':'STRING'},{'name':'owner','type':'STRING'}, {'name':'bssid','type':'STRING'}], 'payloadData':[ {'name':'macAddress','type':'STRING'}, {'name':'type','type':'STRING'}, {'name':'llcpackets','type':'STRING'}, {'name':'datapackets','type':'STRING'}, {'name':'cryptpackets','type':'STRING'},{'name':'signal_dbm','type':'STRING'}, {'name':'bestlat','type':'STRING'}, {'name':'bestlon','type':'STRING'}, {'name':'bestalt','type':'STRING'}, {'name':'channel','type':'STRING'}, {'name':'datasize','type':'STRING'}, {'name':'newpackets','type':'STRING'}] }";
publisher.defineStream(streamDefinition)

def handle_client(client, bssid, mac, lasttime, type, llcpackets, datapackets, cryptpackets, signal_dbm, bestlat, bestlon, bestalt, channel, datasize, newpackets):
  publisher.publish(['rpi_chicha', 'RPi 2 Model B', 'chilcano.io', int(lasttime)], [bssid, mac, type, llcpackets, datapackets, cryptpackets, signal_dbm, bestlat, bestlon, bestalt, channel, datasize, newpackets])

k.register_handler('CLIENT', handle_client)

try:
    while True:
        k.listen()
except KeyboardInterrupt:
    pprint(k.protocols)
    publisher.disconnect()
    log.info('Exiting...')

At the end, the structure of files and directories will look as shown below:

$ ll
total 20
drwxr-xr-x  4 pi pi 4096 Feb  3 14:39 ./
drwxr-xr-x 11 pi pi 4096 Feb  3 12:10 ../
drwxr-xr-x  5 pi pi 4096 Feb  3 12:14 BAMPythonPublisher/
drwxr-xr-x  4 pi pi 4096 Feb  3 12:11 kismetclient/
-rw-r--r--  1 pi pi 2552 Feb  3 12:14 sendTrafficFromKismetToWSO2BAM.py

$ tree -L 3
.
├── BAMPythonPublisher
│   ├── BAMPublisher.py
│   ├── gen-py
│   │   ├── Data
│   │   ├── Exception
│   │   ├── __init__.py
│   │   ├── ThriftEventTransmissionService
│   │   └── ThriftSecureEventTransmissionService
│   ├── Publisher.py
│   ├── Publisher.pyc
│   ├── PythonClient.py
│   ├── README.md
│   └── thrift
│       ├── __init__.py
│       ├── __init__.pyc
│       ├── protocol
│       ├── server
│       ├── Thrift.py
│       ├── Thrift.pyc
│       ├── transport
│       ├── TSCons.py
│       ├── TSerialization.py
│       └── TTornado.py
├── kismetclient
│   ├── kismetclient
│   │   ├── client.py
│   │   ├── client.pyc
│   │   ├── exceptions.py
│   │   ├── exceptions.pyc
│   │   ├── handlers.py
│   │   ├── handlers.pyc
│   │   ├── __init__.py
│   │   ├── __init__.pyc
│   │   ├── utils.py
│   │   └── utils.pyc
│   ├── LICENSE
│   ├── README.md
│   ├── runclient.py
│   └── setup.py
└── sendTrafficFromKismetToWSO2BAM.py

12 directories, 28 files

Notes:

  • You have to update sendTrafficFromKismetToWSO2BAM.py with the IP address, username, password and port where WSO2 BAM is running.
  • The Python script above first defines the structure of the data to be sent to WSO2 BAM (Apache Thrift) and then reads the captured traffic. You can modify that data structure by adding or removing 802.11 fields.
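Since the streamDefinition value in the script is a Python-literal dictionary held in a string, one way to add or remove 802.11 fields is to parse it, edit payloadData, and serialize it back. A sketch, using an abridged copy of the definition; the 'retries' field is a hypothetical example, not one of the fields used above:

```python
import ast

# Abridged copy of the streamDefinition used in sendTrafficFromKismetToWSO2BAM.py.
stream_definition = "{ 'name':'rpi_kismet_stream_in', 'version':'1.0.0', 'payloadData':[ {'name':'macAddress','type':'STRING'}, {'name':'type','type':'STRING'}] }"

# Parse the single-quoted literal into a real dict.
defn = ast.literal_eval(stream_definition)

# Add a hypothetical 802.11 field to the payload.
defn['payloadData'].append({'name': 'retries', 'type': 'STRING'})

# Serialize back to the single-quoted form the script expects.
new_definition = str(defn)
```

Remember that if you change payloadData you must also change the list of values passed to publisher.publish() so the two stay in the same order.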

2.2) Install and configure WSO2 BAM to receive the Kismet traffic

Before you run sendTrafficFromKismetToWSO2BAM.py, WSO2 BAM 2.5.0 should be running and the Thrift listener port should be open.
The standard Thrift listener port is 7711; in my case I have an offset of +2, so the port is 7713.
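Before launching the script you can verify from the Raspberry Pi that the Thrift listener is reachable. A minimal TCP check in Python, equivalent to the nc test shown later (the host and port values reflect my setup and must be adapted):

```python
import socket

def is_port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Standard Thrift listener port plus the Carbon offset (+2 in my case).
thrift_port = 7711 + 2
```

On the Raspberry Pi, `is_port_open('192.168.1.43', thrift_port)` should return True once WSO2 BAM is up and the port is forwarded.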

I recommend using the Docker container I created to get a fully functional WSO2 BAM 2.5.0, ready to use in this PoC with Kismet.
To do that, open a new terminal on your host PC and execute the following commands:

Initialize the Docker environment.

$ docker-machine ls

$ docker-machine start default

$ eval "$(docker-machine env default)"

$ docker login

Download the WSO2 BAM Docker image from Docker Hub.

$ docker pull chilcano/wso2-bam:2.5.0

2.5.0: Pulling from chilcano/wso2-bam
9acb471e45a5: Pull complete
...
e12995f4907c: Pull complete
77e4386b8b45: Pull complete
Digest: sha256:64e40ea4ea6b89c7e1b08edeb43e31467196a11c9fe755c0026403780f9e24e1
Status: Downloaded newer image for chilcano/wso2-bam:2.5.0

$ docker images
REPOSITORY              TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
chilcano/netcat         jessie              302d06d998e6        5 days ago          135.1 MB
chilcano/rtail-server   latest              cb313f9e2546        7 days ago          674.2 MB
ubuntu                  wily                d8a164f81acc        7 days ago          134.4 MB
ubuntu                  vivid               99639e3e70c8        7 days ago          131.3 MB
debian                  jessie              7a01cc5f27b1        8 days ago          125.1 MB
node                    0.12.9              d09c6f7639f7        13 days ago         637.1 MB
ubuntu                  trusty              6cc0fc2a5ee3        2 weeks ago         187.9 MB
ubuntu                  precise             6b4adea2c00e        2 weeks ago         137.5 MB
sebp/elk                latest              96f071b7a8e2        3 weeks ago         980.8 MB
chilcano/wso2-bam       2.5.0               77e4386b8b45        7 weeks ago         1.65 GB
chilcano/wso2-dss       3.2.1               acd92f55f678        7 weeks ago         1.383 GB
chilcano/wiremock       latest              a3e4764483b9        7 weeks ago         597.3 MB
java                    openjdk-7           e93dd201a77e        8 weeks ago         589.7 MB

$ docker run -d -t --name=wso2bam-kismet -p 9445:9443 -p 7713:7711 chilcano/wso2-bam:2.5.0
fc9fb8368e7f4f24b01bc33f90122776b4c10d63d0e849073474a485700b6266

$ docker ps
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS                                                                                     NAMES
fc9fb8368e7f        chilcano/wso2-bam:2.5.0   "/bin/sh -c 'sh ./wso"   9 seconds ago       Up 8 seconds        7611/tcp, 9160/tcp, 9763/tcp, 21000/tcp, 0.0.0.0:7713->7711/tcp, 0.0.0.0:9445->9443/tcp   wso2bam-kismet

Port 9445 is for the WSO2 Carbon admin console and port 7713 is the Thrift listener port.
Now, let's verify that WSO2 BAM is running in the Docker container.

$ docker exec -ti wso2bam-kismet bash

root@fc9fb8368e7f:/opt/wso2bam02a/bin# tail -f ../repository/logs/wso2carbon.log
TID: [0] [BAM] [2016-02-03 16:38:10,482]  INFO {org.wso2.carbon.ntask.core.service.impl.TaskServiceImpl} -  Task service starting in STANDALONE mode... {org.wso2.carbon.ntask.core.service.impl.TaskServiceImpl}
TID: [0] [BAM] [2016-02-03 16:38:10,664]  INFO {org.apache.cassandra.net.OutboundTcpConnection} -  Handshaking version with localhost/127.0.0.1 {org.apache.cassandra.net.OutboundTcpConnection}
TID: [0] [BAM] [2016-02-03 16:38:10,672]  INFO {org.apache.cassandra.net.OutboundTcpConnection} -  Handshaking version with localhost/127.0.0.1 {org.apache.cassandra.net.OutboundTcpConnection}
TID: [0] [BAM] [2016-02-03 16:38:11,127]  INFO {org.wso2.carbon.ntask.core.impl.AbstractQuartzTaskManager} -  Task scheduled: [-1234][BAM_NOTIFICATION_DISPATCHER_TASK][NOTIFIER] {org.wso2.carbon.ntask.core.impl.AbstractQuartzTaskManager}
TID: [0] [BAM] [2016-02-03 16:38:11,232]  INFO {org.wso2.carbon.core.init.JMXServerManager} -  JMX Service URL  : service:jmx:rmi://localhost:11111/jndi/rmi://localhost:9999/jmxrmi {org.wso2.carbon.core.init.JMXServerManager}
TID: [0] [BAM] [2016-02-03 16:38:11,246]  INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} -  Server           :  WSO2BAM02A-2.5.0 {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent}
TID: [0] [BAM] [2016-02-03 16:38:11,247]  INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} -  WSO2 Carbon started in 41 sec {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent}
TID: [0] [BAM] [2016-02-03 16:38:14,044]  INFO {org.wso2.carbon.dashboard.common.oauth.GSOAuthModule} -  Using random key for OAuth client-side state encryption {org.wso2.carbon.dashboard.common.oauth.GSOAuthModule}
TID: [0] [BAM] [2016-02-03 16:38:14,714]  INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} -  Mgt Console URL  : https://172.17.0.2:9443/carbon/ {org.wso2.carbon.ui.internal.CarbonUIServiceComponent}
TID: [0] [BAM] [2016-02-03 16:38:14,714]  INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} -  Gadget Server Default Context : http://172.17.0.2:9763/portal {org.wso2.carbon.ui.internal.CarbonUIServiceComponent}

2.3) Remote access from a different network (i.e. the Raspberry Pi) to the WSO2 BAM Docker container

If you want to access WSO2 BAM from a web browser, use the URL https://192.168.99.100:9445/carbon/admin; if you want to connect to the embedded Thrift listener, use the IP address 192.168.99.100 and port 7713.
That works if you are on the same host PC, but how do you access the WSO2 BAM Docker container remotely, for example from the Raspberry Pi above?
To do that, follow this explanation (Remote access to Docker with TLS). As mentioned there, there are 3 choices; since I'm running the Docker daemon on Mac OS X, the easiest way to expose the Docker container to the Raspberry Pi's network is port forwarding (SSH tunneling) using docker-machine.

In other words, follow these commands in your Host PC (Mac OS X):

$ docker -v
Docker version 1.9.1, build a34a1d5

Since WSO2 BAM exposes ports 9445 and 7713, I will open/forward both.

$ docker-machine ssh default -f -N -L 192.168.1.43:7713:localhost:7713

// Optional
$ docker-machine ssh default -f -N -L 192.168.1.43:9445:localhost:9445

Where:

  • '-f' requests SSH to go to the background just before command execution.
  • '-N' allows an empty command (useful here for forwarding ports only).
  • The user/password for boot2docker is docker/tcuser.

You can also do the same using the ssh command directly:

$ ssh docker@$(docker-machine ip default) -f -N -L 192.168.1.43:7713:localhost:7713

Now, from the Raspberry Pi, check if WSO2 BAM is reachable.

$ nc -vzw 3 192.168.1.43 7713
Connection to 192.168.1.43 7713 port [tcp/*] succeeded!

// Optional
$ nc -vzw 3 192.168.1.43 9445
Connection to 192.168.1.43 9445 port [tcp/*] succeeded!

Or check it by using curl.

$ curl -Ivsk https://192.168.1.43:9445/carbon/admin/login.jsp -o /dev/null

...
< HTTP/1.1 200 OK
< Set-Cookie: JSESSIONID=601A0F02DCCB47B2685686A7042BBD8F; Path=/; Secure; HttpOnly
< X-FRAME-OPTIONS: DENY
< Content-Type: text/html;charset=UTF-8
< Content-Language: en
< Transfer-Encoding: chunked
< Vary: Accept-Encoding
< Date: Thu, 04 Feb 2016 12:14:09 GMT
< Server: WSO2 Carbon Server
<
* Connection #0 to host 192.168.1.43 left intact
* Closing connection #0
* SSLv3, TLS alert, Client hello (1):
} [data not shown]

2.4) Running the custom Python script to send the traffic captured by Kismet to WSO2 BAM

Make sure that Python is installed; install it if it's not.

$ python
Python 2.7.3 (default, Mar 18 2014, 05:13:23)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> quit()

After that, run the Python script (obviously, Kismet should be running).

$ cd kismet_to_wso2bam/

$ python sendTrafficFromKismetToWSO2BAM.py

*KISMET: ['0.0.0', '1454495618', 'rpi-chicha', 'pcapdump,netxml,nettxt,gpsxml,alert', '1000']
Server: 0.0.0 1454495618 rpi-chicha pcapdump,netxml,nettxt,gpsxml,alert 1000
*PROTOCOLS: ['KISMET,ERROR,ACK,PROTOCOLS,CAPABILITY,TERMINATE,TIME,PACKET,STATUS,PLUGIN,SOURCE,ALERT,COMMON,TRACKINFO,WEPKEY,STRING,GPS,BSSID,SSID,CLIENT,BSSIDSRC,CLISRC,NETTAG,CLITAG,REMOVE,CHANNEL,INFO,BATTERY,CRITFAIL']
!1 CAPABILITY KISMET
!2 CAPABILITY ERROR
!3 CAPABILITY ACK
...

On the WSO2 BAM side you will see the log events below, showing the Raspberry Pi (Kismet) connecting successfully to WSO2 BAM (Thrift listener).

...
TID: [0] [BAM] [2016-02-04 12:27:40,542]  INFO {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil} -  'admin@carbon.super [-1234]' logged in at [2016-02-04 12:27:40,542+0000] {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil}
TID: [0] [BAM] [2016-02-04 12:29:20,334]  INFO {org.wso2.carbon.databridge.core.DataBridge} -  user admin connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [0] [BAM] [2016-02-04 12:29:20,416]  INFO {org.wso2.carbon.databridge.streamdefn.registry.datastore.RegistryStreamDefinitionStore} -  Stream definition added to registry successfully : rpi_kismet_stream_in:1.0.0 {org.wso2.carbon.databridge.streamdefn.registry.datastore.RegistryStreamDefinitionStore}
TID: [0] [BAM] [2016-02-04 12:29:20,670]  INFO {org.wso2.carbon.databridge.persistence.cassandra.datastore.ClusterFactory} -  Initializing Event cluster {org.wso2.carbon.databridge.persistence.cassandra.datastore.ClusterFactory}
TID: [0] [BAM] [2016-02-04 12:29:20,877]  INFO {org.wso2.carbon.databridge.persistence.cassandra.datastore.ClusterFactory} -  Initializing Event Index cluster {org.wso2.carbon.databridge.persistence.cassandra.datastore.ClusterFactory}

III.- Exploring the 802.11 captured traffic stored in Apache Cassandra (WSO2 BAM)

Remember, the WSO2 BAM 2.5.0 Docker container is running locally with an internal Docker Machine IP address (192.168.99.100), and it is also reachable through the host IP address (192.168.1.43) because its ports were forwarded.
In brief, WSO2 BAM listens on the addresses below:

  • 192.168.99.100, ports 9445 (Carbon console) and 7713 (Thrift), from the host PC.
  • 192.168.1.43, ports 9445 and 7713, from the Raspberry Pi's network.

Then, let's explore the 802.11 traffic stored in Apache Cassandra.
Below is a set of images taken while browsing the Apache Cassandra instance embedded in WSO2 BAM.

01 / WSO2 BAM / Apache Cassandra – Key Spaces
WSO2 BAM / Apache Cassandra 01

02 / WSO2 BAM / Apache Cassandra – Event KS information
WSO2 BAM / Apache Cassandra 01

03 / WSO2 BAM / Apache Cassandra – Event KS information
WSO2 BAM / Apache Cassandra 01

04 / WSO2 BAM / Apache Cassandra – Connecting to explore KS
WSO2 BAM / Apache Cassandra 01

05 / WSO2 BAM / Apache Cassandra – List of Key Spaces
WSO2 BAM / Apache Cassandra 01

06 / WSO2 BAM / Apache Cassandra – Exploring the Kismet data
WSO2 BAM / Apache Cassandra 01

07 / WSO2 BAM / Apache Cassandra – Exploring the Kismet data
WSO2 BAM / Apache Cassandra 01

In the next blog post (Part III), I will explain how to create a simple Dashboard showing the WIFI traffic captured in real time.
See you soon.

Tagged with: , , , , , , ,
Posted in BAM, Big Data, IoT, Security

Everything generates data: Capturing WIFI anonymous traffic using Raspberry Pi and WSO2 BAM (Part I)

Yes, in this digital world everything generates data, but before doing Big Data you have to follow these steps:

1. Capture
Acquire and integrate data.

2. Store
Classification, consolidation, transformation, storage design, etc.

3. Analysis
Exploration, visualization, modeling, prediction, etc.

Everything generates data – IoT, BigData, Privacy, Security
Everything generates data - IoT, BigData, Privacy, Security

In this first blog post I will explain how to capture anonymous WIFI/802.11 traffic using a Raspberry Pi 2 Model B and Kismet (an 802.11 layer-2 wireless network detector, sniffer, and intrusion detection system); in the second blog post I will use WSO2 BAM 2.5.0 to collect the anonymous WIFI traffic and generate a simple Dashboard showing the data in real time.

The final idea is to create a simple Dashboard showing the mobile devices (such as mobile phones) identified around the Raspberry Pi.
Anyway, you can use this traffic for different purposes, such as:
* Monitoring shopping activity
* Monitoring vehicle traffic
* Monitoring street activity
* Etc.

Architecture – Capturing WIFI anonymous traffic using Raspberry Pi and WSO2 BAM and WSO2 CEP
Architecture - Capturing WIFI anonymous traffic using Raspberry Pi and WSO2 BAM and WSO2 CEP

Well, now let’s get down to work.

I.- Configure the Raspberry Pi to enable monitor mode

 

1. Prepare the Raspberry Pi

Obviously, I start from a clean Raspbian image installed on my Raspberry Pi 2 Model B.
The steps below explain how to prepare the Raspberry Pi and how to install and configure Kismet to capture 802.11 anonymous traffic.

Before that, I have to prepare the Raspberry Pi, for example by configuring a static IP address on the Ethernet interface (eth0) to get remote SSH access. After that, I can configure the wireless interface (wlan0) and install Kismet.

1.1) Get SSH access to Raspberry Pi

$ ssh pi@192.168.1.102
pi@192.168.1.102's password:
Linux rpi-chicha 3.18.11-v7+ #781 SMP PREEMPT Tue Apr 21 18:07:59 BST 2015 armv7l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Fri Jan 29 11:32:20 2016 from 192.168.1.43

1.2) Connect the USB WIFI dongle

$ lsusb 
Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp.
Bus 001 Device 004: ID 148f:5370 Ralink Technology, Corp. RT5370 Wireless Adapter

Check whether your WIFI dongle supports monitor mode.

Note:
RTL8188CUS does not allow monitor mode.
http://raspberrypi.stackexchange.com/questions/8578/enable-monitor-mode-in-rtl8188cus-realtek-wifi-usb-dongle
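A quick way to check monitor-mode support is to run `iw list` and inspect its "Supported interface modes" section. The helper below parses that output; the exact layout of the sample in the usage is an assumption based on typical iw output.

```python
def supports_monitor_mode(iw_list_output):
    """Scan the 'Supported interface modes' section of `iw list`
    output and report whether '* monitor' is listed."""
    in_modes = False
    for line in iw_list_output.splitlines():
        stripped = line.strip()
        if stripped.startswith("Supported interface modes"):
            in_modes = True
            continue
        if in_modes:
            if stripped.startswith("*"):
                if "monitor" in stripped:
                    return True
            else:
                in_modes = False  # the bulleted section has ended
    return False
```

On the Raspberry Pi you would feed it the real output, e.g. `supports_monitor_mode(subprocess.check_output(['iw', 'list'], text=True))`.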

$ ifconfig 
$ sudo ifconfig
eth0      Link encap:Ethernet  HWaddr b8:27:eb:1e:12:63
          inet addr:192.168.1.102  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:32177 errors:0 dropped:568 overruns:0 frame:0
          TX packets:1940 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2495710 (2.3 MiB)  TX bytes:187339 (182.9 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:46 errors:0 dropped:0 overruns:0 frame:0
          TX packets:46 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4568 (4.4 KiB)  TX bytes:4568 (4.4 KiB)

wlan0     Link encap:Ethernet  HWaddr 00:13:ef:c0:21:2b
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:2394 errors:0 dropped:0 overruns:0 frame:0
          TX packets:29 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:207760 (202.8 KiB)  TX bytes:3764 (3.6 KiB)
$ sudo iwconfig wlan0
wlan0     IEEE 802.11bgn  ESSID:off/any
          Mode:Managed  Access Point: Not-Associated   Tx-Power=20 dBm
          Retry short limit:7   RTS thr:off   Fragment thr:off
          Encryption key:off
          Power Management:off

1.3) Set static IP address to eth0 and configure wlan0 (optional)

$ sudo nano /etc/network/interfaces

Initial config.

auto lo

iface lo inet loopback
iface eth0 inet dhcp

allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp

Add the configuration for eth0 and wlan0.

auto lo

iface lo inet loopback

iface eth0 inet static
address 192.168.1.102
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1

allow-hotplug wlan0
auto wlan0
iface wlan0 inet dhcp
   wpa-ssid "your-ssid"
   wpa-psk "your-password"

Reload the changes.

$ sudo service networking reload

1.4) Enable wlan0 in monitor mode (option 1)

Run these 2 commands together (*):

$ sudo ifconfig wlan0 down;sudo iwconfig wlan0 mode monitor

Now, check if wlan0 is working in monitor mode:

$ sudo iwconfig wlan0
wlan0     IEEE 802.11bgn  Mode:Monitor  Frequency:2.412 GHz  Tx-Power=20 dBm   
          Retry  long limit:7   RTS thr:off   Fragment thr:off
          Power Management:off

$ sudo ifconfig wlan0
wlan0     Link encap:UNSPEC  HWaddr 00-13-EF-C0-21-2B-70-78-00-00-00-00-00-00-00-00
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:764 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:81873 (79.9 KiB)  TX bytes:1475 (1.4 KiB)

(*) Raspbian has a service called ifplugd, a daemon that automatically configures your Ethernet device when it is plugged in and unconfigures it when it's pulled.
It keeps the device busy; disabling it allows you to use ifconfig and iwconfig normally. Just use the command:

$ sudo service ifplugd stop
[ ok ] Network Interface Plugging Daemon...stop eth0...stop wlan0...done.

$ sudo service ifplugd status
[....] eth0: ifplugd not running.
[....] wlan0: ifplugd not running.
[info] all: device all is either not present or not functional.

1.5) Enable wlan0 in monitor mode (option 2)

If the above configuration (option 1) did not work, you can try this alternative using the iw tools.

$ sudo apt-get install iw

$ sudo iw wlan0 info
Interface wlan0
  ifindex 3
  type monitor
  wiphy 0

Add mon0, a new network interface in monitor mode, to be used instead of wlan0.

$ sudo iw phy phy0 interface add mon0 type monitor

Check the interfaces associated to phy0.

$ sudo iw dev
phy#0
  Interface mon0
    ifindex 6
    wdev 0x4
    addr 74:f0:6d:4d:40:2f
    type monitor
  Interface wlan0
    ifindex 5
    wdev 0x3
    addr 74:f0:6d:4d:40:2f
    type managed
    channel 6 (2437 MHz), width: 20 MHz, center1: 2437 MHz

Now we need to remove wlan0. When you do that, the mon0 interface will probably be reverted to managed mode.

$ sudo iw dev wlan0 del

$ sudo iw dev
phy#0
  Interface mon0
    ifindex 8
    wdev 0x6
    addr 74:f0:6d:4d:40:2f
    type managed

To avoid this, set monitor mode explicitly with the ifconfig and iwconfig commands, as follows.

$ sudo ifconfig mon0 down
$ sudo iwconfig mon0 mode monitor
$ sudo ifconfig mon0 up

Now, if you check the interface in monitor mode, you should see this:

$ sudo iw dev
phy#0
  Interface mon0
    ifindex 8
    wdev 0x6
    addr 74:f0:6d:4d:40:2f
    type monitor
    channel 6 (2437 MHz), width: 20 MHz (no HT), center1: 2437 MHz

After that, check whether wlan0 or mon0 is running in monitor mode; if so, you are ready to start Kismet.
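
Since everything downstream depends on the interface actually being in monitor mode, that final check can be scripted by parsing the output of iw dev. A minimal sketch (the is_monitor_mode helper is my own, not part of iw):

```shell
# Return success if the given interface is reported as 'type monitor'.
# $1 = interface name, $2 = output of 'iw dev'
is_monitor_mode() {
  printf '%s\n' "$2" | awk -v ifc="$1" '
    $1 == "Interface" { cur = $2 }
    $1 == "type" && cur == ifc { print $2; exit }
  ' | grep -q '^monitor$'
}

# On the Raspberry Pi you would run:
# is_monitor_mode mon0 "$(iw dev)" && echo "ready to start Kismet"
```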

II.- Install, configure and start Kismet


2.1) Installation of Kismet

$ sudo apt-get update

$ sudo apt-get upgrade

$ sudo apt-get install libncurses5-dev libpcap-dev libpcre3-dev libnl-dev

# latest version - 2015.05.01
$ wget http://www.kismetwireless.net/code/kismet-2013-03-R1b.tar.xz 

$ xz -d kismet-2013-03-R1b.tar.xz

$ tar -xf kismet-2013-03-R1b.tar

$ cd kismet-2013-03-R1b

$ ./configure

$ make

$ sudo make suidinstall

$ sudo usermod -a -G kismet pi

$ sudo reboot

2.2) Configure Kismet

Edit /usr/local/etc/kismet.conf to point at the WIFI adaptor configured in monitor mode, in this case to add ncsource=mon0 or ncsource=wlan0 and hidedata=true.

$ sudo nano /usr/local/etc/kismet.conf 
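
If you prefer to script this edit instead of using nano, something like the following works. It is only a sketch: the configure_kismet helper is hypothetical, and it assumes kismet.conf uses plain `ncsource=`/`hidedata=` lines as described above.

```shell
# Set ncsource and hidedata in a kismet.conf, replacing existing lines
# or appending them if absent.
# $1 = path to kismet.conf, $2 = capture interface (mon0 or wlan0)
configure_kismet() {
  conf="$1"; src="$2"
  if grep -q '^ncsource=' "$conf"; then
    sed -i "s/^ncsource=.*/ncsource=${src}/" "$conf"
  else
    printf 'ncsource=%s\n' "$src" >> "$conf"
  fi
  if grep -q '^hidedata=' "$conf"; then
    sed -i 's/^hidedata=.*/hidedata=true/' "$conf"
  else
    printf 'hidedata=true\n' >> "$conf"
  fi
}

# On the Pi (as root): configure_kismet /usr/local/etc/kismet.conf mon0
```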

Download the manufacturer list. This is useful to identify the Wireless Interface Manufacturer.

$ sudo mkdir -p /usr/share/wireshark/

$ cd /usr/share/wireshark/

$ sudo wget -O manuf http://anonsvn.wireshark.org/wireshark/trunk/manuf

$ sudo cp manuf /etc/manuf
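
Once the manuf file is in place, you can resolve an OUI prefix by hand, which is a handy way to check that the file downloaded correctly. A sketch (lookup_manuf is a hypothetical helper; the real file is tab-separated with the OUI prefix in the first column):

```shell
# Look up a manufacturer by MAC/OUI prefix in a Wireshark 'manuf' file.
# $1 = OUI prefix such as 74:F0:6D, $2 = path to the manuf file
lookup_manuf() {
  grep -i "^$1" "$2" | head -n 1
}

# On the Pi: lookup_manuf 00:00:0C /etc/manuf
```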

2.3) Start Kismet Server and Client

If you have configured and updated /usr/local/etc/kismet.conf, you can start Kismet by running this command (without parameters):

$ kismet_server

But if you haven’t configured /usr/local/etc/kismet.conf, or you want to override it, you can pass the parameters on the command line as below; this will create a TCP listener on port 2501:

$ kismet_server -c wlan0

INFO: Not running as root - will try to launch root control binary (/usr/lo
      cal/bin/kismet_capture) to control cards.
INFO: Started kismet_capture control binary successfully, pid 2517
INFO: Reading from config file /usr/local/etc/kismet.conf
debug - 2516 - child creating ipc fdfd
INFO: No 'dronelisten' config line and no command line drone-listen 
      argument given, Kismet drone server will not be enabled.
INFO: Created alert tracker...
INFO: Creating device tracker...
INFO: Registered 80211 PHY as id 0
INFO: Kismet will spend extra time on channels 1,6,11
INFO: Kismet will attempt to hop channels at 3 channels per second unless 
      overridden by source-specific options
INFO: Matched source type 'rt2800usb' for auto-type source 'wlan0'
INFO: Using hardware channel list 1:3,2,3,4,5,6:3,7,8,9,10,11:3,12,13,14, 
      14 channels on source wlan0
INFO: Source 'wlan0' will attempt to create and use a monitor-only VAP 
      instead of reconfiguring the main interface
ERROR: Detected the following processes that appear to be using the 
       interface wlan0, which can cause problems with Kismet by changing 
       the configuration of the network device: wpa_supplicant dhclient 
       ifplugd.  If  Kismet stops running or stops capturing packets, try 
       killing one (or all) of these processes or stopping the network for 
       this interface.
INFO: Created source wlan0 with UUID 52bee95c-c8df-11e5-9fa4-dc04bb23e201
INFO: Will attempt to reopen on source 'wlan0' if there are errors
INFO: Created TCP listener on port 2501
INFO: Kismet drone framework disabled, drone will not be activated.
INFO: Inserting basic packet dissectors...
INFO: hidedata= set in Kismet config.  Kismet will ignore the contents of 
      data packets entirely
INFO: Allowing Kismet frontends to view WEP keys
INFO: Starting GPS components...
INFO: Enabling reconnection to the GPS device if the link is lost
INFO: Using GPSD server on localhost:2947
INFO: Opened OUI file '/etc/manuf
INFO: Indexing manufacturer db
INFO: Completed indexing manufacturer db, 28150 lines 563 indexes
INFO: Creating network tracker...
INFO: Creating channel tracker...
INFO: Registering dumpfiles...
INFO: Pcap log in PPI format
...

You can also run the Kismet Server as a Linux daemon.

$ kismet_server -c wlan0 --daemonize

INFO: Not running as root - will try to launch root control binary (/usr/lo
      cal/bin/kismet_capture) to control cards.
INFO: Started kismet_capture control binary successfully, pid 4027
INFO: Reading from config file /usr/local/etc/kismet.conf
Silencing output and entering daemon mode...
debug - 4028 - child creating ipc fdfd

And now, start the Kismet Client. The Kismet Client will connect to Kismet Server automatically, because both are running in the same Raspberry Pi.

$ kismet_client 

Kismet – Capturing 802.11 anonymous traffic using Raspberry Pi

III.- Common Kismet errors


1) Error when starting Kismet: plugins folder not found.

ERROR: Failed to open primary plugin directory (/usr/local/lib/kismet/): 
       No such file or directory
ERROR: Failed to open user plugin directory (/home/pi/.kismet//plugins/): 
       No such file or directory
ERROR: Failed to open primary plugin directory (/usr/lib/kismet/): No such file or directory  
ERROR: Failed to open user plugin directory (/root/.kismet//plugins/): No such file or directory

Solution:

$ sudo mkdir -p /usr/local/lib/kismet/

$ mkdir -p /home/pi/.kismet/plugins/
$ sudo mkdir -p /usr/lib/kismet/

$ mkdir -p /root/.kismet/plugins/

2) A process is using the wireless interface.

ERROR: Didn't understand driver 'ath9k_htc' for interface 'mon0', but it 
       looks like a mac80211 device so Kismet will use the generic options 
       for it.  Please post on the Kismet forum or stop by the IRC channel 
       and report what driver it was.

ERROR: Detected the following processes that appear to be using the 
       interface mon0, which can cause problems with Kismet by changing 
       the configuration of the network device: ifplugd.  If  Kismet stops 
       running or stops capturing packets, try killing one (or all) of 
       these processes or stopping the network for this interface.

Solution:

$ sudo pkill wpa_cli; sudo pkill ifplugd; sudo pkill wpa_supplicant

3) The manufacturer file doesn’t exist.

ERROR: Could not open OUI file '/etc/manuf': No such file or directory
ERROR: Could not open OUI file '/usr/share/wireshark/wireshark/manuf': No 
       such file or directory

Solution:

$ sudo mkdir -p /usr/share/wireshark/

$ cd /usr/share/wireshark/

$ sudo wget -O manuf http://anonsvn.wireshark.org/wireshark/trunk/manuf

$ sudo cp manuf /etc/manuf

4) VAP for mon0 wasn’t created.

ERROR: Not creating a VAP for mon0 even though one was requested, since 
       the interface is already in monitor mode.  Perhaps an existing 
       monitor mode VAP was specified. To override this and create a new 
       monitor mode vap no matter what, use the forcevap=true source option

Solution:
Check if mon0 is being used by another process, or restart and reconfigure your wireless interface.

Posted in BAM, Big Data, IoT, Security

Log Events Management in WSO2 (Micro)services: ELK & rTail (Part II)

Tailing logs and checking the performance and health of (micro)services are important tasks.
Logging is a time-consuming process, so it pays to prepare in advance in order to be more productive.
There are many tools out there, open source, commercial and cloud-hosted, such as log.io, ELK, Clarity, rTail, Tailon, frontail, etc. In my opinion, for a development VM the simplest, freshest and most lightweight tool is rTail (http://rtail.org).

With rTail I can collect different log files, then track and visualize them in a browser in real time. rTail is very easy to use: just install Node.js and deploy the rTail application, and you will be ready to send any type of trace directly to the browser, avoiding having to store/persist, index and parse/filter logs.

In this second blog post I will explain how to use rTail to view all streams/log events in a browser in real time.
For that, we require:

  • an rTail Server Docker Container, which will centralize and display all streams/log events.
  • a Vagrant box (with the WSO2 stack and Wiremock), which will send log events to the rTail Server Docker Container above.

rTail – Viewing WSO2 and Wiremock raw log events

Part II: rTail (a node.js application to debug and monitor in realtime)


1. Starting with rTail Server Docker Container


1) Prepare the rTail Server Docker Container

I have created and published an rTail Docker image on Docker Hub, ready to use.
Just download and run it.

$ docker login
Username (chilcano):
WARNING: login credentials saved in /Users/Chilcano/.docker/config.json
Login Succeeded

$ docker search rtail-server
NAME                    DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
chilcano/rtail-server   rTail is a realtime debugging and monitori...   1                    [OK]
maluuba/rtail-server                                                    0                    [OK]

$ docker pull chilcano/rtail-server
Using default tag: latest
latest: Pulling from chilcano/rtail-server
523ef1d23f22: Pull complete
140f9bdfeb97: Pull complete
5c63804eac90: Pull complete
ce2b29af7753: Pull complete
5c2bdca41b86: Pull complete
f417df1119e6: Pull complete
d36821cb651a: Pull complete
48d9fce985a8: Pull complete
d09c6f7639f7: Pull complete
46a67992ee2a: Pull complete
78642d9272ea: Pull complete
d95ea484c076: Pull complete
d55510bfe660: Pull complete
2cc39298d465: Pull complete
bd885c733a0a: Pull complete
f8fa62532424: Pull complete
Digest: sha256:ebb137e20fd3eb404b57620e14a355d7bdc635ebab237719ba41e19c1fa8928b
Status: Downloaded newer image for chilcano/rtail-server:latest

$ docker images
REPOSITORY              TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
chilcano/rtail-server   latest              f8fa62532424        2 days ago          663.8 MB
sebp/elk                latest              96f071b7a8e2        2 weeks ago         980.8 MB
chilcano/wso2-dss       3.2.1               acd92f55f678        5 weeks ago         1.383 GB
chilcano/wiremock       latest              a3e4764483b9        6 weeks ago         597.3 MB
java                    openjdk-7           e93dd201a77e        7 weeks ago         589.7 MB

$ docker run -d -t --name=rtail-srv -p 8181:8181 -p 9191:9191/udp chilcano/rtail-server
4d0c897e9741342dfc7c8ca9d95dc8144f56f21954baf9170f593585181bd469

$ docker ps
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS              PORTS                                                      NAMES
4d0c897e9741        chilcano/rtail-server   "/bin/sh -c 'rtail-se"   15 seconds ago      Up 16 seconds       0.0.0.0:8181->8181/tcp, 9191/tcp, 0.0.0.0:9191->9191/udp   rtail-srv

Or download the Dockerfile, build it and run it.

$ git clone https://github.com/chilcano/docker-rtail-server

$ docker build --rm -t chilcano/rtail-server docker-rtail-server/

$ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
chilcano/rtail-srv   latest              93b7a3c76c6a        7 seconds ago       664.8 MB
sebp/elk             latest              96f071b7a8e2        2 weeks ago         980.8 MB
chilcano/wso2-dss    3.2.1               acd92f55f678        5 weeks ago         1.383 GB
chilcano/wiremock    latest              a3e4764483b9        5 weeks ago         597.3 MB
java                 openjdk-7           e93dd201a77e        6 weeks ago         589.7 MB
node                 0.12.6              77d70f920fa3        6 months ago        638.1 MB

$ docker run -d -t --name=rtail-srv -p 8181:8181 -p 9191:9191/udp chilcano/rtail-server
bdbb0476fa201f5114355a636b01ea165335398b50865c6e58f1716931b2c779

$ docker ps
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                                              NAMES
bdbb0476fa20        chilcano/rtail-srv   "/bin/sh -c 'rtail-se"   5 seconds ago       Up 5 seconds        9191/tcp, 0.0.0.0:9191->9191/udp, 0.0.0.0:8181->8181/tcp   rtail-srv

2) Check if the rTail Server Docker Container is working

Just open the rTail Server web console in a browser using the URL http://192.168.99.100:8181.
But if you want to check whether the rTail Server Container is reachable remotely (from another VM) to send log events, just execute this:

# use netcat instead of telnet, because telnet doesn't use UDP
$ nc -vuzw 3 <IP_ADDRESS_RTAIL_CONTAINER> 9191
Connection to 192.168.99.100 9191 port [udp/*] succeeded!

To stop, start or restart the rTail Server, just stop, start or restart the Docker container.

3) Get Shell access to rTail Server Container

$ docker exec -i -t rtail-srv bash

Where:

  • Port 8181 runs an HTTP server, useful for viewing the log events from a web browser.
  • Port 9191 listens for UDP traffic (log events).

2. Send log events to rTail Server Docker Container

You can send any type of log event: a syslog event, an echo message, or a log file by tailing it. First, you have to install the rTail application in the box/VM from which you want to send log events.
I have created a Puppet module for rTail and included it in the Vagrant box, so the rTail client is ready to be used.

1) Using rTail (client) to send log events to rTail Server

To get a Vagrant box with rTail (client) pre-installed, you could use these Vagrant scripts (https://github.com/chilcano/vagrant-wso2-dev-srv).

$ git clone https://github.com/chilcano/vagrant-wso2-dev-srv.git

$ cd ~/github-repo/vagrant-wso2-dev-srv

# start
$ vagrant up

# re-load and provision
$ vagrant reload --provision

2) Check if rTail (as client) is working in the Vagrant box and can reach the Docker Container

To check if rTail was installed/provisioned properly, get SSH access, then try to reach and send some traces to the running rTail Server Docker Container.

$ vagrant ssh

# use netcat instead of telnet, because telnet doesn't use UDP
$ nc -vuzw 3 192.168.99.100 9191
Connection to 192.168.99.100 9191 port [udp/*] succeeded!

# send ping events to IP address
$ ping 8.8.4.4 | rtail --id logs-ping --host 192.168.99.100 --port 9191 --mute
$

rTail – Browsing log events

3) Send log events to rTail Server Docker Container from the Vagrant box

Wiremock is a mock server that should be running in the box. We will send the Wiremock traces/events to the rTail server.

# start wiremock
$ sudo service wiremock start
[wiremock] server starting ... success (pid 15601)

# tailing a log file
$ tail -f /opt/wiremock/wiremock.log | rtail --id wiremock --host 192.168.99.100 --port 9191 --mute

Now, to send the log events of multiple log files as a single merged stream, we will use multitail.

# install 'multitail'
$ sudo apt-get install multitail

# test 'multitail' (merge the output of 2 commands)
$ multitail -l "ping 8.8.8.8" -L "ping 8.8.4.4"

# send 2 ping output to rTail
$ multitail -l "ping 8.8.8.8" -L "ping 8.8.4.4" | rtail --id logs-ping --host 192.168.99.100 --port 9191 --mute

Now, to send 3 log files to the rTail Server as a single merged stream for one process/pattern, i.e. WSO2 API Manager, WSO2 ESB and Wiremock as the backend (wso2am02a -> wso2esb02a -> wiremock), you should multitail the 3 log files:

# tailing the flow 'wso2am02a -> wso2esb02a -> wiremock'
$ multitail -ke "[ \t]+$" /opt/wso2am02a/repository/logs/wso2carbon.log -I /opt/wso2esb02a/repository/logs/wso2carbon.log -I /opt/wiremock/wiremock.log | rtail --id logs-wso2-01 --host 192.168.99.100 --port 9191 --tty --mute 

If you use tail instead of multitail, you will see all log events merged, but each with a mark/header per file. You could create a shell script to remove these headers.

$ tail -f /opt/wso2am02a/repository/logs/wso2carbon.log -f /opt/wso2esb02a/repository/logs/wso2carbon.log -f /opt/wiremock/wiremock.log | rtail --id logs-wso2-02 --host 192.168.99.100 --port 9191 --tty --mute
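
As mentioned, those per-file headers can be filtered out with a small script. A sketch using awk (strip_tail_headers is a hypothetical helper; it drops the '==> file <==' markers and the blank separator lines that tail emits when following several files):

```shell
# Remove the '==> /path/to/file <==' headers and blank separator lines
# that 'tail' prints when following multiple files.
strip_tail_headers() {
  awk '/^==> .* <==$/ { next } /^$/ { next } { print }'
}

# Hypothetical usage on the Vagrant box:
# tail -f /opt/wso2am02a/repository/logs/wso2carbon.log -f /opt/wiremock/wiremock.log \
#   | strip_tail_headers | rtail --id logs-wso2-02 --host 192.168.99.100 --port 9191 --mute
```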

Where:

  • -ts adds a timestamp (format configurable in multitail.conf) before each line.
  • -ke "[ \t]+$" removes trailing TABs and blank spaces from every line.
  • -I merges the log files.
  • --tty keeps ANSI colors.

Observations:

  • multitail consolidates multiple log lines into one line associated to a timestamp (date+hh:mm:ss), but it doesn’t accept milliseconds.
  • Using tail, you need to create a shell script to remove the headers or apply filters to standardize date formats, etc.

rTail – Multiple log tailing using multitail

rTail – Multiple log tailing using tail

4) Shell scripts to send multiple WSO2 log files

I have created a bash script that sends all log events to the rTail server. You can find it at /etc/init.d/rtail-send-logs and run it at any time.

# initial status of rtail scripts
$ service --status-all
 ...
 [ - ]  rtail-server
 [ - ]  rtail-send-logs
 ...

# check rTail Server status, relevant only for the rTail Server Docker Container
$ sudo service rtail-server status
[rTail] server is running (pid 1234)

There is an rTail Puppet module that enables the rTail server to start automatically when the VM boots.
In other words, the rTail server is always listening on the UDP port to receive events and logs.

# start, stop and status of WSO2 log files simultaneously (not merged)
$ sudo service rtail-send-logs status
[wso2am02a] is sending logs to rTail.
[wso2esb01a] is sending logs to rTail.
[wso2esb02a] is sending logs to rTail.
[wso2dss01a] is sending logs to rTail.
[wso2greg01a] is sending logs to rTail.
[wiremock] is sending logs to rTail.

$ sudo service rtail-send-logs stop
[wso2am02a] is stopping sending logs to rTail ... success
[wso2esb01a] is stopping sending logs to rTail ... success
[wso2esb02a] is stopping sending logs to rTail ... success
[wso2dss01a] is stopping sending logs to rTail ... success
[wso2greg01a] is stopping sending logs to rTail ... success
[wiremock] is stopping sending logs to rTail ... success

$ sudo service rtail-send-logs start
[wso2am02a] is starting sending logs to rTail ... success
[wso2esb01a] is starting sending logs to rTail ... success
[wso2esb02a] is starting sending logs to rTail ... success
[wso2dss01a] is starting sending logs to rTail ... success
[wso2greg01a] is starting sending logs to rTail ... success
[wiremock] is starting sending logs to rTail ... success
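
For reference, here is a minimal sketch of how a script like /etc/init.d/rtail-send-logs could build those tail-to-rtail pipelines. Everything in it (the start_log_senders helper, the dry-run mode, the "stream-id:logfile" argument convention) is hypothetical; the actual script provisioned by the Puppet module may differ.

```shell
# Build one 'tail | rtail' pipeline per service. Each argument is a
# hypothetical "stream-id:logfile" pair, e.g. wiremock:/opt/wiremock/wiremock.log
RTAIL_HOST=192.168.99.100
RTAIL_PORT=9191

start_log_senders() {
  mode="$1"; shift   # pass "dry" to print the commands instead of running them
  for svc in "$@"; do
    id=${svc%%:*}
    log=${svc#*:}
    cmd="tail -F $log | rtail --id $id --host $RTAIL_HOST --port $RTAIL_PORT --mute &"
    if [ "$mode" = dry ]; then
      printf '%s\n' "$cmd"
    else
      eval "$cmd"
    fi
  done
}

# Dry run, printing the pipelines it would launch:
# start_log_senders dry \
#   wso2am02a:/opt/wso2am02a/repository/logs/wso2carbon.log \
#   wiremock:/opt/wiremock/wiremock.log
```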

That’s all.

Posted in Microservices, SOA