Setting Up the TICK Stack

We're going to set up the TICK stack using Docker. If you wanted to install it on a remote server, you could follow the documentation here: https://docs.influxdata.com/chronograf/v1.3/introduction/getting-started/

What Is TICK?

TICK stands for Telegraf, InfluxDB, Chronograf, and Kapacitor. For convenience, I am going to quote descriptions of the TICK stack here:

Telegraf

Telegraf is a plugin-driven server agent for collecting and reporting metrics. Telegraf has plugins or integrations to source a variety of metrics directly from the system it’s running on, pull metrics from third party APIs, or even listen for metrics via a StatsD and Kafka consumer services. It also has output plugins to send metrics to a variety of other datastores, services, and message queues, including InfluxDB, Graphite, OpenTSDB, Datadog, Librato, Kafka, MQTT, NSQ, and many others.
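
Telegraf isn't started in the Docker setup below, but for reference, a minimal telegraf.conf sketch might look like the following. The http://influxdb:8086 URL assumes the container name we use later, and "telegraf" is just an example database name.

# minimal telegraf.conf sketch: sample CPU and memory, write to InfluxDB
[agent]
  interval = "10s"

[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]
  database = "telegraf"

[[inputs.cpu]]
[[inputs.mem]]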

InfluxDB

InfluxDB is a Time Series Database built from the ground up to handle high write & query loads. InfluxDB is a custom high performance datastore written specifically for timestamped data, including DevOps monitoring, application metrics, IoT sensor data, & real-time analytics. Conserve space on your machine by configuring InfluxDB to keep data for a defined length of time, automatically expiring & deleting any unwanted data from the system. InfluxDB also offers a SQL-like query language for interacting with data.
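
The data-expiry behaviour mentioned above is configured with retention policies. Here's a quick InfluxQL sketch, where the database and policy names are just examples:

CREATE RETENTION POLICY "one_week" ON "analytics" DURATION 7d REPLICATION 1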

Chronograf

Chronograf is the administrative user interface and visualization engine of the platform. It makes the monitoring and alerting for your infrastructure easy to setup and maintain. It is simple to use and includes templates and libraries to allow you to rapidly build dashboards with real-time visualizations of your data and easily create alerting and automation rules.

Kapacitor

Kapacitor is a native data processing engine. It can process both stream and batch data from InfluxDB. Kapacitor lets you plug in your own custom logic or user-defined functions to process alerts with dynamic thresholds, match metrics for patterns, compute statistical anomalies, and perform specific actions based on these alerts like dynamic load rebalancing. Kapacitor integrates with HipChat, OpsGenie, Alerta, Sensu, PagerDuty, Slack, and more.
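
For a flavour of Kapacitor's custom logic, here is a minimal TICKscript sketch that alerts when an assumed CPU measurement's usage_idle field drops below 10 and logs the alert to a file (you would load it into Kapacitor with the kapacitor define command):

stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 10)
        .log('/tmp/alerts.log')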

Running the Setup

Assuming we're going the Docker route, you need to do the following:

  1. Start up Kitematic. If Kitematic can't find a "native" virtualization setup (Hyper-V), it will ask whether to use VirtualBox; using VirtualBox is fine. If you do have a native setup, you shouldn't see a message at all.
  2. You will be presented with a message that says "Starting a Docker VM." This is creating a virtual machine for you called default that will be used to host your Docker containers.
  3. After this runs, you're ready to set up your Docker containers! You will be asked to log in to Docker Hub. I recommend you make an account or log in, but if you'd rather not, there's a "Skip For Now" link hidden in the bottom right corner.
  4. You should now be at the home menu for Kitematic. We're going to use the Docker command line to enter the below instructions:
docker network create influxdb
docker run -d -p 8086:8086 --name=influxdb --net=influxdb library/influxdb
docker run -d -p 9092:9092 --name=kapacitor -v kapacitor:/var/lib/kapacitor --net=influxdb -e KAPACITOR_INFLUXDB_0_URLS_0=http://influxdb:8086 library/kapacitor 
docker run -d -p 8888:8888 --name=chronograf --net=influxdb library/chronograf --influxdb-url=http://influxdb:8086

If you want to know what we just did, here is a version of the commands to run with comments:

# create a new network called influxdb.
# docker containers connected to the network can reach each other by name, e.g. http://influxdb:<port>
docker network create influxdb

# start an influxdb container connected to the influxdb network with 8086 mapped (influxdb's port)
docker run -d -p 8086:8086 --name=influxdb --net=influxdb library/influxdb

# start a kapacitor container with 9092 mapped (kapacitor's port)
# and create a docker volume we can use for writing tickscripts
# we also connect it to the influxdb network and point it at influxdb's url via an environment variable
docker run -d -p 9092:9092 --name=kapacitor -v kapacitor:/var/lib/kapacitor --net=influxdb -e KAPACITOR_INFLUXDB_0_URLS_0=http://influxdb:8086 library/kapacitor 

# start a chronograf container with 8888 mapped (chronograf's port)
# and point it at influxdb's url on the influxdb network
docker run -d -p 8888:8888 --name=chronograf --net=influxdb library/chronograf --influxdb-url=http://influxdb:8086
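
As a quick sanity check, you can confirm the three containers are running, and optionally start a Telegraf container on the same network to feed metrics into InfluxDB. Telegraf isn't part of the setup above, and the config-file path is a placeholder for a telegraf.conf like the sketch earlier, so treat this as a sketch:

# confirm the influxdb, kapacitor, and chronograf containers are up
docker ps

# (optional) run telegraf on the same network, mounting your own telegraf.conf
docker run -d --name=telegraf --net=influxdb -v <path to telegraf.conf>:/etc/telegraf/telegraf.conf:ro library/telegraf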

Setting Up Grafana (Optional)

Grafana is an alternative to Chronograf. If you want features Chronograf doesn't have, such as pie charts, use Grafana.

docker run -d -p 3000:3000 --name=grafana --net=influxdb grafana/grafana
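
Once the container is up, browse to http://<your docker ip>:3000. Grafana's default login is admin / admin, and because the container shares the influxdb network you can add an InfluxDB data source pointing at http://influxdb:8086.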

Exposing Your Dashboard to Your Network (Optional)

After finishing the above, you should be able to see your containers running in Kitematic. From there, you can install nginx to expose your Docker containers to the rest of your network.

  1. Download nginx for Windows from http://nginx.org/en/download.html
  2. Extract nginx to a directory on your machine (I like to put these sorts of things in C:\Utils\)
  3. Open the conf directory in your nginx installation and edit nginx.conf
    1. This is where all of your nginx server settings are
    2. Go into the server section of the configuration file
    3. Change listen 80; to listen *:80; (these should do the same thing, but I'm paranoid on Windows)
    4. In location / {} delete everything already present (should be root html index stuff)
    5. In location / {} put proxy_pass http://<your docker's ip>:<either 8888 for Chronograf or 3000 for Grafana>; (note the trailing semicolon)

Your result should look something like this:

   ........

    server {
        listen       *:80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            proxy_pass http://192.168.99.100:3000;
        }

    ........
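
Once you've saved the configuration, you can test and start nginx from a command prompt in the nginx directory using its standard commands:

nginx -t          # check the configuration for syntax errors
start nginx       # start nginx on Windows
nginx -s reload   # reload the configuration if nginx is already running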

Docker likes to use 192.168.99.100, so there's a high likelihood but not a guarantee that it'll be the same address for you.
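
If you want to double-check which address the default VM actually got, the Docker command line can tell you:

docker-machine ip default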

Making Your First Database

You can either use cURL and make HTTP requests to your InfluxDB instance, or you can log into the shell of your influxdb container and use the influx shell. To avoid potentially installing more tools, we will go into the influx shell.
Return to the Docker CLI from Kitematic and run the below:

docker exec -u 0 -it influxdb bash

Welcome to your container's shell! From here we can execute most Unix commands like ls, cd, etc. For now we are just going to run influx.
There's a lot you can do from the Influx shell, which you can check out here: https://docs.influxdata.com/influxdb/v1.3/introduction/getting_started/

For now, use the command CREATE DATABASE to make a new database and SHOW DATABASES to confirm it was created.

InfluxDB shell version: 1.3.5
> CREATE DATABASE analytics
> SHOW DATABASES
name: databases
name
----
_internal
analytics
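
For reference, you can do the same thing over InfluxDB's HTTP API with cURL from your host machine (assuming 192.168.99.100 is your Docker VM's address, as above):

curl -i -XPOST http://192.168.99.100:8086/query --data-urlencode "q=CREATE DATABASE analytics"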

After you've created a database, you can directly insert data into it like so, and then "drop" (delete) all of the data. Note that the first INSERT below fails because line protocol expects string field values in double quotes, not single quotes:

> USE analytics
Using database analytics
> INSERT seriesname hello='world',value=0.432
ERR: {"error":"unable to parse 'seriesname hello='world',value=0.432': invalid boolean"}

> USE analytics
Using database analytics
> INSERT seriesname foo=2232,value=0.432
> SELECT * FROM seriesname
name: seriesname
time                foo  value
----                ---  -----
1505339027450073450 2232 0.432
> INSERT seriesname foo=2232,value=13.23
> SELECT * FROM seriesname WHERE "value" > 1
name: seriesname
time                foo  value
----                ---  -----
1505339079898428819 2232 13.23
> DROP SERIES FROM seriesname
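
The timestamps above are nanoseconds since the Unix epoch; if you'd rather see human-readable times, run precision rfc3339 in the influx shell before querying. After the DROP SERIES, running SELECT * FROM seriesname again should return nothing.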
