Simple Service Discovery

Published on:

When apps a, b, and c need to know about each other . . .

One approach is a deployment system with config files where you enter the location of each app. This works OK unless things change a lot or you have lots of apps that need to know about each other.

The slam dunk is to not configure anything. When you deploy your app it tells the service discovery system about itself, and the service discovery system takes care of letting other apps know what they need to know.

How does this work in practice?

We deploy app a. It tells the service discovery system it's at 34.25.2.11:294. The service discovery system sees that app a's info has changed. It updates the config files of the other apps and tells them their config files have changed. The service discovery system could update environment variables as well.
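For concreteness, here is roughly what a self-registration payload looks like with Consul's agent service API (the health-check endpoint and service name here are hypothetical; the address and port are the ones from the example above):

```json
{
  "Name": "app-a",
  "Address": "34.25.2.11",
  "Port": 294,
  "Check": {
    "HTTP": "http://34.25.2.11:294/health",
    "Interval": "10s"
  }
}
```

The app PUTs that to its local Consul agent at /v1/agent/service/register when it starts up, and deregisters on shutdown.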

With this approach your applications don't know anything about the service discovery system. They just need a way to be notified about config changes. Or you could just restart the app whenever there's a change.

This approach can be used for anything else apps need to know about each other: feature flags, etc.

Consul, along with consul-template and envconsul, lets you do all of this fairly simply.
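As a sketch of the consul-template side (service name and file names are hypothetical): a template renders app a's current location into app b's config file, and consul-template re-renders it whenever the registration changes.

```text
# app-b.conf.ctmpl -- template rendering app-a's current location
{{range service "app-a"}}
app_a_url = http://{{.Address}}:{{.Port}}
{{end}}
```

Then run consul-template -template "app-b.conf.ctmpl:app-b.conf:systemctl restart app-b" and the config file gets rewritten and app b restarted on every change. The restart command is just one option; sending the app a HUP signal to reload config works too.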

Route AWS EC2 VPC internal traffic with Route 53

Published on:

I wanted to easily route VPC internal traffic.

This is all you need to know.

http://www.cloudtrek.com.au/blog/add-private-route-53-dns-to-your-aws-vpc/

AND !!! You must set up Route 53 before creating your EC2 instances. Instances created before the Route 53 setup won't pick up the private DNS and won't work.

ALSO !!! The bit at the end about creating a new DHCP Options Set (which is very nice!) also needs to happen before any EC2 instances are created.

Song List

Published on:
key ccli name
G 4611679 Your Name
G 3148428 Forever
G 4491002 Marvelous Light
C 5895580 Joyful (The One Who Saves)
E 25400 All Hail The Power Of Jesus' Name
E 2397964 My Redeemer Lives

(order determined)

ElasticSearch nginx HTTP Tricks

Published on:

Playing HTTP Tricks with Nginx

Karel Minařík October 07, 2014

One of the defining features of Elasticsearch is that it’s exposed as a (loosely) RESTful service over HTTP.

The benefits are easy to spell out, of course: the API is familiar and predictable to all web developers. It’s easy to use with “bare hands” via the curl command, or in the browser. It’s easy to write API wrappers in various programming languages.

Nevertheless, the importance of the HTTP-based nature of Elasticsearch is rooted deeper: in the way it fits into the existing paradigm of software development and architecture.

https://www.elastic.co/blog/playing-http-tricks-nginx

Point GoDaddy Subdomain to AWS Route 53

Published on:

So I have a domain from GoDaddy and I want to manage just one or more subdomains of that domain on AWS Route 53. I actually tried to do this with my Network Solutions account first, but on the phone they said it wasn't possible.

Here goes with GoDaddy.

I pretty much followed this exactly. It helps if you select the classic DNS manager in GoDaddy.
http://blog.sefindustries.com/redirect-a-subdomain-to-route-53-from-godaddy/

I ended up with this DNS zone file. Now ratz.jsks.us and everything under it is managed by Route 53.

; SOA Record
JSKS.US.    3600    IN  SOA ns05.domaincontrol.com. dns.jomax.net (
                2015072400
                28800
                7200
                604800
                3600
                )

; A Records
@   1800    IN  A   54.82.143.206

; CNAME Records
blog    3600    IN  CNAME   @
img 3600    IN  CNAME   8e00c44fb080228f198f-d1e9688c9fbaaa049b87caca2fdd4594.r84.cf2.rackcdn.com
notes   3600    IN  CNAME   @
www 3600    IN  CNAME   @

; NS Records
@   3600    IN  NS  ns05.domaincontrol.com
@   3600    IN  NS  ns06.domaincontrol.com
ratz    1800    IN  NS  ns-1061.awsdns-04.org
ratz    1800    IN  NS  ns-1882.awsdns-43.co.uk
ratz    1800    IN  NS  ns-214.awsdns-26.com
ratz    1800    IN  NS  ns-992.awsdns-60.net

Tracking logins with sshd logs

Published on:

Instead of having everyone log in with separate user accounts, you could have everyone use a different key pair. Then set LogLevel to VERBOSE in /etc/ssh/sshd_config, and the logs will look like this.

Jun 24 22:43:42 localhost sshd[29779]: Found matching RSA key: d8:d5:f3:5a:7e:27:42:91:e6:a5:e6:9e:f9:fd:d3:ce
Jun 24 22:43:42 localhost sshd[29779]: Accepted publickey for caleb from 127.0.0.1 port 59630 ssh2

http://unix.stackexchange.com/questions/15575/can-i-find-out-which-ssh-key-was-used-to-access-an-account
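To match a logged fingerprint back to a person, compute the MD5 fingerprint of each public key you handed out. A quick sketch, using a throwaway key (the -E md5 flag assumes a reasonably recent OpenSSH; older versions print MD5 by default):

```shell
# Generate a throwaway key pair purely for demonstration.
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -f "$tmpdir/demo_key" -N "" -q

# Print the MD5 fingerprint -- the same format sshd logs in its
# "Found matching RSA key: ..." lines at LogLevel VERBOSE.
ssh-keygen -lf "$tmpdir/demo_key.pub" -E md5
```

Run that against each user's real public key and keep a table of fingerprint-to-name; grepping the sshd log then tells you who logged in with which key.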

Docker nginx ssl reverse proxy wildcard cert

Published on:

OK, so here is how you can run nginx in Docker and configure it to terminate SSL with a wildcard cert across multiple related domain names.

Create Folders

The first order of business is to create folders on the host machine that we will map to the docker container for config files and logs.

$ mkdir -p ~/etc/nginx/conf.d
$ mkdir -p ~/var/log/nginx

After you have created all the config files and everything is up and running, you will end up with files like this.

~$ find etc/nginx/*
etc/nginx/woohoo-wildcard.key
etc/nginx/common
etc/nginx/conf.d
etc/nginx/conf.d/ratfoo.conf
etc/nginx/gd79_bundle.crt

~$ find var/*
var/log
var/log/nginx
var/log/nginx/foo.access.log
var/log/nginx/access.log
var/log/nginx/rat.access.log
var/log/nginx/error.log

Create Config Files

Next create your config files like so (we just catted them out for display).

~$ cat etc/nginx/common
ssl_certificate           gd79_bundle.crt;
ssl_certificate_key       woohoo-wildcard.key;
ssl on;
ssl_session_cache  builtin:1000  shared:SSL:10m;
ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
ssl_prefer_server_ciphers on;
~$ cat etc/nginx/conf.d/ratfoo.conf
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {

    listen 443;
    server_name rat.woohoo.com;

    include common;

    access_log            /var/log/nginx/rat.access.log;

    location / {

      proxy_set_header        Host $host;
      proxy_set_header        X-Real-IP $remote_addr;
      proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header        X-Forwarded-Proto $scheme;

      # Fix the "It appears that your reverse proxy set up is broken" error.
      proxy_pass          http://localhost:1919;
      proxy_read_timeout  90;

      proxy_redirect      http://localhost:1919 https://rat.woohoo.com;
    }
}

server {

    listen 443;
    server_name foo.woohoo.com;

    include common;

    access_log            /var/log/nginx/foo.access.log;

    location / {

      proxy_set_header        Host $host;
      proxy_set_header        X-Real-IP $remote_addr;
      proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header        X-Forwarded-Proto $scheme;

      # Fix the "It appears that your reverse proxy set up is broken" error.
      proxy_pass          http://localhost:2020;
      proxy_read_timeout  90;

      proxy_redirect      http://localhost:2020 https://foo.woohoo.com;
    }
}

Spin Up Hello World Containers

Here we spin up two Docker containers that each just display a hello world page in response to HTTP requests.

docker run -d -p 1919:80 tutum/hello-world
docker run -d -p 2020:80 tutum/hello-world

Spin Up the nginx Container

Finally, spin up your nginx container.

docker run -d \
 --net=host \
 -v /home/ubuntu/etc/nginx/conf.d:/etc/nginx/conf.d \
 -v /home/ubuntu/etc/nginx/common:/etc/nginx/common \
 -v /home/ubuntu/etc/nginx/gd79_bundle.crt:/etc/nginx/gd79_bundle.crt \
 -v /home/ubuntu/etc/nginx/woohoo-wildcard.key:/etc/nginx/woohoo-wildcard.key \
 -v /home/ubuntu/var/log/nginx:/var/log/nginx \
 nginx

That's All

Assuming foo.woohoo.com and rat.woohoo.com map to the public IP of your server . . .

And assuming port 443 is not blocked by firewalls or AWS security groups . . .

You should be able to browse to https://foo.woohoo.com and https://rat.woohoo.com, and see a lovely hello world page.

The only other trick is getting your ssl_certificate and ssl_certificate_key correct. That's a topic for another time.
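That said, one sanity check is quick to do yourself: verify the cert and key actually belong together by comparing their RSA moduli. A sketch below, with a throwaway self-signed wildcard cert standing in for the real gd79_bundle.crt / woohoo-wildcard.key pair:

```shell
# Create a throwaway self-signed wildcard cert/key pair for illustration.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=*.woohoo.com" \
  -keyout "$tmpdir/wild.key" -out "$tmpdir/wild.crt" 2>/dev/null

# The cert and key match if their RSA moduli hash to the same value.
openssl x509 -noout -modulus -in "$tmpdir/wild.crt" | openssl md5
openssl rsa  -noout -modulus -in "$tmpdir/wild.key" | openssl md5

# And confirm the subject really is a wildcard.
openssl x509 -noout -subject -in "$tmpdir/wild.crt"
```

If the two hashes differ, nginx will refuse to start (or browsers will reject the handshake), so it's worth checking before mounting the files into the container.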

References

http://serverfault.com/questions/538803/nginx-reverse-ssl-proxy-with-multiple-subdomains

Martin Holste on AWS Lambda Functions

Published on:

Building Scalable and Responsive Big Data Interfaces with AWS Lambda

July 10, 2015 FireEye AWS Lambda

This is a guest post by Martin Holste, a co-founder of the Threat Analytics Platform at FireEye where he is a senior researcher specializing in prototypes.

Overview

At FireEye, Inc., we process billions of security events every day with our Threat Analytics Platform, running on AWS. In building our platform, one of the problems we had to solve was how to be efficient and responsive with user-driven event analysis at this scale. Our analysis falls into three basic categories: threat intelligence matching, anomaly detection, and user-driven queries. We relentlessly search for ways to improve our efficiency and responsiveness, and AWS Lambda is a solution that has shown significant value in fulfilling these goals by providing a simple platform for scaling user-driven workloads.

http://blogs.aws.amazon.com/bigdata/post/Tx3KH6BEUL2SGVA/Building-Scalable-and-Responsive-Big-Data-Interfaces-with-AWS-Lambda

DigitalOcean ELK Stack

Published on:

The DigitalOcean ELK Stack One-Click Application provides you with a quick way to launch a centralized logging server. The ELK Stack is made up of three key pieces of software: Elasticsearch, Logstash, and Kibana. Together they allow you to collect, search, and analyze log files from across your infrastructure. Logstash collects and parses the incoming logs, Elasticsearch indexes them, and Kibana gives you a powerful web interface to visualize the data.

This tutorial will show you how to launch an ELK instance and set up Logstash Forwarder on your other servers to send their logs to your new centralized logging server.

https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-elk-stack-one-click-application

Making DigitalOcean's Private Networking Secure

Published on:

A few weeks ago here at Hone, we decided to spin up a new server cluster in DigitalOcean's NYC3 data center.

DigitalOcean introduced ‘private’ networking just over a year ago. However, it turns out that DigitalOcean actually refers to this as Shared Private Networking—and many of the comments under their announcement point out that their private networking isn’t really too private.

We decided to use OpenVPN to layer a secure network on top of DigitalOcean’s shared private networking.

https://cardoni.net/how-to-install-and-configure-openvpn/

Problem with Promises

Published on:

Fellow JavaScripters, it's time to admit it: we have a problem with promises.

No, not with promises themselves. Promises, as defined by the A+ spec, are awesome.

The big problem, which has revealed itself to me over the course of the past year, as I've watched numerous programmers struggle with the PouchDB API and other promise-heavy APIs, is this:

Many of us are using promises without really understanding them.

http://pouchdb.com/2015/05/18/we-have-a-problem-with-promises.html

nginx ssl reverse proxy

Published on:

This came in really handy.

https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-with-ssl-as-a-reverse-proxy-for-jenkins

server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {

    listen 443;
    server_name jenkins.domain.com;

    ssl_certificate           /etc/nginx/cert.crt;
    ssl_certificate_key       /etc/nginx/cert.key;

    ssl on;
    ssl_session_cache  builtin:1000  shared:SSL:10m;
    ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    access_log            /var/log/nginx/jenkins.access.log;

    location / {

      proxy_set_header        Host $host;
      proxy_set_header        X-Real-IP $remote_addr;
      proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header        X-Forwarded-Proto $scheme;

      # Fix the "It appears that your reverse proxy set up is broken" error.
      proxy_pass          http://localhost:8080;
      proxy_read_timeout  90;

      proxy_redirect      http://localhost:8080 https://jenkins.domain.com;
    }
}

iptables fun

Published on:
#!/bin/bash

# /etc/init.d/vimmau-whitelist

# Controls access to  aztek banana and jersey server ports.
# Allows us to whitelist a lot more ip ranges than AWS
# security groups allow.

# To be run only on vimmau hosts.

# AWS security groups will grant full access to banana server--$BANANA_PORT,
# and jersey server--$JERSEY_PORT. We control whitelists here.

# banana and jersey servers may or may not be on the same box. And we may move
# them around. Currently all aztek services run on a single box. We may
# move banana and/or jersey off to a separate box. So this script needs to
# always turn off access on boxes that don't have banana or jersey running
# as well as turn on access if they do.

# NOTE: We assume we are the only one that does anything with iptables on
# a given box. We broadly nuke all chains in all tables, etc.

# whitelist of current customer ip ranges
CUST_WHITELIST=/home/ubuntu/customer.whitelist
# whitelist for rfc1918 addresses and
# whitelist for other special access like the vpn endpoint
RFC1918_WHITELIST=/home/ubuntu/rfc1918-custom.whitelist
HASHSIZE=4096

if [ -z "$ALLOW_BANANA" ]; then
  # so since we currently run banana and jersey on same server we will
  # just hard code this to true. we will need to get more fancy if
  # banana and jersey on different servers.
  ALLOW_BANANA=true
fi
if [ -z "$ALLOW_JERSEY" ]; then
  # same deal as ALLOW_BANANA above: hard coded to true while banana
  # and jersey share a server.
  ALLOW_JERSEY=true
fi
if [ -z "$BANANA_PORT" ]; then
  BANANA_PORT=4959
fi
if [ -z "$JERSEY_PORT" ]; then
  JERSEY_PORT=4960
fi

echo "ALLOW_BANANA: $ALLOW_BANANA"
echo "ALLOW_JERSEY: $ALLOW_JERSEY"
echo "BANANA_PORT: $BANANA_PORT"
echo "JERSEY_PORT: $JERSEY_PORT"

start() {
  echo "start"
  # nuke all rules in all chains which essentially allows everything.
  open
  port80and443
  whitelist
  if [ "$ALLOW_BANANA" = true ] ; then
    allow_port $BANANA_PORT
  else
    echo "not allowing banana $BANANA_PORT"
  fi
  if [ "$ALLOW_JERSEY" = true ] ; then
    allow_port $JERSEY_PORT
  else
    echo "not allowing jersey $JERSEY_PORT"
  fi
  block_port $BANANA_PORT
  block_port $JERSEY_PORT
}

# So this redirects 80 to 4959 and 443 to 4960. This way customers can use
# either port and it will work. No need to set or configure per-customer
# ports. We will of course have to set the port per customer for the
# vimmau receptor, but not for the server side.
# We can allow all 4 ports in AWS security groups or just the two we know
# the customer will use.
port80and443() {
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 4959
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 4960
}

# opens all ports and deletes the whitelist.
open() {
  iptables -F
  iptables -X
  iptables -t nat -F
  iptables -t nat -X
  iptables -t mangle -F
  iptables -t mangle -X
  iptables -P INPUT ACCEPT
  iptables -P FORWARD ACCEPT
  iptables -P OUTPUT ACCEPT
  ipset destroy whitelist &>/dev/null || true
}

# blocks 'all' ports and deletes the whitelist.
stop() {
  open
  block_port $BANANA_PORT
  block_port $JERSEY_PORT
}

whitelist() {
  echo "whitelist"
  ipset flush whitelist &>/dev/null || true
  load_whitelist $CUST_WHITELIST
  load_whitelist $RFC1918_WHITELIST
}

load_whitelist() {
  if [ -f "$1" ] ; then
    # make sure there's an ipset named 'whitelist'.
    # this command seems to be idempotent.
    ipset -exist create whitelist hash:net hashsize $HASHSIZE
    # loop over lines in file
    while read rule; do
      # skip lines that start with #
      if [[ ${rule:0:1} == '#' ]]; then
        echo "skipping $rule"
      else
        ipset -exist add whitelist $rule
      fi
    done <"$1"
  else
    echo "$1 not found"
  fi
}

allow_port() {
  echo "allowing port: $1"
  iptables -v -A INPUT -m set --match-set whitelist src -p TCP --dport $1 -j ACCEPT
}

block_port() {
  echo "blocking port: $1"
  iptables -v -A INPUT -p TCP --dport $1 -j LOG --log-prefix "IPTABLES DROPPED: "
  iptables -v -A INPUT -p TCP --dport $1 -j DROP
}

status() {
  echo ""
  iptables -t nat -nvL
  echo ""
  iptables -nvL
  echo ""
  ipset list
}

if [ "$1" == "start" ]; then
  start
elif [ "$1" == "reload" ]; then
  whitelist
elif [ "$1" == "stop" ]; then
  stop
elif [ "$1" == "open" ]; then
  open
elif [ "$1" == "status" ]; then
  status
else
  echo "whitelist.sh <start|stop|reload|open|status>"
  echo "  start - allow only whitelisted ports and ips"
  echo "  stop - block both ports for all ips"
  echo "  reload - reload list of whitelisted ips"
  echo "  open - allow all ips for both ports (DANGER)"
  echo "  status - show rules and whitelist"
fi
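For reference, the whitelist files that load_whitelist reads are just CIDR ranges (or single IPs), one per line, with # comment lines skipped. Something like this (addresses hypothetical):

```text
# customer.whitelist -- hypothetical entries
# Acme Corp office
203.0.113.0/24
# Acme Corp VPN endpoint
198.51.100.17
```

Each non-comment line is fed to ipset -exist add whitelist, so anything ipset's hash:net type accepts is fine.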

Visa and FireEye

Published on:

Visa and FireEye Join Forces to Help Merchants, Financial Institutions Defend Against Targeted Attacks on Consumer Payment Data

New Visa and FireEye cyber watch program will provide advanced cyber protection capabilities for merchants of all sizes

San Francisco and Milpitas, Calif. – June 3, 2015 – Visa Inc. (NYSE: V) and FireEye, Inc. (NASDAQ: FEYE) today announced their intention to co-develop tools and services to help merchants and issuers protect against advanced cyber attacks targeting payment data. The first of its kind Visa and FireEye Community Threat Intelligence (CTI) offering will bring together threat information from both companies, allowing merchants and issuers to quickly detect and respond to attacks against their IT and payment infrastructure. Under the offering, FireEye will operate the easy-to-use web based service to enhance stakeholders’ knowledge of attacks targeting the ecosystem, providing a significant improvement over current industry practices of sharing threat intelligence via e-mail or static documents.

https://www.fireeye.com/company/press-releases/2015/06/visa-and-fireeye-join-forces-to-help-merchants--financial-instit.html

FireEye management wants you to know

Published on:

5 Things FireEye's Management Wants You to Know

By Investopedia | May 29, 2015 AAA |

Cybersecurity company FireEye (NASDAQ: FEYE) keeps posting numbers that suggest a sustained demand for its threat detection, prevention, and resolution services. The company's recently filed first-quarter 2015 results revealed a leap in revenue of 69% from the prior-year quarter.

http://www.investopedia.com/stock-analysis/052915/5-things-fireeyes-management-wants-you-know-feye.aspx?partner=YahooSA

Ansible uri body bug

Published on:

So there's a very annoying Ansible bug related to the uri body. Let's say you have something like this.

- uri:
    url: "http://some.url"
    method: PUT
    body: '{"url": "http://{{ ansible_eth0.ipv4.address }}:3003"}'
    body_format: json

No, you didn't do anything wrong, but you will get a nasty error: TypeError: must be string or buffer, not dict. You can read about the details here: https://github.com/ansible/ansible-modules-core/issues/265

This hack will work around the problem. Essentially you have to add a null value to your JSON so that Ansible won't wrongly turn the JSON string back into a dict.

vars:
  listener_body:
    url: "http://{{ ansible_eth0.ipv4.address }}:3003"
    _hack: null
tasks:
  - uri:
      url: "http://some.url"
      method: PUT
      body: "{{ listener_body | to_json }}"
      body_format: json
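As a rough illustration of why the hack works (per the linked issue, Ansible eagerly turns a rendered string that looks like a dict back into a dict): null is valid JSON but not a Python literal, so the rendered body survives as a plain string. Python's json module stands in for Jinja2's to_json filter here, and the address is hypothetical:

```python
import json
from ast import literal_eval

# Stand-in for what "{{ listener_body | to_json }}" renders to.
listener_body = {"url": "http://10.0.0.5:3003", "_hack": None}
body = json.dumps(listener_body)
print(body)  # note the "_hack": null at the end

# Without the hack, the rendered string is also a valid Python
# literal, so an eager literal_eval turns it back into a dict:
print(type(literal_eval('{"url": "http://10.0.0.5:3003"}')))

# With "null" in the string, literal_eval fails, so the body
# stays a plain string -- which is what the uri module needs.
try:
    literal_eval(body)
except ValueError:
    print("stays a string")
```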