When apps A, B, and C need to know about each other . . .
One approach is a deployment system with config files where you enter the location of each app. This works fine unless things change often or you have lots of apps that need to know about each other.
The slam dunk is to not configure anything. When you deploy your app, it tells the service discovery system about itself, and the service discovery system takes care of letting the other apps know what they need to know.
How does this work in practice?
We deploy app A. It tells the service discovery system it's at 34.25.2.11:294. The service discovery system sees that app A's info has changed, updates the config files of the other apps, and tells them their config files have changed. It could update environment variables as well.
With this approach your applications don't know anything about the service discovery system. They just need a way to be notified about config changes. Or you could just restart the app whenever there's a change.
This approach works for anything else apps need to know about each other: feature flags and so on.
Consul, along with consul-template and envconsul, lets you do all of this fairly simply.
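As a rough sketch of what that looks like (reusing the example address from above; the service name, template paths, and reload command are made up, and this assumes a Consul agent listening on its default port 8500):
# Register "app-a" with the local Consul agent over its HTTP API.
curl -X PUT -d '{"Name": "app-a", "Address": "34.25.2.11", "Port": 294}' http://localhost:8500/v1/agent/service/register
# Re-render a config file whenever app-a's info changes, then run a reload
# command for the consuming app.
consul-template -template "/etc/templates/upstreams.ctmpl:/etc/app-b/upstreams.conf:service app-b reload"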
One of the defining features of Elasticsearch is that it’s exposed as a (loosely) RESTful service over HTTP.
The benefits are easy to spell out, of course: the API is familiar and predictable to all web developers. It’s easy to use with “bare hands” via the curl command, or in the browser. It’s easy to write API wrappers in various programming languages.
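For instance, checking cluster health or running a quick search takes nothing more than curl (localhost:9200 is Elasticsearch's default address; the "logs" index name is just a placeholder):
# Check cluster health.
curl -XGET 'http://localhost:9200/_cluster/health?pretty'
# Run a simple query-string search against a hypothetical "logs" index.
curl -XGET 'http://localhost:9200/logs/_search?q=status:500&pretty'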
Nevertheless, the importance of the HTTP-based nature of Elasticsearch is rooted deeper: in the way it fits into the existing paradigm of software development and architecture.
So I have a domain registered with GoDaddy and I want to manage just one or more subdomains of it on AWS Route 53. (I actually tried to do this with my NetworkSolutions account first, but on the phone they said it wasn't possible.)
I ended up with the DNS zone file below. ratz.jsks.us and everything under it is now managed by Route 53.
; SOA Record
JSKS.US. 3600 IN SOA ns05.domaincontrol.com. dns.jomax.net (
2015072400 ; serial
28800      ; refresh
7200       ; retry
604800     ; expire
3600       ; minimum TTL
)
; A Records
@ 1800 IN A 54.82.143.206
; CNAME Records
blog 3600 IN CNAME @
img 3600 IN CNAME 8e00c44fb080228f198f-d1e9688c9fbaaa049b87caca2fdd4594.r84.cf2.rackcdn.com
notes 3600 IN CNAME @
www 3600 IN CNAME @
; NS Records
@ 3600 IN NS ns05.domaincontrol.com
@ 3600 IN NS ns06.domaincontrol.com
ratz 1800 IN NS ns-1061.awsdns-04.org
ratz 1800 IN NS ns-1882.awsdns-43.co.uk
ratz 1800 IN NS ns-214.awsdns-26.com
ratz 1800 IN NS ns-992.awsdns-60.net
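Once the ratz NS records above are in place and the matching hosted zone exists in Route 53, the delegation can be sanity-checked with dig (test is just a hypothetical record created in the Route 53 zone):
# Should return the four awsdns name servers listed above.
dig NS ratz.jsks.us +short
# Any record created in the Route 53 hosted zone should now resolve.
dig A test.ratz.jsks.us +short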
Instead of having everyone log in with a separate user account, you could have everyone use a different key pair. Then set LogLevel to VERBOSE in /etc/ssh/sshd_config. The logs will look like this:
Jun 24 22:43:42 localhost sshd[29779]: Found matching RSA key: d8:d5:f3:5a:7e:27:42:91:e6:a5:e6:9e:f9:fd:d3:ce
Jun 24 22:43:42 localhost sshd[29779]: Accepted publickey for caleb from 127.0.0.1 port 59630 ssh2
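To figure out whose key that fingerprint belongs to, compare it against the fingerprints of the public keys you handed out (the path below is just an example; on OpenSSH 6.8+ you need -E md5 to get the colon-separated MD5 format shown in the log):
# Print the fingerprint of a public key.
ssh-keygen -lf /home/caleb/.ssh/caleb.pub
# Newer OpenSSH defaults to SHA256 fingerprints, so request MD5 explicitly.
ssh-keygen -E md5 -lf /home/caleb/.ssh/caleb.pub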
Building Scalable and Responsive Big Data Interfaces with AWS Lambda
July 10, 2015 FireEye AWS Lambda
This is a guest post by Martin Holste, a co-founder of the Threat Analytics Platform at FireEye, where he is a senior researcher specializing in prototypes.
Overview
At FireEye, Inc., we process billions of security events every day with our Threat Analytics Platform, running on AWS. In building our platform, one of the problems we had to solve was how to be efficient and responsive with user-driven event analysis at this scale. Our analysis falls into three basic categories: threat intelligence matching, anomaly detection, and user-driven queries. We relentlessly search for ways to improve our efficiency and responsiveness, and AWS Lambda is a solution that has shown significant value in fulfilling these goals by providing a simple platform for scaling user-driven workloads.
The DigitalOcean ELK Stack One-Click Application provides you with a quick way to launch a centralized logging server. The ELK Stack is made up of three key pieces of software: Elasticsearch, Logstash, and Kibana. Together they allow you to collect, search, and analyze log files from across your infrastructure. Logstash collects and parses the incoming logs, Elasticsearch indexes them, and Kibana gives you a powerful web interface to visualize the data.
This tutorial will show you how to launch an ELK instance and set up Logstash Forwarder on your other servers to send their logs to your new centralized logging server.
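As a rough idea of the client side, here is a minimal sketch of a Logstash Forwarder config; the server hostname, port, certificate path, and log paths are all assumptions, not necessarily what the One-Click image uses:
# Write a minimal /etc/logstash-forwarder.conf on a client server.
cat > /etc/logstash-forwarder.conf <<'EOF'
{
  "network": {
    "servers": [ "logs.example.com:5043" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    { "paths": [ "/var/log/syslog", "/var/log/auth.log" ],
      "fields": { "type": "syslog" } }
  ]
}
EOF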
A few weeks ago here at Hone, we decided to spin up a new server cluster in DigitalOcean’s NYC3 data center.
DigitalOcean introduced ‘private’ networking just over a year ago. However, it turns out that DigitalOcean actually refers to this as Shared Private Networking, and many of the comments under their announcement point out that it isn’t really all that private.
We decided to use OpenVPN to layer a secure network on top of DigitalOcean’s shared private networking.
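One common way to do this is to run a routed OpenVPN server on one droplet and have the other droplets connect as clients; a minimal server config sketch might look like the following (the key paths and the 10.8.0.0/24 subnet are assumptions, not our actual values):
# Write a bare-bones OpenVPN server config (sketch only).
cat > /etc/openvpn/server.conf <<'EOF'
port 1194
proto udp
dev tun
ca /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key
dh /etc/openvpn/dh2048.pem
server 10.8.0.0 255.255.255.0
keepalive 10 120
persist-key
persist-tun
EOF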
Fellow JavaScripters, it's time to admit it: we have a problem with promises.
No, not with promises themselves. Promises, as defined by the A+ spec, are awesome.
The big problem, which has revealed itself to me over the course of the past year, as I've watched numerous programmers struggle with the PouchDB API and other promise-heavy APIs, is this:
Many of us are using promises without really understanding them.
#!/bin/bash
# /etc/init.d/vimmau-whitelist
# Controls access to aztek banana and jersey server ports.
# Allows us to whitelist a lot more ip ranges than AWS
# security groups allow.
# To be run only on vimmau hosts.
# AWS security groups will grant full access to banana server--$BANANA_PORT,
# and jersey server--$JERSEY_PORT. We control whitelists here.
# banana and jersey servers may or may not be on the same box. And we may move
# them around. Currently all aztek services run on a single box. We may
# move banana and/or jersey off to a separate box. So this script needs to
# always turn off access on boxes that don't have banana or jersey running
# as well as turn on access if they do.
# NOTE: We assume we are the only one that does anything with iptables on
# a given box. We broadly nuke all chains in all tables, etc.
# whitelist of current customer ip ranges
CUST_WHITELIST=/home/ubuntu/customer.whitelist
# whitelist for rfc1918 addresses and
# whitelist for other special access like the vpn endpoint
RFC1918_WHITELIST=/home/ubuntu/rfc1918-custom.whitelist
HASHSIZE=4096
if [ -z "$ALLOW_BANANA" ]; then
# so since we currently run banana and jersey on same server we will
# just hard code this to true. we will need to get more fancy if
# banana and jersey on different servers.
ALLOW_BANANA=true
fi
if [ -z "$ALLOW_JERSEY" ]; then
# so since we currently run banana and jersey on same server we will
# just hard code this to true. we will need to get more fancy if
# banana and jersey on different servers.
ALLOW_JERSEY=true
fi
if [ -z "$BANANA_PORT" ]; then
BANANA_PORT=4959
fi
if [ -z "$JERSEY_PORT" ]; then
JERSEY_PORT=4960
fi
echo "ALLOW_BANANA: $ALLOW_BANANA"
echo "ALLOW_JERSEY: $ALLOW_JERSEY"
echo "BANANA_PORT: $BANANA_PORT"
echo "JERSEY_PORT: $JERSEY_PORT"
start() {
echo "start"
# nuke all rules in all chains which essentially allows everything.
open
port80and443
whitelist
if [ "$ALLOW_BANANA" = true ] ; then
allow_port $BANANA_PORT
else
echo "not allowing banana $BANANA_PORT"
fi
if [ "$ALLOW_JERSEY" = true ] ; then
allow_port $JERSEY_PORT
else
echo "not allowing jersey $JERSEY_PORT"
fi
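# NOTE: the whitelisted ACCEPT rules appended above match first, so the
# LOG/DROP rules below only catch traffic from IPs not in the whitelist.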
block_port $BANANA_PORT
block_port $JERSEY_PORT
}
# So this redirects 80 to $BANANA_PORT and 443 to $JERSEY_PORT. The way this
# works, customers can use either port and it will work. No need to set or
# configure per-customer ports. We will of course have to set a port per
# customer for the vimmau receptor, but not for the server side.
# We can allow all 4 ports in AWS security groups or just the two we know the
# customer will use.
port80and443() {
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port $BANANA_PORT
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port $JERSEY_PORT
}
# opens all ports and deletes the whitelist.
open() {
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
ipset destroy whitelist &>/dev/null || true
}
# blocks both service ports for all ips and deletes the whitelist.
stop() {
open
block_port $BANANA_PORT
block_port $JERSEY_PORT
}
whitelist() {
echo "whitelist"
ipset flush whitelist &>/dev/null || true
load_whitelist $CUST_WHITELIST
load_whitelist $RFC1918_WHITELIST
}
load_whitelist() {
if [ -f "$1" ] ; then
# make sure there's an ipset named 'whitelist'.
# this command seems to be idempotent.
ipset -exist create whitelist hash:net hashsize $HASHSIZE
# loop over lines in file
while read -r rule; do
# skip lines that start with #
if [[ ${rule:0:1} == '#' ]]; then
echo "skipping $rule"
else
ipset -exist add whitelist $rule
fi
done <"$1"
else
echo "$1 not found"
fi
}
allow_port() {
echo "allowing port: $1"
iptables -v -A INPUT -m set --match-set whitelist src -p TCP --dport $1 -j ACCEPT
}
block_port() {
echo "blocking port: $1"
iptables -v -A INPUT -p TCP --dport $1 -j LOG --log-prefix "IPTABLES DROPPED: "
iptables -v -A INPUT -p TCP --dport $1 -j DROP
}
status() {
echo ""
iptables -t nat -nvL
echo ""
iptables -nvL
echo ""
ipset list
}
if [ "$1" == "start" ]; then
start
elif [ "$1" == "reload" ]; then
whitelist
elif [ "$1" == "stop" ]; then
stop
elif [ "$1" == "open" ]; then
open
elif [ "$1" == "status" ]; then
status
else
echo "whitelist.sh <start|stop|reload|status>"
echo " start - allow only whitelisted ports and ips"
echo " stop - block both ports for all ips"
echo " reload - reload list of whitelisted ips"
echo " open - allow all ips for both ports (DANGER)"
echo " status - show rules and whitelist"
fi
Visa and FireEye Join Forces to Help Merchants, Financial Institutions Defend Against Targeted Attacks on Consumer Payment Data
New Visa and FireEye cyber watch program will provide advanced cyber protection capabilities for merchants of all sizes
San Francisco and Milpitas, Calif. – June 3, 2015 – Visa Inc. (NYSE: V) and FireEye, Inc. (NASDAQ: FEYE) today announced their intention to co-develop tools and services to help merchants and issuers protect against advanced cyber attacks targeting payment data. The first-of-its-kind Visa and FireEye Community Threat Intelligence (CTI) offering will bring together threat information from both companies, allowing merchants and issuers to quickly detect and respond to attacks against their IT and payment infrastructure. Under the offering, FireEye will operate the easy-to-use, web-based service to enhance stakeholders’ knowledge of attacks targeting the ecosystem, providing a significant improvement over current industry practices of sharing threat intelligence via e-mail or static documents.
Cybersecurity company FireEye (NASDAQ: FEYE) keeps posting numbers that suggest a sustained demand for its threat detection, prevention, and resolution services. The company's recently filed first-quarter 2015 results revealed a leap in revenue of 69% from the prior-year quarter.
This hack works around the problem. Essentially, you add a null value to your JSON so that Ansible won't wrongly turn the JSON string back into a dict.