Quick and dirty way to scan host keys

ssh-keyscan -t rsa "$i","$(python -c "import socket; print(socket.gethostbyname('$i'))")" > b

#TODO make this better
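
In the spirit of that TODO, a slightly sturdier sketch in Python (the hosts.txt input file and the skip-on-failure behaviour are my own inventions):

#!/usr/bin/env python3
# Resolve each host name and scan it, recording name and address together
# so one known_hosts entry covers both (same trick as the one-liner above).
import socket
import subprocess

with open("hosts.txt") as f:
    for host in (line.strip() for line in f):
        if not host:
            continue
        try:
            ip = socket.gethostbyname(host)
        except socket.gaierror:
            continue  # skip hosts that do not resolve
        subprocess.run(["ssh-keyscan", "-t", "rsa", f"{host},{ip}"])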

Stopping unwanted git commits on a shared repository

I’ve occasionally run into people who modify and commit to a checked-out build repository instead of making the changes on their workstation and pushing them in. I noticed that Galen Grover had a fairly neat solution to prevent himself from committing to master (one I will probably copy for another workflow).

So I borrowed and extended it.

#!/bin/bash
# pre-commit hook: stop people committing directly to the shared build checkout.
# Site-specific placeholders; set these for your environment.
BUILDUSER="build"
MAINREPO="git@example.com:build/repo.git"
REPONAME="repo"
REPODIR=$(git rev-parse --show-toplevel)
EMAIL="ops@example.com"
DATE=$(date --iso-8601=minutes)
PATCHFILE="$REPONAME-$DATE.patch"

if [ "$(whoami)" = "$BUILDUSER" ]; then
    echo "You should not be attempting to commit as the $BUILDUSER user"
    exit 1
fi

function diff_all {
    # staged and unstaged changes, plus untracked files diffed against /dev/null
    git --no-pager diff "$REPODIR"
    git ls-files --others --exclude-standard |
    while read -r i; do git --no-pager diff -- /dev/null "$i"; done
}

function repo {
    # create or update a personal clone in the user's home directory
    if [ ! -d "$HOME/$REPONAME" ]; then
        git clone -q "$MAINREPO" "$HOME/$REPONAME" 2>/dev/null
    elif [ -d "$HOME/$REPONAME/.git" ]; then
        GIT_WORK_TREE="$HOME/$REPONAME" GIT_DIR="$HOME/$REPONAME/.git" git pull -q "$MAINREPO" 2>/dev/null
    fi
}

function unstage {
    git reset HEAD
}

echo "Do not commit to this repository."
echo ""
echo "This repository has its changes deployed automatically."
echo ""
echo "A clone of this repository will be created or updated in your home directory."
repo
echo ""
echo "Your changes will be unstaged"
echo "and a patch will be created in your home directory."
unstage
diff_all > "$HOME/$PATCHFILE"
echo "You can apply the patch file with the following command:"
GIT_WORK_TREE="$HOME/$REPONAME" GIT_DIR="$HOME/$REPONAME/.git" git apply "$HOME/$PATCHFILE"
echo "cd ~/$REPONAME && git apply ~/$PATCHFILE"
echo "but this script has already done that for you."
echo "Now simply add your changes, commit, and push to the primary repo."
echo ""
echo "Please clean up after yourself with a git reset to ensure that this repo stays consistent:"
diff_all | mail -s "$USER just tried to commit" "$EMAIL"
echo "git reset --hard HEAD # use sparingly; it wipes all changes"

exit 1
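
To put the hook to work, save it as .git/hooks/pre-commit in the shared checkout and make it executable; git runs it before every commit, and the non-zero exit is what blocks the commit.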

Wasting time on reddit

So I’m bored and on reddit, reading about myki.

I guess it’s time to do some hypothetical over-engineering.

First, the challenge: the claim that a reasonably well designed system could not handle frequent updates from the number of buses and trams in Melbourne (trains are a different question, because updating the stations is both easier and harder).


The goal:

  • Reduce the time it takes to update the card balance.

And some assumptions (some of which are there to make the maths easier for me, some to over-engineer, and some because I just felt they were reasonable):

Assumptions on operation:

  • Everything is fully asynchronous. This allows touch on/off operations and card charges to be applied from local data, removing slow network connections from the critical path.
  • We have less than 5000 buses or trams in Melbourne.
  • We don’t care about people who have touched off a tram or people who have failed to touch off more than 90 minutes ago.
  • The database is always correct (if the central db isn’t always correct other things are a problem)
  • We want the system to respond within trips of more than 1 minute.
  • We aim to respond within 75% of 1 to 5 minute trips.
  • For 5+ minute trips, 99% of the time.
  • Buses, trains and trams operate 24 hours a day, 365.26 days per year (this is one of the ones for the maths).

Assumptions on tech:

  • 8 bits to a byte
  • b = bits
  • B = Bytes
  • each field gets a 16 bit label
  • 64 bits for card ID: 1.84467 × 10^19 possible cards
  • 24 bits for the balance in cents (lets it fit in 3 bytes and makes the maths easier)
  • 64 bits for bus/tram/station ID (never reused)
  • 216 bits for bus location information. This is overkill by design (it's roughly enough to assign a few thousand values to every atom in the solar system). It includes 64-bit time, latitude and longitude fields, 8 bits for speed (I doubt buses/trams would exceed 128 km/h, or 256 if you wanted to record speed only and ignore reverse) and 16 bits for direction.
  • We don’t see more than 100 people board a bus or tram in a minute. (this will be the baseline number for all future calculations)
  • ignore IP / TCP / UDP overheads

OK. Now what? Let's see if we are going to saturate networks with this amount of data flowing in and out.

Each packet would contain at most (assuming 100 people per minute) the following:

From the source to central:

376 bits of header
16 bits of array id
16 bits of array length
100 objects
    80 bits each:
        64 bits of UUID
        16 bits of separator / label

Raw size: 8408 bits, or 1051 bytes.
Add encryption and assume it doubles the message size:
2102 bytes per request.
5000 sources requesting once per minute:
just over 10 MB/minute,
around 1.4 Mb/s average incoming to central.

From central to source:

64 bits of source ID (plus a 16-bit label)
64 bits of time (plus a 16-bit label)
16 bits of array id
16 bits of array length
100 objects each
    120 bits long:
        64 bits of UUID
        24 bits of balance
        2 × 16-bit separators / labels

Raw size: 12192 bits, or 1524 bytes.
With the same encryption assumption:
3048 bytes per response,
just over 15 MB/minute of traffic,
around 2 Mb/s outgoing from central on average.
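
Those figures are easy to sanity-check. Here's a quick sketch of the arithmetic in Python (the field sizes come straight from the assumptions above; the variable names are my own):

# Back-of-the-envelope check of the packet sizes and bandwidth above.
HEADER = 376    # bits of header on a source -> central request
LABEL = 16      # bits per field label / separator
UUID = 64       # bits per card ID
BALANCE = 24    # bits per balance in cents
PEOPLE = 100    # assumed worst case boardings per source per minute
SOURCES = 5000  # buses and trams in the fleet

# source -> central: header + array id + array length + 100 labelled UUIDs
up = HEADER + 2 * LABEL + PEOPLE * (UUID + LABEL)
# central -> source: labelled source ID + labelled time + array id and
# length + 100 objects of UUID + balance + two labels
down = 2 * (64 + LABEL) + 2 * LABEL + PEOPLE * (UUID + BALANCE + 2 * LABEL)

for name, bits in (("up", up), ("down", down)):
    encrypted = bits // 8 * 2    # assume encryption doubles the size
    fleet = encrypted * SOURCES  # bytes per minute across the whole fleet
    print(f"{name}: {bits} bits raw, {encrypted} B encrypted, "
          f"{fleet / 1e6:.1f} MB/min, {fleet * 8 / 60 / 1e6:.1f} Mb/s")

Running it reproduces the 8408 and 12192 raw bit counts, the 10 and 15 MB/minute totals, and roughly 1.4 and 2 Mb/s of sustained traffic.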

So from a connection point of view it's entirely possible to provision that much bandwidth to the server and provide that much bandwidth to the entire network (remember, each bus or tram only needs < 5 KB/s for one or two seconds per minute).

Now, requests per minute (RPM) is another question.

Taking the earlier assumption of 5000 concurrent devices over the entire network and assuming 1 request per minute (RPM) per source, you get 5000 RPM (maths is fun!!). Now 5k RPM is a reasonably difficult target to meet if you are running a wonderful enterprise-y app hitting an ACID DB with no caching. But it becomes easy if you split things up into something that looks kinda similar to the below:

Message queues are used heavily in this design; they are distinguished by the names in parentheses.

Front end
    high CPU requirements, but otherwise fairly minimal
    receives the request and decrypts it
        header information is passed into a message queue (ext) for other systems to use
        UUIDs and header are parsed and placed in a message queue (ResponseGen)
        checks for its SourceId on the message queue (Responded)
        encrypts and returns the response if present; if none, returns a 200 (or similar)

Response Returner
    moderate CPU and memory
    reads (ResponseGen)
        passes UUIDs to the message queue (CardLookup)
        and holds for a response
    formats the response once returned (or on timeout)
    writes the response to (Responded)

Card Lookup
    high memory and high memory IO
    reads (CardLookup) and
        performs a lookup of each UUID in a memory-resident key/value store
            (100 million records should fit with zero compression in about 2 GB of RAM;
                raw it's actually just over 1 GB, but we round up)
        writes the response back to (CardLookup) for the Response Returner to pick up
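
The Card Lookup stage is small enough to sketch. Assuming a plain in-process dict keyed on the 64-bit card ID (the function name and storage layout are my own, and the queue plumbing is omitted):

# Sketch of the Card Lookup stage: a memory-resident key/value store.
# 100 million records at 8 bytes of key plus 3 bytes of balance is about
# 1.1 GB raw, so budgeting 2 GB of RAM leaves room for overhead.
balances = {}  # card UUID (64-bit int) -> balance in cents

def lookup_batch(card_ids):
    """Resolve one request's worth of card IDs (up to 100) to balances."""
    return {cid: balances.get(cid) for cid in card_ids}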

This system, written in an interpreted language such as Ruby or Python, would easily hit the 5k RPM required.

So now, on about $20,000 worth of servers, we can hit far more RPM than we require. Of course, developer time is going to be a more significant cost, as will the required hardware and network links in each of your sources.

TL;DR: Easily doable. The only hard part will be getting the feed from the current system.

Curriculum Vitæ


December 2012 - Ongoing, Linux Systems Engineer - Aconex, Melbourne, Victoria

  • Was part of the on-call rotation for global support and escalation requests
  • Responded to critical incidents in major production environments
  • Built out infrastructure to deploy new production and QA instances on AWS
  • Identified significant cost savings in deployments
  • Trained new members on infrastructure and ops procedures
  • Worked with development teams to build and deploy new products
  • Modified and improved internal Puppet code to manage PostgreSQL
  • Maintained and improved Puppet infrastructure globally
  • Investigated and implemented Packer for automated Linux image building
  • Standardised and automated the Linux build process
  • Built and deployed new EL7 builds
  • Implemented Packer builds for Windows databases
  • Senior member of the team responsible for 6 data centre moves; assisted new teams in 2 other moves
  • Internal Puppet SME
  • Migrated IMAP testing into a more manageable and performant Go utility
  • Improved Postfix configuration at short notice and under significant pressure
  • Managed, moved, monitored and assisted in the implementation of several SANs
  • Internal presentation to spread Puppet best practices
  • Internal presentation discussing new QA environments
  • Developed initial isolated QA environments for use by engineering teams
  • Puppet wrangling
  • Develop and maintain internal scripts to automate and improve services
  • Perform production releases
  • Discuss and plan improvements to QA and development environments with other teams

July 2015, Volunteer - Packer and Vagrant Workshop with Norton Truter, Infracoders Melbourne

  • Helped test the plan in advance of the workshop
  • Helped attendees as they came across issues and discussed uses of Packer

October 2013 - Ongoing, Senior Sysadmin - Servers And Networks Team of Awesome (SANTA), PAX Australia, Melbourne, Victoria

  • Help attendees with a wide variety of questions
  • Bump in and out (unload, set up and cable 150+ PCs, 10 to 20 servers, and kilometres of network cable)
  • Set up Windows game boxes and Linux & Windows game servers
  • Operation and implementation of infrastructure services
  • Panic-driven automation of game patching and deployment
  • Work with the comps team to run servers as they request
  • Stuffed many, many show bags!
  • Assist other areas as asked

July 2011-December 2012, Systems Administrator - Experian Hitwise, Melbourne, Victoria

  • Involved in the maintenance and growth of Puppet modules
  • Developed in-house Puppet modules to manage Jenkins build nodes
  • Managed provisioning of Xen and KVM virtual machines with Cobbler and Koan
  • Supported CentOS, Fedora, Debian and Ubuntu hosts
  • Implemented Puppet-managed, Linux-based Selenium testing infrastructure
  • Developed custom scripts to automate the provisioning and deployment of production systems with the AWS API
  • Assisted in the development of requirements and deployment of a production Hadoop environment
  • Responded to escalation requests from our operations support team

July 2008-July 2011, IT Support Specialist - RMIT University, Melbourne, Victoria

  • Managed imaging back-end for multi-boot labs.
  • Developed and implemented a pilot method to multicast-clone a number of Apple products.
  • Primary Support Staff for Linux and UNIX machines.
  • Assisted in implementing the initial multi-boot lab at our RMIT Vietnam campus.
  • Assisted in the implementation and support of consumer-grade storage systems for research users.
  • Designated lead for college involvement in the implementation of improved authentication and directory-mounting solutions for Linux and UNIX clients.
  • Trained new staff on IT support.
  • Determined requirements in a complex environment and developed non-trivial solutions.
  • Assisted in the implementation and planning of a new video on demand training tool.
  • Primary support for inter-uni video conferencing using the Access Grid system.
  • Documented software installations.
  • Trained other IT staff on the Access Grid system and developed training systems for it.
  • Ensured that software compliance including licensing was correct and current.
  • Responded to support calls from users in various areas of the University.
  • Organised the scheduled replacement of hardware with users.
  • Identified and resolved faults with hardware and software installations in varying environments.
  • Coordinated with external support agents to resolve hardware and software issues.
  • Supported Linux, OS X and Windows systems throughout the university.
  • Supplied consumables to users in supported lab environments.
  • Arranged suitable replacements for historical specialist hardware.
  • Assisted in the creation of images for various areas throughout RMIT.
  • Assisted in the development of software and hardware requirements.

2006-June 2008, MIS Operator, Customer Support Operator - Lasseters Online Casino, Alice Springs

  • Monitored the network and servers involved in running an on-line business.
  • Assisted the Senior Technicians in designing and choosing systems for continuing operations.
  • Was responsible for maintaining desktop systems.
  • Was responsible for first level after hours support for the hotel casino operations.
  • Involved in the day to day customer care operations as a first level contact.
  • Required to prioritise issues and determine escalation requirements.
  • Passed the police background check required for the position and received an NT Government Gambling Key License.

2005-2006, Field Service Technician - Bizcom NT, Alice Springs

  • Repaired and maintained various computer systems from Windows 95 to Windows XP and server operating systems.
  • Examined client requirements, advised, and then organised the purchasing of solutions.
  • Supported clients in confidential environments.
  • Supported clients in both primary and acute health care businesses.
  • Managed virus removal and prevention.
  • Provided support at several sites of medium to large size (30-160 Seats).
  • Developed several methods for remote resolution of jobs.
  • Assisted in migration of key systems and desktops to new administration technologies.
  • Was the field service technician responsible for refresh projects at 2 sites.
  • Managed disaster recovery and backup solutions.
  • Managed client migration from Novell Directory solutions to MS Active Directory solutions.
  • Diagnosed faults in medium sized networks.
  • Managed own workload relating to sites under personal responsibility.
  • Managed resolution of computer problems for remote sites via phone and also remote control systems (Microsoft Remote Desktop Connection, VNC based solutions and DameWare primarily).
  • Installed and supported messaging and collaboration systems such as Microsoft Exchange & Outlook and Lotus Domino & Notes.

2003-2004, Technical Support Consultant - Micropower NT, Alice Springs

  • Repaired and maintained various computer systems from Windows 95 to Windows XP.
  • Examined client requirements and advised them of solutions.
  • Managed virus removal and prevention.
  • Organised diagnostic and replace/repair of faulty products.
  • Maintained and installed new desktops, servers and software in a primary health care environment.

2001-2003, Part-time Technician - Self Employed, Alice Springs

  • Repaired and maintained various computer systems from Windows 95 to Windows 2000.
  • Examined client requirements and advised them of solutions.
  • Assembled and tested custom computer systems to the client's requirements.
  • Built and maintained network systems for clients of various sizes.

Current projects

traefik: A modern Reverse Proxy and what this means.

This is a talk that will be presented in initial form at the Infracoders July meetup, diving into the performance characteristics and useful features the tool can bring, with a slight touch on service discovery. I plan to submit it to the SysAdmin miniconf for Linux.Conf.au 2017. The talk and the git repo for tooling will be up as soon as they look less horrible.



Education

Graduate Certificate of Information Technology, Swinburne University


Nothing really to say. I do sys adminy things at various places.