Docker Push an Image to a Remote Server

Reading Time: < 1 minute

I needed to be able to push an image from my CI/CD server. Previously I was using Docker Hub to host my images and wanted to streamline the process. I would build my code >> build my image >> push to Docker Hub >> have my server download the new image from Docker Hub >> then update / restart the container.


  1. CI/CD build
  2. Push the image to the remote server
  3. Stop / restart the container

Things needed:

Python / PIP
docker-push-ssh

Assuming you have Docker and Python installed, and you have set up SSH for key access.

Setup docker-push-ssh

sudo -H pip install docker-push-ssh

I followed the guide to add an insecure registry to Docker on Linux.

// /etc/docker/daemon.json
{
  "insecure-registries" : [ "localhost:5000" ]
}

sudo systemctl daemon-reload
sudo systemctl restart docker

Build a test image


# vi Dockerfile

FROM alpine
RUN touch /etc/testimage

# build the image
docker build -t testimage .

Push image to remote server

docker-push-ssh -i ~/.ssh/id_rsa username@host_ip_or_name testimage


Useful Commands

# list images
docker images

# list all containers
docker ps -a

# force remove image
docker rmi -f image_id

# remove container
docker rm container_id

Going Serverless: AWS Lambda and S3

Reading Time: < 1 minute

This post covers setting up a local development environment for a serverless AWS system.

Things needed:
AWS CLI tool Version 1
Serverless Framework

Install AWS CLI Version 1:

The guide to install can be found here.

Install NVM:

curl -o- | bash

Install Node:

nvm install 10

nvm use 10

Install Serverless Framework:

npm install -g serverless

Create Project:

serverless create --template aws-nodejs --path projectName
cd projectName

Initialize NPM:

npm init

Node packages we’ll be using:

npm install aws-sdk --save-dev
npm install serverless-offline --save-dev
npm install serverless-http --save-dev
npm install serverless-s3-local --save-dev

AWS Profile (~/.aws/credentials):

[s3local]
aws_access_key_id = S3RVER
aws_secret_access_key = S3RVER

Using the profile:

export AWS_PROFILE=s3local

Testing and Using S3

Configure: (serverless.yml)

plugins:
  - serverless-s3-local
  - serverless-offline

custom:
  s3:
    port: 8081
    directory: ./tmp

resources:
  Resources:
    NewResource:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: local-bucket

functions:
  s3hook:
    handler: handler.s3hook
    events:
      - s3: local-bucket

Edit handler.js:

module.exports.s3hook = async (event, context) => {
  console.log(JSON.stringify(event));
};

Start Serverless Offline (in a separate terminal):

sls offline start

Pushing a file up to Local S3:

aws --endpoint http://localhost:8081 s3api put-object --bucket local-bucket --key handler.js --body ./handler.js

Retrieving a file from Local S3:

aws --endpoint http://localhost:8081 s3api get-object --bucket local-bucket --key handler.js ./tmp/handler.js


In the second terminal, the one where you ran sls offline start, you should see log output showing successful PUT and GET responses.

For the GET request, you should also now have the file in your tmp/ directory.

Upgrading Rails to 6.0

Reading Time: < 1 minute

Update Gemfile

# update the rails gem to
gem 'rails', '~> 6.0.0'

# add
gem 'webpacker', '~> 4.0'

Grab updates & install Webpacker

bundle update

bin/rails webpacker:install

Run your tests and fix any issues. Most of mine revolved around update_attributes and multi-attribute where.not() queries.

I had to update a bunch of update_attributes calls to update.
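As a plain-Ruby illustration of the rename (the model stub here is hypothetical, standing in for an ActiveRecord model):

```ruby
# Hypothetical stand-in for an ActiveRecord model, illustrating the
# Rails 6 change: update_attributes is deprecated in favor of update.
class JobApplication
  attr_accessor :status

  def update(attrs)
    attrs.each { |key, value| public_send("#{key}=", value) }
    true
  end
end

app = JobApplication.new
# Before: app.update_attributes(status: "sent")
app.update(status: "sent")
puts app.status  # sent
```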

I also had scopes that looked like:

scope :complete, -> { where.not(resume_id: nil, cover_letter_id: nil, company_id: nil) }

They had to be updated to:

scope :complete, -> { where.not(resume_id: nil).where.not(cover_letter_id: nil).where.not(company_id: nil) }
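As I understand the deprecation, the multi-key hash form of where.not behaved as NOR (every attribute must be non-nil) and is changing to NAND (reject only rows where every attribute is nil), which is why the chained version preserves the original meaning. A plain-Ruby sketch with hypothetical sample rows:

```ruby
# Plain-Ruby sketch (sample rows hypothetical) of NOR vs NAND semantics
# for where.not(resume_id: nil, cover_letter_id: nil, company_id: nil).
rows = [
  { resume_id: 1, cover_letter_id: nil, company_id: 2 },  # partially complete
  { resume_id: 1, cover_letter_id: 3,   company_id: 2 },  # fully complete
]
keys = %i[resume_id cover_letter_id company_id]

# NOR (old behavior, and the chained where.not version): no key may be nil.
nor = rows.select { |r| r.values_at(*keys).none?(&:nil?) }

# NAND (new hash-form behavior): reject only rows where every key is nil.
nand = rows.reject { |r| r.values_at(*keys).all?(&:nil?) }

puts nor.size   # 1
puts nand.size  # 2
```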

Also, I had an issue with one of my render partials.

Failure/Error: <%= render partial: :company_datum, collection: @company_data, locals: { offset: @company_data.offset }, cached: true %>

Had to update it to:

<%= render partial: 'company_datum', collection: @company_data, locals: { offset: @company_data.offset }, cached: true %>

Updating application.rb

# config/application.rb
# change config.load_defaults from 5.2 to
config.load_defaults 6.0

# I removed
config.i18n.fallbacks = [I18n.default_locale]

If using RuboCop, add this to your .rubocop.yml:

AllCops:
  Exclude:
    - 'node_modules/**/*'


How to Use Postgres to Calculate and Save Rankings

Reading Time: 2 minutes

Sometimes you need to calculate a rank and save it to a field in Postgres. This example uses Rails ActiveRecord, but any way of executing the query will do.

A naive way would be to build an iterative method with an index. An example in Ruby:

Product.order(price: :desc).each_with_index do |product, index|
  product.price_rank = index + 1
  product.save
end

This might work fine if you have a limit(10) or a small data sample. But what if you have 100,000 records, or even worse, 1 million?

For 100 items the above code took 0.181s.
For 1,000 items, 0.49s.
For 100,000 items, 155.04s, roughly 2 minutes 36 seconds.

Using Postgres I knew it would be faster, but by how much?

query = "UPDATE products SET price_rank = r.rnk FROM (SELECT id, RANK() OVER (ORDER BY price DESC) AS rnk FROM products) r WHERE = r.id"

ActiveRecord::Base.connection.execute(query)


On the Ruby side I am seeing run times of 0.0005s, because the work is dumped onto the database.

Running EXPLAIN ANALYZE on the query:

 Update on products  (cost=16813.40..25200.70 rows=101002 width=112) (actual time=15247.855..15247.855 rows=0 loops=1)
   ->  Hash Join  (cost=16813.40..25200.70 rows=101002 width=112) (actual time=8693.946..12299.080 rows=101002 loops=1)
         Hash Cond: ( =
         ->  Seq Scan on products  (cost=0.00..3391.02 rows=101002 width=68) (actual time=0.584..990.371 rows=101002 loops=1)
         ->  Hash  (cost=14563.87..14563.87 rows=101002 width=56) (actual time=8693.010..8693.010 rows=101002 loops=1)
               Buckets: 65536  Batches: 4  Memory Usage: 2685kB
               ->  Subquery Scan on r  (cost=11786.32..14563.87 rows=101002 width=56) (actual time=2278.120..7232.235 rows=101002 loops=1)
                     ->  WindowAgg  (cost=11786.32..13553.85 rows=101002 width=24) (actual time=2278.083..5283.472 rows=101002 loops=1)
                           ->  Sort  (cost=11786.32..12038.82 rows=101002 width=16) (actual time=2278.053..3262.086 rows=101002 loops=1)
                                 Sort Key: products_1.price DESC
                                 Sort Method: external merge  Disk: 2576kB
                                 ->  Seq Scan on products products_1  (cost=0.00..3391.02 rows=101002 width=16) (actual time=0.639..992.346 rows=101002 loops=1)
 Planning time: 3.271 ms
 Execution time: 15252.513 ms 

Over 100,000 records ranked and updated with values in 15s. Not bad.

You also get true ranking when there is a tie: #1, #2, #2, #2, #5, #6, etc. The naive version was simple and did not account for ties. You could implement that in Ruby if you wanted, but it adds bloat and complexity.
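If you did want RANK()-style tie handling in Ruby, here is a minimal sketch of competition ranking (the "1, 2, 2, 2, 5" pattern), assuming the values are already sorted:

```ruby
# Competition ranking in plain Ruby, mirroring SQL's RANK():
# tied values share a rank, and the next distinct value skips ahead.
prices = [500, 400, 400, 400, 300, 200]  # assumed sorted descending
ranks  = []
prices.each_with_index do |price, i|
  ranks << (i.positive? && price == prices[i - 1] ? ranks[i - 1] : i + 1)
end
puts ranks.inspect  # [1, 2, 2, 2, 5, 6]
```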

Is Rails Scalable?

Reading Time: 7 minutes

Being an optimist I would say, yes! Having worked on a large-scale, high-visibility site, I have personally seen it happen, so definitely yes! When budget was not a limiting factor, I saw vertical and horizontal scaling at its best: get the biggest, baddest app server, then multiply that by 20 behind multiple load balancers, and CDN-cache everything possible. Scaling solved. Millions of views and thousands of transactions per second, no problem. The databases actually started to buckle from all the connections; enter PgBouncer connection pooling and more load balancers. But that is the point: you can scale to where the problem isn't code but infrastructure.

But what if you can't? Here are my experiences.

Vertical Scaling

You add more RAM, more CPUs, faster pipes (fiber directly to another server or network), SSDs. This is all possible, but it has a threshold: as CPUs reach their limits, hardware becomes the limiting factor, and with enough users the server can still melt down. It suits linear, predictable growth, not exponential growth.

This also holds true for the database server, since most scaling issues are database related. Slow queries run faster when you beef up the hardware they run on.

I consider this a quick fix for linear or predictable growth.

Horizontal Scaling

Load balancing! Get more servers to do the job; two or twenty are better than one. Share the load between servers. This can and will get costly, be it cloud, metal, or a combination of the two. But horizontal scaling, done correctly, can be a really easy solution to scaling issues.

Continue reading

Part 1: An API using Phoenix, Absinthe (GraphQL), Guardian, React, Apollo Client

Reading Time: 2 minutes

I love my Rails, but this is something I am really interested in.

It all started with a little bookkeeping app written using Phoenix as an API and React as a view layer. It has now turned into an extremely deep rabbit hole. After completing the first features of the app, I asked myself, "These states in these components are getting out of hand, I wonder if there's something to help me?" That led to Redux: one object to hold the state of the entire app. That raised the question, "Is there a better way to handle the data requests to the API?" This led to GraphQL, which led to Absinthe, which made sense since I am using Phoenix as my API.

I think the best way to learn is to build something simple but not trivial. So let us begin with an e-commerce store that sells widgets.

Users >> Carts >> Widgets

Users can have carts to place widgets in them, etc.

I am going to break this down into 3 to 4 articles:
Part 1: Introduction to Phoenix, Absinthe, and Guardian – explaining the pieces.

Part 2: Implementation of the Phoenix/Absinthe API.
This is how you go about creating the API.

Part 3: Implementation of React/Redux with Apollo client.
We will create the frontend while still inside Phoenix, no additional steps Webpack will handle it.

Part 4: Additional nice things to have like: Elasticsearch, a Pub/Sub feature, etc. This one is a maybe.

Code can be found here.

What is Phoenix?

Phoenix is a web development framework similar in vein to Ruby on Rails. Phoenix is written in Elixir. Elixir is a functional, concurrent, and general-purpose programming language. Phoenix follows the MVC pattern and was developed to create highly performant and scalable web applications.

Layman's terms: you use it to build websites fast and efficiently.

What is GraphQL?

GraphQL was developed by Facebook in 2012 and released to the public in 2015. It is a data query and manipulation language for APIs.

Simply: An efficient and flexible way to get data, by eliminating unwanted data. I define what data I need from the endpoint.

What is Absinthe GraphQL?

Absinthe is "The GraphQL toolkit for Elixir." Since Phoenix is written in Elixir and most APIs are web APIs, this just makes sense.


What is Guardian?

If you are familiar with Devise in the Ruby world, Guardian is similar. It is an authentication library for Elixir applications.


What is React?

React is a JavaScript library used to build user interfaces for single-page applications and is optimal for fetching rapidly changing data.

Apollo GraphQL (Client)

Apollo comes in server and client. We will be dealing with the client. Apollo Client is a state management library for JavaScript GraphQL apps.


docker-credential-pass Ubuntu setup for Bamboo CI/CD

Reading Time: < 1 minute

Ubuntu: 18.04
Docker: 18.09.5
docker-credential-pass: 0.6.2
pass: 1.7.1
gpg2: 2.2.4

Since my setup is for CI/CD, I have a bamboo user that deploys my Docker containers to Docker Hub and to my production servers. Your use case may vary, but if you want your credentials to persist longer than a couple of hours before your system starts asking you to re-enter your passphrase, this might work for you.

My gpg-agent.conf file looks like this:

# ~/.gnupg/gpg-agent.conf
default-cache-ttl 315360000
max-cache-ttl 315360000

This worked for me. Note that the default-cache-ttl value is in seconds; the value I used is 10 years in seconds.

Getting SimpleCov-summary (0.0.5) to work with SimpleCov (0.16.1) and Rails 5.2


Reading Time: < 1 minute

If you run into issues with simplecov-summary, here's how I fixed it: Rails 5 removed silence_stream because it is not thread-safe. There are replacement implementations of silence_stream around, but I got it to work by removing the call completely, and I prefer to see these outputs at the end of my test run anyway.

Rails 5.2 Has And Belongs To Many with UUID


Reading Time: < 1 minute

When using UUIDs and you are creating a join table like so:

You will get a migration file like so:

You’ll need to modify it accordingly:

Then run:

And you should be good. Just remember, if you are using UUIDs, make sure all the models you generate use UUIDs as well.

In config/application.rb:

And make sure your generated migrations have:
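The exact files vary by app, but as a hedged sketch (model and table names here are hypothetical), the pieces fit together like this:

```ruby
# Hypothetical sketch: a HABTM join table between two UUID-keyed models.
class CreateJoinTableAuthorsBooks < ActiveRecord::Migration[5.2]
  def change
    # Without column_options the join columns default to bigint and
    # won't match the models' uuid primary keys.
    create_join_table :authors, :books, column_options: { type: :uuid } do |t|
      t.index [:author_id, :book_id]
    end
  end
end

# config/application.rb: make generators default to UUID primary keys.
# config.generators do |g|
#   g.orm :active_record, primary_key_type: :uuid
# end

# Generated create_table migrations should then carry id: :uuid, e.g.:
# create_table :authors, id: :uuid do |t| ... end
```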


Installing Jira Software Server


Reading Time: < 1 minute

Assuming you have downloaded Jira from Atlassian. If not, read how to prepare for installs.

Follow the prompts and make sure you enable the service option if you want Jira Software to autostart on boot.

If you installed using sudo, it'll place the app in /opt/atlassian/jira, which is good.

Note these files in case you don't want it installed as a service, or in case something happens.