Docker and docker-machine usage notes

I’ve been playing on and off with Docker but not frequently enough to remember what I did last time, so here are a few random, unstructured notes (running on Mac OS X):

docker ps : list running containers (this shows a container id which is used in most other commands)

docker ps -a : show all containers including those that are stopped

If you see this:

Get http:///var/run/docker.sock/v1.20/containers/json: dial unix /var/run/docker.sock: no such file or directory.
* Are you trying to connect to a TLS-enabled daemon without TLS?
* Is your docker daemon up and running?

Then your docker-machine is not running.

Start it up with:

docker-machine start default
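If you’re not sure whether the VM is already up, you can check its state first (this assumes the machine is named ‘default’, as above):

docker-machine status default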

After starting, run this to set env vars:

eval "$(docker-machine env default)"

Also see this post and its recommendation to use the ‘Docker Quickstart Terminal’ on Mac.

Managing containers and images:

docker images : list created images

docker rm containerid : delete a container

docker rmi imageid : delete an image
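A handy (if slightly dangerous) cleanup trick, assuming you really do want to remove every stopped container: feed the ids from docker ps -a -q into docker rm:

docker rm $(docker ps -a -q)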

Create a new container from an image, in interactive mode, grab the tty, and execute bash in the container (get a command line into the container):

docker run -it imageid bash

Run as background daemon: -d
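For example, to start a container detached and then follow its output (the container and image names here are just placeholders):

docker run -d --name=mycontainer imageid

docker logs -f mycontainer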

To start a shell into a running container:

docker exec -it containerid sh (or bash)
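docker exec can also run a one-off command in the container without opening a shell, for example:

docker exec containerid ls /tmp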

Stop a running container:

docker stop containerid
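A stopped container can be started again later with:

docker start containerid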

To get the IP address of a container:

docker inspect containername | grep IPAddress
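If you just want the address on its own (handy in scripts), docker inspect also takes a Go template via --format:

docker inspect --format '{{ .NetworkSettings.IPAddress }}' containername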

Accessing a container from the host

Each running container has its own IP address. When a container restarts, it gets a new IP. To access a container running in a docker-machine, find the IP of the docker-machine VM:

docker-machine ls

.. this will list the IP for the VM.
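There’s also a shortcut that prints just the IP, which is useful in scripts:

docker-machine ip default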

When creating a new container, forward a port in the container to the host with -p, for example for a WebLogic server:

docker run -d --expose=7001 -p 7001:7001 --name=containername imagename
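With the port published like this, the service is reachable on the docker-machine VM’s IP rather than localhost, so (assuming WebLogic’s admin console is at its usual /console path) something like this should respond:

curl http://$(docker-machine ip default):7001/console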

Java EE App Servers – are they still relevant in a world of Docker containers?

This year at JavaOne 2015 there was a recurring theme across many of the Cloud and DevOps tech sessions: promoting self-contained, deployable JARs (not EARs or WARs) as a more efficient approach for deploying lightweight Java-based services. This approach also plays well with a microservices-type architecture, and can easily make use of containers like Docker, PaaS offerings like CloudFoundry or Heroku, or even Oracle’s new Java SE Cloud Services offering.

Being able to deploy small, self-contained services (compared to large, monolithic EARs) brings a whole range of benefits, like being able to deploy more often, and to deploy individual services without touching or impacting any of the other deployed services.

If you go a step further and consider a deployable unit as a disposable Docker container, then you have to start asking whether a traditional Java EE App Server is still required, or even fits at all with this kind of deployment approach.

Oracle has a supported project on GitHub with Dockerfiles for building images running WebLogic 12c… if you try this you also have to wonder just how practical it is to run Docker containers that are 2GB+ in size… yes, you can do it, but are you really getting the benefits of lightweight, lean, disposable containers at that point?

I came across this article by James Strachan, “The decline of Java application servers when using docker containers”, which is an interesting read on this topic. The more you look into Docker, containers, and lightweight services, the more you have to question whether Java EE App Servers are losing their relevance, or at least whether they are relevant for this approach of containerized deployments.