When computers had lights, buttons, switches and miles of cabling (@ the Computer History Museum)

Enjoyed a fascinating visit to the Computer History Museum in Mountain View today for the Vintage Computer Festival West. There was plenty to see, with plenty of old computers on show in the festival exhibit.

Downstairs in the main museum, there is an incredible display of anything and everything to do with computer history, from mechanical counting devices all the way through to current technology.

I find the human interface design of some of these old systems particularly fascinating. You just don’t see current technology with such a bewildering array of lights, buttons and switches anymore, and the miles of cabling wiring these machines together are completely insane. Here are a few pics from my visit:

Configuring nginx to proxy REST requests across multiple Spring Boot microservices in Docker Containers

Using nginx to proxy requests across Docker containers is a common use case for nginx, and it’s covered in many posts and articles, but getting it working from scratch is a not-so-trivial task. Following many articles, I got confused about whether I needed to load balance requests across identical container instances, or whether I needed nginx to proxy requests across my different containers. I ran into quite a few configuration issues and challenges, but mostly, I think, because I didn’t understand what I was trying to do (and ended up with a configuration trying to load balance and proxy at the same time, which was not what I needed) 🙂

Articles like this one and this one show how to configure ‘upstream’ servers that you can load balance requests across. My understanding of this approach is that it’s what you need when you have multiple identical server or container instances and you want nginx to load balance across those instances. After a while of trying to get this configuration working, I realised that all I actually needed was the proxy_pass config for nginx, telling it to proxy requests for a matching URL to a given Spring Boot service in a container. This question captures the approach well.
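For comparison, the ‘upstream’ load-balancing style those articles describe looks roughly like this. The upstream name and the two instance hostnames here are made up purely for illustration, and this style is only what you’d want if you were running multiple identical instances of the same service:

# inside the http { } block:
# load balancing across identical instances of the same service
# (shown for comparison only - not what I ended up needing)
upstream springbootservice1_instances {
  server springbootservice1_a:8080;
  server springbootservice1_b:8080;
}

server {
  listen 80;
  location /service1 {
    proxy_pass http://springbootservice1_instances;
  }
}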

To explore a typical configuration, I have 2 simple Spring Boot REST services, springbootservice1 and springbootservice2, each running in its own Docker container. When run individually, they are accessed via:

http://localhost:8080/service1/example1

and

http://localhost:8080/service2/example2
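Each service is just a simple Spring Boot REST app exposing one endpoint. As a rough sketch (the package, class and method names here are illustrative, not my actual code), Service1 looks something like this:

// Illustrative sketch of Service1 - names are made up for this example
package com.example.service1;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/service1")
public class Service1Controller {

    // responds to GET /service1/example1
    @GetMapping("/example1")
    public String example1() {
        return "hello from Service1";
    }
}

Service2 is identical apart from its /service2/example2 mapping.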

Without additional configuration to run them on different ports, as standalone Spring Boot services they obviously can’t both be run on the same host at the same time. Configuring different ports would be trivial to do with just two services, but once you scale this approach up to many different services, managing different ports with manual configuration is not particularly practical.

Out of the box, Docker lets you manage the exposed external ports easily when you ‘docker run’ a container with the ‘-p’ option. What I was interested in was a solution to run the services in different containers without manually defining and managing ports by hand, and to allow a web app to call any of the services on port 80, so that an app consuming these services has no need to know which ports the services are actually running on.
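For example, the manual approach I wanted to avoid looks something like this, with a host port picked by hand for each container (8081 and 8082 here are arbitrary examples):

# map each container's port 8080 to a different, manually chosen host port
docker run -d -p 8081:8080 springbootservice1
docker run -d -p 8082:8080 springbootservice2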

The Dockerfile for each service looks like this (replace Service1 with Service2 for the second service):

#Official JDK8 Alpine-based image
FROM java:openjdk-8-alpine
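#Copy the Spring Boot fat jar built by Maven into the image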
ADD target/SpringBootService1-0.0.1-SNAPSHOT.jar /opt/SpringBootService1-0.0.1-SNAPSHOT.jar
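#Spring Boot's embedded server listens on 8080 by default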
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/opt/SpringBootService1-0.0.1-SNAPSHOT.jar"]

The Dockerfile for nginx looks like this:

FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
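#Copy the local conf directory (containing the nginx.conf below) into /etc/nginx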
COPY conf /etc/nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

And here’s my config for nginx:

http {
 server {
   listen 80;
   location /service1 {
     proxy_pass http://springbootservice1:8080/service1/;
   }
   location /service2 {
     proxy_pass http://springbootservice2:8080/service2/;
   }
 }
}
events { worker_connections 1024; }

The key part of this config is the proxy_pass directive:

location /service1 { 
  proxy_pass http://springbootservice1:8080/service1/;
}

This takes incoming requests with a URI matching /service1 and proxies them to http://springbootservice1:8080/service1/ – in this case springbootservice1 is the default host name of the container running Service1, which comes from the docker-compose config we’ll cover next. Note that the lack of a trailing ‘/’ on the location and the trailing ‘/’ on the proxy_pass URI are relevant and important, so that any incoming URI matching the pattern has its remainder appended to the end of the target URI on the container service. Here’s the docker-compose.yml:

version: '3'
 
services:
     nginx-lb:
         build: ../nginx-loadbalancer
         #image: nginx-lb
         ports:
             - "80:80"
         links:
             - springbootservice1
             - springbootservice2
         depends_on:
             - springbootservice1
             - springbootservice2
     springbootservice1:
         image: springbootservice1
         ports:
             - "8080"
     springbootservice2:
         image: springbootservice2
         ports:
             - "8080"

To bring up all the containers in one go, ‘docker-compose up’ starts all 3 containers, and nginx now handles requests on port 80, proxying to the correct container based on the path. This approach could easily be scaled up to include more containers, but at some point it becomes obvious that with this configuration hardcoded in both the nginx.conf file and the docker-compose file, a better solution would be some kind of dynamic discovery of the containers as they become available. I’ll be investigating some options for that kind of approach next.
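For reference, the end-to-end sequence to bring the stack up and sanity check the routing ends up being something like this (assuming the two service images have already been built and tagged as above):

# build the nginx image and start all 3 containers in the background
docker-compose up --build -d

# both services are now reachable via nginx on port 80
curl http://localhost/service1/example1
curl http://localhost/service2/example2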