In cloud-native application development we have several microservices, each responsible for a certain task. We also have off-the-shelf dependencies such as databases; these are called platform services. During development, developers need to run these interconnected services and applications locally. The approach many developers take is to run each of them in a separate Docker container on the local machine. All of these applications need to communicate with each other, and one of them is usually a web server that we should be able to reach from the browser on the local machine.
The proper way to do this is to expose only the web server to the outside world, while all the other containers, along with the web server, run inside a Docker network. When the Docker daemon starts on a machine it creates its own default bridge network, and containers we spin up run on that network by default. In a development environment, however, we can avoid this complexity and quickly get everything running, reachable, and communicating with each other. Here I am going to explain how we can do so using port mapping.
We should deploy each microservice in a separate container with a container name, and each container should expose a specific port. These ports are used to reach the microservice or database. When we run the container we can map an external port to this exposed port; the external port is a port on the machine where the Docker container is running. This is called port mapping. Alternatively, we can add --net=host as a parameter to the docker run command. This means the container uses the host machine's network stack instead of a Docker network; in that case port mappings are ignored, and the application's own port is available directly on the host. As an example, let's say we have two apps deployed in two Docker containers, app1 and app2, and both containers expose internal port 8000. app1 is mapped to port 8001 of the machine (8001:8000), and app2 is mapped to port 8002 (8002:8000).
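A minimal sketch of the two docker run commands described above. The image names myorg/app1 and myorg/app2 are hypothetical placeholders; substitute your own images that listen on port 8000 inside the container.

```shell
# Run app1, mapping host port 8001 to the container's port 8000
docker run -d --name app1 -p 8001:8000 myorg/app1

# Run app2, mapping host port 8002 to the same internal port 8000
docker run -d --name app2 -p 8002:8000 myorg/app2
```

Both containers can expose the same internal port 8000 without conflict, because each mapping claims a different port on the host.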
Now app1 and app2 can be reached from the outside world using the host name or IP address of the machine together with the external port. Let's assume the host name and domain is myhost.mycomp.com
We can reach app1 using http(s)://myhost.mycomp.com:8001
We can reach app2 using http(s)://myhost.mycomp.com:8002
Also, these apps can reach each other using the same host name and external ports. Note that with port mapping, localhost inside a container refers to the container itself, so reaching the other app via localhost only works when the containers run with --net=host.
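The reachability described above can be checked with curl, assuming the apps speak HTTP and the host name resolves as in the example:

```shell
# From the local machine (or anywhere that can resolve the host):
curl http://myhost.mycomp.com:8001/   # app1
curl http://myhost.mycomp.com:8002/   # app2

# From inside the app1 container on the default bridge network,
# app2 is reached through the host name and external port,
# not through localhost:
docker exec app1 curl http://myhost.mycomp.com:8002/
```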
This is the simplest way to establish communication between two containerized apps. There are other ways, for example defining an overlay network or keeping the containers on Docker's own network, as I have mentioned before.
For example, to connect to a PostgreSQL instance from a container running on the host's network, the docker run command will look like this.
docker run -t --rm --net=host -e PGPASSWORD=<pass> postgres:10.3-alpine psql -P pager=off -h localhost -p 63000 -d <dbname> -U <user> -c "\d landscapes"
Here the container (no name assigned) runs the psql client. Because of --net=host it shares the host's network, so localhost refers to the host machine, and -p 63000 tells psql to connect to port 63000 on the host, where the PostgreSQL server container's port has been mapped. Note that -p here is psql's port option, not Docker's port-mapping flag. For the server container we use the external:internal mapping format. This format is needed when the Docker image itself has a predetermined exposed port. That means the image is built so that the application running inside a container created from it is reachable only through this port.
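For completeness, a sketch of the server side that the psql command above connects to. This assumes the stock postgres image, whose predetermined internal port is 5432; the container name pg and the <pass> placeholder are illustrative.

```shell
# Run the PostgreSQL server, mapping host port 63000 to the image's
# predetermined internal port 5432 (external:internal format):
docker run -d --rm --name pg -e POSTGRES_PASSWORD=<pass> \
  -p 63000:5432 postgres:10.3-alpine
```

With this mapping in place, any client on the host (including a --net=host container) can reach the database at localhost:63000.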