You can solve your problem through several approaches. Here is the one I would use.
A single container is responsible for answering requests to the URL www.teste.com, acting as a load balancer. This container receives each request and, depending on its path, forwards it to a second container. The exact routing strategy is up to you.
For my load balancer, I wrote some Go code that receives the request and hands it off to another server based on the requested path:
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

// router forwards each request to a backend chosen by path.
func router(w http.ResponseWriter, r *http.Request) {
	var url string
	switch r.URL.Path {
	case "/stack":
		url = "http://app1.dev" + r.URL.Path
	case "/overflow":
		url = "http://app2.dev" + r.URL.Path
	default:
		url = "http://app3.dev" + r.URL.Path
	}
	resp, err := http.Get(url)
	if err != nil {
		w.WriteHeader(http.StatusInternalServerError)
		fmt.Fprintf(w, "Could not call '%s'.\n", url)
		return // without this, resp is nil and the code below would panic
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		w.WriteHeader(http.StatusInternalServerError)
		fmt.Fprintf(w, "Could not read '%s'.\n", url)
		return
	}
	// Relay the backend's status code and body to the caller.
	w.WriteHeader(resp.StatusCode)
	w.Write(body)
}

func main() {
	http.HandleFunc("/", router)
	if err := http.ListenAndServe(":80", nil); err != nil {
		log.Fatal("ListenAndServe: ", err)
	}
}
To orchestrate the containers, I chose to write a docker-compose.yml in which I list all of the application containers:
mm_lb:
  image: mm/lb:latest
  container_name: mm_lb
  links:
    - mm_app1:app1.dev
    - mm_app2:app2.dev
    - mm_app3:app3.dev
  ports:
    - "80"
mm_app1:
  image: mm/app1:latest
  container_name: mm_app1
  ports:
    - "80"
mm_app2:
  image: mm/app2:latest
  container_name: mm_app2
  ports:
    - "80"
mm_app3:
  image: mm/app3:latest
  container_name: mm_app3
  ports:
    - "80"
Note that the mm_lb container configuration creates links to the other containers. That is how the load balancer, which in turn answers on the main URL, can reach them by hostname.
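The compose file references an image called mm/lb:latest for the balancer. A minimal sketch of a Dockerfile that could produce it, assuming the Go code above is saved as main.go (the base images and paths here are my assumptions, not part of the original setup):

```dockerfile
# Stage 1: compile the load balancer binary
FROM golang:1.21 AS build
WORKDIR /src
COPY main.go .
RUN go mod init lb && go build -o /lb main.go

# Stage 2: ship only the binary
FROM debian:stable-slim
COPY --from=build /lb /lb
EXPOSE 80
CMD ["/lb"]
```

Build it with docker build -t mm/lb:latest . and then start everything with docker-compose up -d.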