How do I deploy to Kubernetes programmatically?


I have a Flask application, A1, running in a Kubernetes cluster. This application is responsible for receiving HTTP requests and starting another application, A2, that also runs in this cluster. Note: A2 listens on a fixed port, say 5050. For example:

Suppose the Flask application receives a message that should be printed to the terminal:

{
  "message": "Hello World!"
}

From that received JSON, I need to (see the sketch after this list):

  1. transfer the JSON to the A2 application,
  2. await A2's reply,
  3. return A2's response through A1.
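
For context, a minimal sketch of what I imagine A1's handler looking like; the /run route is arbitrary, and run_a2 is a placeholder for the part I don't know how to implement:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/run", methods=["POST"])
def run():
    payload = request.get_json()   # e.g. {"message": "Hello World!"}
    result = run_a2(payload)       # placeholder: start A2, wait for it, collect its result
    return jsonify(result)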

However, I need this second application to be created and destroyed for each HTTP request made, and I need to create the A2 pod with different settings depending on which JSON A1 receives. For example:

  1. A1 receives:
{
  "message":"Hello World!",
  "resources":{
    "requests":{
      "memory":"64Mi",
      "cpu":"250m"
    },
    "limits":{
      "memory":"128Mi",
      "cpu":"500m"
    }
  }
}
  2. A1 starts an A2 pod with the resource settings present in the JSON.
  3. A1 waits for A2 to finish.
  4. The A2 pod is destroyed.
  5. A1 returns a response with the result of running A2.

How do I make the A2 application get created and destroyed as described above?

Possible problem: A2 listens on a fixed port, and as it will be created and destroyed on demand, does that mean there can only be one A2 pod running at a time on a given cluster machine?

2 answers

0

Your question is actually several in one, so I'll try to answer it in parts:

Having A1 deploy A2:

A good way to deploy to K8s from templates is Helm (https://helm.sh/). In Helm you create a template of what your application's deploy should look like and pass a values file containing the variables for that specific deploy. In other words, the JSON with the settings A1 receives would become the values for the A2 chart you create in Helm.
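
As an illustration only, a sketch of how A1 might drive such a chart from Python; the chart path, release naming, and value keys are assumptions that must match the chart you actually write:

import subprocess
import uuid

def helm_install_a2(payload):
    # Unique release name so concurrent requests do not collide
    release = f"a2-{uuid.uuid4().hex[:8]}"
    resources = payload["resources"]
    subprocess.run(
        [
            "helm", "install", release, "./charts/a2",  # assumed path to the A2 chart
            "--set", f"message={payload['message']}",
            "--set", f"resources.requests.memory={resources['requests']['memory']}",
            "--set", f"resources.requests.cpu={resources['requests']['cpu']}",
            "--set", f"resources.limits.memory={resources['limits']['memory']}",
            "--set", f"resources.limits.cpu={resources['limits']['cpu']}",
        ],
        check=True,
    )
    return release

Equivalently, A1 can write the received JSON to a temporary values file and pass it with -f instead of individual --set flags.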

Having A2 communicate back to A1:

The best way I can think of to solve this, given those requirements, is for A1 to pass a callback URL to A2: when A2 finishes, it calls A1 back at a specific A1 URL with the result and an identifier payload. This callback URL can be passed alongside the values A1 sets for the A2 deploy.
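
A sketch of that callback under assumed names (the /callback/<job_id> route, the in-memory results store, and the CALLBACK_URL variable are all illustrative; app is A1's Flask app):

import os
import requests
from flask import request

results = {}  # in-memory for illustration; a real store (Redis, a database) survives restarts

# In A1: the URL A2 calls back with its result, keyed by an identifier.
@app.route("/callback/<job_id>", methods=["POST"])
def callback(job_id):
    results[job_id] = request.get_json()
    return "", 204

# In A2, just before it exits; CALLBACK_URL is injected through the deploy values.
def report(output):
    requests.post(os.environ["CALLBACK_URL"], json={"output": output})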

Making A2 run to completion each time:

To do this you just need to deploy A2 as a Job, so it runs exactly once until completion.
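
For instance, with the official kubernetes Python client (pip install kubernetes); the image name and namespace here are assumptions:

from kubernetes import client, config

def create_a2_job(name, payload):
    config.load_incluster_config()  # A1 runs in-cluster; its ServiceAccount needs RBAC rights to create Jobs
    resources = payload.get("resources", {})
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1JobSpec(
            backoff_limit=0,                 # run once, no retries
            ttl_seconds_after_finished=60,   # let K8s garbage-collect the finished Job
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[client.V1Container(
                        name="a2",
                        image="registry.example.com/a2:latest",  # assumption: your A2 image
                        env=[client.V1EnvVar(name="MESSAGE", value=payload["message"])],
                        resources=client.V1ResourceRequirements(
                            requests=resources.get("requests"),
                            limits=resources.get("limits"),
                        ),
                    )],
                )
            ),
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)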

A2 listens on a fixed port, and as it will be created and destroyed on demand, does that mean there can only be one pod running at a time on a cluster machine?

Inside K8s it is not good practice for applications to talk to each other by pod IP and port. If you communicate through a K8s Service this is never a problem: you can have as many pods as you need listening on the same port, because the routing is abstracted away for you.
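
For example, if A2 is fronted by a Service named a2 in the default namespace (both names assumed), A1 addresses it by DNS name and never touches a pod IP:

import requests

# The Service name is stable regardless of which pod (or how many pods) currently
# back it; each pod can listen on container port 5050 without any conflict.
resp = requests.post("http://a2.default.svc.cluster.local:5050", json={"message": "Hello World!"})
print(resp.json())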

-1

Thinking of Kubernetes as a framework for application infrastructure, this is possible with the following elements:

Service A1 has:

  • Kubernetes Deployment
  • Kubernetes Service

Service A2 has:

  • Kubernetes Job (created on demand)

Since A2 is created on demand (by a request made to A1), it can be encapsulated as a Kubernetes Job, which also makes it easy to parameterize it with the values specified in the JSON received by A1.
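
A minimal sketch of that parameterization, keeping the base template as a Python dict (the image name is a placeholder); the filled-in dict can then be submitted as the body of a Job-creation request:

import copy

BASE_JOB = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {},
    "spec": {
        "backoffLimit": 0,
        "template": {
            "spec": {
                "restartPolicy": "Never",
                "containers": [{"name": "a2", "image": "a2:latest", "resources": {}}],
            }
        },
    },
}

def render_job(name, payload):
    # Fill the base template with the validated values from the JSON A1 received
    job = copy.deepcopy(BASE_JOB)
    job["metadata"]["name"] = name
    job["spec"]["template"]["spec"]["containers"][0]["resources"] = payload.get("resources", {})
    return job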

The application flow would be as follows:

  1. A1 receives an HTTP request.
  2. A1 validates the parameters sent (e.g. against a maximum allowed amount of RAM).
  3. A1 dynamically creates a Job from a base template (https://kubernetes.io/docs/concepts/workloads/controllers/job/#running-an-example-job).
  4. A1, using credentials that allow it to create Jobs, requests the creation of the A2 Job in Kubernetes with the obtained parameters and waits for the execution in order to collect the generated logs.
  5. A2 starts command execution.
  6. A2 ends command execution.
  7. A1 receives the signal that A2 has completed (see the sketch after this list).
  8. A1 returns the result taken from A2's logs.
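
Steps 4, 7, and 8 could look like the following sketch with the kubernetes Python client; the namespace, timeout, and polling interval are assumptions:

import time
from kubernetes import client, config

def wait_and_collect(job_name, namespace="default", timeout=300):
    config.load_incluster_config()
    batch, core = client.BatchV1Api(), client.CoreV1Api()

    # Step 7: poll until the Job reports success or failure
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = batch.read_namespaced_job_status(job_name, namespace).status
        if status.succeeded or status.failed:
            break
        time.sleep(2)

    # Step 8: collect the logs of the pod the Job created
    # (the Job controller labels its pods with job-name=<job name>)
    pods = core.list_namespaced_pod(namespace, label_selector=f"job-name={job_name}")
    logs = core.read_namespaced_pod_log(pods.items[0].metadata.name, namespace)

    # Destroy the Job and, via foreground propagation, its pod
    batch.delete_namespaced_job(job_name, namespace, propagation_policy="Foreground")
    return logs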
