Gunicorn/Nginx/virtualenv inside Docker


I’m trying to run the DigitalOcean tutorial setup inside a Docker container.

I’m using the official Gunicorn website as a basis.

My problem happens when trying to set up the reverse proxy inside the container.

My idea was Nginx on port 80 and Gunicorn on port 5000. However, when I run the container, the message displayed is CONNECTION RESET.

Would anyone have any idea how to adjust this reverse proxy?

I created a Dockerfile:

FROM python:3.5-slim-jessie

ENV WD=/deploy
ENV WD_CONF=/deploy/configuracao

RUN apt-get update --fix-missing
RUN apt-get install -y nginx

WORKDIR $WD

EXPOSE 80

# Create the folder to copy the app into
RUN mkdir -p $WD


# Create folders for the app and static files
RUN mkdir -p  $WD/app && mkdir -p  $WD/app/vendor


# Copy the configuration folder
COPY /configuracao/ $WD_CONF/

RUN chmod +x $WD_CONF/venv.sh

# Copy the script that activates the virtualenv
COPY abrevenv.sh $WD/abrevenv.sh
RUN chmod +x $WD/abrevenv.sh


COPY wsgi.py $WD/app/wsgi.py

RUN sh $WD_CONF/venv.sh 

COPY myproject.py $WD/app/myproject.py

# Setup nginx
RUN rm /etc/nginx/sites-enabled/default
COPY /configuracao/default.conf /etc/nginx/sites-available/default
COPY /configuracao/nginx.conf /etc/nginx/nginx.conf
RUN ln -s /etc/nginx/sites-available/default.conf /etc/nginx/sites-enabled/default.conf
COPY nginx_start.sh $WD/nginx_start.sh
RUN chmod +x $WD/nginx_start.sh
RUN sh $WD/nginx_start.sh


ENTRYPOINT ["sh","abrevenv.sh"]

I also created some support files: abrevenv.sh

. venv/bin/activate && gunicorn --workers 4 -b 0.0.0.0:5000 -m 007  --chdir ./app/app wsgi:app  && /etc/init.d/nginx start

venv.sh

pip install virtualenv && virtualenv venv && . venv/bin/activate && pip install --no-cache-dir -r /deploy/configuracao/requirements.txt

requirements.txt

flask==1.0.2
requests_ntlm 
beautifulsoup4==4.6.0 
pymongo==3.7.2 
gunicorn==19.9.0

nginx_start.sh

/etc/init.d/nginx start && /etc/init.d/nginx status

wsgi.py

from myproject import app

if __name__ == "__main__":
    app.run()

default.conf (Nginx)

server {
 listen 80;
 server_name http://127.0.0.1 ;
 location / {
 proxy_pass 0.0.0.0:5000 ;
 }
}

nginx.conf

worker_processes 1;

user nobody nogroup;
# 'user nobody nobody;' for systems with 'nobody' as a group instead
error_log  /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
  worker_connections 1024; # increase if you have lots of clients
  accept_mutex off; # set to 'on' if nginx worker_processes > 1
  # 'use epoll;' to enable for Linux 2.6+
  # 'use kqueue;' to enable for FreeBSD, OSX
}

http {
  include mime.types;
  # fallback in case we can't determine a type
  default_type application/octet-stream;
  access_log /var/log/nginx/access.log combined;
  sendfile on;

  upstream app_server {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response

    # for UNIX domain socket setups
    #server unix:/tmp/gunicorn.sock fail_timeout=0;

    # for a TCP configuration
     server 0.0.0.0:5000 fail_timeout=0;
  }

  server {
    # if no Host match, close the connection to prevent host spoofing
    listen 80 default_server;
    return 444;
  }

  server {
    # use 'listen 80 deferred;' for Linux
    # use 'listen 80 accept_filter=httpready;' for FreeBSD
    listen 80;
    client_max_body_size 4G;

    # set the correct host(s) for your site
    server_name example.com www.example.com;

    keepalive_timeout 5;

    # path for static files
    root /deploy/app/vendor;

    location / {
      # checks for static file, if not found proxy to app
      try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header Host $http_host;
      # we don't want nginx trying to do something clever with
      # redirects, we set the Host: header above already.
      proxy_redirect off;
      proxy_pass http://0.0.0.0:5000;
    }

    error_page 500 502 503 504 /500.html;
    location = /500.html {
      root /deploy/app/app;
    }
  }
}
  • Hello Israel, this mix of scripts in your Dockerfile is a bit confusing. It would be worth reorganizing the architecture and spinning up at least two containers: one for Nginx and another for the Flask application together with Gunicorn, all of this using docker-compose. Here is a link with something similar to what I described. I can detail this process further if that was too generic ;)

  • Adding to that: there’s a nice little project called flusk that brings together everything you want :D

  • I’ll take a look... Thank you. I rewrote the old setup and was able to make it really simple. Thank you.

2 answers


Using Compose, it looks like this:

version: '3.1'
services:
    api:
      restart: always
      build: ./core
      container_name: "core"
      expose:
        - "5000"
      volumes:
        - .:/usr/src/app
      links:
        - mongodb
      depends_on:
        - mongodb
      command:
        gunicorn -w 1 -b 0.0.0.0:5000 run:wsgi
    #tail -f /dev/null

    nginx:
      restart: always
      build: ./nginx
      container_name: "nginx"
      ports:
        - "82:80"
      links:
        - api:api

    mongodb:
      image: mongo:latest
      container_name: "mongodb"
      environment:
        - MONGO_INITDB_ROOT_USERNAME=meuuser
        - MONGO_INITDB_ROOT_PASSWORD=minhasenha
        - MONGO_INITDB_DATABASE=meubanco
      volumes:
        - ./mongodb/data/db:/data/db
      ports:
        - 27017:27017
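
The answer doesn’t show the contents of the ./nginx build context. A minimal sketch of a site config that could live there (the file name and details are assumptions, not part of the original answer), proxying to the Flask/Gunicorn service by its Compose name:

# default.conf — hypothetical sketch; assumes the Gunicorn container is
# reachable as "api" on port 5000 via Compose's internal DNS.
server {
    listen 80;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://api:5000;
    }
}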


I solved it as follows:

Dockerfile

FROM centos:7

# This bit is exactly as per the documentation and can be dropped into
# existing Dockerfiles without issue
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
  systemd-tmpfiles-setup.service ] || rm -f $i; done); \
  rm -f /lib/systemd/system/multi-user.target.wants/*;\
  rm -f /etc/systemd/system/*.wants/*;\
  rm -f /lib/systemd/system/local-fs.target.wants/*; \
  rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
  rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
  rm -f /lib/systemd/system/basic.target.wants/*;\
  rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]


RUN yum -y install epel-release;
RUN yum -y install python-pip python-devel gcc nginx;
RUN pip install virtualenv
WORKDIR /usr/share/myproject


EXPOSE 80

COPY requirements.txt .
RUN mkdir /usr/share/myproject/app


COPY myproject.py /usr/share/myproject/app
RUN cd app && virtualenv myprojectenv && source myprojectenv/bin/activate &&  pip install --no-cache-dir -r $PWD/../requirements.txt

COPY myproject.service /etc/systemd/system/myproject.service

COPY wsgi.py /usr/share/myproject/app/wsgi.py
COPY index.html /usr/share/myproject/app/index.html


COPY nginx.conf /etc/nginx/nginx.conf

RUN systemctl enable myproject

RUN systemctl enable nginx


CMD ["/usr/sbin/init"]

myproject.py

from flask import Flask
application = Flask(__name__)

@application.route("/")
def hello():
    return "<h1 style='color:red'>Hello There!</h1>"


@application.route("/azul")
def helloa():
    return "<h1 style='color:blue'>Hello There!</h1>"


if __name__ == "__main__":
    application.run(host='0.0.0.0')

myproject.service

[Unit]
Description=Gunicorn instance to serve myproject
After=network.target

[Service]
User=root
Group=nginx
WorkingDirectory=/usr/share/myproject/app
Environment="PATH=/usr/share/myproject/app/myprojectenv/bin"
ExecStart=/usr/share/myproject/app/myprojectenv/bin/gunicorn --workers 3 --bind unix:myproject.sock -m 007 wsgi:application

[Install]
WantedBy=multi-user.target
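
The unit file starts Gunicorn with wsgi:application, so the wsgi.py copied into /usr/share/myproject/app presumably just imports the Flask object. A minimal sketch of what that file could contain (an assumption; the answer doesn’t show it):

# wsgi.py — hypothetical sketch; assumes it sits next to myproject.py above.
from myproject import application

if __name__ == "__main__":
    application.run()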

nginx.conf

# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  _;
        root         /usr/share/myproject/app;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {

         proxy_pass http://unix:/usr/share/myproject/app/myproject.sock;
         proxy_set_header Host $host;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        }

        error_page 404 /404.html;
            location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
    }



}

To build the image:

docker build --rm -t centos7-mImage .

To run the container:

docker run -tid -v /sys/fs/cgroup:/sys/fs/cgroup:ro --cap-add SYS_ADMIN -p 8000:80 --name cadprsev centos7-pathImage:latest
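
A quick end-to-end check (a hypothetical test, not part of the original answer): port 8000 on the host is mapped to Nginx on port 80 inside the container, so both routes should answer from the host:

# Hits Nginx, which proxies to Gunicorn over the unix socket.
curl -i http://localhost:8000/
curl -i http://localhost:8000/azul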
