Docker is a containerization platform for building and managing containers. Its core philosophy is "develop, ship, and run anywhere": Docker helps developers and architects easily develop applications, ship them inside containers, and deploy those containers to any environment.
The Dockerfile is usually created with the name Dockerfile, without any file extension. The basic structure of a Dockerfile for a Django project is:
FROM python:3.10-alpine

ENV PYTHONUNBUFFERED=1

RUN apk update \
    && apk add --no-cache --virtual .build-deps \
    ca-certificates gcc linux-headers musl-dev \
    libffi-dev jpeg-dev zlib-dev libc-dev python3-dev \
    postgresql-dev cargo

RUN pip install --upgrade pip

# Create group and user
RUN addgroup group_name && adduser -D user_name -G group_name -h /home/user_name

ENV HOME /home/user_name
ENV APP_DIR ${HOME}/project_directory

WORKDIR ${APP_DIR}

ADD requirements.txt ${APP_DIR}/
COPY ./ ${APP_DIR}

RUN chown -R group_name:group_name ${APP_DIR}

USER user_name

RUN pip install -r ${APP_DIR}/requirements.txt

EXPOSE 8000

ENTRYPOINT sh -c "python manage.py runserver 0.0.0.0:8000"
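The Dockerfile above expects a requirements.txt file in the project root. The exact contents depend on your project; as a rough sketch for this setup (PostgreSQL driver, Redis cache backend, and image handling to match the jpeg/zlib build dependencies), it might look like:

# requirements.txt (example only; pin versions to match your project)
Django>=4.0
gunicorn
psycopg2
django-redis
Pillow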
Create a Dockerfile.production file, again without any file extension. It is almost identical to the development Dockerfile, but serves the project with Gunicorn instead of the Django development server:
FROM python:3.10-alpine

ENV PYTHONUNBUFFERED=1

RUN apk update \
    && apk add --no-cache --virtual .build-deps \
    ca-certificates gcc linux-headers musl-dev \
    libffi-dev jpeg-dev zlib-dev libc-dev python3-dev \
    postgresql-dev cargo

RUN pip install --upgrade pip

# Create group and user
RUN addgroup group_name && adduser -D user_name -G group_name -h /home/user_name

ENV HOME /home/user_name
ENV APP_DIR ${HOME}/project_name

WORKDIR ${APP_DIR}

ADD requirements.txt ${APP_DIR}/
RUN pip install -r ${APP_DIR}/requirements.txt

COPY ./ ${APP_DIR}

RUN chown -R group_name:group_name ${APP_DIR}

USER user_name

EXPOSE 8000

CMD exec gunicorn config.wsgi:application --bind 0.0.0.0:8000 --workers 3
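The CMD above assumes the project's WSGI module lives at config/wsgi.py; replace config.wsgi with your own project package if it is named differently. To build the production image on its own (the image tag here is just an example), you can run:

docker build -f Dockerfile.production -t project_api:production .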
Create a docker-compose.yml file. The following compose file defines a Redis service for caching and a PostgreSQL service for the database:
version: '3.9'

volumes:
  dbdata:

networks:
  localhost:
    driver: bridge
  redis-net:

services:
  redis:
    image: redis
    container_name: redis_container
    command: redis-server
    ports:
      - '6380:6379'
    networks:
      - redis-net

  api:
    build:
      context: .
    ports:
      - '8000:8000'
    volumes:
      - .:/home/user_name/project
    env_file: .env
    container_name: unique_container_name
    depends_on:
      - db
    links:
      - db
      - redis
    networks:
      - localhost
      - redis-net

  db:
    image: postgres:alpine
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    container_name: unique_container_name-db
    ports:
      - '5433:5432'
    volumes:
      - dbdata:/var/lib/postgresql/data
    networks:
      - localhost
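The compose file reads DB_NAME, DB_USER, and DB_PASSWORD from a .env file in the project root, and the Django settings must point at the db and redis service names rather than localhost. A minimal sketch, assuming those variable names and the django-redis cache backend (your settings module and values will differ):

# .env (example values)
DB_NAME=project_db
DB_USER=project_user
DB_PASSWORD=change_me

# settings.py (excerpt)
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DB_NAME'),
        'USER': os.environ.get('DB_USER'),
        'PASSWORD': os.environ.get('DB_PASSWORD'),
        'HOST': 'db',    # the postgres service name from docker-compose.yml
        'PORT': '5432',  # the port inside the Docker network, not the published 5433
    }
}

CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://redis:6379/0',  # the redis service name and internal port
    }
}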
Create a docker-compose.production.yml file. The following compose file includes a Redis service for caching and builds the api service from Dockerfile.production:
version: '3.9'

networks:
  redis-net:

services:
  redis:
    image: redis
    container_name: redis_container
    command: redis-server
    ports:
      - '6379:6379'
    networks:
      - redis-net

  api:
    build:
      context: .
      dockerfile: Dockerfile.production
    ports:
      - '8000:8000'
    networks:
      - redis-net
    volumes:
      - .:/home/user_name/project_name
    env_file: .env
    container_name: unique_container_name
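To build the images and start the production stack in the background, point docker-compose at the production compose file:

docker-compose -f docker-compose.production.yml up -d --build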
docker build -f Dockerfile -t image_name .
This command lets you build your own Docker image: -f selects the Dockerfile to use and -t tags the resulting image (the tag must be lowercase).
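Once the image is built, you can also run it directly without docker-compose; for example (the image name and env file mirror the examples above), keeping in mind that the database and Redis containers from docker-compose.yml will not be running in this case:

docker run --rm -p 8000:8000 --env-file .env image_name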
Options for running the containers with docker-compose:
To build (if necessary) and start all the services defined in docker-compose.yml, run:
docker-compose -f docker-compose.yml up
To run the containers in the background (detached mode), add the -d flag:
docker-compose -f docker-compose.yml up -d
Or, if the containers have already been created, you can start them with the following command:
docker-compose -f docker-compose.yml start
To stop the running containers without removing them:
docker-compose stop
To stop and remove the containers and networks, and also remove the named volumes (such as dbdata):
docker-compose down --volumes
To clean up all stopped containers, unused networks, unused images, and build cache:
docker system prune -a
Django management commands are run inside the api container through docker-compose exec:
docker-compose exec api python manage.py makemigrations [options]
docker-compose exec api python manage.py migrate [options]
docker-compose exec api python manage.py command_name
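For example, to create an admin account inside the running api container:

docker-compose exec api python manage.py createsuperuser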