Philosophy

In a perfect world...

"Small boat, small problems. Big boat, big problems." -- Sven Yrvind

Don't create giant monolithic CI pipelines that build your modules, then configure your containers, then run your tests, then deploy to production. Break the work up into sensible pieces, and keep everything small.

the stack

This particular stack consists mainly of Flask applications running on Docker. Orchestration is managed via Komodo. The application type is irrelevant: Komodo can manage any containerized install in a GitOps-like manner, and the only requirement is a Docker Compose file. Container images are stored in a JFrog Container Registry (JCR), and custom Python modules are stored in a local PyPI repo.

module ci

Consider the following GitLab CI script:

stages:
  - build
  - test
image: python:3.12

test:
  stage: test
  script:
    - python3 -m venv .
    - . bin/activate
    - pip install --upgrade pip
    - pip install -e .
    - pip install pytest
    - pytest
  only:
    - dev

release-package:
  stage: build
  tags:
    - shared
  script:
    - python3 -m venv .
    - . bin/activate
    - pip install --upgrade pip
    - pip install setuptools build twine
    - echo "[distutils]" > ~/.pypirc
    - echo "index-servers =" >> ~/.pypirc
    - echo "    local" >> ~/.pypirc
    - echo "" >> ~/.pypirc
    - echo "[local]" >> ~/.pypirc
    - 'echo "repository: http://10.0.0.60:8010" >> ~/.pypirc'
    - 'echo "username: ${TWINE_USER}" >> ~/.pypirc'
    - 'echo "password: ${TWINE_PASSWORD}" >> ~/.pypirc'
    - python3 -m build 
    - twine upload -r local dist/* --config-file ~/.pypirc --verbose
  only:
    - main
  
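The eight echo lines that assemble ~/.pypirc work, but they can be collapsed into a single heredoc. A sketch, with placeholder credentials and a local pypirc.example filename so it runs outside CI; in the actual job you would redirect to ~/.pypirc and let the TWINE_USER / TWINE_PASSWORD CI variables expand as they do above:

```shell
# Placeholder values standing in for the CI variables used in the job.
TWINE_USER="ci-user"
TWINE_PASSWORD="ci-secret"

# One heredoc instead of eight echo lines; unquoted EOF lets the
# ${...} variables expand, exactly as the echo version does.
cat > pypirc.example <<EOF
[distutils]
index-servers =
    local

[local]
repository: http://10.0.0.60:8010
username: ${TWINE_USER}
password: ${TWINE_PASSWORD}
EOF
```

The result is easier to read in the pipeline log and harder to get subtly wrong (a missing `>>` in the echo chain silently truncates the file).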

In this example the dev branch gets tested, and the main branch gets built and pushed to our local PyPI. You can tell at a glance what the script does, and should it fail, there are fewer moving parts to examine when diagnosing the issue.

app ci

The app container side uses the same basic logic. Here, we push both dev and prod containers to the registry, but we only run pytest against the dev instance: once the tests pass and the application clears acceptance testing, we can simply push a prod image.

image: docker:stable
variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""
  DOCKER_DRIVER: overlay2
  DOCKER_REGISTRY: surly.forerivercloudworks.com/docker-local
services:
  - docker:24-dind
before_script:
  - timeout 30 sh -c 'until docker info; do sleep 1; done'
stages:
  - test
  - build
test:
  stage: test
  only:
    - dev
  script:
    - docker run --rm -v "$PWD":/app -w /app python:3.12-slim sh -c "pip install flask requests pytest && python test_setup.py && python -m pytest tests/test_app.py -v"
build:
  stage: build
  tags:
    - containers
  script:
    - export MYVERSION=$(cat VERSION)
    - echo "$AFCRED" | docker login -u repouser --password-stdin $DOCKER_REGISTRY
    - docker build -t flask-chat-app .
    - |
      if [ "$CI_COMMIT_REF_NAME" = "main" ]; then
        TAG="prod.$MYVERSION"
      elif [ "$CI_COMMIT_REF_NAME" = "dev" ]; then
        TAG="dev.$MYVERSION"
      else
        echo "Skipping build for branch: $CI_COMMIT_REF_NAME"
        exit 0
      fi
    - docker tag flask-chat-app:latest $DOCKER_REGISTRY/flask-chat-app:$TAG
    - docker push $DOCKER_REGISTRY/flask-chat-app:$TAG
    - docker logout $DOCKER_REGISTRY || true
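The branch-to-tag rule buried in that build job can be pulled out into a small function and exercised locally before touching the pipeline. A sketch; `image_tag` is a hypothetical helper, and the branch names and `prod.` / `dev.` prefixes mirror the job above:

```shell
# Map a branch name plus a VERSION file value to an image tag,
# following the same rule as the build job: main -> prod.<version>,
# dev -> dev.<version>, anything else -> no tag (push is skipped).
image_tag() {
  branch="$1"
  version="$2"
  case "$branch" in
    main) echo "prod.$version" ;;
    dev)  echo "dev.$version" ;;
    *)    echo "" ;;
  esac
}

image_tag main 0.0.2   # prod.0.0.2
image_tag dev  0.0.2   # dev.0.0.2
```

Keeping the rule in one place like this also makes it obvious where to add, say, a release-candidate branch later.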

deployment

This is the last mile. Your module code is built and tested. Your container is built and tested. Now you want to deploy.

services:
  flask-chat-app:
    image: surly.forerivercloudworks.com/docker-local/flask-chat-app:prod.0.0.2
    container_name: flask-chat-app
    ports:
      - "5900:5900"
    restart: unless-stopped

Yup. That's it. A simple Docker Compose file, in a repo Komodo uses to push the container where it needs to go. Here's a look at the repo Komodo will use to manage applications:

komodo-main$ tree
.
├── README.md
└── stacks
    ├── 10.0.0.159
    │   ├── check-ollamas
    │   │   ├── compose.yaml
    │   │   ├── Dockerfile
    │   │   ├── LICENSE
    │   │   └── README.md
    │   ├── surly-nginx-1
    │   │   ├── compose.yaml
    │   │   └── README.md
    │   ├── surly-nginx-2
    │   │   ├── compose.yaml
    │   │   └── README.md
    │   ├── surly-nginx-3
    │   │   ├── compose.yaml
    │   │   └── README.md
    │   └── surly-nginx-4
    │       ├── compose.yaml
    │       └── README.md
    ├── 10.0.0.60
    │   └── flask-chat-app
    │       ├── compose.yaml
    │       ├── Dockerfile
    │       ├── LICENSE
    │       └── README.md
    └── Local
        ├── scary-nginx-1
        │   ├── compose.yaml
        │   └── README.md
        └── scary-nginx-2
            ├── compose.yaml
            └── README.md

13 directories, 21 files
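
Since Komodo only cares that each stack directory carries a compose file, the repo convention above is easy to sanity-check with a one-liner. A sketch, run from the repo root; the mkdir/touch lines just mock up a fragment of the layout so the snippet runs standalone, with one stack deliberately missing its compose file:

```shell
# Mock a fragment of the stacks/<host>/<app> layout for the demo.
mkdir -p stacks/10.0.0.60/flask-chat-app
touch stacks/10.0.0.60/flask-chat-app/compose.yaml
mkdir -p stacks/Local/scary-nginx-1   # deliberately missing compose.yaml

# Flag any stack directory Komodo would have nothing to deploy from.
for d in stacks/*/*/; do
  if [ -f "$d/compose.yaml" ]; then
    echo "ok:      $d"
  else
    echo "missing: $d"
  fi
done
```

Wired into a pre-commit hook or a tiny CI job on the Komodo repo itself, this catches a forgotten compose.yaml before a deploy quietly does nothing.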

You can use Komodo (with Docker) in much the same way you'd use Weave or Flux on Kubernetes. The same philosophy runs through every layer here: keep things small, break the work into manageable pieces, and automate the deployment. Small pipelines mean faster iteration, easier debugging, and fewer opportunities for error, and with Komodo handling deployment from a compose repo, you can focus on writing code and delivering value rather than on the mechanics of getting it into production.