Docker Swarm / Caddy / PHP / error 502

1. Output of caddy version:

Version 2.5.2

2. How I run Caddy:

I run Caddy via a docker-compose file, deployed as a Docker stack.

a. System environment:

Debian

b. Command:

 docker stack deploy --compose-file docker-compose.yml app

c. Service/unit/compose file:

App installed: Docker Swarm / keepalived

Here is my docker-compose.yml:
version: "3.1"
services:

  # Web proxy for SSL termination
  caddy:
    image: docker.io/caddy:latest
    # or use this image instead: abiosoft/caddy
    restart: unless-stopped
    ports:
    # HTTP and HTTPS
      - "80:80"
      - "443:443"
    networks:
    - caddy
    volumes:
      - ./apps/caddy/Caddyfile:/etc/caddy/Caddyfile:ro  #Caddy's main configuration file
      - ./data/caddy:/data  #required to persist certificate data
      #- ./data/caddy/config:/config
      #- ./data/caddy/log:/log
      - ./publichtml:/srv:ro #bind the local folder holding the PHP container's files to the location where Caddy can serve the front end

  # Web server
  #nginx:
    #image: docker.io/nginx:latest
    #restart: unless-stopped
    #volumes:
    #  - ./apps/nginx:/etc/nginx/conf.d/:ro
    #  - ./sock:/home
  php:
    image: thecodingmachine/php:7.4.30-v4-fpm
    #image: bitnami/php-fpm:7.4.30-debian-11-r21
    container_name: "php"
    networks:
      - caddy
    #mount the persistent volume holding the client website data, and pass the php.ini file into the container
    volumes:
    #- ./publichtml:/srv:rw,cached
    - ./publichtml:/srv:rw
    - ./apps/php/php.ini-development:/usr/lib/php/7.4/php.ini-development:ro
    ports:
    - 9000:9000
    restart: unless-stopped
    depends_on:
    - db
    links:
    # Link to the "db" container (declared below)
    - db:db
    environment:
      MYSQL_DB_HOST: db  #mysql container
      MYSQL_DATABASE: neosaiyan
      MYSQL_USER: USERNAME
      MYSQL_PASSWORD: USERPASSWORD

  # database creation
  db:
    image: docker.io/mariadb:latest
    #command: --init-file /data/application/dbfull.sql
    command: mysqld --character-set-server=utf8 --collation-server=utf8_general_ci
    networks:
      - caddy
    restart: unless-stopped
    volumes:
      #wipe and create a client database from an SQL file
      #- ./apps/mysql:/docker-entrypoint-initdb.d
      #mount the persistent volume for the database
      #- ./apps/mysql/dbfull.sql:/docker-entrypoint-initdb.d/dbfull.sql
      #- ./apps/mysql/test.sql:/docker-entrypoint-initdb.d/test.sql
      - ./data/mysql:/var/lib/mysql
      #mysql configuration file - ./apps/mysql/my.cnf:/etc/mysql/my.cnf
      #mysql configuration file - ./apps/mysql/my.cnf:/etc/my.cnf
    environment:
      MYSQL_DATABASE: neosaiyan
      MYSQL_USER: USER
      MYSQL_PASSWORD: USERPASSWORD
      MYSQL_ALLOW_EMPTY_PASSWORD: 'no'
      MYSQL_ROOT_PASSWORD: ROOTPASSWORD
    ports:
      - "3306:3306" 

  #PHPMyAdmin
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
    networks:
      - caddy
    container_name: phpmyadmin
    restart: always
    ports:
      - '8080:80'   #ports host:container (Apache)
    environment:
      MYSQL_DB_HOST: db  #mysql container
      MYSQL_USER: USER
      MYSQL_PASSWORD: USERPASSWORD
#Caddy network
networks:
  caddy:
    external: true
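Since the caddy network is declared `external: true`, the stack expects it to already exist before deployment. A minimal sketch of creating it as an attachable overlay network (the name `caddy` matches the compose file above; run this on a swarm manager node):

```shell
# Create the overlay network before deploying the stack.
# --attachable lets standalone containers join it in addition to swarm services.
docker network create --driver overlay --attachable caddy

# Then deploy the stack:
docker stack deploy --compose-file docker-compose.yml app
```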

d. My complete Caddy config:

I’ve left out the rewriterule snippet for readability (the rewrite rules work, by the way):

{
    # email used to obtain a valid SSL certificate
    email mgagnant@neosaiyan.fr

    #HTTP/3 support
    servers {
        protocol {
            experimental_http3
        }
    }
}
# Site PHP
docker.neosaiyan.fr {
# Set the document root of the web server
root * /srv
php_fastcgi php:9000
encode gzip zstd
file_server
import rewriterule
}

3. The problem I’m having:

I’ll try to be as complete as possible. I have a subdomain named docker.neosaiyan.fr. This domain points to my public IP on port 80. My home router forwards the traffic to a VIP, 192.168.0.40 (managed by keepalived).
My infrastructure consists of 3 Debian VMs with keepalived installed:
Server1, Server2 and Server3.
I tried to simulate failover:
When I shut down Server2, the containers are rebuilt on Server1/Server3 and my website stays UP.
When I shut down Server3, the containers are rebuilt on Server1/Server2 and my website stays UP.
When I shut down Server1, the containers are rebuilt on Server2/Server3 but my website is DOWN.

I really don’t know why this happens only when Server1 goes down.

You can see in the Caddy log that it receives a connection refused from 10.0.2.2 (but I don’t know where this IP address comes from) and returns a 502 error.
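To track down which container 10.0.2.2 belongs to, it can help to inspect the service tasks and overlay networks from a manager node. A few diagnostic commands (the `app_php` service name and `caddy` network name are assumed from the stack and compose file above):

```shell
# List the tasks of the php service and the nodes they run on
docker service ps app_php

# Show the containers and their IPs attached to the caddy overlay network
docker network inspect caddy --format '{{json .Containers}}'

# The 10.0.x.x range typically belongs to Docker's ingress/overlay networks
docker network inspect ingress
```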

4. Error messages and/or full log output:

{
   "level":"error",
   "ts":1659076710.6441965,
   "logger":"http.log.error",
   "msg":"dialing backend: dial tcp 10.0.2.2:9000: connect: connection refused",
   "request":{
      "remote_ip":"10.0.0.17",
      "remote_port":"22495",
      "proto":"HTTP/2.0",
      "method":"GET",
      "host":"docker.neosaiyan.fr",
      "uri":"/solutions/index.php",
      "headers":{
         "Sec-Ch-Ua":[
            "\".Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"103\", \"Chromium\";v=\"103\""
         ],
         "Sec-Ch-Ua-Platform":[
            "\"Windows\""
         ],
         "Upgrade-Insecure-Requests":[
            "1"
         ],
         "Purpose":[
            "prefetch"
         ],
         "Sec-Fetch-Dest":[
            "document"
         ],
         "Accept-Language":[
            "fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7,pt;q=0.6"
         ],
         "User-Agent":[
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"
         ],
         "Sec-Fetch-User":[
            "?1"
         ],
         "Sec-Fetch-Site":[
            "none"
         ],
         "Sec-Fetch-Mode":[
            "navigate"
         ],
         "Accept-Encoding":[
            "gzip, deflate, br"
         ],
         "Sec-Ch-Ua-Mobile":[
            "?0"
         ],
         "Accept":[
            "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9"
         ],
         "Cookie":[
            
         ]
      },
      "tls":{
         "resumed":false,
         "version":772,
         "cipher_suite":4865,
         "proto":"h2",
         "server_name":"docker.neosaiyan.fr"
      }
   },
   "duration":0.067565638,
   "status":502,
   "err_id":"fbschgfc6",
   "err_trace":"reverseproxy.statusError (reverseproxy.go:1184)"
}

5. What I already tried:

I tried many things, like creating a dedicated Caddy network, but the result is always the same.


I would appreciate any help on this.

Thanks in advance,

Micka

6. Links to relevant resources:

Note that phpmyadmin keeps running, by the way, but php-fpm does not when Server1 is down.

You can remove this, that’s Caddy v1 and is no longer supported.

So are you saying you have 3 separate Caddy instances running?

If that’s the case, you should make sure to sync their storage so they can coordinate TLS issuance. See the docs:

That’s probably the IP address of your php-fpm container, using Docker’s networking. That’s not an issue with Caddy, it’s a problem with your Docker setup.


Hello Francis,
Thanks for your reply.
1/ OK, I will update Caddy to the latest version.
2/ No, a single instance of Caddy is deployed on the Docker swarm.
3/ The Docker network seems to be OK.

I will update Caddy and post the result here.
Thanks again!
Micka

You’re already using the latest version, I’m just saying you can remove that comment because it’s no longer useful. The abiosoft/caddy image is not maintained, since it was for Caddy v1 only.

I’m not sure I understand how the load balancing works then. Caddy is only serving a single upstream.

OK, thanks for the reply.
There is just one upstream (named php in my docker-compose file), and Caddy deployed in a Docker swarm can serve this upstream regardless of which node the Caddy container runs on. It works when Server2 or Server3 is down, but not when Server1 is down.
Moreover, when I power Server1 back on, the website comes back UP, even though all the containers (php, db, caddy and phpmyadmin) are still deployed on Server2 and Server3.
Very strange behavior.
I tried to deploy 3 instances of Caddy via the deploy mode "global" option, but reached the same conclusion.
Has nobody deployed an HA failover cluster with Docker Swarm and Caddy?
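For reference, running Caddy on every swarm node with global mode looks roughly like this in the compose file (a sketch of what I tried; note that under swarm, `restart:` is ignored in favor of `deploy.restart_policy`):

```yaml
  caddy:
    image: docker.io/caddy:latest
    deploy:
      mode: global          # one Caddy task per swarm node
      restart_policy:
        condition: any
    ports:
      - "80:80"
      - "443:443"
    networks:
      - caddy
```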
Micka

Me again,

It seems that this happens because my node1 is the manager node, and the "ingress network" goes down when Server1 is down.
I will report back if I find a solution.
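One way to check whether losing node1 takes down the swarm control plane (and with it the ingress routing mesh) is to look at the manager status. A couple of commands, run from a manager node (the node names `server2` and `server3` here are assumptions based on my setup):

```shell
# MANAGER STATUS shows which nodes are managers; with a single manager,
# losing it leaves the swarm without quorum
docker node ls

# Promoting the other nodes gives a 3-manager quorum that survives one node failure
docker node promote server2 server3
```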

Micka


This topic was automatically closed after 30 days. New replies are no longer allowed.