Beginner Friendly — Security-Oriented Setup: Rootless Podman running Pi-hole and Unbound using Reverse Proxy via Caddy with Socket


If you happened to land here because you were curious about Podman, read on. If you don't care and just want the sample configurations, continue to Step 1, assuming you know what you're doing. If not, then start from Prerequisites. There's also a troubleshooting section at the end.

If there is sufficient demand, I will also add WireGuard into this setup so that you can utilize this ad-free, safer, more secure, and more private networking while away from the network it was configured on.


Why use Podman versus Docker?

Docker uses a daemon, whilst Podman is daemonless. Why is daemonless better? For a few reasons.

  • Stability
The Docker daemon is responsible for managing images, containers, networking, and storage. This daemon runs as a privileged root process, making it a single point of failure: if the daemon crashes, all running containers are affected. Podman’s fork-exec model is more stable and secure, as containers can be managed independently. The fork system call creates a process that is a copy of the parent, but with its own independent memory space, i.e., a child process. This means that if a child process crashes or becomes unstable, it won’t directly affect the parent process or other unrelated processes. Stability.

  • Security
    By creating a new process for each program execution, the fork-exec model limits the potential damage that a compromised program can inflict. Even if a child process is exploited, an attacker’s access is typically limited to the resources associated with that specific process. That also means the parent process can run with higher privileges and perform necessary setup tasks (e.g., opening files, establishing network connections), and then the child process can be executed with lower privileges, reducing the potential impact of vulnerabilities in the child program. Security.

  • Lighter Resources and Integration with Systemd if running Linux
Because of the fork-exec model, Podman integrates very nicely with systemd. Systemd is the standard system and service manager in most Linux distributions. Podman’s integration allows you to manage containers with the same tooling you use for any other service, namely systemctl.
    systemd provides robust process monitoring, automatic restarts on failure, resource limits, and dependency management. It also enables you to start containers automatically at boot time, ensuring critical services have minimal downtime upon system restart. If you’re not using Linux, you’re wrong. /s

Transitioning puts you in familiar territory. Podman’s CLI (command line interface) is essentially the same as Docker’s, so the only meaningful difference is typing podman as the initial command instead of docker. Podman even offers a compatibility layer you can enable so that docker works in place of podman. The only downside is that Podman doesn’t have great compose.yaml integration, as there are lots of translation issues; you’d use its Quadlet format instead. It doesn’t take too long to figure out, just like Docker Compose. Don’t let it overwhelm you. You can also use Kubernetes YAML with Podman, but I don’t use it and can’t tell you much about it.


Why rootless Podman?

Running containers as root significantly increases the potential damage if a container is compromised. With rootless Podman, even if an attacker gains control of a container, their access is limited to the user's privileges, preventing them from easily escalating to root and taking over the entire system.


Am I safe using Docker or host networking?

Factually, there are more avenues for hoodlums to take over your system with Docker and/or host networking. It depends on the types of attackers you’re trying to prevent. We want it safe enough that random script-kiddies can’t do anything, and we don’t want malicious actors looking for an easy way in with random probing, but you don’t necessarily need NSA-tier security using everything here plus SELinux or App Armor.

If you’re configuring an average home server, whether it be with Windows, Linux, Raspberry Pi, or other kinds of hardware and software, Docker and host networking is likely to be safe enough. Unless you’re doing some dirty things, nobody should be targeting you specifically.


Should I get a domain?

Who’s to say? There are free options like DuckDNS that are absolutely fine for most home server applications. If you want to use a shorter domain name to access services, getting your own domain can be worth it. With your own domain, you need to understand basic networking outside of a local network. It’s not too demanding, and there are enough guides out there to help you.

Simply for more privacy, custom naming schemes, and me being in control (mostly) of my domain and its DNS, I got my own domain. You can get them as cheap as $5 a year, sometimes even less. I literally paid no more than $3 for my first year, and after that it goes up a little. It all depends on the demand for the name, and the tld (top-level domain, like .com/.net/.org) is probably the biggest factor in the pricing. There are a few domain providers out there, but I decided to go with Namecheap.

If you do want a domain, do your own research and don’t pick Namecheap just because it’s in this guide. There are lots of other options like Cloudflare, GoDaddy, IONOS, and DreamHost.


Why use a socket-activated socket?

There are a few different reasons. With socket activation, Caddy (in our use case) only starts when a connection is actually made to its socket. This drastically reduces startup time, especially for applications that are not always needed. Ideally, multiple services can be started in parallel without waiting for each other to fully initialize: systemd activates the sockets, and the applications start independently when they receive connections. Unbound and Pihole, however, do not support socket activation in their containers, while Caddy made it possible as of Q4 2024.

The term socket-activated socket sounds a bit weird, eh? In our context, it is managed by systemd and allows a service (like a daemon or application) to be started on demand when a connection is made to its associated socket. It’s a way to defer the startup of a service until it’s actually needed, leading to improved performance and resource utilization. Since Pihole’s admin panel is the only thing Caddy is reverse-proxying, you’re unlikely to be using it a lot. Unbound isn’t utilized by Caddy at all, since only Pihole communicates with it.

Socket activation integrates seamlessly with systemd. It manages the sockets and starts the applications as needed, which simplifies service management and provides better control over application lifecycles. systemd can also manage dependencies between services and ensure that applications are started in the correct order.

Security is better than with bridge networks, because inactive applications are not exposed to the network, reducing the attack surface. Only when a connection is made is the application started and exposed. systemd can also start applications with different user privileges based on the socket configuration, enhancing security.

Socket activation allows multiple applications to share the same port. systemd can determine which application should handle the connection based on the incoming request.
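As a generic sketch of the mechanism (a hypothetical demo pair, not part of this guide's setup; demo-app stands in for any service binary): systemd owns the listening socket and starts the matching service only when the first connection arrives.

```ini
# ~/.config/systemd/user/demo.socket  (hypothetical illustration)
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# ~/.config/systemd/user/demo.service  (same base name as the socket)
# systemd matches the socket to this service by name and starts it on the
# first connection to port 8080, handing over the listening socket.
[Service]
ExecStart=/usr/bin/demo-app
```

The caddy.socket file later in this guide follows this same pattern, just with more sockets.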


Why use macvlan networking?

In the case of Pihole, we would like to resolve client names, so the source IP (the IP of the device making the request) should be preserved until Pihole receives it. With traditional bridge networking, this isn’t possible: Pihole only sees the host’s IP, since the host receives the DNS request. Or, if it doesn’t even see the host’s IP, it sees only the container’s IP and hostname, so the client will likely show up as something like 721f333940b1. If you don’t care, cool. Stick with traditional networking like I did in this guide. If you want to try macvlan, go for it. If you can actually have your router set a network interface for VLAN, that is even better. I’m not covering those types of networking in this guide. (Maybe later)

Containers on a macvlan network get their own MAC address and appear as separate devices on the network. MAC addresses are basically hardware identifiers. This means they can communicate directly with other devices on the network without needing NAT (Network Address Translation) or port mapping. That means you don’t need to expose ports on the host.

Containers have access to the full range of network features available on the macvlan interface, including VLAN, bonding, and advanced routing, hence the ‘MAC’-‘VLAN’ naming scheme. Some legacy applications might require direct access to a network and may not work well with NAT or port mapping. macvlan can provide a solution in these cases.

With macvlan, the container gets its own virtual network interface with a separate MAC address. This provides better isolation. Even if the container is compromised, the attacker’s access is limited to the container’s virtual interface, reducing the potential impact on the host.

A downside is each container on the macvlan network needs a unique MAC address. This is usually handled by Docker or Podman, so we shouldn’t have a problem in that regard. Setting up and managing macvlan networks can be more complex than bridge networks, especially if you’re not familiar with networking concepts. Incorrectly configured macvlan networks can lead to IP address conflicts or other network problems.


Why not use host networking?

Because we want security, not ease of setup. A container using host networking shares the host’s network stack. This means the container has access to all the host’s network interfaces and can potentially interact with any service running on the host, regardless of port mappings or firewalls within the container. If the container is compromised, the attacker has a much larger attack area and can potentially gain control of the entire host system, making the additional security with basic containerization completely useless.

You also lose the ability to easily manage ports using Docker or Podman’s port mapping (-p or PublishPort=), so ports are managed directly on the host. That can become more complex, especially with multiple containers.


Why VLAN?

VLANs are a way to segment a physical network into multiple virtual networks. This is done by tagging Ethernet frames with a VLAN ID. Devices on the same VLAN can communicate with each other, but devices on different VLANs are isolated unless there is routing between them.

VLAN is managed by network switches and routers, while macvlan is managed by the host OS. macvlan creates multiple virtual network interfaces (with unique MAC addresses) on a single physical interface. Each virtual interface can be assigned to a different container, giving each container its own direct connection to the physical network.

So essentially, VLAN gives you strong network-level isolation, while macvlan gives only container-level isolation. If you’re looking for the best network isolation and security, VLAN is the way to go, though it may be the most difficult to configure properly. If your router is capable of managing a VLAN, I highly recommend that. My (current) router is not capable of that, so I’m not covering it in this guide.


Prerequisites

Skip if you already know what you're doing and have done the necessary configurations to get rootless Podman functional. If not, click here.

If you haven’t already installed Podman, do so now.


In this guide, I’m using nano as the editor. Replace the nano command with your editor of choice. (vim, gedit, emacs, kate)


Have linger enabled. Linger allows user services to continue running even after the user has logged out. This is useful for long-running processes, and it is necessary for your containers to start right after a reboot without the user being actively logged in.

You can triple-click the box to select it all, then press Ctrl+C to copy it. When using those shortcuts in a CLI like Konsole, you’ll probably need to use Ctrl+Shift instead of just Ctrl. This is because Ctrl+C sends an interrupt signal (SIGINT) to a running process, which usually terminates it. So to paste in a CLI, try Ctrl+Shift+V.

loginctl enable-linger $USER

Ensure the user can bind to privileged ports (ports below 1024) and has user namespaces enabled. All that is necessary is a file within /etc/sysctl.d. Replace nano with your editor of choice.

sudo nano /etc/sysctl.d/99-sysctl.conf

Inside, all you need to add is:

net.ipv4.ip_unprivileged_port_start=53
kernel.unprivileged_userns_clone=1

and then reload sysctl with:

sudo sysctl --system
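You can confirm the kernel picked up the change by reading the values back from /proc (no root needed; the userns key does not exist on every kernel, so the second read is hedged):

```shell
# Read the settings back; after the change above, the first line should show 53
cat /proc/sys/net/ipv4/ip_unprivileged_port_start
cat /proc/sys/kernel/unprivileged_userns_clone 2>/dev/null \
  || echo "unprivileged_userns_clone not present on this kernel (fine on most distros)"
```
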

Your firewall must allow incoming connections on ports 80 and 443 so that Caddy can do its job, and on port 53 so that clients can reach Pihole. You must determine whether you’re only using iptables, or whether you installed a firewall package like firewalld or ufw (Uncomplicated Firewall). I will list the command for each of these services.

On most modern distros, iptables has been replaced by nftables, but the iptables commands still work through a compatibility layer. I’m going to utilize nftables commands here.

I highly recommend you read up on the service you’re using before you blindly enter commands.

For nftables (note that these runtime rules assume an existing ip filter table with an input chain, and they do not persist across reboots):

sudo nft add rule ip filter input tcp dport 80 accept && sudo nft add rule ip filter input tcp dport 443 accept && sudo nft add rule ip filter input udp dport 443 accept && sudo nft add rule ip filter input tcp dport 53 accept && sudo nft add rule ip filter input udp dport 53 accept

For firewalld:

sudo firewall-cmd --permanent --add-service=http && sudo firewall-cmd --permanent --add-service=https && sudo firewall-cmd --permanent --add-service=dns && sudo firewall-cmd --reload

For ufw:

sudo ufw allow 80/tcp && sudo ufw allow 443/tcp && sudo ufw allow 443/udp && sudo ufw allow 53/tcp && sudo ufw allow 53/udp


If you have not yet worked with Caddy, then you need to do a few things. Rootless Podman actually keeps its image data under ~/.local/share/containers/storage; for our own configuration files and bind-mounted data, this guide uses ~/.config/containers/storage, with a subfolder per service to keep things organized. Start with:

mkdir -p ~/.config/containers/storage/caddy

While you're at it, you might as well do Pihole's and Unbound's if you haven't yet:

mkdir -p ~/.config/containers/storage/pihole ~/.config/containers/storage/unbound

‘-p’ makes sure that if any parent folders do not exist, it will create them.

If this is your first time doing anything with Podman, create your container’s systemd directory ~/.config/containers/systemd/. Run the following command:

mkdir -p ~/.config/containers/systemd

which creates the user directory that Podman's Quadlet generator reads from to produce the systemd .service files.

We also need to create the directory for where our socket is going.

mkdir -p ~/.config/systemd/user

Step 1: Pull Images

Remember: you can triple-click a box to select it all, then copy it. In a CLI like KDE Konsole, use Ctrl+Shift+C and Ctrl+Shift+V instead of plain Ctrl, since Ctrl+C sends an interrupt signal (SIGINT) that usually terminates the running process.
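You can see the SIGINT behavior for yourself with a tiny shell experiment:

```shell
# Ctrl+C delivers SIGINT; a process can trap it instead of being terminated
trap 'echo "caught SIGINT"' INT
kill -INT $$        # send SIGINT to this very shell, simulating Ctrl+C
# prints: caught SIGINT
```
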


Assuming you already have Podman installed, pull the images with:

podman pull docker.io/library/caddy docker.io/pihole/pihole:latest docker.io/mvance/unbound:latest

If you haven’t installed it, there’s a great guide for Linux users on the Arch Linux wiki. Just make sure you read everything within the relevant section, especially regarding registries in section 2.1.


Step 2: Caddy Configuration

Let’s start with Caddy.

If you need Caddy built with a plugin, click here.

You can use xcaddy to build Caddy with the necessary plugins (like the DuckDNS, Cloudflare, or Namecheap DNS modules). The alternative is to download it with the needed plugins.

Either way, when the file is made, ensure you rename it to caddy. If you downloaded it versus using xcaddy, then it likely went into your Downloads directory. Use this to rename it and move it to where we need it:

mv ~/Downloads/caddy_linux_amd64_custom ~/.config/containers/storage/caddy/caddy

Make it executable with

chmod +x ~/.config/containers/storage/caddy/caddy

Create a .container file with

nano ~/.config/containers/systemd/caddy.container

and then put in the following:

[Unit]
AssertPathExists=%h/.config/containers/storage/caddy/Caddyfile

[Container]
ContainerName=caddy
Image=docker.io/library/caddy
Exec=/usr/bin/caddy run --config /etc/caddy/Caddyfile
# Change your email to the one to use for ACME registration
Environment=EMAIL=example@example.tld LOG_FILE=/data/access.log

# This first volume bind is an executable with modules I need for DNS.
# In this case, it has the Namecheap plugin attached to the regular
# caddy. If no modules are needed, it can be omitted.
Volume=%h/.config/containers/storage/caddy/caddy:/usr/bin/caddy
Volume=%h/.config/containers/storage/caddy/Caddyfile:/etc/caddy/Caddyfile:Z
Volume=%h/.config/containers/storage/caddy/caddy-config:/config
Volume=%h/.config/containers/storage/caddy/caddy-data:/data
Notify=true

Network=dns.network
AddHost=pihole:172.17.0.5
AddHost=unbound:172.17.0.10
# AddHost shouldn't be necessary, but I've had weird things happen if I don't use it.

[Install]
WantedBy=default.target

[Service]
Restart=always

A few things to cover if you want to know. If not, continue on.
  • A hash (#) marks a line as a comment, so Podman’s generator does not read those values and incorporate them into the .service file.

  • Ending a file in the ~/.config/containers/systemd directory with .container, .network, .pod, .volume, or .kube allows Podman’s Quadlet generator to translate the contents into a systemd .service file.

  • The IANA-reserved ranges for private networks are:
    10.0.0.0/8: (10.0.0.0 to 10.255.255.255)
    172.16.0.0/12: (172.16.0.0 to 172.31.255.255)
    192.168.0.0/16: (192.168.0.0 to 192.168.255.255)
    Any of these work as long as it’s consistent in your network configuration.

  • .tld is top level domain. (.com,.net,.org)

  • ~/ simply references to the home directory of the user.

  • %h also refers to the home directory of the user executing:

systemctl --user start caddy.service

  • The Network= directive points to a file with the name in the same ~/.config/containers/systemd directory.

  • Notify=true provides a way for services to communicate their state changes (starting, running, stopping, etc.) back to systemd. This is done through a Unix socket, often referred to as the “notify socket.” When Notify=true is set, the service is expected to use this socket to send notifications to systemd. This is necessary with our socket-activated socket being passed into the container.

  • [Install] contains installation directives that tells systemd how to enable and disable the generated caddy.service service.

  • The WantedBy= directive specifies one or more targets that the service should be “pulled in” or “wanted by” when those targets are activated.

  • default.target is a special target in systemd. It represents the default multi-user system state. In simpler terms, it’s what systemd aims to achieve when the system boots into normal operating mode (not single-user mode or other special modes).

  • [Service] contains directives that control the behavior of the service itself, such as how it’s started, stopped, and restarted.

  • The Restart= directive specifies under what conditions systemd should automatically restart the service.

  • The always value means that systemd should restart the service regardless of why it stopped. This includes normal exits, crashes, kill signals, and timeouts.
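To make the IANA-reserved ranges listed above concrete, here is a small illustrative shell helper (not part of the setup) that classifies an IPv4 address against the three private ranges:

```shell
# Classify an IPv4 address against the RFC 1918 private ranges
is_private() {
  case "$1" in
    10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) echo private ;;
    *) echo public ;;
  esac
}
is_private 172.17.0.5   # prints: private  (the subnet this guide uses)
is_private 8.8.8.8      # prints: public
```

The 172.16.0.0/12 case needs three glob patterns because the private block only spans 172.16 through 172.31.
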


Create a Caddyfile with:

nano ~/.config/containers/storage/caddy/Caddyfile

and input the following:

{
        admin fd/6
}



# Replace the domain.tld below with your domain. If you plan on having more subdomains,
# use a wildcard (*.domain.tld) to get the certificate(s) for the (sub)domain(s).

domain.tld {

# If you have a different DNS provider for the domain, then use that.
        tls {
                dns namecheap {
                        api_key example
                        user example
                        api_endpoint https://api.namecheap.com/xml.response
                        client_ip example
                }
        }

# This hashed section is for those without a domain. Remember, a free
# domain is available through duckdns.org and is preferred.
# https://caddyserver.com/docs/automatic-https#local-https
# Caddy's reverse_proxy directive allows for automatic HTTPS certificate issuing.
# That makes accessing things with encryption easy.
#
# Thanks to Matt Holt for starting the Caddy project, and thanks to
# those who have contributed to making it better over time with him.
#
# localhost {
#      reverse_proxy pihole:80
# }

        bind fd/3 {
                protocols h1
        }
        bind fd/4 {
                protocols h1 h2
        }
        bind fdgram/5 {
                protocols h3
        }
        reverse_proxy pihole:80 {
                        header_up X-Forwarded-For {http.request.header.X-Real-IP}
        }
 }

More details for this section. If you don't care, continue on.
  • Remember that I included the DNS/Namecheap plugin with my Caddy build. If you’re using another provider, such as Cloudflare, use those parameters.

  • There are a LOT of ways to configure a Caddyfile. Read here if you decide you’d like to make any revisions or additions.

  • domain.tld (or localhost if used) is the site block in the Caddyfile.

  • The bind directive overrides the interface to which the server’s socket should bind. Normally, the listener binds to the empty (wildcard) interface. However, you may force the listener to bind to another hostname or IP instead. This directive accepts only a host, not a port. The port is determined by the site address (defaulting to 443).

  • fd/# refers to file descriptor. Those are passed from the service manager (like systemd) to the Caddy server, allowing it to accept connections without binding to specific addresses. Caddy recognizes these file descriptors through environment variables provided by systemd, which indicate the number and names of the sockets it should use.

    So file descriptors like fd/3, fd/4 and fd/5 are typically assigned to standard ports such as :80 (HTTP) and :443 (HTTPS). systemd creates these file descriptors and passes them to the Caddy server when it starts. Caddy then uses these file descriptors to listen for incoming connections on the specified ports without needing to bind to them directly. This allows for more efficient management of services and resources, for example, socket activation.

    The exact meaning of fd/6 can vary depending on the configuration of the service and the number of sockets that have been set up. In the context of Caddy, fd/6 can be used as a file descriptor for the admin API when socket activation is enabled. The admin API allows for dynamic configuration and management of the Caddy server while it is running.

  • The protocols h1, h2, and h3 refer to the different versions of the HTTP protocol that the server can support:

    h1: This stands for HTTP/1.1, which is the traditional version of the HTTP protocol. It introduced persistent connections and chunked transfer encoding, among other features.

    h2: This refers to HTTP/2, which is a major revision of the HTTP protocol. It improves performance by allowing multiple streams of data to be sent over a single connection, reducing latency and improving loading times. It also includes features like header compression and prioritization of requests.

    h3: This denotes HTTP/3, which is the latest version of the HTTP protocol. It is built on top of QUIC (Quick UDP Internet Connections), a transport layer network protocol that aims to reduce latency and improve security. HTTP/3 further enhances performance, especially in scenarios with high packet loss or latency.

  • Ideally, header_up X-Forwarded-For {http.request.header.X-Real-IP} would provide the source IP, thus allowing proper hostname resolution for clients. This does not work in this configuration. An alternative may be posted at a later point.


Create the dns.network file with:

nano ~/.config/containers/systemd/dns.network

and input:

[Network]
NetworkName=dns
Subnet=172.17.0.0/16

[Install]
WantedBy=default.target

Step 3: Pihole Configuration

Let's start with making the directories. Use:
mkdir -p ~/.config/containers/storage/pihole/etc-pihole ~/.config/containers/storage/pihole/etc-dnsmasq.d ~/.config/containers/storage/pihole/logs

We want to make a password that will be hidden with Podman Secrets for Pihole’s login. I highly recommend you use a password manager to save login information and automatically input it when you load the Pihole interface. Vaultwarden is possible to host on your computer for a free, local alternative to Bitwarden.

Use this command to create the environment variable that will be utilized by Podman Secrets. Replace yourpassword with your actual password:

export WEBPASSWORD=yourpassword
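If you haven't picked a password yet, one way to generate a strong random one in the shell (this assumes /dev/urandom, tr, and head, all standard on Linux):

```shell
# Generate a 24-character alphanumeric password and export it for the next step
WEBPASSWORD=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24)
export WEBPASSWORD
echo "${#WEBPASSWORD}"   # prints: 24
```
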

Now we need to create a secret. my_secret is the name Podman will reference when we add it to the .container file. Change it to whatever you want. WEBPASSWORD is the actual environment variable we created with export. Create the secret with:

podman secret create --env=true my_secret WEBPASSWORD

Remove the environment variable from the CLI with:

unset WEBPASSWORD

Now let’s create Pihole’s .container file:

nano ~/.config/containers/systemd/pihole.container

and put in the following:

[Container]
ContainerName=pihole
Environment=DNSMASQ_LISTENING=all VIRTUAL_HOST=#the domain used in Caddyfile
Secret=#the "my_secret" you made
Image=pihole/pihole:latest
PublishPort=53:53/tcp
PublishPort=53:53/udp
Network=dns.network
AddHost=unbound:172.17.0.10
IP=172.17.0.5
Volume=%h/.config/containers/storage/pihole/etc-pihole:/etc/pihole
Volume=%h/.config/containers/storage/pihole/etc-dnsmasq.d:/etc/dnsmasq.d
Volume=%h/.config/containers/storage/pihole/logs:/var/log/pihole/pihole_debug.log

[Install]
WantedBy=default.target

[Service]
Restart=always

IP= is setting a static IP for the container.


Step 4: Unbound

Create the unbound.container file:

nano ~/.config/containers/systemd/unbound.container

and put these contents in:

[Container]
ContainerName=unbound
Image=mvance/unbound:latest
Network=dns.network
AddHost=pihole:172.17.0.5
IP=172.17.0.10
Volume=%h/.config/containers/storage/unbound/logs/:/var/log/unbound/unbound.log
Volume=%h/.config/containers/storage/unbound/unbound.conf:/opt/unbound/etc/unbound/unbound.conf:ro

[Install]
WantedBy=default.target

[Service]
Restart=always

We should only need to configure the unbound.conf file:

nano ~/.config/containers/storage/unbound/unbound.conf

A guide to configuring unbound.conf for Pihole’s use is in the Pihole docs. There’s a LOT that can be configured in this file. MatthewVance’s image forwards queries to Cloudflare’s DNS servers over TLS by default; in other words, it does not act as a recursive server out of the box. The unbound.conf below will configure it as a recursive server.

If someone would like a guide to using a different Unbound image, such as madnuttah/unbound-docker, I can provide a separate post for it.
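Since the full unbound.conf isn't reproduced in this section, here is a minimal recursive sketch in the spirit of the Pi-hole docs' example. Treat every value as an assumption to verify against those docs and your image's defaults; with no forward-zone defined, Unbound resolves recursively from the root servers.

```conf
server:
    # Minimal recursive configuration, adapted from the Pi-hole docs' example.
    verbosity: 0
    # The container needs to be reachable by Pihole, so bind all interfaces inside it
    interface: 0.0.0.0
    port: 53
    do-ip4: yes
    do-udp: yes
    do-tcp: yes
    # Disable IPv6 unless your network fully supports it
    do-ip6: no
    prefer-ip6: no
    # Hardening
    harden-glue: yes
    harden-dnssec-stripped: yes
    use-caps-for-id: no
    # Avoid fragmentation issues on some networks
    edns-buffer-size: 1232
    # Fetch popular records before their TTL expires
    prefetch: yes
    num-threads: 1
    # Protect against DNS rebinding toward private ranges
    private-address: 192.168.0.0/16
    private-address: 172.16.0.0/12
    private-address: 10.0.0.0/8
```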


Create the logs folder:

mkdir ~/.config/containers/storage/unbound/logs

Step 5: Create the Socket

Thanks to Erik Sjölund for making this easy. Create a caddy.socket file in ~/.config/systemd/user with:

nano ~/.config/systemd/user/caddy.socket

and input:

[Socket]
BindIPv6Only=both

### sockets for the HTTP reverse proxy
# fd/3
ListenStream=[::]:80

# fd/4
ListenStream=[::]:443

# fdgram/5
ListenDatagram=[::]:443

### socket for the admin API endpoint
# fd/6
ListenStream=%t/caddy.sock
SocketMode=0600

[Install]
WantedBy=sockets.target

# For an explanation of systemd specifier "%t",
# see https://www.freedesktop.org/software/systemd/man/latest/systemd.unit.html

Step 6: Run the Containers (.services)

We need to have Podman generate the systemd .service files from the information we’ve provided in the ~/.config/containers/systemd directory. Run daemon-reload any time you change something in this directory and want it applied.

systemctl --user daemon-reload

Enable caddy.socket to listen to ports 80 and 443, then start it:

systemctl --user enable caddy.socket && systemctl --user start caddy.socket

Start Pihole and Unbound with:

systemctl --user start pihole.service && systemctl --user start unbound.service

If needed, manually create the network with the following and start the socket again:

podman network create --subnet=172.17.0.0/16 dns

You should be able to access and configure Pihole as you please.


Troubleshooting

I can't use port 53!

That’s likely due to one of two reasons.

  1. systemd-resolved is using it. systemd-resolved is a service in the systemd suite that provides network name resolution for local applications, acting as a caching and validating DNS resolver. It allows applications to resolve domain names to IP addresses using various protocols and can manage multiple DNS servers and search domains efficiently. It usually listens on 127.0.0.53:53. Unless you have a reason to keep DNSStubListener=yes, disable it so that Pihole can be the exclusive listener on port 53. You should only need the stub listener if you are running a service outside of a container; Podman’s default network mode, pasta, can communicate with the containers without it. For more information, JdeBP on stackexchange has a well-summed answer on resolved.
  2. Rootless Podman cannot bind to ports lower than 1024. There are a few ways around it, and this stackoverflow post lists a lot of them, but I’m going to list what I did. Use the command:

sudo sysctl -w net.ipv4.ip_unprivileged_port_start=53

           Is this secure? That’s for you to decide.

Ensure that port 53 isn’t in use already.

ss -tulnp | grep ':53'

Client names aren't resolving. They're the same set of numbers and letters.

Yeah. The only problem with this setup is that client names will not be properly resolved, and client requests will be labeled by the container ID. There are two ways I recommend correcting that in this instance: create a VLAN network and utilize that as the container's network, or use macvlan.
