I have rootless podman containers all connected to a network, with caddy proxying them by their hostnames. It seems the default networking mode doesn't preserve the source IP; instead, all traffic appears to originate internally from 10.89.1.98. Preserving that IP requires pasta or slirp4netns, which is incompatible with adding the container to a network. I've found a few solutions, but I'm having trouble deciding on the right way forward.
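For context, the setup is roughly this (image and network names are placeholders):

```
# Shared network; podman's internal DNS lets caddy reach each app by name
podman network create proxy-net

# A backend, reachable inside the network as "myapp"
podman run -d --name myapp --network proxy-net my-app-image

# Caddy publishes 80/443 on the host and proxies by container name.
# Rootless port publishing goes through rootlessport, which is where
# the source address gets rewritten to the gateway IP.
podman run -d --name caddy --network proxy-net \
  -p 80:80 -p 443:443 \
  -v ./Caddyfile:/etc/caddy/Caddyfile:ro \
  docker.io/library/caddy
```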
Using the host network or running caddy with host loopback abilities
This would require publishing ports on all my containers, which means I would lose the ability to reach containers by DNS name inside the podman network (see the sketch below). I have a lot of containers, and manually managing ports is not something I want to do again.
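A sketch of what that would look like, with hypothetical names, to illustrate the manual bookkeeping:

```
# Caddy on the host network can bind 80/443 directly, but it is no
# longer inside proxy-net, so container DNS names stop resolving for it
podman run -d --name caddy --network host \
  -v ./Caddyfile:/etc/caddy/Caddyfile:ro \
  docker.io/library/caddy

# Every backend now needs its own published loopback port...
podman run -d --name myapp -p 127.0.0.1:8081:80 my-app-image
podman run -d --name otherapp -p 127.0.0.1:8082:80 other-app-image

# ...and the Caddyfile must track each assignment by hand:
#   myapp.example.com { reverse_proxy 127.0.0.1:8081 }
#   other.example.com { reverse_proxy 127.0.0.1:8082 }
```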
Socket activation + libsdsock with caddy
Socket forwarding done through systemd. I've tested it and it works, but libsdsock needs the systemd libraries inside the container, and the official caddy image is built on Alpine, which uses a different init system and doesn't ship systemd. There are ways to get the systemd libs onto Alpine, but it would be quite hacky. A sketch of the working test is below.
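For reference, this is roughly what the working test looked like. Unit and file names are mine, and I'm going from memory on the details, so treat it as a sketch rather than a recipe:

```
# ~/.config/systemd/user/caddy.socket
[Socket]
ListenStream=80
ListenStream=443

[Install]
WantedBy=sockets.target
```

```
# ~/.config/containers/systemd/caddy.container (quadlet)
[Unit]
Requires=caddy.socket
After=caddy.socket

[Container]
Image=docker.io/library/caddy
Network=proxy-net
Volume=%h/caddy/Caddyfile:/etc/caddy/Caddyfile:ro
# libsdsock is preloaded so caddy's bind() calls get swapped for the
# sockets systemd passed in; the fd-to-address mapping is configured
# per libsdsock's README (I won't quote the env var syntax from memory)
Environment=LD_PRELOAD=/usr/lib/libsdsock.so
```

Because systemd holds the listening sockets on the host side, connections arrive with their real source address, while the container still sits on the podman network for DNS.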
Socket activation + libsdsock with another OS
Caddy provides ways to build with extensions on Debian, but it seems tricky to do in a Containerfile because of systemd init issues (see the sketch below).
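Something like this is what I had in mind; it's untested and the package list is a guess. The point is that libsdsock only needs the systemd libraries at runtime, not systemd running as PID 1:

```
# Containerfile (hypothetical sketch)
FROM docker.io/library/golang:bookworm AS build
RUN go install github.com/caddyserver/xcaddy/cmd/xcaddy@latest
# add --with=<plugin> flags here for extensions
RUN xcaddy build --output /caddy

FROM docker.io/library/debian:bookworm-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends libsystemd0 ca-certificates && \
    rm -rf /var/lib/apt/lists/*
COPY --from=build /caddy /usr/bin/caddy
ENTRYPOINT ["/usr/bin/caddy", "run", "--config", "/etc/caddy/Caddyfile"]
```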
Has anyone experienced this issue before? What direction did you take?
Please confirm for me: on the containerized services, the client traffic looks like the proxy is the source?
I haven't had that issue with caddy before, but maybe I'm using some particular config that makes sure it always passes the client IP.
Some services also need a setting to "know" they are behind a proxy and should look for the client address in headers like X-Forwarded-For.
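For what it's worth, even a bare reverse_proxy adds that header; caddy can only forward the address it sees, which behind rootless podman NAT is the gateway IP rather than the real client. Hostname here is hypothetical:

```
# Caddyfile: reverse_proxy sets X-Forwarded-For automatically
myapp.example.com {
    reverse_proxy myapp:80
}
```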
Yeah, the remote IP is always local. This comes from a podman configuration, not a caddy one. Setting the podman network mode to pasta or slirp4netns will show the proper remote IPs.
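Roughly, per my reading of the podman docs:

```
# slirp4netns with its own port handler preserves the client address
# (the default rootlessport handler rewrites it):
podman run -d --network slirp4netns:port_handler=slirp4netns \
  -p 80:80 -p 443:443 docker.io/library/caddy

# pasta also keeps the source address, but like slirp4netns it can't
# be combined with joining a named podman network:
podman run -d --network pasta -p 80:80 -p 443:443 docker.io/library/caddy
```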