celien
(Célien)
June 7, 2024, 4:00pm
1
Hi,
When trying to build a custom Caddy 2.8.4 image (for the Cloudflare plugin), I get an error.
Dockerfile:

```
FROM caddy:2.8.4-builder AS builder

RUN xcaddy build \
    --with github.com/caddy-dns/cloudflare

FROM caddy:2.8.4

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```
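For completeness, I build it with a plain `docker build` from the directory containing that Dockerfile (a sketch; the `-t` tag name is only an example):

```bash
docker build -t caddy-cloudflare .
```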
Error:

```
Step 1/4 : FROM caddy:2.8.4-builder AS builder
 ---> 45d732f429c5
Step 2/4 : RUN xcaddy build --with github.com/caddy-dns/cloudflare
 ---> Running in cae1b530a203
fatal error: nanotime returning zero

goroutine 1 gp=0xc02128 m=0 mp=0x3662b8 [running, locked to thread]:
runtime.throw({0x1f95fd, 0x17})
	runtime/panic.go:1023 +0x4c fp=0xc2c7a8 sp=0xc2c794 pc=0x51ecc
runtime.main()
	runtime/proc.go:192 +0x4a4 fp=0xc2c7ec sp=0xc2c7a8 pc=0x550f0
runtime.goexit({})
	runtime/asm_arm.s:859 +0x4 fp=0xc2c7ec sp=0xc2c7ec pc=0x8ba6c
The command '/bin/sh -c xcaddy build --with github.com/caddy-dns/cloudflare' returned a non-zero code: 2
```
I'm trying to build it on a Raspberry Pi 5 running Ubuntu 24.04 LTS and Docker 26.1.3.
I don't have any problem with 2.8.1.
Thank you in advance for your help.
matt
(Matt Holt)
June 7, 2024, 5:20pm
2
So, uhh, that's really weird. There's gotta be something going on. I'm pretty sure it's not a bug in xcaddy, but I've never seen this before, so I dunno what to tell you. Googling that error message only yields a couple of results in English for me:
Hey folks, I am having some issues - I just finished upgrading to the latest Ubuntu release and since, pi-hole is not resolving. I cannot even, for example, upload the debug log - Error message: curl: (6) Could not resolve host:...
opened 02:01PM - 02 Sep 19 UTC
closed 09:47PM - 02 Sep 19 UTC
OS-Windows
NeedsInvestigation
FrozenDueToAge
```
go version devel +03ac39ce5e Mon Sep 2 12:57:37 2019 +0000 linux/amd64
```…
I've started seeing panics like the one below when running some of my tests via `GOOS=windows go test -exec=wine`:
```
fatal error: nanotime returning zero
goroutine 1 [running, locked to thread]:
runtime.throw(0x60406f, 0x17)
/home/mvdan/tip/src/runtime/panic.go:774 +0x79 fp=0xc00003bf60 sp=0xc00003bf30 pc=0x4309f9
runtime.main()
/home/mvdan/tip/src/runtime/proc.go:152 +0x350 fp=0xc00003bfe0 sp=0xc00003bf60 pc=0x432500
runtime.goexit()
/home/mvdan/tip/src/runtime/asm_amd64.s:1375 +0x1 fp=0xc00003bfe8 sp=0xc00003bfe0 pc=0x45e2e1
exit status 2
```
If anyone wants to reproduce the crash, the Go package in question is https://github.com/mvdan/sh/tree/6af96bc17993a990fcd2c341c83a168a3158daa1/interp. I can provide a small reproducer if necessary, but I think the crash is pretty evident.
It looks like this was intentional as of https://go-review.googlesource.com/c/go/+/191759. The CL reads:
> 1. Wine is useful and developers will appreciate being able to debug stuff with it.
This gets a big +1 from me, of course :)
> (1) has been handled for some time by Wine with the introduction of the commit entitled "ntdll: Create thread to update user_shared_data time values when necessary".
Unfortunately, the commit message didn't include a direct link to said patch or fixed bug. All I could find is this patch maintained as part of wine-staging: https://github.com/wine-staging/wine-staging/blob/a46b9ff3dcb51398cd6626f7090d8885844e1b5b/patches/ntdll-User_Shared_Data/0003-ntdll-Create-thread-to-update-user_shared_data-time-.patch
I can further prove this; the crash happens on vanilla Wine 4.15, but doesn't happen on Wine 4.15 staging.
So I don't think that Wine has handled this for some time, as the CL describes. At best, a patch has been available on wine-staging since 2017, but most users don't run wine-staging. I would personally prefer sticking to vanilla Wine, as it's more stable, and I've had no reason to use staging patches until now.
This is not an issue to report a regression and demand a revert; I understand the code in the runtime package was a hack, and that in the long term we're better off without it. However, I think we should be aware that this will break a non-trivial amount of users. And, even if Wine 4.16 shipped tomorrow with a stable fix, it would at least be a year or two before most Linux users are using that newer Wine version.
At a minimum, I think we should keep an issue like this one open, to quickly point confused users at a temporary workaround. I'm all ears on that front; I can run wine-staging for now, but that's not an ideal solution.
The first link above looks closer / more relevant. But I don't really know how to help beyond this. You say it works with 2.8.1, all else being the same? Everything?
2 Likes
celien
(Célien)
June 7, 2024, 6:58pm
3
Hello, indeed, everything is the same, just replacing 2.8.4 with 2.8.1, so that's very strange.
celien
(Célien)
June 7, 2024, 8:11pm
4
And it's also not just the Cloudflare plugin; I have the same issue when trying to build Caddy with another plugin.
Mohammed90
(Mohammed Al Sahaf)
June 8, 2024, 5:10pm
5
I wonder if your issue and the one linked below are related. Can you try the listed solution and check again?
opened 08:01AM - 04 Jun 24 UTC
closed 01:47PM - 05 Jun 24 UTC
A build of a container image failed on my side like this:
<details><summary>Build failure logs</summary>
<p>
```
homelab/docker/caddy-cloudflare on master [$!] via orbstack via (nix-shell-env) took 1m35s
❯ ./build.sh
[+] Building 1.8s (19/20) docker-container:multiarch
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 649B 0.0s
=> [linux/arm64 internal] load metadata for docker.io/library/caddy:2.8.4 1.4s
=> [linux/amd64 internal] load metadata for docker.io/library/caddy:2.8.4 1.1s
=> [linux/amd64 internal] load metadata for docker.io/library/caddy:builder 1.1s
=> [linux/arm64 internal] load metadata for docker.io/library/caddy:builder 1.1s
=> [auth] library/caddy:pull token for registry-1.docker.io 0.0s
=> [linux/amd64 builder 1/2] FROM docker.io/library/caddy:builder@sha256:3db37b69486800de84befa0dd692b4023beb435421703bbdb6f84a8055ddf011 0.1s
=> => resolve docker.io/library/caddy:builder@sha256:3db37b69486800de84befa0dd692b4023beb435421703bbdb6f84a8055ddf011 0.1s
=> [internal] load build context 0.0s
=> => transferring context: 39B 0.0s
=> [linux/amd64 stage-1 1/4] FROM docker.io/library/caddy:2.8.4@sha256:f5af55ebb433cb652b27f5b5fc5732853bcb00afeddedb9579be6a10c9a42b1c 0.2s
=> => resolve docker.io/library/caddy:2.8.4@sha256:f5af55ebb433cb652b27f5b5fc5732853bcb00afeddedb9579be6a10c9a42b1c 0.2s
=> [linux/arm64 stage-1 1/4] FROM docker.io/library/caddy:2.8.4@sha256:f5af55ebb433cb652b27f5b5fc5732853bcb00afeddedb9579be6a10c9a42b1c 0.2s
=> => resolve docker.io/library/caddy:2.8.4@sha256:f5af55ebb433cb652b27f5b5fc5732853bcb00afeddedb9579be6a10c9a42b1c 0.2s
=> [linux/arm64 builder 1/2] FROM docker.io/library/caddy:builder@sha256:3db37b69486800de84befa0dd692b4023beb435421703bbdb6f84a8055ddf011 0.2s
=> => resolve docker.io/library/caddy:builder@sha256:3db37b69486800de84befa0dd692b4023beb435421703bbdb6f84a8055ddf011 0.1s
=> CACHED [linux/amd64 builder 2/2] RUN caddy-builder github.com/caddy-dns/cloudflare 0.0s
=> CACHED [linux/amd64 stage-1 2/4] COPY --from=builder /usr/bin/caddy /usr/bin/caddy 0.0s
=> CACHED [linux/amd64 stage-1 3/4] RUN apk add --no-cache tini 0.0s
=> CACHED [linux/amd64 stage-1 4/4] COPY signal-handler.sh / 0.0s
=> CACHED [linux/arm64 builder 2/2] RUN caddy-builder github.com/caddy-dns/cloudflare 0.0s
=> CACHED [linux/arm64 stage-1 2/4] COPY --from=builder /usr/bin/caddy /usr/bin/caddy 0.0s
=> ERROR [linux/arm64 stage-1 3/4] RUN apk add --no-cache tini 0.1s
------
> [linux/arm64 stage-1 3/4] RUN apk add --no-cache tini:
0.103 .buildkit_qemu_emulator: /bin/sh: Invalid ELF image for this architecture
------
Dockerfile:16
--------------------
14 | # 1. implementation: https://github.com/optiz0r/caddy-consul/
15 | # Override the entrypoint with a bash script which handles SIGHUP and triggers reload
16 | >>> RUN apk add --no-cache tini
17 | COPY signal-handler.sh /
18 | ENTRYPOINT ["/sbin/tini", "--"]
--------------------
ERROR: failed to solve: process "/dev/.buildkit_qemu_emulator /bin/sh -c apk add --no-cache tini" did not complete successfully: exit code: 255
```
</p>
</details>
<details><summary>My Dockerfile</summary>
<p>
```
ARG CADDY_VERSION=0.0.0
FROM caddy:builder AS builder
RUN caddy-builder \
github.com/caddy-dns/cloudflare
FROM caddy:${CADDY_VERSION}
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
# Reference:
# 1. discussion: https://github.com/caddyserver/caddy/issues/3967
# 1. implementation: https://github.com/optiz0r/caddy-consul/
# Override the entrypoint with a bash script which handles SIGHUP and triggers reload
RUN apk add --no-cache tini
COPY signal-handler.sh /
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["/signal-handler.sh", "caddy", "run", "--config", "/etc/caddy/Caddyfile", "--adapter", "caddyfile"]
```
</p>
</details>
I found this strange since if I run with arg version set to 2.8.1, the build works flawlessly. There must be something that broke (either my setup, or the image is wrong).
For verification, I went to my arm64 machine (RPi5), and here are my logs from there, using only the official caddy docker image
```bash
# this works perfectly!
milan@oberon ~ ❯ docker run --rm -ti --entrypoint /bin/sh caddy:2.8.1
Unable to find image 'caddy:2.8.1' locally
2.8.1: Pulling from library/caddy
94747bd81234: Pull complete
d679b063c3cc: Pull complete
9d3036766387: Pull complete
553446b932e7: Pull complete
4f4fb700ef54: Pull complete
Digest: sha256:7414db60780a20966cd9621d1dcffcdcef060607ff32ddbfde2a3737405846c4
Status: Downloaded newer image for caddy:2.8.1
/srv #
# this doesn't work, it's a SIGSEGV
milan@oberon ~ ❯ docker run --rm -ti --entrypoint /bin/sh caddy:2.8.4
milan@oberon ~ ❯ echo $?
139
```
I am not an expert, but I assume the arm64 image wasn't labeled correctly and is actually based on some other architecture.
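If you want to check whether your locally cached image is the mislabeled one before re-pulling, something like this should show it (a sketch; the tags are the ones from this thread, and on a Raspberry Pi 5 you'd expect linux/arm64):

```bash
# Architecture recorded in the locally cached images
docker image inspect caddy:2.8.4 --format '{{.Os}}/{{.Architecture}}'
docker image inspect caddy:2.8.4-builder --format '{{.Os}}/{{.Architecture}}'

# Architectures the registry currently publishes for the tag
docker manifest inspect caddy:2.8.4 | grep architecture
```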
3 Likes
celien
(Célien)
June 9, 2024, 8:50am
6
Hello, thanks for your reply. Indeed, re-downloading the 2.8.4-builder image solved the problem.
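For anyone else landing here: the fix was just forcing Docker to re-download the base images instead of reusing the stale cached copies, roughly like this (a sketch; the `-t` tag name is only an example):

```bash
# Pull fresh copies of the base images for this platform
docker pull caddy:2.8.4-builder
docker pull caddy:2.8.4

# Rebuild, ignoring the old cached layers
docker build --pull --no-cache -t caddy-cloudflare .
```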
system
(system)
Closed
July 9, 2024, 8:50am
7
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.