Closed
Labels
kind/bug (Categorizes issue or PR as related to a bug.)
Issue Description
When running a container with a health check defined, podman keeps reporting the following state; the status never leaves "starting" and no check results are ever logged:
"Health": {
    "Status": "starting",
    "FailingStreak": 0,
    "Log": null
},
Steps to reproduce the issue
- Run podman with a health check defined:
podman run --health-cmd 'pg_isready -U postgres' --health-interval 10s --health-timeout 5s --health-retries 30 -e POSTGRES_PASSWORD=password docker.io/library/postgres:15.2-alpine
- Check the state (a polling sketch follows these steps):
podman inspect $cid
"Health": { "Status": "starting", "FailingStreak": 0, "Log": null },
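The status can also be polled in a loop instead of dumping the full JSON each time. A minimal sketch, running the container detached (the -d flag and the Go-template field path are my additions; on some podman versions the template field is .State.Healthcheck rather than .State.Health):

cid=$(podman run -d --health-cmd 'pg_isready -U postgres' \
    --health-interval 10s --health-timeout 5s --health-retries 30 \
    -e POSTGRES_PASSWORD=password docker.io/library/postgres:15.2-alpine)
# Poll once per health interval; on the affected setup this keeps
# printing "starting" instead of eventually flipping to "healthy".
for _ in 1 2 3 4 5 6; do
    podman inspect --format '{{.State.Health.Status}}' "$cid"
    sleep 10
done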
Describe the results you received
podman inspect $cid
"Health": { "Status": "starting", "FailingStreak": 0, "Log": null },
Describe the results you expected
"Health": { "Status": "healthy", "FailingStreak": 0, "Log": null },
podman info output
host:
  arch: amd64
  buildahVersion: 1.30.0
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.1.7-r1
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: unknown'
  cpuUtilization:
    idlePercent: 99.65
    systemPercent: 0.12
    userPercent: 0.23
  cpus: 4
  databaseBackend: boltdb
  distribution:
    distribution: alpine
    version: 3.18.2
  eventLogger: file
  hostname: embed
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.1.38-0-lts
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 1238466560
  memTotal: 5821636608
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.8.4-r0
    path: /usr/bin/crun
    version: |-
      crun version 1.8.4
      commit: 5a8fa99a5e41facba2eda4af12fa26313918805b
      rundir: /tmp/podman-run-1000/crun
      spec: 1.0.0
      +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /tmp/podman-run-1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-r0
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 0
  swapTotal: 0
  uptime: 91h 19m 52.00s (Approximately 3.79 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /home/vagrant/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/vagrant/.local/share/containers/storage
  graphRootAllocated: 20646682624
  graphRootUsed: 10664951808
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /tmp/containers-user-1000/containers
  transientStore: false
  volumePath: /home/vagrant/.local/share/containers/storage/volumes
version:
  APIVersion: 4.5.1
  Built: 1688368964
  BuiltTime: Mon Jul 3 15:22:44 2023
  GitCommit: ""
  GoVersion: go1.20.5
  Os: linux
  OsArch: linux/amd64
  Version: 4.5.1
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
No
Additional environment details
Alpine 3.18.2 running in a Hyper-V VM