
Feature: add Podman support #171

Open
magnusolsson80 opened this issue Sep 20, 2021 · 11 comments
Labels
feature New feature request

Comments

@magnusolsson80

Description
Podman seems to build Squest fine, but the stack starts in a degraded state because nginx exits with an error.

Versions

  • OS: RHEL 8.4
  • Squest Git commit: 1f1ffe9
  • podman version: 3.2.3-0.10

Nginx error
$ podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8a4948e79642 k8s.gcr.io/pause:3.5 44 hours ago Up 12 minutes ago 0.0.0.0:8082->80/tcp, 0.0.0.0:3306->3306/tcp, 0.0.0.0:5672->5672/tcp, 0.0.0.0:15672->15672/tcp dd6ffe87837a-infra
9c38d65b3ad5 docker.io/library/mariadb:latest mysqld 44 hours ago Up 12 minutes ago 0.0.0.0:8082->80/tcp, 0.0.0.0:3306->3306/tcp, 0.0.0.0:5672->5672/tcp, 0.0.0.0:15672->15672/tcp squest_db_1
1116326b0d42 docker.io/phpmyadmin/phpmyadmin:5.1.0 apache2-foregroun... 44 hours ago Up 12 minutes ago 0.0.0.0:8082->80/tcp, 0.0.0.0:3306->3306/tcp, 0.0.0.0:5672->5672/tcp, 0.0.0.0:15672->15672/tcp squest_phpmyadmin_1
55f3847f2a55 docker.io/library/rabbitmq:3-management rabbitmq-server 44 hours ago Up 12 minutes ago 0.0.0.0:8082->80/tcp, 0.0.0.0:3306->3306/tcp, 0.0.0.0:5672->5672/tcp, 0.0.0.0:15672->15672/tcp squest_rabbitmq_1
4a48889d8c5e localhost/squest:latest bash -c /wait && ... 44 hours ago Up 12 minutes ago 0.0.0.0:8082->80/tcp, 0.0.0.0:3306->3306/tcp, 0.0.0.0:5672->5672/tcp, 0.0.0.0:15672->15672/tcp squest_celery-worker_1
b090ee07fe7f localhost/squest:latest bash -c /wait && ... 44 hours ago Up 12 minutes ago 0.0.0.0:8082->80/tcp, 0.0.0.0:3306->3306/tcp, 0.0.0.0:5672->5672/tcp, 0.0.0.0:15672->15672/tcp squest_celery-beat_1
96e485d9977d localhost/squest:latest /app/docker/entry... 44 hours ago Up 12 minutes ago 0.0.0.0:8082->80/tcp, 0.0.0.0:3306->3306/tcp, 0.0.0.0:5672->5672/tcp, 0.0.0.0:15672->15672/tcp squest_django_1
8dd0f0df26e3 docker.io/library/nginx:alpine nginx -c /etc/ngi... 44 hours ago Exited (1) 12 minutes ago 0.0.0.0:8082->80/tcp, 0.0.0.0:3306->3306/tcp, 0.0.0.0:5672->5672/tcp, 0.0.0.0:15672->15672/tcp squest_nginx_1

$ podman logs squest_nginx_1
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2021/09/18 10:48:19 [emerg] 1#1: open() "/etc/nginx/squest/nginx.conf" failed (13: Permission denied)
nginx: [emerg] open() "/etc/nginx/squest/nginx.conf" failed (13: Permission denied)
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/

@Sispheor
Contributor

Hi,
Could you try with the latest dev version?

BTW, Podman is not officially supported yet.

@Sispheor
Contributor

From the latest dev branch, just run:

docker-compose up

Then connect to port 8080 on your workstation.
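
For reference, a minimal sketch of that sequence (it assumes the repository is already cloned and the compose file sits at the repo root; branch name taken from the comment above):

cd squest
git checkout dev
git pull
docker-compose up
# the web UI should then answer on http://localhost:8080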

@magnusolsson80
Author

Hi Sispheor,

Thanks for your quick reply. Unfortunately, Podman is the only officially supported container solution on RHEL 8. :-(

I have run "podman-compose up --build -d" to no avail.

@Sispheor
Contributor

Do you have another host with docker-compose available, so you can check whether you get the same behavior?
That way we can isolate whether Podman is the problem.

@magnusolsson80
Copy link
Author

I installed Docker according to this link:
https://www.linuxtechi.com/install-docker-ce-centos-8-rhel-8/

It works, but I encountered another problem:

Successfully tagged squest:latest
WARNING: Image for service celery-worker was built because it did not already exist. To rebuild this image you must use docker-compose build or docker-compose up --build.

Running docker-compose up --build fixes it.
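
For reference, the explicit two-step equivalent (a sketch using standard docker-compose commands):

docker-compose build        # rebuild the squest image explicitly
docker-compose up -d        # then start the stack in the background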

@Sispheor
Contributor

On our side, the production server for Squest is a CentOS 8 host with Docker and Docker Compose.

Please pull the latest dev branch; we've fixed an issue with image upload on service creation.
The dev branch should be stable now.

@magnusolsson80
Author

Thanks for the quick fix. It seems to work now with Docker.

@Sispheor
Contributor

Perfect.
I'll rename your issue to "add Podman support".
We'll see if we support it later in development.

Thanks for testing Squest.

@Sispheor Sispheor changed the title Nginx exits with error under podman Feature: add Podman support Sep 20, 2021
@Sispheor Sispheor added the feature New feature request label Sep 20, 2021
@magnusolsson80
Author

Just an update.

The issue first reported is a known issue with SELinux and nginx:alpine. After disabling SELinux with setenforce 0 it runs, and everything looks good. However, the Squest server is not accessible.

The last commit where Squest runs and is accessible is d84254d. The last usable commit without any issues is af179cd. Everything after that is not accessible, even though the containers are running fine according to Podman.

Could you please guide me on how to debug this issue?

@Sispheor
Contributor

You can try adding the z flag at the end of the volume paths in the docker-compose files.

E.g.:

- ./docker/nginx.conf:/etc/nginx/squest/nginx.conf:ro,z
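
A minimal sketch of how that could look in the nginx service of the compose file (service name and surrounding structure are assumptions based on the container names earlier in this thread):

  nginx:
    image: nginx:alpine
    volumes:
      # ro,z = mount read-only and ask the runtime to apply a shared SELinux label
      - ./docker/nginx.conf:/etc/nginx/squest/nginx.conf:ro,z

The lowercase z applies a label shared between containers; an uppercase Z would make the label private to this one container.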

@magnusolsson80
Author

magnusolsson80 commented Sep 22, 2021

It is most certainly an SELinux issue:

type=AVC msg=audit(1632311788.046:855): avc: denied { associate } for pid=45869 comm="nginx" name="2" scontext=system_u:object_r:container_t:s0:c82,c904 tcontext=system_u:object_r:proc_t:s0 tclass=filesystem permissive=1
type=AVC msg=audit(1632311788.047:856): avc: denied { read } for pid=45869 comm="nginx" name="nginx.conf" dev="nvme0n1p4" ino=2498240 scontext=system_u:system_r:container_t:s0:c82,c904 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file permissive=1
type=AVC msg=audit(1632311788.047:857): avc: denied { open } for pid=45869 comm="nginx" path="/etc/nginx/squest/nginx.conf" dev="nvme0n1p4" ino=2498240 scontext=system_u:system_r:container_t:s0:c82,c904 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file permissive=1
type=SERVICE_STOP msg=audit(1632311811.989:858): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=fprintd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' UID="root" AUID="unset"
type=AVC msg=audit(1632312835.237:859): avc: denied { associate } for pid=47814 comm="apache2" name="2" scontext=system_u:object_r:container_t:s0:c82,c904 tcontext=system_u:object_r:proc_t:s0 tclass=filesystem permissive=1
type=AVC msg=audit(1632312835.416:860): avc: denied { read } for pid=47884 comm="nginx" name="nginx.conf" dev="nvme0n1p4" ino=2498240 scontext=system_u:system_r:container_t:s0:c82,c904 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file permissive=1
type=AVC msg=audit(1632312835.416:861): avc: denied { open } for pid=47884 comm="nginx" path="/etc/nginx/squest/nginx.conf" dev="nvme0n1p4" ino=2498240 scontext=system_u:system_r:container_t:s0:c82,c904 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file permissive=1
type=AVC msg=audit(1632312938.034:862): avc: denied { associate } for pid=50780 comm="apache2" name="2" scontext=system_u:object_r:container_t:s0:c82,c904 tcontext=system_u:object_r:proc_t:s0 tclass=filesystem permissive=1
type=AVC msg=audit(1632312938.270:863): avc: denied { read } for pid=50818 comm="nginx" name="nginx.conf" dev="nvme0n1p4" ino=2498240 scontext=system_u:system_r:container_t:s0:c82,c904 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file permissive=1
type=AVC msg=audit(1632312938.270:864): avc: denied { open } for pid=50818 comm="nginx" path="/etc/nginx/squest/nginx.conf" dev="nvme0n1p4" ino=2498240 scontext=system_u:system_r:container_t:s0:c82,c904 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file permissive=1
type=AVC msg=audit(1632313264.850:865): avc: denied { read } for pid=58337 comm="docker-entrypoi" name="mariadb-init.sql" dev="nvme0n1p4" ino=2498197 scontext=system_u:system_r:container_t:s0:c788,c815 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file permissive=1
type=AVC msg=audit(1632313264.850:866): avc: denied { open } for pid=58337 comm="docker-entrypoi" path="/docker-entrypoint-initdb.d/mariadb-init.sql" dev="nvme0n1p4" ino=2498197 scontext=system_u:system_r:container_t:s0:c788,c815 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file permissive=1
type=AVC msg=audit(1632313264.853:867): avc: denied { ioctl } for pid=58533 comm="mysql" path="/docker-entrypoint-initdb.d/mariadb-init.sql" dev="nvme0n1p4" ino=2498197 ioctlcmd=0x5401 scontext=system_u:system_r:container_t:s0:c788,c815 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file permissive=1
type=AVC msg=audit(1632313280.322:868): avc: denied { associate } for pid=58799 comm="apache2" name="2" scontext=system_u:object_r:container_t:s0:c788,c815 tcontext=system_u:object_r:proc_t:s0 tclass=filesystem permissive=1
type=SERVICE_START msg=audit(1632314023.732:869): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=fprintd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' UID="root" AUID="unset"
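
For completeness, a sketch of the generic steps for confirming and working around denials like these (standard audit/SELinux tooling, nothing Squest-specific):

sudo ausearch -m avc -ts recent     # list recent AVC denials from the audit log
sudo setenforce 0                   # permissive mode, for diagnosis only (what was done above)
# longer term: keep SELinux enforcing and rely on the :z / :Z volume flags suggested earlier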
