Running Greenbone CE 22.4 with podman-compose at Hetzner

Published on 22 February 2023 by Dominik Pataky

Since I’m a huge fan of the OpenVAS network vulnerability scanner, now called Greenbone Community Edition, I set aside some time to try the latest release, 22.4. This guide picks up the official docs written for docker-compose and ports them to podman-compose on a Fedora 37 system.

Please note: podman-compose 1.0.3 is the latest release in the repos, but it dates from December 2021. There have been significant updates since then, which can only be used by installing the development version from GitHub.

For reference, it’s good to start by reading the official documentation for the container-based setup: Greenbone Community Containers 22.4. It explains what is needed and how the containers are started and configured. But, well, it’s written for Docker Compose.

For my modifications, you can use the scripts from the repo.

Creating a server at Hetzner

The following config is sufficient to run an ad-hoc instance of GCE:

  • 4 vCPU
  • 8 GB RAM
  • 160 GB disk

At Hetzner, this results in the cpx31 server type, which is available at least in NBG. Start an instance with Fedora 37 as the base image. Configure an SSH key if possible.

It is also a good idea to create a volume for persistent data storage. Since all data created by the GCE is stored in podman’s volumes, it is enough to back up /var/lib/containers/storage/volumes/. Keep this in mind if you plan to destroy your VMs after a scan.
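As a sketch, such a backup can be a plain tar archive of that directory. The demo below builds a stand-in volumes directory so the commands are self-contained; the volume name gce_psql_data_vol is hypothetical. On the real host you would point VOLUMES_DIR at /var/lib/containers/storage/volumes and run the tar command with sudo, ideally while the composition is stopped:

```shell
# Demo: archive a podman volumes directory with tar.
# On the real host: VOLUMES_DIR=/var/lib/containers/storage/volumes (with sudo).
VOLUMES_DIR="$(mktemp -d)"
mkdir -p "$VOLUMES_DIR/gce_psql_data_vol/_data"     # stand-in for a real GCE volume
echo "demo" > "$VOLUMES_DIR/gce_psql_data_vol/_data/data.txt"

BACKUP="/tmp/gce-volumes-backup.tar.gz"
tar -czf "$BACKUP" -C "$(dirname "$VOLUMES_DIR")" "$(basename "$VOLUMES_DIR")"

# List what went into the archive
tar -tzf "$BACKUP"
```

Restoring on a fresh VM is the same in reverse: extract the archive back into /var/lib/containers/storage/ before starting the composition.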

Running podman-compose instead of docker-compose

You know Docker. But do you know Podman? Podman is the replacement for Docker in the Red Hat universe. Quote: “Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System. Containers can either be run as root or in rootless mode.” It benefits from very good integration with the newer developments and standards (especially interfaces) of the cloud-native ecosystem (and with Fedora, obviously). podman-compose is the equivalent of docker-compose.

To begin, install the packages python3-pip and podman and load the ip_tables kernel module.

~$ sudo dnf install -y python3-pip podman
~$ sudo modprobe ip_tables

Now continue with the installation of podman-compose from the official GitHub source tree and check that it is newer than 1.0.3:

~$ python3 -m pip install
~$ podman-compose --version
podman-compose version: 1.0.4
using podman version: 4.4.1

Have a look at the cloud-init.yaml file in the repo for the initial server setup.

Enable IPv6 inside your containers

One important thing to remember is that containers do not get an IPv6 address provisioned by default. The default Docker and podman setups only use an IPv4 range for new containers. As a result, the GCE OpenVAS scanner will not be able to scan or even ping hosts via their IPv6 addresses.

Red Hat has an excellent guide for IPv6 in Podman: How to configure Podman 4.0 for IPv6.

Here’s how to create an IPv6 network for GCE in this setup:

# Create an IPv6-configured network for podman pods
~$ sudo podman network create --ipv6 --gateway fd00::10:1:1 --subnet fd00::10:1:0/112 ipv6-net

# Check the new network
~$ sudo podman network inspect ipv6-net
[
     {
          "name": "ipv6-net",
          "id": "xxx",
          "driver": "bridge",
          "network_interface": "podman1",
          "created": "xxx",
          "subnets": [
               {
                    "subnet": "fd00::10:1:0/112",
                    "gateway": "fd00::10:1:1"
               },
               {
                    "subnet": "",
                    "gateway": ""
               }
          ],
          "ipv6_enabled": true,
          "internal": false,
          "dns_enabled": true,
          "ipam_options": {
               "driver": "host-local"
          }
     }
]

In the next section we will add the OpenVAS scanner service to this network, so it can reach and scan IPv6 hosts.

Installing Greenbone Community Edition

GCE version 22.4 made it really simple to start the whole setup in one go. Log into your server and run:

# Download the compose file
mkdir gce
cd gce
wget ''

Now we need to edit the compose file to enable IPv6 inside the OpenVAS container.

First, add a new networks section with two entries, e.g. at the end of the file:

networks:
  default:
  ipv6-net:
    external: true

Using external: true means that podman (or podman-compose in this case) will not manage this network itself. We created it by hand above; it is just used as given when you run podman-compose.

The default network is needed by podman-compose and is used for every service that has no specific networks configuration.

Second, configure the ospd-openvas service to use both the default and the ipv6-net networks:

  ospd-openvas:
    image: greenbone/ospd-openvas:stable
    restart: on-failure
    init: true
    hostname: ospd-openvas.local
    networks:
      - ipv6-net
      - default
    cap_add:
      - NET_ADMIN # for capturing packets in promiscuous mode

With this modification done, we can now continue to deploy the composition:

# Pull all images and call the compose project "gce"
sudo podman-compose -f docker-compose-22.4.yml -p gce pull

# Now run all "services" defined in the compose file
sudo podman-compose -f docker-compose-22.4.yml -p gce up -d

This should result in a complete container composition. At the beginning the feeds are downloaded and parsed, which can take some time (even if you start with existing volumes).

The web interface (powered by the Greenbone Security Assistant) listens on http://localhost:9392 for incoming web browser requests. You could for example forward the port to your local machine, or …
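For the forwarding variant, an entry in your local ~/.ssh/config does the job. The host alias gce-scanner and the address 203.0.113.10 below are placeholders for your server:

```
Host gce-scanner
    HostName 203.0.113.10
    User root
    LocalForward 9392 localhost:9392
```

With that in place, `ssh -N gce-scanner` makes the web UI available at http://localhost:9392 on your own machine without exposing it to the internet.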

nginx as proxy

In the simplest setup, we can use nginx on unencrypted port 80 to pass our requests from the internet onto the GCE web interface.

# Install nginx
~$ sudo dnf install -y nginx

# Edit default config according to the code block below
~$ sudo vim /etc/nginx/nginx.conf

# After editing, restart
~$ sudo systemctl restart nginx

In the config, add these three lines into the default server block:

server {
    listen       80;
    listen       [::]:80;
    server_name  _;
    root         /usr/share/nginx/html;

    location / {
        proxy_pass http://localhost:9392;
    }

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    error_page 404 /404.html;
        location = /404.html {
    }

    error_page 500 502 503 504 /50x.html;
        location = /50x.html {
    }
}

After restarting nginx, open your public IP in your web browser. The GSA web UI should present a login screen, which you can log in to with admin:admin. The Greenbone docs explain how to set up an admin user.

Fixes and bugs during setup

Not everything works smoothly on the first run if you don’t use Docker. Read this if you want to know more about some caveats that appeared during testing, or if you found this article via a search engine.

Using the IPv6 registry

When I started writing about this setup, the test VM at Hetzner was IPv6-only. Just FYI: the default image registry does not support IPv6. Solution:

~$ sudo vim /etc/containers/registries.conf

unqualified-search-registries = [""]

Now all image pulls are run against the IPv6-capable registry.

Fixing error with ip_tables kernel module

If you happen to stumble upon an error containing “ip_tables” and “Operation not permitted”, simply run sudo modprobe ip_tables on the host.
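To make the module load survive reboots, a systemd modules-load.d drop-in can be used; the filename below is arbitrary, only the .conf suffix matters:

```
# /etc/modules-load.d/ip_tables.conf
ip_tables
```

systemd-modules-load reads this directory at boot and loads every module listed, so the manual modprobe is only needed once.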

Excursion: Fixing the networks aliases “bug”

This bug appears if you use podman-compose in version 1.0.3, e.g. from the Fedora repos or from PyPI.

Normally, according to the compose networks spec, the compose file should support the following service definition with aliases:

mqtt-broker:
  restart: on-failure
  image: greenbone/mqtt-broker
  ports:
    - 1883:1883
  networks:
    default:
      aliases:
        - mqtt-broker
        - broker

This would then allow the mqtt-broker to also be reachable via the broker hostname in the default network. podman-compose, however, will fail during startup with a KeyError as long as the networks entry exists. This is a known bug with an existing fix, but the fix is not yet part of the upstream 1.0.3 release of podman-compose.

If you try to work around this error, e.g. by cutting the networks block from the service definition as follows:

  [...]

  mqtt-broker:
    restart: on-failure
    image: greenbone/mqtt-broker
    ports:
      - 1883:1883

  [...]

… this breaks the notus-scanner service because it uses the broker hostname alias.

The container will fail at start and try to restart, looping forever. You will see this in the logs:

2023-02-21 08:43:10,555 notus-scanner: INFO: (notus.scanner.daemon) Starting notus-scanner version 22.4.4.
Traceback (most recent call last):
  File "/usr/local/bin/notus-scanner", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.9/dist-packages/notus/scanner/", line 154, in main
    run_daemon(
  File "/usr/local/lib/python3.9/dist-packages/notus/scanner/", line 116, in run_daemon
    daemon = MQTTDaemon(client)
  File "/usr/local/lib/python3.9/dist-packages/notus/scanner/messaging/", line 160, in __init__
    self._client.connect()
  File "/usr/local/lib/python3.9/dist-packages/notus/scanner/messaging/", line 66, in connect
    return super().connect(
  File "/usr/local/lib/python3.9/dist-packages/paho/mqtt/", line 914, in connect
    return self.reconnect()
  File "/usr/local/lib/python3.9/dist-packages/paho/mqtt/", line 1044, in reconnect
    sock = self._create_socket_connection()
  File "/usr/local/lib/python3.9/dist-packages/paho/mqtt/", line 3685, in _create_socket_connection
    return socket.create_connection(addr, timeout=self._connect_timeout, source_address=source)
  File "/usr/lib/python3.9/", line 822, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
  File "/usr/lib/python3.9/", line 953, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known

At the time of debugging I did not yet know that the broker alias was the culprit. To find the problem, we need to get into the DNS server of the network and look at its transactions.

Steps to debug:

  1. Run podman-compose -f docker-compose-22.4.yml -p gce up -d as explained in the Greenbone tutorial (here, the project is called gce)
  2. Wait for it to complete
  3. Run ps aux | grep dns. Result: ... 636384 ... /usr/libexec/podman/aardvark-dns ...
  4. Enter the network namespace of the DNS server: sudo nsenter -t 636384 -n
  5. Run tcpdump -i any "port 53"

You will now see queries like these:

08:58:13.188676 podman1 In  IP > 34972+ A? broker. (24)
08:58:13.188689 veth4 In  IP > 27804+ AAAA? broker. (24)
08:58:13.188689 podman1 In  IP > 27804+ AAAA? broker. (24)
08:58:13.188718 podman1 Out IP > 34972 NXDomain 0/0/0 (24)
08:58:13.188719 veth4 Out IP > 34972 NXDomain 0/0/0 (24)
08:58:13.188739 podman1 Out IP > 27804 NXDomain 0/0/0 (24)
08:58:13.188739 veth4 Out IP > 27804 NXDomain 0/0/0 (24)
08:58:13.401190 veth4 In  IP > 24504+ A? broker.dns.podman. (35)
08:58:13.401195 podman1 In  IP > 24504+ A? broker.dns.podman. (35)
08:58:13.401224 veth4 In  IP > 56231+ AAAA? broker.dns.podman. (35)
08:58:13.401224 podman1 In  IP > 56231+ AAAA? broker.dns.podman. (35)
08:58:13.401317 podman1 Out IP > 24504 NXDomain 0/0/0 (35)
08:58:13.401320 veth4 Out IP > 24504 NXDomain 0/0/0 (35)
08:58:13.401352 podman1 Out IP > 56231 NXDomain 0/0/0 (35)
08:58:13.401353 veth4 Out IP > 56231 NXDomain 0/0/0 (35)

Then you know that some service asks for broker as a hostname but gets no IP back, which breaks the service.
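A quicker way to confirm this from inside the affected container (a sketch; it assumes getent is present in the image) is to resolve the alias directly, without tcpdump:

```shell
# Does the "broker" alias resolve? An NXDomain answer from aardvark-dns
# shows up here as "name not known".
getent hosts broker || echo "broker: name not known"
getent hosts mqtt-broker || echo "mqtt-broker: name not known"
```

On a broken 1.0.3 setup the first lookup fails while mqtt-broker resolves fine, which points straight at the missing alias.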

After fixing podman-compose (by either applying the linked patch or by installing 1.0.4 from GitHub) so that it sets up hostname aliases as expected, the result is also visible in the aardvark config:

~$ cat /run/user/1000/containers/networks/aardvark-dns/gce_default

69c877b0d2a7f5cc73691935775fb3cc4eea9c4e80a195df42f9b0f1fc4bc286  gce_redis-server_1,redis-server,69c877b0d2a7
9ebc7c0b4bd50594b5dbf523e7afbf2c8864cd711273b66769a10e49e9427588  gce_pg-gvm_1,pg-gvm,9ebc7c0b4bd5
3f010e6073a63b5d0b2b0dfcc293348193aa51303e1963f7b5a1ffca0659a0c2  gce_mqtt-broker_1,mqtt-broker,mqtt-broker,broker,3f010e6073a6
c2238eebc05abcc6d7e45b7818e7ba021a92d3dd7ae36ff782461ebef91b9e48  gce_ospd-openvas_1,ospd-openvas,c2238eebc05a
139bbe3922e6a80258d2383dfb15b536611039cf77c2d6a1d177aeb6dd5a9dbd  gce_notus-scanner_1,notus-scanner,139bbe3922e6

Before the fix, the line only contained gce_mqtt-broker_1,mqtt-broker,3f010e6073a6, without the broker alias. Now a query for broker or broker.dns.podman is answered with the correct IP and the service can start.

Running podman in rootless mode

If you choose to run podman-compose as your local user, without sudo, you are running in rootless mode. Sadly, this breaks the ospd-openvas service, because it needs the Linux capabilities NET_ADMIN and NET_RAW. These cannot be granted to the container by a local user without root privileges.
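A quick way to see what a rootless session can even hand out is the capability mask of your own shell in /proc (a check, not a fix):

```shell
# Show the effective and bounding capability masks of the current process.
# In an unprivileged session the bounding set does not include
# CAP_NET_ADMIN/CAP_NET_RAW, so rootless podman cannot grant them either.
grep -E 'Cap(Eff|Bnd)' /proc/self/status
```

Compare the output with a root shell: as root, CapEff is a full mask, which is why the sudo variant of the setup works.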

Feb 22 09:21:08 gce-scanner gce_gvmd_1[32766]: event task:MESSAGE:2023-02-22 09h21.07 UTC:275: Status of task target cred (92070930-6e3f-4e52-a0ab-33ae7c709fc1) has changed to Queued
Feb 22 09:21:13 gce-scanner gce_gvmd_1[32766]: event task:MESSAGE:2023-02-22 09h21.12 UTC:275: Status of task target cred (92070930-6e3f-4e52-a0ab-33ae7c709fc1) has changed to Running
Feb 22 09:21:25 gce-scanner gce_mqtt-broker_1[31649]: 1677057685: New connection from on port 1883.
Feb 22 09:21:26 gce-scanner gce_mqtt-broker_1[31649]: 1677057686: New client connected from as 8ad84798-b89c-4ebe-9a82-ae9c48499cd3 (p5, c1, k0).
Feb 22 09:21:32 gce-scanner kernel: traps: openvas[34986] trap int3 ip:7f936eb0d332 sp:7ffc7f1d8d70 error:0 in[7f936ead0000+88000]
Feb 22 09:21:32 gce-scanner audit[34986]: ANOM_ABEND auid=1000 uid=101000 gid=101000 ses=3 subj=unconfined_u:system_r:spc_t:s0 pid=34986 comm="openvas" exe="/usr/local/sbin/openvas" sig=5 res=1
Feb 22 09:21:32 gce-scanner systemd[1]: Started systemd-coredump@3-35036-0.service - Process Core Dump (PID 35036/UID 0).
Feb 22 09:21:32 gce-scanner audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-coredump@3-35036-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 22 09:21:32 gce-scanner systemd-coredump[35037]: Process 34986 (openvas) of user 101000 dumped core.
    Module /usr/local/sbin/openvas with build-id d559f161da27b20db0aa4fb37d25c33353c965d8
    Module /lib/x86_64-linux-gnu/ with build-id 897901ffceb83e3b4a9cc4a5ad5120f7e2204bf4
    Module /lib/x86_64-linux-gnu/ with build-id bab4b71665bcc7f3f9b142804534c6de15b6e824
    Module /lib/x86_64-linux-gnu/ with build-id d686b7ffe2e90b9aee46dc134879b598bc5c6319
    Module /lib/x86_64-linux-gnu/ with build-id 596409bc4e94583ef18f141c9b941a46540868ee
    Module /usr/lib/x86_64-linux-gnu/ with build-id 5cfe96fc398b43ac08c66e1cb91d953573d3b6f8
    Module /usr/lib/x86_64-linux-gnu/ with build-id 2421572a83e89276615b173445a81cc1b7db8852
    Module /usr/lib/x86_64-linux-gnu/ with build-id 79ce9f6175e6e5fc391962360c1ee1d981b0e82d
    Module /usr/lib/x86_64-linux-gnu/ with build-id 07e7de35c15a6d5b99a003e489ac7a086bbf4e72
    Module /usr/lib/x86_64-linux-gnu/ with build-id 964039e18af4b59e5a11f4ad26e9aa5e6a2d5db7
    Module /usr/lib/x86_64-linux-gnu/ with build-id 41c3563e0a41dc8ce48e990368e7a5640eeeea90
    Module /lib/x86_64-linux-gnu/ with build-id 02fef90b340c553239e4bf4b2213cc4cb49787eb
    Module /usr/lib/x86_64-linux-gnu/ with build-id b91e922b179e803eff9c0e9b13dc272ad5ee8e82
    Module /usr/lib/x86_64-linux-gnu/ with build-id f4b4b799dd49f037d84a30b4ce12a862cc9b2b84
    Module /usr/lib/x86_64-linux-gnu/ with build-id 96cbc064adc5cee44a13796504e4da59e2ac14c4
    Module /usr/lib/x86_64-linux-gnu/ with build-id af1c8261c6467749989b26881ab2f555b740d005
    Module /usr/lib/x86_64-linux-gnu/ with build-id 17990e69b4b1eabb6f872b3c449e09ef324af8f2
    Module /usr/lib/x86_64-linux-gnu/ with build-id 416e6cef8181f16b315ffdc0b0478bfe39e18646
    Module /usr/lib/x86_64-linux-gnu/ with build-id 14cf167ce7d2301e0d22c5c8636a418df713f39c
    Module /usr/lib/x86_64-linux-gnu/ with build-id ac61dd094547fee5c50c027fdec3ca73dcbb0b45
    Module /usr/lib/x86_64-linux-gnu/ with build-id e9ca493410fa013ab699c0c93006c8aea9c83306
    Module /usr/lib/x86_64-linux-gnu/ with build-id c2b99909147ad65e67d408d9ed118f290485f0a7
    Module /lib/x86_64-linux-gnu/ with build-id 5d67991a152e0b62f982a0e4110cc2262850c788
    Module /usr/lib/x86_64-linux-gnu/ with build-id f871bbd529a02abd860f0d16b842b5b20234cb49
    Module /usr/lib/x86_64-linux-gnu/ with build-id 59e35bfba32726ab7078cc70135a7ee53cc99996
    Module /usr/lib/x86_64-linux-gnu/ with build-id 52ea3338777de1e0c1d8c7e50d1162499ac4d71d
    Module /lib/x86_64-linux-gnu/ with build-id 82845af78df2c2866f440f3cae5a8103bd3b5acb
    Module /lib/x86_64-linux-gnu/ with build-id cc3fa4080d349d749e3045798819b0b5299618b0
    Module /usr/lib/x86_64-linux-gnu/ with build-id 3016ae73af115f3f2de9027a2001b6575ce9cae2
    Module /usr/lib/x86_64-linux-gnu/ with build-id 97bb973357d7c0ca4879cdf5569e851e409406bf
    Module /usr/lib/x86_64-linux-gnu/ with build-id 3d01b8b8886c2c75d008ee6730fd7dc08e95c330
    Module /usr/lib/x86_64-linux-gnu/ with build-id 384e87a72601f3073b4b4735e317bbb9ae49666a
    Module /usr/lib/x86_64-linux-gnu/ with build-id bf56231497a42ae1749e90a19bee688360326609
    Module /usr/lib/x86_64-linux-gnu/ with build-id cca6f877bc1e562d1d6755fff277874c02e921ae
    Module /usr/lib/x86_64-linux-gnu/ with build-id bab6dc81f1700a29689f7b56dfc0670974855423
    Module /usr/lib/x86_64-linux-gnu/ with build-id 5ea683571e5cb304f3da625c7443c812b93f297b
    Module /lib/x86_64-linux-gnu/ with build-id 52435fe86029575ca0ae5598c2ce822ff0a28f99
    Module /usr/lib/x86_64-linux-gnu/ with build-id b1614311caed1e1763894916889e9f7a9589207d
    Module /usr/lib/x86_64-linux-gnu/ with build-id 64d887a0a30b7e670e4b4b1a82b90689f0ed24b2
    Module /usr/lib/x86_64-linux-gnu/ with build-id 0da39989e9c5f8b2b47a0b54e0e8fb0aa0fe9f1e
    Module /usr/lib/x86_64-linux-gnu/ with build-id 181bc311fe813437349649028beba87f65418438
    Module /usr/lib/x86_64-linux-gnu/ with build-id 2a813fb8ed98bbb1abee3e240be4fc4a2c80c97f
    Module /usr/lib/x86_64-linux-gnu/ with build-id bc104618645979735399d88df5bb3b1a81753238
    Module /usr/lib/x86_64-linux-gnu/ with build-id a0fd01631c795d4955e5f6bef9f7e0367b20d13b
    Module /usr/lib/x86_64-linux-gnu/ with build-id da67a5a1577cbac716baeae27c7617db12141236
    Module /usr/lib/x86_64-linux-gnu/ with build-id daa6f7cf61ad6973e3bc396e76be234a1dd0cfc1
    Module /usr/lib/x86_64-linux-gnu/ with build-id 51b3fccda994c84c9ac6daa3bb7d084aa28f9e5c
    Module /usr/lib/x86_64-linux-gnu/ with build-id 96a47295f9d2322a6bf6116d6a5d386a6e9ab11d
    Module /lib/x86_64-linux-gnu/ with build-id 8f6561f7a9b3a9a4bbcd268d5afa265ee3ab2523
    Module /usr/lib/x86_64-linux-gnu/ with build-id 8b9c600a4664cab2267d50ff8ceccea668d45e2b
    Module /usr/lib/x86_64-linux-gnu/ with build-id fc2ed339faf8a706b1178d73c52f62eb895f8aa7
    Module /usr/lib/x86_64-linux-gnu/ with build-id 882598e79410515498f21c7fdd8f126b2a27b230
    Module /usr/lib/x86_64-linux-gnu/ with build-id 6db7f0c983ab1f7cc172b3ca784a3831c2b6d081
    Module /usr/lib/x86_64-linux-gnu/ with build-id 5f391bc2804b8f5e144cb1115378b6b7ed3e9439
    Module /lib/x86_64-linux-gnu/ with build-id 6d245aa7fed087c98525c2e9d3cf4d3d09addf5c
    Module /usr/lib/x86_64-linux-gnu/ with build-id 103e259ea7f013d891e078d4f939a73e19ea91aa
    Module /usr/lib/x86_64-linux-gnu/ with build-id 03c5783aac764fb54f1dc56a3bd3518d600c3885
    Module /lib/x86_64-linux-gnu/ with build-id 665f1b80589ca7b4d7f106afafd6be3b3e17706b
    Module /usr/lib/x86_64-linux-gnu/ with build-id d0e0fbe8b2783580f652ad6c14f3ef21cc4d223b
    Module /usr/lib/x86_64-linux-gnu/ with build-id 4c787a30d4430b9af719f69ae8428ba90a81c6b0
    Module /lib/x86_64-linux-gnu/ with build-id 0c9ba3bddb62dc87bf94b08198882ebb8f0637df
    Module /usr/lib/x86_64-linux-gnu/ with build-id b5e44f00687c4dfb2f70a3693b6a81c70c4a11d5
    Module /lib/x86_64-linux-gnu/ with build-id 46b3bf3f9b9eb092a5c0cf5575e89092f768054c
    Module /usr/lib/x86_64-linux-gnu/ with build-id 6b9d9b5a4216cd0bb277fc492dcdaafb1e04d4a6
    Module /usr/lib/x86_64-linux-gnu/ with build-id 0424dcfa4c433dc1a2a8850be15c1f61781a6a26
    Module /usr/lib/x86_64-linux-gnu/ with build-id 76b272c84d8982cda27ba85ad1fd611c828cbfd7
    Module /usr/local/lib/ with build-id 31e3b0c90ef62dbdd25838a615e05977516f6ee8
    Module /lib/x86_64-linux-gnu/ with build-id 1d6ff6c4c69f3572486bc27b8290ee932b0b9f39
    Module /lib/x86_64-linux-gnu/ with build-id 4a0ef131f8d49ac03bef8226aa6141a9426eccc4
    Module /usr/lib/x86_64-linux-gnu/ with build-id 968db03b42bf750c44ddfbfd10cfb706b43d53bd
    Module /lib/x86_64-linux-gnu/ with build-id b503275bf9fee51581fdceef97533b194035b4f7
    Module /usr/lib/x86_64-linux-gnu/ with build-id 2bdfb27a8005a1aec6854d25df10975ba7877177
    Module /usr/local/lib/ with build-id 955b30e4964e2a1673e09c980428535408aac0c7
    Module /usr/lib/x86_64-linux-gnu/ with build-id 1257f6b9bf1caebe75cb1e348a874209021a712d
    Module /usr/local/lib/ with build-id 35aaae62aa3e18ffb5b290626a4f1f7fbb1cdf60
    Module /lib/x86_64-linux-gnu/ with build-id 255e355c207aba91a59ae1f808e3b4da443abf0c
    Module /usr/lib/x86_64-linux-gnu/ with build-id ac2896f80896248f3a569ed03d7ac5876403b0a2
    Module /usr/lib/x86_64-linux-gnu/ with build-id f1c4a7976cda0683d976bbdf5fba08a41ba63fb4
    Module /usr/lib/x86_64-linux-gnu/ with build-id b04359610c861c7526a6e6c03b4500cd718116e3
    Module /usr/local/lib/ with build-id be72409ed96c9300d284687c5cf7c2d56f535906
    Module /usr/local/lib/ with build-id b1156cd6bb623264da31102382e3873e95166d4d
    Module /usr/local/lib/ with build-id a1cb75c5e09b925974c09325b66b1edaac0dc373
    Module /lib/x86_64-linux-gnu/ with build-id e25570740d590e5cb7b1a20d86332a8d1bb3b65f
    Module with build-id 08127f3ad7e3eb923fe3c16070d3452ba4dc49f8
    Stack trace of thread 233:
    #0  0x00007f936eb0d332 n/a (/usr/lib/x86_64-linux-gnu/ + 0x59332)
    ELF object binary architecture: AMD x86-64
Feb 22 09:21:32 gce-scanner systemd[1]: systemd-coredump@3-35036-0.service: Deactivated successfully.
Feb 22 09:21:32 gce-scanner audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-coredump@3-35036-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 22 09:21:32 gce-scanner gce_mqtt-broker_1[31649]: 1677057692: Client 8ad84798-b89c-4ebe-9a82-ae9c48499cd3 closed its connection.
Feb 22 09:21:33 gce-scanner gce_ospd-openvas_1[32168]: OSPD[2] 2023-02-22 09:21:33,918: INFO: (ospd.ospd) 201cef55-1ace-448f-a533-c318f7f47ad0: Host scan finished.
Feb 22 09:21:33 gce-scanner gce_ospd-openvas_1[32168]: OSPD[2] 2023-02-22 09:21:33,920: INFO: (ospd.ospd) 201cef55-1ace-448f-a533-c318f7f47ad0: Scan finished.

Fixing a failing bind to port 80 in GSA

If you happen to fix the capability problems in rootless mode, you might still encounter this error. Due to running podman-compose without root, binding to port 80 might be prohibited. As a fallback, add the line 9392:9392 to the ports list and change the old entry to 9391. The binary inside the container will log something like Cannot bind to port 80, using 9392 instead.

  [...]

  gsa:
    image: greenbone/gsa:stable
    restart: on-failure
    ports:
      - 9391:80
      - 9392:9392
    volumes:
      - gvmd_socket_vol:/run/gvmd
    depends_on:
      - gvmd

  [...]