johns67w - Sat, 2024/01/20 - 00:30
Hi, I am awaiting account activation; in the meantime I would appreciate some tech support please.
My current setup:
LXC (via Proxmox) 8, just updated from PVE 7, but the issue was the same before upgrading to PVE 8.
turnkey-version:
The setup worked, but my homelab has been disconnected for about a year. I tried to turn the server back on, but cannot access Webmin.
I tried to run 'service stunnel4 restart' and got this response:
Failed to restart stunnel4.service: Unit stunnel4.service is masked
root@gtcloud ~# service stunnel4 status
* stunnel4.service
     Loaded: masked (Reason: Unit stunnel4.service is masked.)
     Active: inactive (dead)
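From what I've read, a masked unit has been deliberately disabled (systemd links it to /dev/null), and on TurnKey the bare stunnel4.service seems to be masked on purpose, with templated per-service instances used instead - so presumably these are the units to look at (commands I believe should list them):

    systemctl list-units --all 'stunnel4*'
    systemctl cat stunnel4@webmin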
I have tried to follow the steps in this thread: Issue accessing webmin and webshell on Turnkey File Server | TurnKey GNU/Linux (turnkeylinux.org) - no luck, but results below:
root@gtcloud ~# systemctl stop webmin shellinabox stunnel4@webmin stunnel4@shellinabox
root@gtcloud ~# mkdir -p /var/lib/stunnel4
root@gtcloud ~# rm -f /var/lib/stunnel4/*.pid
root@gtcloud ~# chown stunnel4:stunnel4 /var/lib/stunnel4
root@gtcloud ~# chmod 0755 /var/lib/stunnel4
root@gtcloud ~# systemctl start webmin shellinabox
root@gtcloud ~# systemctl status webmin shellinabox stunnel4@webmin stunnel4@shellinabox | grep '^*' -A5
* webmin.service - Webmin server daemon
     Loaded: loaded (/lib/systemd/system/webmin.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/webmin.service.d
             `-override.conf
     Active: active (running) since Fri 2024-01-19 21:38:34 GMT; 21s ago
    Process: 238149 ExecStart=/usr/share/webmin/miniserv.pl /etc/webmin/miniserv.conf (code=exited, status=0/SUCCESS)
--
* shellinabox.service - Shell In A Box Daemon (aka WebShell)
     Loaded: loaded (/etc/init.d/shellinabox; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2024-01-19 21:38:31 GMT; 28s ago
    Process: 238150 ExecStart=/etc/init.d/shellinabox start (code=exited, status=0/SUCCESS)
      Tasks: 2 (limit: 4532)
     Memory: 1.6M
--
* stunnel4@webmin.service - Universal SSL tunnel for network daemons (webmin)
     Loaded: loaded (/lib/systemd/system/stunnel4@.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2024-01-19 21:38:31 GMT; 30s ago
    Process: 238141 ExecStart=/usr/bin/stunnel4 /etc/stunnel/webmin.conf (code=exited, status=0/SUCCESS)
   Main PID: 238145 (stunnel4)
      Tasks: 2 (limit: 4532)
--
* stunnel4@shellinabox.service - Universal SSL tunnel for network daemons (shellinabox)
     Loaded: loaded (/lib/systemd/system/stunnel4@.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2024-01-19 21:38:31 GMT; 33s ago
    Process: 238142 ExecStart=/usr/bin/stunnel4 /etc/stunnel/shellinabox.conf (code=exited, status=0/SUCCESS)
   Main PID: 238147 (stunnel4)
      Tasks: 2 (limit: 4532)
root@gtcloud ~# df -h /
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/pve-vm--201--disk--0   95G   32G   60G  35% /
root@gtcloud ~# df -i /
Filesystem                         Inodes   IUsed   IFree IUse% Mounted on
/dev/mapper/pve-vm--201--disk--0  6291456 1239621 5051835   20% /
root@gtcloud ~# journalctl -u stunnel4@webmin | tail -40
-- Boot b4aabbf0a972487cb772f79177888c85 --
Jan 18 23:27:21 gtcloud systemd[1]: Starting Universal SSL tunnel for network daemons (webmin)...
Jan 18 23:27:22 gtcloud systemd[1]: Started Universal SSL tunnel for network daemons (webmin).
Jan 18 23:42:56 gtcloud systemd[1]: Stopping Universal SSL tunnel for network daemons (webmin)...
Jan 18 23:42:57 gtcloud stunnel[301]: LOG5[main]: Terminated
Jan 18 23:42:57 gtcloud stunnel[301]: LOG5[main]: Terminating 1 service thread(s)
Jan 18 23:42:57 gtcloud stunnel[301]: LOG5[main]: Service threads terminated
Jan 18 23:42:57 gtcloud systemd[1]: stunnel4@webmin.service: Succeeded.
Jan 18 23:42:57 gtcloud systemd[1]: Stopped Universal SSL tunnel for network daemons (webmin).
-- Boot 81003b65649b4b32bfa30be269838745 --
Jan 19 19:09:13 gtcloud systemd[1]: Starting Universal SSL tunnel for network daemons (webmin)...
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: stunnel 5.56 on x86_64-pc-linux-gnu platform
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Compiled with OpenSSL 1.1.1k  25 Mar 2021
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Running  with OpenSSL 1.1.1w  11 Sep 2023
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Threading:PTHREAD Sockets:POLL,IPv6,SYSTEMD TLS:ENGINE,FIPS,OCSP,PSK,SNI Auth:LIBWRAP
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Reading configuration from file /etc/stunnel/webmin.conf
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: UTF-8 byte order mark not detected
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: FIPS mode disabled
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Configuration successful
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Switched to chroot directory: /var/lib/stunnel4/
Jan 19 19:09:13 gtcloud systemd[1]: stunnel4@webmin.service: Can't open PID file /var/lib/stunnel4/webmin.pid (yet?) after start: Operation not permitted
Jan 19 19:09:13 gtcloud systemd[1]: Started Universal SSL tunnel for network daemons (webmin).
Jan 19 21:37:07 gtcloud systemd[1]: Stopping Universal SSL tunnel for network daemons (webmin)...
Jan 19 21:37:07 gtcloud stunnel[324]: LOG5[main]: Terminated
Jan 19 21:37:07 gtcloud stunnel[324]: LOG5[main]: Terminating 1 service thread(s)
Jan 19 21:37:07 gtcloud stunnel[324]: LOG5[main]: Service threads terminated
Jan 19 21:37:07 gtcloud systemd[1]: stunnel4@webmin.service: Succeeded.
Jan 19 21:37:07 gtcloud systemd[1]: Stopped Universal SSL tunnel for network daemons (webmin).
Jan 19 21:38:31 gtcloud systemd[1]: Starting Universal SSL tunnel for network daemons (webmin)...
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: stunnel 5.56 on x86_64-pc-linux-gnu platform
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Compiled with OpenSSL 1.1.1k  25 Mar 2021
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Running  with OpenSSL 1.1.1w  11 Sep 2023
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Threading:PTHREAD Sockets:POLL,IPv6,SYSTEMD TLS:ENGINE,FIPS,OCSP,PSK,SNI Auth:LIBWRAP
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Reading configuration from file /etc/stunnel/webmin.conf
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: UTF-8 byte order mark not detected
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: FIPS mode disabled
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Configuration successful
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Binding service [webmin] to :::12321: Address already in use (98)
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Switched to chroot directory: /var/lib/stunnel4/
Jan 19 21:38:31 gtcloud systemd[1]: Started Universal SSL tunnel for network daemons (webmin).
root@gtcloud ~# ls -la /var/lib/stunnel4
total 16
drwxr-xr-x  2 stunnel4 stunnel4 4096 Jan 19 21:38 .
drwxr-xr-x 29 root     root     4096 Feb 25  2023 ..
-rw-r--r--  1 stunnel4 stunnel4    7 Jan 19 21:38 shellinabox.pid
-rw-r--r--  1 stunnel4 stunnel4    7 Jan 19 21:38 webmin.pid
root@gtcloud ~#
root@gtcloud ~# netstat -tlnp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 172.27.0.1:40000        0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 172.24.0.1:40000        0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 0.0.0.0:9819            0.0.0.0:*               LISTEN      275885/docker-proxy
tcp        0      0 127.0.0.1:32401         0.0.0.0:*               LISTEN      3188/Plex Media Ser
tcp        0      0 172.17.0.1:40000        0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 0.0.0.0:8181            0.0.0.0:*               LISTEN      2161/docker-proxy
tcp        0      0 127.0.0.1:32600         0.0.0.0:*               LISTEN      9046/Plex Tuner Ser
tcp        0      0 0.0.0.0:6052            0.0.0.0:*               LISTEN      1894/python3
tcp        0      0 0.0.0.0:8123            0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 0.0.0.0:8083            0.0.0.0:*               LISTEN      2597/docker-proxy
tcp        0      0 192.168.16.1:40000      0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      2839/docker-proxy
tcp        0      0 0.0.0.0:1852            0.0.0.0:*               LISTEN      2205/docker-proxy
tcp        0      0 0.0.0.0:10000           0.0.0.0:*               LISTEN      238366/perl
tcp        0      0 172.30.0.1:40000        0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 0.0.0.0:9443            0.0.0.0:*               LISTEN      2787/docker-proxy
tcp        0      0 0.0.0.0:5355            0.0.0.0:*               LISTEN      126/systemd-resolve
tcp        0      0 192.168.48.1:40000      0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      126/systemd-resolve
tcp        0      0 0.0.0.0:1114            0.0.0.0:*               LISTEN      2227/docker-proxy
tcp        0      0 0.0.0.0:1115            0.0.0.0:*               LISTEN      2949/docker-proxy
tcp        0      0 0.0.0.0:1116            0.0.0.0:*               LISTEN      2921/docker-proxy
tcp        0      0 172.25.0.1:40000        0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 172.21.0.1:40000        0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 172.29.0.1:40000        0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 172.22.0.1:40000        0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 172.26.0.1:40000        0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 172.19.0.1:40000        0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 192.168.32.1:40000      0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 172.23.0.1:40000        0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 172.28.0.1:40000        0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 192.168.128.1:40000     0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 192.168.64.1:40000      0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 172.18.0.1:40000        0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 192.168.112.1:40000     0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 0.0.0.0:9000            0.0.0.0:*               LISTEN      2814/docker-proxy
tcp        0      0 0.0.0.0:8980            0.0.0.0:*               LISTEN      1705/docker-proxy
tcp        0      0 0.0.0.0:8989            0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 127.0.0.1:12319         0.0.0.0:*               LISTEN      238196/shellinaboxd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      928/master
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      2692/docker-proxy
tcp        0      0 0.0.0.0:81              0.0.0.0:*               LISTEN      2661/docker-proxy
tcp        0      0 0.0.0.0:83              0.0.0.0:*               LISTEN      2585/docker-proxy
tcp        0      0 0.0.0.0:89              0.0.0.0:*               LISTEN      2125/docker-proxy
tcp        0      0 0.0.0.0:12321           0.0.0.0:*               LISTEN      238145/stunnel4
tcp        0      0 0.0.0.0:8200            0.0.0.0:*               LISTEN      2051/docker-proxy
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      330/sshd: /usr/sbin
tcp        0      0 127.0.0.1:37217         0.0.0.0:*               LISTEN      7236/Plex Plug-in [
tcp        0      0 0.0.0.0:8640            0.0.0.0:*               LISTEN      275507/docker-proxy
tcp        0      0 172.20.0.1:40000        0.0.0.0:*               LISTEN      2512/python3
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      2625/docker-proxy
tcp        0      0 0.0.0.0:444             0.0.0.0:*               LISTEN      2563/docker-proxy
tcp6       0      0 :::32400                :::*                    LISTEN      3188/Plex Media Ser
tcp6       0      0 :::9819                 :::*                    LISTEN      275891/docker-proxy
tcp6       0      0 fe80::42:d5ff:fec:40000 :::*                    LISTEN      2512/python3
tcp6       0      0 :::8181                 :::*                    LISTEN      2167/docker-proxy
tcp6       0      0 :::8123                 :::*                    LISTEN      2512/python3
tcp6       0      0 :::8083                 :::*                    LISTEN      2612/docker-proxy
tcp6       0      0 :::8000                 :::*                    LISTEN      2849/docker-proxy
tcp6       0      0 fe80::42:4eff:fe3:40000 :::*                    LISTEN      2512/python3
tcp6       0      0 :::1852                 :::*                    LISTEN      2213/docker-proxy
tcp6       0      0 :::9443                 :::*                    LISTEN      2793/docker-proxy
tcp6       0      0 :::5355                 :::*                    LISTEN      126/systemd-resolve
tcp6       0      0 fe80::42:5cff:fe5:40000 :::*                    LISTEN      2512/python3
tcp6       0      0 fe80::42:ddff:fe9:40000 :::*                    LISTEN      2512/python3
tcp6       0      0 fe80::42:89ff:fe1:40000 :::*                    LISTEN      2512/python3
tcp6       0      0 :::1114                 :::*                    LISTEN      2233/docker-proxy
tcp6       0      0 :::1115                 :::*                    LISTEN      2955/docker-proxy
tcp6       0      0 :::1116                 :::*                    LISTEN      2928/docker-proxy
tcp6       0      0 fe80::42:ff:fe61::40000 :::*                    LISTEN      2512/python3
tcp6       0      0 fe80::42:79ff:fef:40000 :::*                    LISTEN      2512/python3
tcp6       0      0 fe80::42:53ff:fea:40000 :::*                    LISTEN      2512/python3
tcp6       0      0 fe80::42:70ff:fe0:40000 :::*                    LISTEN      2512/python3
tcp6       0      0 :::8840                 :::*                    LISTEN      1876/./WatchYourLAN
tcp6       0      0 fe80::42:7ff:fe4c:40000 :::*                    LISTEN      2512/python3
tcp6       0      0 :::9000                 :::*                    LISTEN      2819/docker-proxy
tcp6       0      0 fe80::42:89ff:fee:40000 :::*                    LISTEN      2512/python3
tcp6       0      0 :::8980                 :::*                    LISTEN      1712/docker-proxy
tcp6       0      0 :::80                   :::*                    LISTEN      2701/docker-proxy
tcp6       0      0 :::81                   :::*                    LISTEN      2669/docker-proxy
tcp6       0      0 :::83                   :::*                    LISTEN      2596/docker-proxy
tcp6       0      0 :::89                   :::*                    LISTEN      2130/docker-proxy
tcp6       0      0 fe80::42:bcff:fe8:40000 :::*                    LISTEN      2512/python3
tcp6       0      0 :::12320                :::*                    LISTEN      238147/stunnel4
tcp6       0      0 :::8200                 :::*                    LISTEN      2057/docker-proxy
tcp6       0      0 :::22                   :::*                    LISTEN      330/sshd: /usr/sbin
tcp6       0      0 :::8640                 :::*                    LISTEN      275514/docker-proxy
tcp6       0      0 :::443                  :::*                    LISTEN      2631/docker-proxy
tcp6       0      0 :::444                  :::*                    LISTEN      2568/docker-proxy
tcp6       0      0 fe80::42:87ff:fef:40000 :::*                    LISTEN      2512/python3
tcp6       0      0 :::6415                 :::*                    LISTEN      2044/node
root@gtcloud ~#
root@gtcloud ~# service stunnel4 status
* stunnel4.service
     Loaded: masked (Reason: Unit stunnel4.service is masked.)
     Active: inactive (dead)
root@gtcloud ~#
root@gtcloud ~# journalctl -b -u webmin.service -u stunnel4@webmin.service
-- Journal begins at Mon 2023-02-27 01:26:45 GMT, ends at Fri 2024-01-19 22:12:37 GMT. --
Jan 19 19:09:13 gtcloud systemd[1]: Starting Universal SSL tunnel for network daemons (webmin)...
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: stunnel 5.56 on x86_64-pc-linux-gnu platform
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Compiled with OpenSSL 1.1.1k  25 Mar 2021
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Running  with OpenSSL 1.1.1w  11 Sep 2023
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Threading:PTHREAD Sockets:POLL,IPv6,SYSTEMD TLS:ENGINE,FIPS,OCSP,PSK,S>
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Reading configuration from file /etc/stunnel/webmin.conf
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: UTF-8 byte order mark not detected
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: FIPS mode disabled
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Configuration successful
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Switched to chroot directory: /var/lib/stunnel4/
Jan 19 19:09:13 gtcloud systemd[1]: stunnel4@webmin.service: Can't open PID file /var/lib/stunnel4/webmin.pid (yet?) a>
Jan 19 19:09:13 gtcloud systemd[1]: Started Universal SSL tunnel for network daemons (webmin).
Jan 19 19:09:13 gtcloud systemd[1]: Starting Webmin server daemon...
Jan 19 19:09:15 gtcloud perl[329]: pam_unix(webmin:auth): authentication failure; logname= uid=0 euid=0 tty= ruser= rh>
Jan 19 19:09:17 gtcloud webmin[329]: Webmin starting
Jan 19 19:09:18 gtcloud systemd[1]: webmin.service: Can't open PID file /var/webmin/miniserv.pid (yet?) after start: O>
Jan 19 19:09:18 gtcloud systemd[1]: Started Webmin server daemon.
Jan 19 21:37:07 gtcloud systemd[1]: Stopping Webmin server daemon...
Jan 19 21:37:07 gtcloud systemd[1]: webmin.service: Main process exited, code=exited, status=1/FAILURE
Jan 19 21:37:07 gtcloud systemd[1]: webmin.service: Failed with result 'exit-code'.
Jan 19 21:37:07 gtcloud systemd[1]: Stopped Webmin server daemon.
Jan 19 21:37:07 gtcloud systemd[1]: webmin.service: Consumed 1.698s CPU time.
Jan 19 21:37:07 gtcloud systemd[1]: Stopping Universal SSL tunnel for network daemons (webmin)...
Jan 19 21:37:07 gtcloud stunnel[324]: LOG5[main]: Terminated
Jan 19 21:37:07 gtcloud stunnel[324]: LOG5[main]: Terminating 1 service thread(s)
Jan 19 21:37:07 gtcloud stunnel[324]: LOG5[main]: Service threads terminated
Jan 19 21:37:07 gtcloud systemd[1]: stunnel4@webmin.service: Succeeded.
Jan 19 21:37:07 gtcloud systemd[1]: Stopped Universal SSL tunnel for network daemons (webmin).
Jan 19 21:38:31 gtcloud systemd[1]: Starting Universal SSL tunnel for network daemons (webmin)...
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: stunnel 5.56 on x86_64-pc-linux-gnu platform
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Compiled with OpenSSL 1.1.1k  25 Mar 2021
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Running  with OpenSSL 1.1.1w  11 Sep 2023
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Threading:PTHREAD Sockets:POLL,IPv6,SYSTEMD TLS:ENGINE,FIPS,OCSP,PS>
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Reading configuration from file /etc/stunnel/webmin.conf
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: UTF-8 byte order mark not detected
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: FIPS mode disabled
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Configuration successful
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Binding service [webmin] to :::12321: Address already in use (98)
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Switched to chroot directory: /var/lib/stunnel4/
Jan 19 21:38:31 gtcloud systemd[1]: Started Universal SSL tunnel for network daemons (webmin).
Jan 19 21:38:31 gtcloud systemd[1]: Starting Webmin server daemon...
Jan 19 21:38:32 gtcloud perl[238149]: pam_unix(webmin:auth): authentication failure; logname= uid=0 euid=0 tty= ruser=>
Jan 19 21:38:34 gtcloud webmin[238149]: Webmin starting
Jan 19 21:38:34 gtcloud systemd[1]: Started Webmin server daemon.
lines 22-45/45 (END)
I tried to run 'apt-get update' and got the below:
root@gtcloud ~# apt-get update
Err:1 http://archive.turnkeylinux.org/debian bullseye-security InRelease
  Temporary failure resolving 'archive.turnkeylinux.org'
Err:2 http://archive.turnkeylinux.org/debian bullseye InRelease
  Temporary failure resolving 'archive.turnkeylinux.org'
Err:3 http://security.debian.org bullseye-security InRelease
  Temporary failure resolving 'security.debian.org'
Err:4 https://pkgs.tailscale.com/stable/debian bullseye InRelease
  Temporary failure resolving 'pkgs.tailscale.com'
Err:5 https://download.docker.com/linux/debian bullseye InRelease
  Temporary failure resolving 'download.docker.com'
Err:6 http://deb.debian.org/debian bullseye InRelease
  Temporary failure resolving 'deb.debian.org'
Reading package lists... Done
W: Failed to fetch https://download.docker.com/linux/debian/dists/bullseye/InRelease  Temporary failure resolving 'download.docker.com'
W: Failed to fetch http://archive.turnkeylinux.org/debian/dists/bullseye-security/InRelease  Temporary failure resolving 'archive.turnkeylinux.org'
W: Failed to fetch http://security.debian.org/dists/bullseye-security/InRelease  Temporary failure resolving 'security.debian.org'
W: Failed to fetch http://archive.turnkeylinux.org/debian/dists/bullseye/InRelease  Temporary failure resolving 'archive.turnkeylinux.org'
W: Failed to fetch http://deb.debian.org/debian/dists/bullseye/InRelease  Temporary failure resolving 'deb.debian.org'
W: Failed to fetch https://pkgs.tailscale.com/stable/debian/dists/bullseye/InRelease  Temporary failure resolving 'pkgs.tailscale.com'
W: Some index files failed to download. They have been ignored, or old ones used instead.
root@gtcloud ~#
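All of those are DNS resolution failures rather than download failures, so I assume something like this would help narrow down whether it's DNS or connectivity (not sure if this is the best approach):

    cat /etc/resolv.conf
    getent hosts deb.debian.org
    ping -c1 1.1.1.1    # raw connectivity, no DNS involved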
Hi John, looks like something is wrong with networking?
I'm not really sure what or why, but on face value it looks like there is something wrong with your networking?! Perhaps it's just the network config of the TurnKey container, but perhaps it's something else?
The stunnel webmin service is running:
Note the "Active" line says "active (running)". If it wasn't running, then that would say something else, like "inactive (dead)", "active (exited)" or similar (there are a range of possible states - but "active (running)" is the one we want).
And the service is listening as it should be (as noted in your netstat output):
Note that the PID (238145) confirms that it's the correct service (matches the PID noted in the service status). The '0.0.0.0:12321' means that it is listening on all interfaces ('0.0.0.0') on port 12321.
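You can cross-check that yourself from inside the container; a rough sketch ('ss' is the modern netstat replacement, and curl needs '-k' because the default TurnKey certificate is self-signed):

    ss -tlnp | grep ':12321'
    curl -kI https://127.0.0.1:12321/    # should return an HTTP status line if the tunnel answers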
Webmin is also running:
And is also listening on port 10000 - as expected:
That one would not have been so obvious, as the PID doesn't match and it doesn't explicitly say "webmin" - but that's because Webmin uses its own miniserver, which is written in Perl (hence it shows up as a 'perl' process rather than 'webmin').
As something of an aside, I note that for some reason Webmin is also listening on all interfaces. That isn't as it should be: it should only be listening on localhost (127.0.0.1), i.e. instead of '0.0.0.0:10000' it should be '127.0.0.1:10000'. Despite that, it won't be causing your issue (it just means Webmin is also reachable via plain HTTP on port 10000, where it should be hidden behind stunnel and only listening on localhost). Once the main issue is resolved, I can help you fix that (it should be pretty easy).
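For when we get to that: IIRC the listening address is controlled by the 'bind' line in Webmin's config (no 'bind' line at all means listen on all interfaces), so checking it would look something like this - but please don't change anything until the main issue is sorted:

    grep '^bind' /etc/webmin/miniserv.conf    # on TurnKey this should be: bind=127.0.0.1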
Your apt output also suggests network issues - specifically DNS resolution - although that's outgoing internet access, rather than incoming access:
So my guess is that it's not just Webmin that isn't working; I suspect that many (if not all) of the other services are problematic too. Have you tried connecting to any of the other services? Are any of those working? If so, which ones? And what interface (IP) are they listening on?
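A quick way to test a handful of them from inside the container itself (taking LAN routing out of the picture) might be a loop like this - ports taken from your netstat output, and assuming the OpenBSD netcat that Debian ships:

    for p in 80 443 8123 9000 10000 12321; do
        nc -z -w2 127.0.0.1 $p && echo "port $p: open" || echo "port $p: closed"
    done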
BTW it looks like there is a lot going on in this container! Whilst in theory that shouldn't stop it working, personally I prefer to separate different workloads into different containers. That does mean some redundancy, but I consider that a feature, not a bug. The system overhead of containers is minimal and it means that if you have a problem with one, it won't bring everything down. It also means that everything else keeps working when you're doing maintenance.
Also, if any of the services in this container are publicly available (i.e. outside your local network - although if access is secured via VPN, that's not so bad), I wouldn't be running a privileged container (as I assume you are - AFAIK Docker in LXC requires that, possibly with nesting enabled too). Podman may (or may not) be a workaround if you're sure you want to run Docker-style containers inside an LXC container, as Podman supports "rootless" containers (I think Docker is working on rootless too, but I don't think it's the default yet). I personally prefer to ensure that any publicly facing LXC containers are unprivileged: if something goes wrong, the chances of a malicious actor getting access to the host system are vastly reduced. So if any services running in this container are publicly available, I'd encourage you to move them to an unprivileged container. As for Docker, I'd recommend running that in a "proper" VM instead (i.e. a KVM VM, not a container). Unprivileged containers (and VMs) provide much better isolation from your host system and make things much harder for any potential "bad guys". VMs do have higher overhead, but IMO that's a price worth paying.
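For reference, you can confirm how the container is set up from the Proxmox host (VMID is a placeholder for your container ID):

    grep -E '^(unprivileged|features)' /etc/pve/lxc/VMID.conf
    # 'unprivileged: 1' means unprivileged; no such line (or 0) means privileged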
Regardless, I doubt any of that is a direct cause of your issues.
It looks like the network setup in this container is pretty complex, as you're using multiple 192.168.x.x IP ranges as well as 172.x.x.x ranges. Usually you would only be using one or the other, although AFAIK Docker uses 172.17.0.0/16 by default. I can see Docker using some 172.17.x.x addresses, but there are other 172.x.x.x addresses too. Perhaps there is some clash between the Docker networks and your LAN? Changing the default Docker IP range might fix it (a sketch below), although I'd need to know a bit more about your network before I'd be more confident.
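If it does turn out to be a clash, Docker's default pools can be moved out of the way in /etc/docker/daemon.json - a sketch only, with an arbitrary 10.x.x.x range that shouldn't collide with your 192.168.x.x LAN (Docker needs a restart afterwards, and existing networks may need recreating):

    {
      "default-address-pools": [
        { "base": "10.210.0.0/16", "size": 24 }
      ]
    }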
I'm not sure how much use I'm going to be, but if you'd like my 2c, could you please share the output of:
I don't think it will help much, but it might also be worth sharing the output of:
cannot access webmin
Thank you so much for your response.
I will action your recommendations on security, LXC, VMs and Docker.
Could this be to do with the fact that I have moved the server from 192.168.1.x to a new subnet, 192.168.50.x?
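I guess something like this would show whether eth0 actually picked up an address on the new subnet, and which gateway it's using?

    ip -4 addr show eth0
    ip route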
You are correct; I cannot access any of the other web UIs either (Portainer, the Nginx dashboard, etc.).
ping bbc.com works, and I can see in my router that the LXC is sending traffic out, including DNS requests etc.
The networks should not be too complicated, as the only things running are Docker containers (although this also includes macvlans on Docker). eth0 should be the main network, and Tailscale is my zero-config VPN setup.
root@gtcloud ~# cat /etc/network/interfaces:
Wow, that's a lot of interfaces!
[Please note that I've edited this post since originally posted yesterday]
FWIW here's a container I have running:
Note that whilst it doesn't explicitly note it, that eth0 config is provided by Proxmox (i.e. when launching the container, I added that config in the UI). IIRC unless you edit your container config, Proxmox will just overwrite it on reboot. Your container config can be found on the Proxmox host, at either /etc/pve/local/lxc/VMID.conf or /etc/pve/nodes/NODE_NAME/lxc/VMID.conf - where VMID is the actual container ID number and NODE_NAME is the name of your PVE node. The relevant part is the 'net0' entry.
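For illustration (placeholder values, not my actual config), a net0 entry looks something like this:

    net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=AA:BB:CC:DD:EE:FF,ip=dhcp,type=veth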
And FWIW here's my 'ip addr' output:
:)
TBH networking isn't really my strong suit (I strongly suspect that you have knowledge that I do not), but I can also see that your 'eth0@if5' (i.e. your 'eth0') only has an IPv6 address?! Ultimately that should work OK, but obviously you'd need to connect to that (rather than an IPv4 address). My guess is that your DHCP server is only handing out IPv6 addresses?
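If eth0 is meant to pick up IPv4 via DHCP from inside the container, the classic Debian stanza in /etc/network/interfaces would be (assuming DHCP rather than a static address):

    auto eth0
    iface eth0 inet dhcp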
Although I also see quite a few bridge interfaces (i.e. the ones that start with 'br...'), and quite a few have 192.168.x.x addresses (the rest are 172.x.x.x, which I assume are Docker related?), none of your interfaces appears to have a 192.168.50.x address. Not only that, but none that I noticed even has that address within its range (perhaps they do and I missed it - I didn't check super thoroughly - but there is definitely no 192.168.50.x address). So I'm guessing that's the core of the issue: the container isn't listening on the address that you're trying to connect to?!
All the bridges seem to have different (and massive) IP ranges though, so I can't even be sure that you'll be able to connect to any of the IPv4 addresses that do exist - unless you have a switch and router set up to join all the different networks together?! Even then, that would literally mean you have thousands of IPv4 addresses in the 192.168.x.x range, which seems like serious overkill to me - unless you're running a large enterprise!?
To be completely honest with you, I'm out of my depth here... I can glean some info from what you've shared (as per above), but beyond the basics I'm not even sure how useful that will be to you (e.g. I recall that if you are connecting via a bridge, you don't assign the IP to the interface, you assign it to the bridge). I wouldn't even know where to start to make all of that work... Personally I keep things pretty simple locally; as you may have already guessed from the container network info I posted above, I just have a simple 192.168.1.0/24 network.