lxc tklbam-backup -> tklbam-restore'd in AWS results in broken system
I am trying to restore tklbam backups made in Proxmox LXC instances onto new EC2 machines.
The restore essentially breaks the target system. resolvconf, for example, stops working: the /run/resolvconf directory is missing and /etc/resolv.conf becomes a regular file (copied from the backup origin) instead of the expected symlink.
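For reference, this is roughly how I cleaned up the resolvconf breakage by hand on the restored EC2 instance. It is only a sketch assuming the stock Debian/TurnKey resolvconf layout, so adjust for your own setup:

    # Reinstall resolvconf so its state under /run/resolvconf gets recreated
    apt-get install --reinstall resolvconf
    # Replace the regular file restored from the LXC origin with the expected symlink
    rm /etc/resolv.conf
    ln -s /run/resolvconf/resolv.conf /etc/resolv.conf
    # Restart the service so resolv.conf is regenerated from current interface data
    systemctl restart resolvconf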
It seems, and it makes sense, that the init process (systemd) behaves quite differently in an LXC container than in a real system running its own kernel (VM or bare metal). The restore changes files in /etc and the set of installed packages, and this breaks the system at a very low level (inittab, systemd, etc.).
Are there recipes, or excludes to apply on the backup source, that might alleviate the problem?
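On the restore side, another option I have not fully tested would be to limit what tklbam-restore touches in the first place. If I read the options correctly, something like the following should do a dry run and then restore only selected paths while skipping package changes; treat the exact flags and limit syntax as assumptions and check tklbam-restore --help on your version:

    # Dry run first to see what the restore would change (assumed --simulate flag)
    tklbam-restore BACKUP_ID --simulate
    # Restore only the data I actually care about, leaving packages and most of /etc alone
    tklbam-restore BACKUP_ID --skip-packages --limits="-/etc /etc/postfix/main.cf /var/lib/postgresql"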
I am currently testing this addition to /etc/tklbam/overrides on my source system:
    # These are only necessary for the lxc backup to VM restore
    -/etc/*
    /etc/apache2
    /etc/default/locale
    /etc/cron.daily/myapp
    /etc/init.d/myapp
    /etc/letsencrypt
    /etc/logrotate.d/myapp
    /etc/postfix/main.cf
    /etc/rc*/*myapp
    /etc/ssh
    /etc/tklbam/overrides
    /etc/locale.gen
    /etc/mailname
    /etc/timezone
*replace myapp with the real app name ;)
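To check whether the overrides have the intended effect before doing another full backup/restore round trip, a dry run on the source should show what would be included. The --simulate flag and passing overrides as arguments are from memory, so verify with tklbam-backup --help:

    # Dry-run the backup on the LXC source and inspect which /etc paths are picked up
    tklbam-backup --simulate
    # Overrides can also be passed on the command line for a quick experiment
    tklbam-backup --simulate -- '-/etc/*' /etc/ssh /etc/postfix/main.cf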
I am using turnkey-postgresql-14.0 as the source (kept up to date) and turnkey-postgresql-14.2 in AWS.