Hi,
I'm new to this forum, so forgive me if I don't follow the rules. :)
I'm facing this error while trying to mount my WebDAV share:
/sbin/mount.davfs: loading kernel module fuse
modprobe: ERROR: could not insert 'fuse': Unknown symbol in module, or unknown parameter (see dmesg)
/sbin/mount.davfs: loading kernel module fuse failed
/sbin/mount.davfs: waiting for /dev/fuse to be created
/sbin/mount.davfs: can't open fuse device
/sbin/mount.davfs: trying coda kernel file system
/sbin/mount.davfs: no free coda device to mount
Anyone have some ideas how to fix this?
Message in the kern.log:
lamp vmunix: [1103953.555278] fuse: Unknown symbol setattr_prepare (err 0)
I'm running:
root@lamp ~# lsb_release -da
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 8.7 (jessie)
Release: 8.7
Codename: jessie
Thanks,
Ebo
What version of TurnKey?
If that's what you've done, then it's possible that the FUSE kernel module is out of sync with the kernel version that you are running. A quick google suggests that may be your issue. I'm not sure though, sorry, and TBH I'm not even 100% sure how you would go about fixing it...
FWIW I just tried loading the FUSE module on a v14.1 WordPress (based on LAMP) server that I had laying around and it "just works"...
Assuming that the issue is loading the FUSE module, then perhaps the simplest resolution is to just use TKLBAM to migrate to a new (v14.1) server?
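If you haven't used TKLBAM before, the basic dance is something like this (a sketch; BACKUP_ID is a placeholder for whatever ID the Hub shows for your backup):
# on the old server (assumes you've already run tklbam-init with your Hub API key)
tklbam-backup
# on the fresh v14.1 server
tklbam-restore BACKUP_ID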
Also, it has just occurred to me that you don't mention where this is running. Loading kernel modules doesn't work on all TurnKey builds (e.g. LXC or Docker containers). In the case of LXC containers, you need to load the relevant module on the host system; that should then allow the guest to access it. I'm not sure the error you are getting is consistent with that though, just throwing ideas around really...
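E.g. for an LXC container, you'd run something like this on the host (not inside the container):
modprobe fuse
lsmod | grep fuse    # confirm it's loaded; the guest should then be able to use /dev/fuse (assuming the container config allows the device)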
Oh, and one more thing: to see if it's a kernel version/FUSE module mismatch, try checking:
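Something like:
uname -r
modinfo fuse | grep filename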
See how the kernel version 3.16.0-4-amd64 (from uname -r) matches the path of the FUSE module /lib/modules/3.16.0-4-amd64/kernel/fs/fuse/fuse.ko.
kernel issue
Hi,
Thanks for your response; everything you mentioned is true:
3.16.0-4-amd64
root@lamp ~# modinfo fuse | grep filename
filename: /lib/modules/3.16.0-4-amd64/kernel/fs/fuse/fuse.ko
root@lamp ~# turnkey-version
turnkey-lamp-14.1-jessie-amd64
root@lamp ~# modprobe fuse
modprobe: ERROR: could not insert 'fuse': Unknown symbol in module, or unknown parameter (see dmesg)
root@lamp ~# dmesg | tail -1
[1500970.220152] fuse: Unknown symbol setattr_prepare (err 0)
Btw, it's a standalone TurnKey LAMP install.
Hmmm, very weird...!
Once I know that (and if I have access to something close enough), I'll see if I can recreate your issue. FWIW the tests I ran were on TurnKey v14.1 WordPress (which is based on LAMP) running on Amazon. It shouldn't make too much difference, but perhaps it does?
kernel issue
Hi Jeremy,
Yes, that's what I meant: bare metal, hosted at my company, no VM.
I'm going to install a new clean version and create a new LAMP stack; it will be done in approximately one hour,
including the config and the extra packages to be installed. That's why I love TurnKey so much: no fuss, just do it!!
P.S.
I did a dpkg-reconfigure fuse; it ended up building a new boot image, but during bootup I noticed some kernel load issues, so I decided to start from scratch.
Regards,
Edwin
OK cool.
Have you rebooted?
My suspicion is that the kernel and associated modules have been updated, but because the server hasn't been rebooted, the running kernel is not compatible with the (unloaded) modules. I'm only guessing, but your output suggests I may be right. Here's what I compared on mine:
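Roughly this (exact versions will obviously differ per server):
uname -a                               # the kernel build actually running
dpkg -l linux-image-3.16.0-4-amd64     # the kernel package version installed on disk
ls /lib/modules/                       # the module trees available on disk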
Ah ha! I've recreated it! And have a (messy) workaround.
I launched a new server and tried to recreate your issue. I did a clean install of v14.1 TKLDev from ISO (one of the only v14.1 ISOs I had handy) and used LVM (I know you weren't using TKLDev, but as all images are built on Core, for an issue like this it should be irrelevant). The TL;DR version is that I managed to recreate your issue! However, it didn't occur until I updated the kernel from the main repo (i.e. not just the security update kernel). And I have a workaround (without requiring a clean install). I need to do more investigation to see what the minimum workaround is, but my steps seem to resolve it. It also seems that it may be a combo of a couple of things interacting to cause the issue.
FWIW I rarely (if ever) run 'apt-get upgrade' on a production server. In theory, it should work fine, but that hasn't always been my experience. So on production servers I only ever install security updates unless I encounter a bug which is specifically resolved in an updated package (and I only update as little as I have to).
So for completeness, here is the process I took to recreate your issue. As you can see, it only occurs after installing the latest kernel from main. After a clean install with LVM, and without installing security updates:
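The checks were basically these (nothing fancy):
uname -a                              # running kernel
dpkg -l 'linux-image*' | grep ^ii     # installed kernel package version
modprobe fuse && lsmod | grep fuse    # confirm the FUSE module loads cleanly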
Then I ran the security updates:
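If memory serves, TurnKey applies these automatically via cron-apt, so running it by hand does the same thing (treat this as a sketch rather than gospel):
cron-apt    # as root; installs any pending security updates per /etc/cron-apt/config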
Among other things it updated linux-image-3.16.0-4-amd64. I then re-ran the above checks:
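(Same comparison again; the interesting bit is that dpkg now shows a newer package version while uname -a still shows the old running build:)
uname -a
dpkg -l linux-image-3.16.0-4-amd64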
So at this point it has updated the kernel package, but because I haven't rebooted, it's still running the old kernel. Everything is still working. I'm pretty sure that's because I loaded the modules previously. Now to reboot and check again.
TBH this next bit seemed a bit weird. It seems even after a reboot, it's still running the same kernel?! But otherwise everything appears to be working as it should.
So then I thought, perhaps there is a non-security kernel update?! And there was...
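Something like this (against the main Debian repo rather than security):
apt-cache policy linux-image-3.16.0-4-amd64    # check whether main has a newer version than what's installed
apt-get update && apt-get upgrade              # and if so, pull it in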
Reboot again to update running kernel:
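I.e.:
reboot
# ...and once it's back up:
uname -a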
I've now recreated your issue! It seems it's still running the old kernel. Even after an update!?
Then it gets really weird. Following your hint, I thought I'd check what grub is up to. It seems something is really broken with grub?!
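(Just poking around, e.g.:)
ls -l /boot/
ls -l /boot/grub/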
Huh?!? No /boot/grub directory?! So I thought I'd try manually updating grub.
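I.e.:
update-grub    # regenerates /boot/grub/grub.cfg -- which obviously can't work without a /boot/grub directory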
No joy. So I retried after manually creating the directory. That seemed to work... But a reboot showed that it didn't fix it... So I tried doing a (re)install of grub, as you noted. It probably wasn't necessary, but for good measure I also ran update-grub afterwards. Following (another) reboot, it finally all seems as it should!
I'll need to retrace my steps to see where/when the actual issue is occurring, but it seems to be a combo of upgrading to the latest kernel from the main repo when LVM is installed?! At this stage, I'm guessing that it's either an LVM bug, or possibly a grub bug that only occurs in conjunction with LVM.
You are most welcome!
FWIW what is happening is that when you install to LVM, the installer creates a separate /boot partition (outside the LVM). The separate boot partition was required by legacy grub as it could not boot directly into an LVM.
However, grub2 can happily boot into an LVM. Somewhere along the line (I suspect a grub2 bug in v14.x) we ended up with 2 /boot directories, one within the LVM and one as a separate partition. At boot time, grub2 is using the /boot directory inside the LVM. However, as the separate boot partition is explicitly listed in the /etc/fstab file, it is mounted over the top of the one within the LVM. The result is that within the running system, you don't have access to the LVM /boot directory, which is actually what is used at boot time.
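You can see the double-/boot situation for yourself with something like this (device names will vary):
lsblk                   # the small separate /boot partition shows up alongside the LVM volumes
mount | grep boot       # the separate partition mounted over the top of /boot
grep boot /etc/fstab    # the entry that causes it to be mounted at boot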
The workaround of re-installing and updating grub tells your server to use the separate /boot partition as was originally intended, so everything works again.
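In concrete terms, the workaround boils down to something like this (assuming grub lives in the MBR of /dev/sda; substitute your actual boot disk):
mkdir -p /boot/grub     # recreate the directory if it's missing
grub-install /dev/sda   # reinstall grub to the boot disk
update-grub             # regenerate grub.cfg
reboot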
An alternate workaround is to remove the separate partition from the /etc/fstab file so it's not mounted at all. If you go that way though, you'll need to reinstall the updated kernel.
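If you go the fstab route, it's roughly this (again just a sketch; keep a backup of /etc/fstab):
cp /etc/fstab /etc/fstab.bak
sed -i '/[[:space:]]\/boot[[:space:]]/ s/^/#/' /etc/fstab    # comment out the separate /boot partition entry
umount /boot                                                 # so the /boot inside the LVM is visible again
apt-get install --reinstall linux-image-3.16.0-4-amd64       # reinstall the updated kernel into the LVM /boot
update-grub
reboot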
FWIW we'll be fixing this bug as part of the upcoming v14.2 release, so thanks to the previous posters for bringing it to our attention!
I've lodged it as a bug
I have provided the TL;DR version of the workaround there. Although as I note, it could possibly be slimmed down and we still need to investigate the actual cause.