Considering getting Turnkey Linux or another such OS/NAS, but hesitant because of needing more info – so I’m hoping someone will answer my query here, please.
Details 1st:
I have a small home office & have been using Ubuntu Mate 18.04 as a file sharing box for about a year - all my PCs use this same OS BTW (Thanks Jeremy Davis).
Every so often it seems to have a belly ache & file permissions sort of go nuts, resulting in crashed document editing, lockfiles & hassles.
There are only me & my assistant using this from our own desktop PCs to edit DOCX files, mostly.
Before this system I had an older win2000 box for the file storage & XP as the desktop PCs OS and…
File sharing was never any problem whatsoever – totally unrestricted.
The problems we have now make us wish for ‘the old days’ !!
My query, please:
Is there any way by which I can deploy & setup Turnkey Linux to function with wide-open access to an entire partition with zero restrictions ??
Thanks for any help !!
I can't see why not?!
TBH I'm a Linux guy so don't really use Windows file sharing and it's worth noting that Samba essentially reverse engineers the (Windows proprietary) SMB protocol. So the risk of issues is probably higher on a Linux based implementation of SMB.
Having said that, it doesn't sound like your requirements are particularly high. I suspect that TurnKey would likely fulfil your needs, but it won't work like that OOTB. You will need to configure it like that yourself and I can't personally tell you how to set it up to achieve your ends (although I'm sure there is tons of info online).
However, it is worth noting that TurnKey is based on Debian, as is Ubuntu. So whilst we may provide an alternate config interface (i.e. web based rather than GUI based) and it's probably lighter weight (no GUI, less CPU/RAM usage), ultimately what you are asking for should already be possible on your existing Ubuntu set up! Why things go wrong with your current set up, I can't really say without more info, and if it's something to do with your existing Windows workflow and/or Windows network setup, I can't guarantee that you won't hit similar issues using TurnKey.
My advice would be to test out a TurnKey VM. Allow yourself a specific amount of time to set it up and have a play. If you hit your time limit and aren't close to your goal, then bail out and kill the VM. If you get close to hitting your goal within the timeframe, then you may want to spend a little more time (but don't fall into the sunk cost fallacy). Worst case, you've "wasted" a little time; best case you get what you want...
I'm happy to provide as much assistance as I can, although as I say, Windows related stuff is not my strong suit...
Thanks !!
Much Appreciate your thoughtful reply Jeremy Davis !!
I edited the OP to be more clear now.
No windows in use here anymore.
All PCs have Ubuntu Mate 18.04 & other than the (Samba) sharing on the LAN all else works 100% fine.
If there exists some way to literally be completely free of file permissions in a single partition - THAT would be excellent as all that is in there are document files which are re-used for making more and for reference.
Thanks Again !!
Ah ok... That makes things easier then... :)
Ah ok, so if there's no Windows and all clients are Ubuntu certainly makes things better IMO!
So personally I'd say if you're not using Windows and have no need (or intentions) for Windows to access files, then why use a Windows protocol?
If you're happy to step back from SMB, then there are lots of different options you could use to share files. NFS is probably worth consideration as it's probably the one most suited to this job. I don't have much experience with it and can't offer much OOTOMH guidance, but our Fileserver appliance should support it OOTB.
My personal favourite networking filesharing method is via SSHFS. Although I'm not sure how well it would work with multiple users? I use it lots but not with files shared with others; just amongst my devices.
Alternatively, you could look to leverage a filesyncing (Dropbox-like) tool to auto sync files. One of the most popular (of our appliances) is Nextcloud. When paired with a client app, it essentially provides a Dropbox-like experience, plus it has a pretty, extendable webUI that allows you to do tons of stuff with your files.
If that seems a bit overkill (which it possibly is) then we also have a Syncthing appliance. That is much more simple and focuses on just the file syncing part.
Local storage only. Permissions are the only issue IMO.
In my work there are matters of confidentiality - so all my document files must stay 100% in-house.
They need to be accessed as local shares only.
At the start of this adventure I saw that the 2 main avenues are Samba & NFS.
Trying to get NFS set up was hideously complex & just gave me a headache, whereas both Nautilus & Caja have direct options to create shares.
They also both claim to allow 100% access - but that doesn't work 100%.
SO:
Rather than sharing the lowest level directory as well as any others which misbehave - I'd prefer to have the mounted partition just be totally permission-free without adding any cloud stuff - or FTP style stuff - just plain old straight file sharing pretty much like that older & less talented 'other' OS did so easily.
Thanks.
You seem to have a fundamental misunderstanding...
You seem to have a fundamental misunderstanding about what network "file shares" are. Your "FTP style stuff" comment was what alerted me to that, so let me first try to educate you a little. Sorry that this post is quite long and I also apologise in advance if this feels like a bit of a rant, but I'd urge you to read through it all. Hopefully it will help you understand the full complexity and associated trade-offs.
Network "file share" protocols
Network "file shares" are merely the ability to access remote file systems via a network protocol. You could consider a network protocol as being like a language. The only fundamental difference between an FTP share and an SMB (i.e. Windows file sharing protocol) share is the network protocol (i.e. the "language") used. For file shares to work, both the "server" and the "client" need to be able to connect via the same protocol - i.e. "speak the same language".
Whilst the protocol itself is unique, SMB is not fundamentally different to, or better than, (S)FTP, NFS, WebDAV (via HTTP or HTTPS) or any other network file sharing protocol. My guess is that your experience with Windows has led you to the mistaken idea that Windows file sharing is "just plain old straight file sharing" - it's not!
It's just the native and default Windows file sharing protocol; just like AFP (Apple Filing Protocol) is the native and default file sharing protocol in the Apple ecosystem and "just works" between Apple devices. But because Windows has the biggest PC market-share, that means that SMB is very common and pervasive (to the point that Apple supports it too). Because Linux is open source, quite fragmented and has a small market-share (at least in the end user PC market), it supports many native (e.g. NFS) and non-native protocols (e.g. SMB and AFP) and it doesn't really have a single "default" protocol.
Possibly one other factor that has led you to think that SMB is "just plain old straight file sharing" is that unlike many protocols, by default it advertises itself on the network. It essentially yells "hey here I am, come connect to me!". I'll speak to that a little more below as it's quite relevant to connecting via your file manager.
File manager network file share support
Whether or not a particular file manager aka "client" (e.g. Nautilus, Caja, Windows Explorer or whatever file manager you're using) can mount shares of any particular protocol or not depends on the file manager itself. (Obviously it also depends on how the shares have been set up too, e.g. permissions, etc). My guess is that your only experience with "FTP style stuff" has been via a specific FTP client (i.e. a file manager that explicitly supports FTP and often FTP variants only). For what it's worth, Windows Explorer (Windows built-in native file manager) also natively supports vanilla FTP (and WebDAV). It's just not used very often or advertised very broadly, so you may have never come across that...
Nautilus (and possibly Caja too) supports connecting to SFTP and SSHFS just as easily as it supports SMB (or AFP for that matter). So from a user perspective, once it's initially connected and bookmarked, it will act exactly the same. The protocol in use should be completely transparent (and therefore irrelevant) to the user. In fact, you could mount the remote SSHFS fileshare at boot or user login and it would appear as if the files are actually on the local filesystem! (And you could do exactly the same with an SMB share if you wished).
To return to the SMB file manager connection. As I said above, SMB is one of the few protocols where by default the server actively announces itself to the network. As most file managers (i.e. "clients") support SMB, they also "hear" these announcements. That generally means that connecting to a SMB share is really obvious. E.g. in Nautilus, if you go to "Other Locations" you should see any available SMB "domains" (aka Windows Workgroups) and/or SMB servers listed there. So you just need to click them to initiate the connection.
Many other protocols that you can use with Linux don't announce themselves in this same way. You have to actively connect to them and thus know where they are located. So to connect you will need to use a URL which includes the protocol, the specific IP or hostname and the share location. However, once you have connected, then assuming that the IP or hostname doesn't change, you can allow the credentials to be saved and "bookmark" the URL. That makes future connections just as easy as an SMB connection.
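For example (the IP and path here are made up), the address you'd type into the file manager's "Other Locations" / "Connect to Server" box might look like:

    sftp://192.168.1.10/srv/docs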
As something of an aside, that means that your server should generally be using a static IP. There are ways to work around that (e.g. via "Zeroconf" style network setup) but IMO setting static IPs for "servers" is best practice anyway.
"Cloud stuff"
You also seem to misunderstand what "cloud" actually means. "Cloud" is just a buzzword. It's really just a trendy term for someone else's computer (generally available via the internet and often available to everyone - hence your misunderstanding). So if you run a service on your own computer (i.e. a "server") and don't allow access outside your network (i.e. not available via the internet), then you could consider that as NOT being "cloud stuff". Conversely, you could host an SMB share publicly so it was available via the internet. That could be considered a "Windows cloud share". Note: DON'T ever do that! Unlike protocols like SFTP, SMB is an inherently insecure protocol!
In the case of Nextcloud, IIRC it can also provide access via WebDAV (which Nautilus, and probably Caja too, supports OOTB). If you don't set up the Nextcloud client software and instead connect to the files via WebDAV, that may still strictly meet your requirements. So long as you don't have any mobile devices (e.g. laptop, tablet or phone) that need access to the files and are taken offsite, then even with the client apps installed, Nextcloud and Syncthing would both meet your stated requirements.
The nature of these "file sync" applications means that they will have local copies of the files, even if the server is unavailable. If you don't have any devices that go offsite, then the nature of how the file sync works may even make them preferable options!? E.g. the ability to roll back to previous versions of those files could be handy feature.
So what to do?!?
Regardless of all of the above, I'm 99.9% certain that what you want is already possible with your current setup. It seems likely to me that the issue is one of config rather than anything else. I suspect that if you were to adjust your current existing Samba server config and the associated shared file hierarchy appropriately, then it should meet your stated desires (i.e. any user have full access to all files via a single entry point).
Actually, perhaps one of the missing pieces of info for you is that as SMB is not a native Linux protocol, Samba (SMB) users are different to Linux users. So on your server, your Samba (SMB) user(s) need to be mapped to Linux user(s) for the purposes of file access. In effect that means that there are 2 levels of file permissions, the Samba/SMB user level and the Linux user level. FWIW that dual level of permission also exists on Windows, although it's usually hidden by the fact that by default "share permissions" allow everyone and file access is usually controlled at the Windows user permission level.
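As a rough sketch of what that mapping looks like in practice (the user name here is just an example), on the server you'd typically create a matching Linux user and then add it to Samba's own password database:

    # create the Linux user that will own the shared files
    sudo adduser officeuser
    # add the same user to Samba's user database
    sudo smbpasswd -a officeuser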
So as I hinted previously, unless the webUI bundled with TurnKey Fileserver helps you to configure the Samba server setup better, it seems unlikely to me that the TurnKey Fileserver will resolve your issue (at least from the SMB/Samba side of things).
Although it does also have NFS pre-configured. So perhaps NFS might be easier than your previous experiences? But as you'll still need to configure NFS on the guest end (and as you note the guest end of SMB should "just work") I can't guarantee that (plus NFS relies on Linux users on the server which map to the remote users). If you do try that route, please note that our current Fileserver appliance doesn't include the Webmin NFS config package (see bug #1521 for a workaround).
My recommendation: SSHFS
Essentially SSHFS allows an SFTP share to be mounted locally in the same way that a physical hard drive, USB stick, SD card or CD/DVD can be mounted locally. So whilst technically it does leverage SFTP, it actually works as if the files are local files (it even leverages the FUSE system which is also what USB mounting uses).
Judging from your comments, you don't have any real security requirements (beyond only local access) and don't care about individual user access levels (essentially all PCs and all users have full access to all files). Plus you don't need any access auditing (i.e. you don't need a log of who accessed which file when). Plus you don't have any need to have access to previous versions of files.
Assuming that the above is true, then personally, I'd just use SSHFS. As noted above, SSHFS is a protocol that Nautilus (and likely Caja and other Linux file managers) support OOTB. Although as I also noted above, I'd be inclined to auto mount the SSHFS share(s) either at boot or user login time. Then even your file manager would be unaware that the files are remote. Users would never need to even consider it and the shared files would all appear to be local files. The bonus is that it would also be much more secure than your existing setup! :)
All that this setup would require is an SSH server (e.g. the 'openssh-server' package) installed on your "server" along with a user account which owns the files you wish to share (I suggest not root!). If you configure keypair SSH login access to your server (as the intended user) then no password is required by the PC user. There may need to be some further tweaks to get it working exactly as you want, but it should be a "one off" exercise.
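As a minimal sketch of that whole setup (the user name, IP and paths below are all hypothetical), it boils down to something like:

    # on the "server":
    sudo apt install openssh-server

    # on each client PC: create a key if you don't already have one, then copy it to the server
    ssh-keygen
    ssh-copy-id officeuser@192.168.1.10

    # then mount the shared directory wherever you like
    sshfs officeuser@192.168.1.10:/srv/docs ~/docs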
Final word / Summary
Bottom line is that the reason that SMB "just works" between Windows machines is because it's a native proprietary Windows protocol. They've spent a ton of time/money on developing it and streamlining the user experience so that the complexity is hidden and that it "just works" for most users. In my personal experience, despite Microsoft's investment, it still doesn't always "just work" under all circumstances - but YMMV.
If you wish to use SMB between Linux computers, then you need to be aware that you are using a reverse engineered SMB implementation which is never going to be as tailored as the proprietary Windows implementation. It will generally work as desired, but it will require the appropriate configuration.
Ultimately, the "price" you pay is either the Windows licence fee (and associated limitations) or the Samba learning curve and associated complexity (and/or pay a tech that has the knowledge and skills). Implementations such as TurnKey's Fileserver try to make it as easy as possible, but that is essentially just hiding the complexity via a combination of pre-configuration and simplified UI config tools. It doesn't remove the underlying complexity and won't always "just work" under all scenarios.
My suggestions are (not necessarily mutually exclusive and in no particular order):
Regardless of what you choose to do and how you move forward, I would urge you to carefully document what is done (or if you pay someone, get them to document it so you're not at their mercy). E.g. technologies and resources used, commands run, copies of configuration files, etc. As this stuff is not done regularly, having it documented will make life much easier in the future.
When you document, consider future scenarios such as:
I hope I haven't daunted you too much and more than happy to share more thoughts if you need clarification
Wow - huge thanks !!
A truly amazing reply with much good info - VERY APPRECIATED !!
I am no stranger to networking & protocols, but when I set all this up at 1st I had the mistaken idea to just keep using the dear old win2k box in its kind of faux-server role, and found out quickly that was really a total non-option as Linux doesn't play nice with such an old version of SMB anymore.
Silly me figured that Linux <=> Linux should be a snap...hahahahaha - t'was ANYTHING BUT that.
After much searching, reading & trial/errors I got Samba to work via Nautilus' built in ability to handle it & set all the sharing to anonymous, open access.
Later still I found out that Caja had an add-on for that & made sure to go through & check what that looked like as well.
The 1st set of problems seemed to be with Libreoffice itself & after finding its file locking settings that smoothed out...mostly - but LO is very, very crashy.
Discovered Collabora Office, and was advised that just having 2 users was below their notice for licensing - and to just use it endlessly for free as their trial doesn't expire. That solved much of the crashiness.
BUT=>
File permissions remain a problem - not always - just every now & then for a day or 3 and oddly in directories that were previously sharing just fine.
Checking permissions brings odd results like the file being owned by user, but the group having magically shifted to root & became VERY stubborn to change - file by file, sometimes several times before it would 'stick' !!
Basically (I think ??), if ALL the document files could belong to nobody & nogroup it might be a great fix, but whether via CLI or dialogue it still does strange stuff every now & then for no reason that I know of.
Back to choosing a protocol now=>
In my searching most of what is recommended is either NFS or Samba & of course I do need for the 'server' partition to be mounted & directly usable both via the WP app opening a file & via Caja.
When I referred to "FTP style stuff", what I meant was the need for some additional step to get files open more or less as one would do with 'regular' FTP, which my office helper would blow a fuse with & frankly many times I would be in too much of a hurry for extra steps myself.
Someone did tell me at one point to 'just connect via SSH', which had me searching all over creation without getting any specific results - most likely because I needed the pointer you so kindly provided to SSHFS (Thanks !!) which I already looked up with great results. Woof.
Security needs here = none...correct.
Your suggestions are wise & wonderful, thanks again:
Tried, did - found loads of different stuff that didn't work, then what I detailed above which does work...mostly.
No need to do a VM as I have a spare box on hand & can just install whatever into it for testing whenever I wish to. (I do use VMWare so as to keep a VM of XP around for 'just in case' times...)
Uncertain about the Turnkey vs. Ubuntu query in the sense that I was hoping that making a NAS box into a file sharing server might be better/easier, etc. & what you've said in this thread makes me think they are more equal than not ??
Thanks for the giggle here !! That would actually be ME - I'm an old DOS guy who has dabbled in OSes as a hobby since the time before when DOS was a baby, an avid OS/2 user for a while, Linux when it was still CLI only, and other OSes as time has allowed. My college days were back when COBOL & FORTRAN were huge & I refused to get drawn into that....silly me, I'd be wealthy now if I'd chosen that path instead !!
The truth of this matter is that this -is- my 1st experience in this type of file sharing under Linux & I did blunder into it badly at 1st because I didn't figure on it being quite so fussy as it turns out to be, and even asking for help at the U/M forum was not anyplace near as helpful as you've been here so far !!
NFS ran me around in circles - again with loads of krap info that never brought me any useful results, so if there was a clear & definite way to get from point A to point B with it I'd be fine learning that - but not running around in circles as I did initially in trying it out.
Still - back to the beginning query:
With many, many thousands of existing files now stored in a zillion directories on the U/M 'server' in a single EXT3 partition - it worries me that even copying them again may become problematic unless I do it as root and also that their really stinky permissions (per file ??) may remain problematic, hence my original query here.
I have run across where many others have been very, very disappointed with file sharing under Linux in all of my searching - frequently folks have said that there SHOULD be some really easy/simple way to set it up - but there just ain't such a thing...yet ??
Ultimately - in order to do my day-to-day work I would prefer something like that which can just be set up ONCE - have permission problems NEVER - and JUST WORK OOTB !!!
Thanks Again & Again for your patience in guiding me !!
Thanks for your gracious response! :)
Reading your response to my essay almost makes me cringe a little! :) Your response is pretty gracious considering that I essentially tried to teach your grandma to suck eggs...
I'm glad that it was of value...
I'll try to not add too much more to my previous tome, but I can certainly help a little with your Ubuntu vs TurnKey understanding. Obviously I'm a little biased and opinionated, so YMMV.
Firstly, TurnKey is based on Debian (v16.x = Debian 10/Buster under the hood). We provide a library of ~100 minimalist, purpose built "software appliances". As a general rule, each one is trying to fulfil a specific function and/or provide a specific piece of software (and all its dependencies). The philosophy is that we're trying to make more open source software more accessible to more people.
We use custom, internally developed build infrastructure to build our appliances. We package our build tools as an appliance of its own; known as TKLDev. All our appliance build code is published on GitHub. The appliances are built from a combination of packages, overlaid files and configuration scripts. The packages are mostly from Debian, plus some custom packages we provide, and some particular appliances also install specific software via third party apt repos. The overlays are generally scripts and/or complete config files.
We don't claim (or even aim) to be everything to everyone. But we do aim to provide off-the-shelf products which will be useful as-is to the uninitiated; but simultaneously provide a "better" starting place (in comparison to a vanilla OS) for those more knowledgeable. We probably don't achieve the latter as much as we'd like, although I think we do ok (we have quite a few "IT consultants" among our customers).
Our base appliance (essentially what all others are based on) is called Core. Despite our aim to be minimalist, Core (and therefore the whole library) does include a few pieces of software aimed at trying to be more user friendly. E.g. we package and pre-install Webmin as well as a few other bits and pieces (e.g. Postfix for sending emails)...
Re Ubuntu vs Debian, unless you are in the market for paid support (and even then...) when it comes to servers, personally I think using Debian is a no brainer. My personal experiences have been that Debian is more stable and secure (Ubuntu only guarantee timely security updates to their "main" repo; Debian provides security updates for almost all of the packages available).
Re Desktops, I personally run Debian there too, but I understand those that don't. Debian doesn't release as often, so the software can get more dated than Ubuntu. Although on the upside, I have generally found Debian generally much less buggy than Ubuntu. They call it "stable" for a reason! :)
Debian online documentation is not quite as polished or complete as Ubuntu (although it is getting better IMO) and mostly Ubuntu docs apply to Debian anyway (except if they are recommending installing via a 3rd party repo).
Ubuntu too is based on Debian, but unlike us, they start with a snapshot of Debian "unstable" (we base directly on Debian "stable" - I assume this is part of the reason that Debian tends to be less buggy than Ubuntu). Also unlike us, Ubuntu import the source code and recompile everything themselves, plus add a few programs themselves and maintain their own set of kernel patches (we install most of the OS direct from Debian repos). As such Ubuntu cannot be considered binary compatible with Debian. So generally you shouldn't ever install Ubuntu packages on Debian (or TurnKey) - they will often work fine, but at some point you will encounter packages that will break things and you will have a very bad day...!
It's also worth noting that whilst TurnKey can install to bare metal, you'll find that it doesn't support anywhere near as much hardware OOTB (especially compared to Ubuntu) and it's more often used as a VM.
FWIW my knowledge tends to be broad rather than deep, but I do know TurnKey intimately even if my knowledge of the usage of some of the software is pretty basic. I also have at least basic knowledge of many things Linux and my google-fu is pretty darn good! :)
Anyway, I've done it again (written way more than I intended)... If you have any specific questions (or even ideas/feedback) please feel free to post back. If you do end up trying TurnKey, I'd certainly love to get your feedback.
A pleasure indeed.
Truly, willingly helpful folks are priceless treasure IMO.
Too much of the larger Linux community can be characterized as snarky & unhelpful with much actual flaming & trolling...too much of it aimed at baffled users needing help with matters which seem too newbie-like to the eyes of the snarky ones.
What I have gratefully received here is just wonderful and very appreciated as such.
Back to my original & baffling situation, sadly=>
Permissions ARE what I need to get working correctly, and they are presently doing something that makes ZERO sense from my POV - they are just shifting (automagically ?!?) from this:
To this mess:
And then giving me HUGE headaches when I -try- to fix this mess !!
Ultimately if there might be an alternate FS or even a tool to recursively FORCE all perms to 777 - well, that would be a beautiful thing for me right now...sigh.
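(For the record, what I'm picturing is something like the following two recursive commands run on the server, with my real share path in place of the example one:

    sudo chown -R nobody:nogroup /srv/docs
    sudo chmod -R 777 /srv/docs

...if that is indeed all it takes.)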
Back to Debian now:
Actually when I was selecting which distro for my office and to help non-techie friends to get into, I tried MANY distros and yes, very early on Debian got a fair shake, BUT:
The goal was ye olde classic desktop style more or less like the lovely plain visual aspect of ye olde win2kpro, and in trying to get that along with the desired features I broke it - badly - like 3 times, at least.
Aiming for that same goal under Ubuntu was basically a no-brainer via Gnome2, but then a terrifying mess with that horrid 'unity' abortion and just as bad with Gnome3, but Ubuntu Mate was right on point like a truly refreshing cool breeze - so it was chosen and continues to please EXCEPT for the crummy LO and this file perms SNAFU, which is still giving me headaches right up to this very moment.
I have begun examining SSHFS to good effect, and will now also seek info (again...) which may assist me in FORCING perms for totally insecure sharing.
Is this just a vestigial ouchy of SMB acting up...? No idea, but it needs to be made into merely historical data - last week !!!
Thanks Again and Huge Kudos Jeremy Davis - IMO you are a very real treasure.
Sorry for slow response on this...
TBH, I have very little experience with Samba and zero using the GUI (my past Samba setups have been with manually editing the smb.conf file and/or using Webmin, the web based UI that is packaged in TurnKey).
But since I don't have any Windows PCs on my network anymore, I haven't played with it for years.
Re Desktop, I use Debian and run Gnome3. I didn't like it initially (I was a big fan of Gnome2 back in the day) but stuck with it (mostly because I'm lazy and couldn't be bothered spending time trying to reconfigure it). I used Mate relatively recently and wondered why the hell I ever preferred Gnome2 over Gnome3! After getting used to Gnome3, navigating through all those menus was such a drag... Everything is at my fingertips with Gnome3 (whether I'm using the mouse or the keyboard). I mean obviously I could have set up keyboard shortcuts for Mate too if I wished, but Gnome3 "just works" and works so well... My 2c anyway...
YEOW !!! SSHFS adventure.
Back again after spending almost my entire weekend doing computer fix-it stuff...
Part of that time was used in setting up SSHFS as suggested here - and it -IS- way better than Samba - but can also be much worse & I'll try to 'splain a little.
Getting it up & ready on the LAN's 'server' was indeed quite trivial.
Getting connected to it manually & testing it for ease of use at the 'client' ends also went very smoothly & was quite impressive, BUT:
There is the same lack of succinct & complete info (as Samba...) when it comes to automating it to reconnect at boot time, and I learned the hard way that this can result in the dreaded & inescapable emergency mode.
How did I get so misled ??
Out of the over a dozen sites I compiled connection info from - not even ONE mentioned that to add the mount points into fstab - you must FIRST have set up the proper key pairs so that it can authenticate - OR ELSE !!!!
Thank goodness for the Ventoy USB tools stick I have & Parted Magic which allowed me to comment out the offending lines & remove the dirty bit markings & to have GParted make sure the partitions were indeed error free.
Then silly old me tried making a bash script to do the mounting...nope, that didn't work either, but at least it also didn't harm anything.
So - for the moment the needed connections are right, tight & do indeed function just like local file sharing & will just require manual reconnections after each reboot.
In the meantime I will try to work out the EXACT & needed details for correctly installing the key pairs - whilst I still really wish there was some other way to make it work like just from a single script & desktop icon to make it even simpler.
Thanks Again !!
Apologies if I wasn't completely clear
And also I must admit that I'm surprised that none of the tutorials you used mentioned 'nofail'. IMO it's also good practice to double check an updated fstab by manually unmounting the updated/new filesystem(s) and then manually running 'mount -a'. Any interaction required will be a deal breaker (as you're now well aware...).
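E.g. a 'nofail' SSHFS entry (the user, host and paths here are made up) and the manual re-test might look something like:

    # /etc/fstab
    officeuser@xxx.xxx.xxx.xxx:/srv/docs  /mnt/docs  fuse.sshfs  nofail,_netdev,IdentityFile=/home/you/.ssh/id_rsa  0  0

    # then, without rebooting:
    sudo umount /mnt/docs
    sudo mount -a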
Also, with it being a network dependent filesystem, I do vaguely recall that there is an option for explicitly requiring it to wait for that in the fstab. But it's not in the fstab man page and google isn't helping ATM...
Although, these days, in the case of a network dependent filesystem, I'd almost be inclined to not bother with the fstab and do the mount via a systemd '.mount' file. Leveraging systemd to do the mount would allow you to make it 'After=network-online.target' to ensure that it doesn't even try to mount until the networking is set up.
So long as you have the keys set up (which I thought I did mention?!) then one easy way to do the mount that would be reliable would be to run it at login.
To set up the keys (logged in as the user who will have the auto login and assuming that you have already generated a key for the user):
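    # 'officeuser' is a placeholder for whatever user you actually connect as
    ssh-copy-id officeuser@xxx.xxx.xxx.xxx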
(Where xxx.xxx.xxx.xxx is the FQDN or IP of the server).
Then (assuming that Ubuntu sets it up the same as TurnKey?!) an executable bash script that mounts the remote share could be dropped into ~/.profile.d/. Then the filesystem should be auto mounted on every log in. If you want it to happen for all users, then you could put it in /etc/profile.d/ instead. However, you'll need to either use a shared key (and explicitly use that key in the sshfs command) that has already been added to the server, or ensure that all users have their keys copied to the server ahead of time. If you forget then you'll hit that issue again.
As a failsafe it's probably also worth adding a time out, so worst case scenario it takes ages to log in (and the remote filesystem isn't mounted), rather than it becoming a new drama just because the server isn't running for some reason (or someone pulled a network cable, or whatever).
Thanks Again !!
Your help is very appreciated Jeremy Davis.
I did quite a lot of digging before trying out SSHFS, and after cross checking compiled my own info from the consistencies I'd found.
Please forgive me for being so literal in my doings, but:
In using an OS wherein 'home' is totally different from 'HOME' I am very careful to look for consistent & matching info that is EXACT, rather than any that may look and/or turn out to be even a wee bit vague, so now with regard to the needed key pairs...
I tend to think the directions would say things like:
Run the keygen HERE - then copy the key as THIS - HERE & HERE or whatever, but all the info I've found is a bit too vague to just jump into with full trust of my own comprehension.
I did try fstab with a bunch of variations & found that none helped & it finally became clear that it was barfing due to the LACK of keys causing a security type error which brought the panic into play.
An interesting reference (IMO) is here:
https://unix.stackexchange.com/questions/347013/etc-fstab-meaning-of-nof...
The wait thing is among those I tried, and it still barfed with that as well...
Your suggestion of a bash script along with 'after=network-online.target' sounds promising & I'll look more into how exactly to do that as I haven't ever done it before.
At this time - and until I have very explicit info compiled for the making & placement of the key pairs and similarly very clear info about using the script for it - the systems are all getting along just fine.
Since I can manually reinstate the shares by running just 4 commands, it is liveable & went an entire day with zero locked file baloney, so I'm pretty happy thus far.
Please note that my success at this was 100% possible due to your generous help here, for which I am very grateful.
'nofail' is the option that should stop it erroring with no keys
As per the fstab man page - see under "The fourth field (fs_mntops)" (and comments in that stackexchange Q&A you linked to) the option that should stop it from failing on boot is 'nofail'.
Having said that, I would have expected 'noauto' to also allow it to boot ok (even if not available). Although I also note that one (downvoted) answer on SE says that 'noauto' will still cause boot errors?!
Anyway, it sounds like you've progressed past that point(?) so moving on...
To clarify, I gave 2 different "auto mount" suggestions:
Option 1a - Use a systemd '.mount' file
My first thought was to use a systemd mount file (see also upstream docs).
FYI systemd is the init system which manages hardware and software initialisation, including "services". AFAIK your version of Ubuntu should be using it too (prior to implementing systemd, Ubuntu had their own init system called upstart; but these days pretty much all Linux OS use systemd by default).
Ensure that 'After=network-online.target' is in the '[Unit]' section. You'll possibly see in those docs I linked above that network filesystems implicitly inherit this dependency. However technically SSHFS is NOT a network filesystem, it's a FUSE filesystem - like a USB stick, except instead of using a hardware device, it leverages the network.
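A rough sketch of such a unit (the user, IP and paths below are all hypothetical; note that the file name must match the mount point, so a mount at /mnt/docs needs a unit called 'mnt-docs.mount'):

    # /etc/systemd/system/mnt-docs.mount
    [Unit]
    Description=SSHFS mount of the office document share
    After=network-online.target
    Wants=network-online.target

    [Mount]
    What=officeuser@192.168.1.10:/srv/docs
    Where=/mnt/docs
    Type=fuse.sshfs
    Options=_netdev,allow_other,IdentityFile=/home/officeuser/.ssh/id_rsa

    [Install]
    WantedBy=multi-user.target

Then a 'systemctl daemon-reload' followed by 'systemctl enable --now mnt-docs.mount' should bring it up and have it mount at every boot.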
Option 1b - New idea... use systemd options in /etc/fstab
Behind the scenes, the /etc/fstab file is read by systemd. Systemd then auto generates a '.mount' unit for each of the fstab mounts noted there (named after the mount point, with '/' converted to '-', e.g. a mount at /mnt/docs becomes 'mnt-docs.mount'). FWIW, you can check the fstab mount unit status like this:
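    # e.g. for something mounted at /mnt/docs (hypothetical path):
    systemctl status mnt-docs.mount
    # or list all of the mount units:
    systemctl list-units --type=mount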
So another alternative (that essentially creates the same scenario as my "Option 1a") is to use the /etc/fstab file, but use the 'x-systemd.XXX=YYYY' options. For more info on the options available see the relevant Debian man page. That's probably easier (without having to learn all about systemd) and under the hood will have the same effect.
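As a hedged example (again with made-up user/host/paths), an fstab line leveraging those options might look like:

    officeuser@192.168.1.10:/srv/docs  /mnt/docs  fuse.sshfs  _netdev,nofail,x-systemd.automount,x-systemd.requires=network-online.target,IdentityFile=/home/officeuser/.ssh/id_rsa  0  0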
Or...
Option 2 - Use a profile.d bash script
The other option was to create a(n executable) bash script; in ~/.profile.d/ or /etc/profile.d
The bash script would just run whatever commands you run now to do the SSHFS mount. You might also want to include some error checking and some retry logic, but that's up to you...
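A minimal sketch of such a script (assuming the keys are already in place; the file name, user, IP and paths are all made up):

    #!/bin/bash
    # ~/.profile.d/mount-docs.sh - mount the office share at login if it isn't already mounted
    if ! mountpoint -q "$HOME/docs"; then
        sshfs -o reconnect,ConnectTimeout=10 officeuser@192.168.1.10:/srv/docs "$HOME/docs"
    fi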
I hope that makes sense?!
PS thanks for your kind words... :)
Just very quickly...
I'm right in the middle of this, but it looks like a really simple quickie fix...
Postulating thusly=>
- Since I can easily restore the shares via 4 terminal commands executed singly;
- Taking a page from that 'other' OS - why not just record/replay a terminal session ??
- Looks like TermRecord can do this quite easily, and if so...
Problem solved very simply ??
More later, after I test this theory...
Too silly - my bad.
That is literally an app to make a screencast type of thing to be shown via browser rather than a macro recording type of thing - too funny !!
Sorry...ignore my prior entry, please !!
Just make a script...! :)
As per my 'option 2' above, my inclination would be to make a script and drop it straight into one of the profile.d directories (either system wide, or individual depending on your use case and/or preference).
If that doesn't work as expected, then fix that so that it does (I'm happy to point out how if that's the case).
Alternatively (perhaps as an intermediate step?), you could just create a standalone bash script (not in profile.d) to run your 4 commands. Essentially creating a new single command that will run your 4 commands. You could then setup the automation (i.e. profile.d) as a separate step (and instead of the whole script in that file, point it to this script you've made).
You could no doubt do this via some GUI app, but IMO it's easier to use the terminal. Assuming you're running as a sudo user and you're happy to use nano (personally I'm a vim user, but nano is easy to use for the uninitiated):
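    sudo nano /usr/local/bin/mount-sshfs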
Note that as per the FHS (Filesystem Hierarchy Standard), /usr/local/bin is a good place to put executable local system scripts generated and maintained by you. The location should already be in your PATH (i.e. so once we're done, this script should just work when you type 'mount-sshfs'). If it's not, that's easy to fix too, but let's cross that bridge if we get to it...
The first line needs to be a shebang (the first line so the system knows it's a bash script) and then just add your 4 commands on separate lines. E.g. so it looks something like this (note the '-e' is good bash script practice so that the script will exit if one of the commands fails):
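    #!/bin/bash -e
    # replace these with your own 4 sshfs commands (user, IP and paths here are placeholders)
    sshfs officeuser@192.168.1.10:/srv/docs1 ~/docs1
    sshfs officeuser@192.168.1.10:/srv/docs2 ~/docs2
    sshfs officeuser@192.168.1.10:/srv/docs3 ~/docs3
    sshfs officeuser@192.168.1.10:/srv/docs4 ~/docs4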
Save and exit, then make it executable:
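    sudo chmod +x /usr/local/bin/mount-sshfs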
Now to test, try running it:
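    mount-sshfs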
Hopefully it "just works"...
If you haven't already set up keys, I would strongly recommend that, as then there will be no need for it to be interactive (which will save headaches later if/when you automate it).
No keys yet...my tangent looks like...
The very simplest solution requiring no more research seems to be via just using script and scriptreplay, which can be stuck into a launcher for later re-use.
I just tested this idea & made then replayed a terminal session in under 2 minutes.
If this turns out to be another dead end somehow, I'll next push into the key pair realm followed by what you've kindly suggested.
I do hope this will suffice though as it is utterly simple !!
Thanks Again...most likely I'll be baaa-aaack here with more, later...
Ok...
IMO keys and a simple profile script is the easy answer and would take less than 2 minutes...
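To get a key onto the server (the user name and address below are placeholders):

    # generate a key if you don't already have one, then copy it to the server
    ssh-keygen
    ssh-copy-id officeuser@xxx.xxx.xxx.xxx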
Follow the prompts, including logging in via password. Then double check the passwordless connection via SSH:
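    # same placeholder user/address as above
    ssh officeuser@xxx.xxx.xxx.xxx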
(Then exit assuming that it works).
Then mount the remote filesystem (with no password required) like this:
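    # local mount point and remote path are placeholders
    sshfs officeuser@xxx.xxx.xxx.xxx:/srv/docs ~/docs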
But it's your system you do what works for you... ;)
Thanks Again Jeremy !!
There are a couple of factors in play which I must keep in mind in choosing PC solutions:
1 - The number of times in which I must be far away from the office;
2 - The helper I have who is wonderful, and also totally bereft of any tech skills.
Thus, if anything bad happens when I am away, it is amplified - so having a simple & bulletproof thing that works by just running a shortcut from a desktop icon fits right into the K.I.S.S. philosophy that seems to work best here.
I have tested using scriptreplay that way already and it really is simple enough for me to just tell her=> 'OK, now use that icon & all will be back to normal !!'
I do appreciate your wonderful assistances here and will indeed plunge forward & do more with the info you have so generously gifted me with, as time passes.
Right now what is in place works & I do have some other pressing matters to get into - like pre-winter prep chores & car concerns which simply will not wait !!
Thanks and Best Wishes.
FYI, it is possible to do
FYI, it is possible to do what you wanted with Samba, I've done it in the distant past. You can enable guests, and keep the permissions in check by using the "force user" and "force group" options.
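As a hedged sketch (the share name and path are made up), the relevant smb.conf section might look something like:

    [documents]
        path = /srv/docs
        guest ok = yes
        read only = no
        force user = nobody
        force group = nogroup
        create mask = 0666
        directory mask = 0777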