Jeremiah's picture

I know TKLBAM is best used with S3 storage, but for now I'm using an scp:// address appended to the tklbam-backup command to target an in-house backup server.  The only thing is, I don't think the other commands, such as tklbam-list and tklbam-status, work with this.  Is there a way for me to get a list of my backups?

Liraz Siri's picture

tklbam-list and tklbam-status query the Hub API for your backup records. When you use a non-S3 storage address you are putting TKLBAM into manual mode. No backup record is created in the Hub, so there is nothing for tklbam-list or tklbam-status to show. If you think about it this makes sense, as only S3 is guaranteed to be globally accessible.

For example, you wouldn't expect to be able to restore a backup that was saved to file:///root/backup on one machine from another.
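
That said, since duplicity is doing the actual work, you can ask it directly what backup sets exist at a given address. Something like this should do it (an untested sketch - the paths mirror what tklbam-backup itself invokes, and PASSPHRASE is only needed because the archive is encrypted):

# list the backup chains duplicity sees at a manual-mode address
PASSPHRASE=$(cat /var/lib/tklbam/secret) /usr/lib/tklbam/deps/bin/duplicity collection-status --archive-dir=/var/cache/duplicity scp://root@backup.server.ip/backup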

Jeremiah's picture

Right, but I would expect to be able to restore to a different machine a backup that was saved to scp://backup.server.ip/backup.

I understand the limitations though given that I'm not using S3.  I just thought there might be a way to add the --address to tklbam-list.  I feel like I'm reaching in the dark when I do a restore.  I can't see all the backups so I just grab the latest.  However, if I understand the syntax correctly it seems I can specify a time period when I perform a tklbam-restore.  Is that correct?  That would certainly be helpful even though I would prefer to explicitly select a backup label or something.
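
In other words, I'm hoping something like this works - I'm guessing at the exact shape here (whether --time and --address can be combined like this is an assumption on my part; tklbam-restore --help would confirm):

tklbam-restore --address=scp://root@backup.server.ip/backup --time=2013-05-01
# or e.g. --time=3D for "three days ago", if duplicity-style intervals are passed through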

Ryan's picture


That is a completely wrong statement: "For example, you wouldn't expect to be able to restore a backup that was saved to file:///root/backup on one machine from another."
 
Simply put --- it's completely reasonable to expect to be able to use a local file system, or a remote file system on a local network, to do something as simple as backups and restores.
 
Duplicity at its basic level, as far as I can see, is simply manifests, files (which make up volumes), signature files, and algorithms used to send only bit-level changes of files.  It shouldn't matter where the source and destination are, as long as they are accessible.  And it should all be "intelligent" enough to read and write from very standard systems (file, ssh, etc.).
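
For what it's worth, a duplicity target directory typically contains nothing more exotic than files like these (names below are illustrative of the pattern, not real output):

duplicity-full.20130501T120000Z.manifest.gpg
duplicity-full.20130501T120000Z.vol1.difftar.gpg
duplicity-full-signatures.20130501T120000Z.sigtar.gpg
duplicity-inc.20130501T120000Z.to.20130601T120000Z.vol1.difftar.gpg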
 
It's as simple as that.  At the end of the day duplicity, and TKL by extension, uses files and file I/O for all this, and file I/O is pretty fundamental in computing and systems.  If there is ANY issue with doing something as rudimentary as that, then it's broken.
Jeremy Davis's picture

Actually I don't completely disagree with you, but it made a nice heading! :)

I get what you are saying and in essence I agree that it would be nice. But I also see Liraz's point - where are you going to centrally store the list/status data? And where/how else can you store it in such a way that it will work wherever you are? As such I certainly couldn't agree with you when you say that "it's broken" - it just doesn't currently support your usage scenario in the way that you'd like!

As with any other open source software you are always free to tinker with the code (or pay someone else to) and adjust it to suit your desires. The source is hosted on GitHub (although perhaps that's a little out of date, I don't know...), or the deb can be found (and downloaded and pulled apart) in the TKL repo: http://archive.turnkeylinux.org/debian/pool/wheezy/main/t/tklbam/ (Obviously that's the latest release, for the Wheezy based v13RC.)
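
If you want to poke around in it, something like this should do (the filename is a placeholder - use whatever version the directory listing above actually shows):

wget http://archive.turnkeylinux.org/debian/pool/wheezy/main/t/tklbam/tklbam_VERSION_all.deb
dpkg-deb -x tklbam_VERSION_all.deb tklbam-unpacked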

Jeremiah's picture

Yes, you can back up without S3.  In fact, you can pretty much back up to any target that duplicity can, since duplicity is the underlying open source backup utility used by TKLBAM.  But you still have to go to hub.turnkeylinux.org and create an account to use TKLBAM.  Then follow the links for TurnKey Backup and Migration, which will ask you to create an Amazon account with a credit card.  You won't get charged for anything unless you actually use Amazon services like EC2 or S3, so don't worry about the credit card part.

This all has to be done because TKLBAM needs to be initialized with the API key that you get after creating the Amazon account, even if you don't use S3.  You can find your API key by browsing to https://hub.turnkeylinux.org/profile/ .  TKLBAM also gets an up-to-date TKLBAM profile for your TurnKey appliance from the Hub.

The first command to run on your TurnKey appliance will be 'tklbam-init [API KEY]'.

Then you can follow the instructions for local backup.
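
So the whole dance looks roughly like this (APIKEY is whatever your Hub profile shows; the file:// path is just an example):

tklbam-init APIKEY
tklbam-backup --address=file:///mnt/backups/myserver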

Jeremiah's picture

Your scp target is correct except for the // before mnt.  It needs to be a single /

I tried Don's suggestion to replace that // with :/ and tklbam-backup failed with an error, so don't use that.  The : should only be put after the IP address if you are specifying a different port to connect to.
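
In other words (IP and path are just examples):

tklbam-backup --address=scp://root@192.168.1.10/mnt/backups        # single slash: works
tklbam-backup --address=scp://root@192.168.1.10:2222/mnt/backups   # colon only for a non-standard port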

Don Sanderson's picture

Thanks for sorting that for him, I've been using the colon for years with scp, straight from the man page.

Wonder if it's a Duplicity thing?

Jeremiah's picture

I agree. I thought I had used a : before too.  It turns out duplicity has its own scp URL format, where the : is only used when specifying a port.
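
As I read the man page, the general form is (square brackets meaning optional):

scp://user[:password]@host[:port]/some_dir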

Don Sanderson's picture

That would be a good link to post in an easily found spot on this website.

Maybe here somewhere: http://www.turnkeylinux.org/docs/tklbam/faq/usage

Jeremiah's picture

I agree, but even though the Documentation homepage says "The following documentation is a wiki maintained by the TurnKey community. You need to be logged in to edit.", I don't actually see any "Edit" button, so I can't add that link.  Feel free to add it if you can.

Don Sanderson's picture

If you don't have the "Keys to the Kingdom" I certainly don't.

Perhaps Liraz or Alon could add it if they feel it's appropriate.

Jeremy Davis's picture


I'm not sure whether that was intentional or not. There had been some discussion about making the onsite docs (http://www.turnkeylinux.org/docs) the "official docs", and thus only editable by a select few (I'd assume currently it's only devs), while the "dev wiki" (http://wiki.turnkeylinux.org/) was to be for 'draft' & community provided docs, hints and tips, which would from time to time be added to the official docs.

But I'm not sure whether that was what happened (in which case that doc page about it being a community wiki needs to go) or whether it was an unintended consequence of something Liraz did when he was tidying up the site.

Don Sanderson's picture

Corrected, thanks to Jeremy.

While I've not used tklbam-backup with scp, the basic remote location format for scp is:

your_username@remotehost/some/remote/directory

You have a double slash (//) in place of the single slash (/); this may be confusing things.

Ryan's picture


Hello,
 
 
The dilemma with TKLBAM being so closely tied to/dependent on AWS/S3 is bandwidth limitations (at least here in the US).  TKLBAM to the Hub is AWESOME, and a great solution for lower volumes of data (i.e. web pages, wikis, things in the hundreds of MBs).  But in business environments, for example with the File Server appliance, it's pretty common for my customers to have hundreds of GBs of data, and there is just no way to overcome the speed limitation of trying to dump hundreds of GBs of full backup data into the Hub over a pretty standard broadband connection.  And to make matters worse, best practice is to repeat full backups once a month.  It impacts business and production when a broadband circuit is being saturated with backup data during the workday.  Last, there's no way to seed the Hub with initial data.
 
 
This is the dilemma that I have yet to find a way to overcome.  What I would like to do is store TKLBAM backups on local on-site storage, which then also gets written to removable storage for off-site protection.  (Unless someone has already solved the days, and sometimes weeks, it would take to do a full backup of, say, 100-200GB of data to the Hub.)
 
 
I'm using a File Server appliance.  I've taken these steps:
 
 
1. I've initialized the appliance w/ TKLHUB. [SUCCESS]
 
2. I've run a simulation of a TKLBAM backup to a local file:// target. [SUCCESS]
 
3. I've tried to run a simulation of TKLBAM to storage on another server on the same LAN using these commands: [FAIL]
 
a. tklbam-backup --address=scp://root@pve2:/var/lib/vz/backups/tklbam/ -s --disable-resume [FAIL]
 
b. tklbam-backup --address scp://root@pve2:/var/lib/vz/backups/tklbam/ -s --disable-resume [FAIL]
 
c. tklbam-backup --address=scp://root@pve2/var/lib/vz/backups/tklbam/ -s --disable-resume [FAIL]
 
d. tklbam-backup --address=scp://root@pve2//var/lib/vz/backups/tklbam/ -s --disable-resume [FAIL]
 
I cannot succeed with SCP and get:
 
------------------------------------------------------
 
CREATING /TKLBAM
FULL UNCOMPRESSED FOOTPRINT: 30.31 GB in 199 files
 
# duplicity --verbosity=5 --archive-dir=/var/cache/duplicity cleanup --force scp://root@pve2/var/lib/vz/backups/tklbam/
 
# PASSPHRASE=$(cat /var/lib/tklbam/secret) duplicity --verbosity=5 --archive-dir=/var/cache/duplicity --volsize=25 --full-if-older-than=1M --include=/TKLBAM --gpg-options=--cipher-algo=aes --include-filelist=/TKLBAM/fsdelta-olist --exclude=** --archive-dir=/var/cache/duplicity --s3-unencrypted-connection --allow-source-mismatch --dry-run / scp://root@pve2/var/lib/vz/backups/tklbam/
User error detail: Traceback (most recent call last):
  File "/usr/lib/tklbam/deps/bin/duplicity", line 1405, in <module>
    with_tempdir(main)
  File "/usr/lib/tklbam/deps/bin/duplicity", line 1398, in with_tempdir
    fn()
  File "/usr/lib/tklbam/deps/bin/duplicity", line 1249, in main
    action = commandline.ProcessCommandLine(sys.argv[1:])
  File "/usr/lib/tklbam/deps/lib/python2.6/dist-packages/duplicity/commandline.py", line 1007, in ProcessCommandLine
    backup, local_pathname = set_backend(args[0], args[1])
  File "/usr/lib/tklbam/deps/lib/python2.6/dist-packages/duplicity/commandline.py", line 900, in set_backend
    globals.backend = backend.get_backend(bend)
  File "/usr/lib/tklbam/deps/lib/python2.6/dist-packages/duplicity/backend.py", line 156, in get_backend
    raise UnsupportedBackendScheme(url_string)
UnsupportedBackendScheme: scheme not supported in url: scp://root@pve2/var/lib/vz/backups/tklbam/
 
UnsupportedBackendScheme: scheme not supported in url: scp://root@pve2/var/lib/vz/backups/tklbam/
Traceback (most recent call last):
  File "/usr/bin/tklbam-backup", line 366, in <module>
    main()
  File "/usr/bin/tklbam-backup", line 321, in main
    b.run(opt_debug)
  File "/usr/lib/tklbam/backup.py", line 237, in run
    backup_command.run(passphrase, self.credentials, debug=debug)
  File "/usr/lib/tklbam/duplicity.py", line 78, in run
    raise Error("non-zero exitcode (%d) from backup command: %s" % (exitcode, str(self)))
duplicity.Error: non-zero exitcode (23) from backup command: duplicity --verbosity=5 --archive-dir=/var/cache/duplicity --volsize=25 --full-if-older-than=1M --include=/TKLBAM --gpg-options=--cipher-algo=aes --include-filelist=/TKLBAM/fsdelta-olist --exclude=** --archive-dir=/var/cache/duplicity --s3-unencrypted-connection --allow-source-mismatch --dry-run / scp://root@pve2/var/lib/vz/backups/tklbam/
 
------------------------------------------------------
I've made sure the remote host RSA key is added.  I have no name resolution issues.  I have no permission issues.
 
 
As mentioned, my constructive feedback is that being so strictly tied to the Hub is very limiting, because there is just no way to overcome the speed issues I described above.  It's not acceptable in business and systems management for a backup to run for days or week(s) -- that creates a huge window of vulnerability, and it creates risk and uncertainty in how to recover that data if you have a failure (i.e. if it took that long to send up, it'll take similarly long to bring back down, meaning a customer has to wait days/week(s) for a recovery, which is not acceptable).
 
*TKLBAM really would benefit from having the same automation and simplicity when pointing at local storage or remote on-site storage as it has with S3 storage.*
 
Until that day comes --- can anyone please, please help me learn how to do TKLBAM backups to another physical server on the same network?  I just can't make it work for some reason.
 
I would greatly appreciate help in making this work:
tklbam-backup --address=scp://root@somelocalhost/var/lib/whateverdir
Eric (tssgery)'s picture

I might be able to take a look at this in a day or two. 

I'd REALLY like this ability myself. I've got 20 terabytes of NAS storage sitting about 2 feet from my VMware ESXi instances... I see no reason to back up data to Amazon.

Ryan's picture

Thanks Eric, I am glad to know someone else shares my logic.

In an attempt to overcome the issue with 'tklbam-backup --address=...', I figured I would try to mount an NFS share on the TKL appliance that I wish to back up, hoping that TKLBAM would see it as a file system and I could use 'tklbam-backup --address=file://...' (since I know file backups work).

So I installed nfs-kernel-server and got NFS set up on my server, and nfs-common is installed on the file server appliance.

Denied!  

I can't get a remote volume to mount on the file server appliance using NFS either; after wasting a considerable amount of MORE time, I can't get past the client-side error: mount.nfs: No such device
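
From what I've dug up since, "mount.nfs: No such device" usually means the client kernel can't load the nfs module (apparently common under some virtualization setups).  Checking on the client with:

modprobe nfs
lsmod | grep nfs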

!! RANT: Does this really have to be this hard???!!!  How does anyone get anything done in the world when so much time is wasted just trying to make simple, simple things work!!?

!! UPDATE: Trying to change firewall settings, but I get errors on the TKL appliance in Webmin.

!! UPDATE: Trying to install the NFS server on the TKL file appliance and I get the following:

root@svr-tkl-audit ~# apt-get install nfs-kernel-server

Reading package lists... Done
Building dependency tree
Reading state information... Done
nfs-kernel-server is already the newest version.
The following packages were automatically installed and are no longer required:
  turnkey-sslcerts
Use 'apt-get autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 48 not upgraded.
1 not fully installed or removed.
After this operation, 0B of additional disk space will be used.
Setting up nfs-kernel-server (1:1.2.0-4ubuntu4.2) ...
 * Exporting directories for NFS kernel daemon...                                                                                                                                                        [ OK ]
 * Starting NFS kernel daemon                                                                                                                                                                            [fail]
invoke-rc.d: initscript nfs-kernel-server, action "start" failed.
dpkg: error processing nfs-kernel-server (--configure):
 subprocess installed post-installation script returned error exit status 1
E: Sub-process /usr/bin/dpkg returned an error code (1)
 
AWESOME!!!
Eric (tssgery)'s picture

I set up my NFS server to allow all hosts on my local network to mount exports, and then on the TKL client:

# install the nfs client package

sudo apt-get install nfs-client

# make the mount point (in my case /mnt/ds212/vm)

sudo mkdir -p /mnt/ds212/vm

# add the share to /etc/fstab

echo "ds212:/volume1/vm       /mnt/ds212/vm nfs rsize=8192,wsize=8192,noexec,nosuid" >> /etc/fstab

 

IMHO, the hard part is setting up the authorization on the NFS server
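
For reference, my export looks roughly like this (subnet and path are examples - my NAS actually manages this through its UI):

# /etc/exports on the NFS server
/volume1/vm    192.168.1.0/24(rw,sync,no_subtree_check)

# re-read the export table after editing
exportfs -ra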

Ryan's picture


Server:
 
#apt-get install nfs-kernel-server (and portmap, or whatever it's called)
 
#mkdir /var/nfs
 
#nano /etc/exports
 
##reboot or restart nfs server service
 
 
 
Client:
 
#apt-get install nfs-common
 
#mkdir /mnt/nfs
 
#mount host:/var/nfs /mnt/nfs
 
 
It’s not telling me there is an authentication problem – it’s telling me “mount.nfs: No such device”.  If that somehow translates to “access denied” then the system itself is broken, PLUS the response it gives to users is broken, because it should tell them what the issue is so that they can fix it.
 
I also checked to make sure all the services were running, that they're listening on the right ports, that the server knows about nfs by looking in the filesystems file, etc. etc.
 
I can mount the NFS share on the server itself via localhost (127.0.0.1) and it works fine.  I have *(rw,sync) set in my exports file, tried all variations of all this nonsense from all the different docs spread out across the world, tried spinning up new TKL appliances to install the server there, blah blah etc. etc.
 
THIS IS INSANE!!!  What a completely broken system, and all as a result of simply wanting to use the highly touted TKLBAM (which I use a lot for small little servers) over ssh, because a physicist has yet to help me figure out how to overcome the laws of physics and pump hundreds of gigs of data into paid Amazon S3.  I'd gladly pay for it if it worked!!!!!!!!!  This is a broken system; it should never, ever be so hard that I waste so much time trying to make something work that I end up spending more in time (my dollar amount) than I would have if I'd just bought freaking Windows licenses.  Which is exactly where I’m probably going to have to go back to.  Here’s a tip – if you want adoption of these technologies, and you believe (like I USED TO!!!!!!!!!!) that they are viable, reliable and stable solutions in today’s world – MAKE THEM WORK!
 
Don’t FORCE me to have to use S3 (which again I would gladly pay for if it was practical, because I think it’s a great solution); make it work for real-world uses.  Why would you have a file server appliance with the capacity to store massive amounts of files, but have the backup work reliably only on S3, while not allowing it to work reliably on file or local ssh targets, and then not even be able to mount NFS to overcome that challenge… you must be kidding me!
 
What a joke!  In a recent blog post on TKL it was noted that you wanted constructive criticism; well, there it is!!!  It’s BROKEN!!  These are super simple things that can be done easily, or should be!!!  Since when, in 2013, does it take hours and hours and hours to fail at doing a simple damn backup????  How is that even possible in today’s world???
 
Everyone goes on and on about how Microsoft sucks and Windows blows, blah blah – and I’m usually one of them, but then I try to implement more stable solutions through FOSS and waste so much damn time it’s not even funny ----- and I’m not the only one!!!
 
Here’s a tip --- a file server appliance should be able to share, store and back up files.  Pretty common stuff in file serving.  In Windows, it takes 30 secs to get it all set up and done.  With TKL/FOSS, I’ve wasted my entire day just today on this one simple damn task!!!!!  Devs --- ask yourselves --- is that even remotely acceptable as a real-world solution??????????  And if not, and you believe in the project, focus on real-world solutions!!!!  TKLBAM was super useful --- UNTIL I DISCOVERED IT’S LOCKED IN TO S3!
 
You can say that is not the case, but read every “official” doc on here from the devs re: TKL and you will see exactly what I mean.  That is the source of this entire nightmare of a day I have had with this!!!!!
Eric (tssgery)'s picture

I would never say that TKL is perfect, nor would I say that it can do everything I want (that blog post you mentioned asking for criticism was a reaction to a post that I made about some frustrations I was having).

But, I would also never say it's a completely broken system. I can't back up my system with TKLBAM to any place but S3, but I can use good old fashioned rsync to back it up wherever I want to. I do use TKLBAM with S3 for 2 of my systems, but not the other 5. I have been using Linux for a LONG while, so doing these things is second nature; setting up my NFS clients takes about 5 minutes, just because I have done it many times. From my experience, backing up a Windows system is more problematic than TKL.
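
By "good old fashioned rsync" I mean nothing fancier than this (host and paths are examples):

rsync -a --delete --exclude=/proc --exclude=/sys --exclude=/dev / root@backup.host:/backups/myserver/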

Don't take my comments the wrong way, I feel your frustration. There are some things that I think TKL should do that it can't, which frustrate me as well, but overall... the flexibility it gives me is worth the few frustrations.

Jeremy Davis's picture

I'm with Eric (although sometimes I get a little defensive about TKL because I have invested so much in it being successful) - it's not perfect, and there are some things that it doesn't do that I wish it would. There are other things that it does, but that I think it could do better. But overall I find it generally works well for me.

I have spent a number of years administering a Windows network and my personal experience has been that doing things with Windows is often much more convoluted and painful (and it's harder to repeat your steps when you are successful). And finding help with issues beyond me can sometimes be damn near impossible. Google seems to be able to find plenty of other people having the same issues as me, but more often than not there seem to be few if any answers to the more complex problems I have had. On a few occasions starting again from scratch (ie a clean install) has been the only solution that I could get results from.

OTOH I have come across issues with Linux too, but 99% of the time I can find answers to my issues with Google.

Back to your problems at hand though...

As you may be aware, although TKLBAM is a custom TKL package, it is basically a front end for Duplicity and AFAIK TKLBAM utilises the stock standard Debian Squeeze (aka Debian 6 - the basis of v12.x TKL appliances) version of Duplicity - straight from the Debian repos. I could be wrong, but I suspect that the NFS issues you are having are more to do with Debian generally rather than Turnkey specifically. Google may be able to assist you and point you in the right direction if you keep in mind that TKL is basically Debian Squeeze under the hood.

Also, I have successfully used TKLBAM to back up to the local filesystem, and as Eric suggests, rsyncing between servers is relatively painless (in my experience anyway...). From my previous reading, Duplicity happily uses rsync directly (although I've never tested it).

Eric (tssgery)'s picture

I have a bit of experience with duplicity (which is the technology underneath TKLBAM) and I've successfully used scp/ssh targets quite a bit... so it should be doable. I can't look at it though until tomorrow or Sunday.
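
A couple of things worth trying in the meantime (untested guesses on my part): duplicity registers its URL schemes at runtime and skips backends whose dependencies won't import, so "scheme not supported" can just mean the ssh backend is missing a module. I'd try the ssh:// scheme with the same path, and check that the bundled duplicity can see pexpect:

tklbam-backup --address=ssh://root@pve2/var/lib/vz/backups/tklbam/ -s --disable-resume
PYTHONPATH=/usr/lib/tklbam/deps/lib/python2.6/dist-packages python -c "import pexpect"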

 

I've also been able to mount NFS shares on my TKL images; if you're getting authorization errors then the NFS server is likely not configured to allow access to that host.
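
Two quick checks usually narrow that down (first from the client, second on the server):

showmount -e your.nfs.server.ip    # list what the server exports, and to whom
exportfs -v                        # show the live export table and its options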
