• 4 Posts
  • 23 Comments
Joined 2 years ago
Cake day: June 2nd, 2023




  • Hmm, I’ll have to look into the external library. I must not have paid attention to that as an option or at least didn’t understand it if I did read about it.

    Syncthing would be great, but since my siblings are all out of state, I’d have to walk them through configuring it on their computers, and I’ll be honest, I still struggle with adding new peers and folders in that app. That’s a “me” problem, and I’m willing to admit it. But I can look into other file upload options out in the self-hosted world.









  • I mentioned rclone and its mount function as it’s an alternate method of accessing Seafile’s backend. So if Seafile clients and web interface are somehow inaccessible, you can use rclone mount to “reassemble” the chunked data and then recover or copy to another location as needed.

    The best way I can describe the phone example is that each Seafile client is a portal to the data on the Seafile server. I have it set up like this:

    • Documents - MBP (MacBook Pro)
    • Documents - Note10Plus (my phone)
    • Documents - Pop (primary desktop running Pop!_OS)

    From my phone I can pull any data in any of the 3 libraries without needing to sync the entire thing to each device, which is what Syncthing wants to do by default. I understand Syncthing has an ignore function, but from what I can tell you’d have to manually mark quite a few folders that way on each client so you don’t end up syncing all the data everywhere.
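    If anyone wants a picture of what that ignore approach looks like, here’s a rough sketch. It assumes one big shared Syncthing folder with per-device subfolders (the layout below is hypothetical, borrowing my library names); Syncthing’s patterns are matched top to bottom, so the ! line keeps one subfolder and the final * ignores everything else:

        # Hypothetical: run on the phone, in the shared Syncthing folder.
        # Patterns are first-match-wins: ! keeps a path, * ignores the rest.
        printf '%s\n' \
            '!/Documents - Note10Plus' \
            '*' > ~/Sync/.stignore

    You’d have to maintain a file like that on every device, which is the part that puts me off.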

    One scenario I tested last night was using rclone mount on the server, which “un-chunks” the data back into whole, flat files and mounts it in a temporary folder. I then used rclone to copy that to a Backblaze B2 bucket, so there are now fully assembled flat files sitting as a backup in B2 storage. My thought is to script that function, because damned if I can get database dumps to work properly when backing up pretty much any self-hosted product that uses them. Still learning though.
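    As a rough sketch of scripting that (the remote names, mount point, and bucket are placeholders, not my actual setup):

        #!/usr/bin/env bash
        # Mount the Seafile remote so the chunked libraries show up as flat
        # files, copy them to B2, then unmount. Sketch only.
        set -euo pipefail

        MOUNT_POINT=/tmp/seafile-flat
        mkdir -p "$MOUNT_POINT"

        rclone mount seafile-remote: "$MOUNT_POINT" --read-only --daemon
        rclone copy "$MOUNT_POINT" b2-remote:my-backup-bucket/seafile
        fusermount -u "$MOUNT_POINT"

    (You could probably skip the mount entirely and rclone copy straight from the Seafile remote to the B2 remote, but the mount is the part I had already tested.)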

    That is probably way more info than you needed to answer your question, sorry about that.


  • I mentioned it in another comment, but you can use rclone to mount the Seafile data structure, and at least in my testing it works really well. I’ll have to test with more data and of course remote data. If I ever get the Backblaze B2 backend working, then I could more easily test a use case where I didn’t have access to the server, like you’re talking about. I have had great success with rclone mount with Dropbox, but those are not chunked files. :)

    I do wonder if folks who are hesitant to use it because of the chunked files are also avoiding apps like Borg backup or Duplicacy, both of which also chunk the data. I believe in both cases you can still mount the archives to get whole files back out for retrieval (Borg has its own FUSE mount for that, for example).
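    For the Borg case, pulling whole files back out of the chunked repo looks something like this (repo path, archive name, and destination are made up):

        # Borg stores chunks on disk, but its FUSE mount hands files back
        # fully assembled.
        mkdir -p /mnt/borg-restore
        borg mount /path/to/borg-repo::my-archive /mnt/borg-restore
        cp -a "/mnt/borg-restore/Documents - Pop" ./restored-documents
        borg umount /mnt/borg-restore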








  • I get that hesitancy, but I see two ways of addressing it: Seafile has its own FUSE mount, and it also works with rclone’s mount function. The way I’ve actually been doing it, though, is pointing my iDrive account on my Windows desktop at the SeaDrive client. Since each client gets fully assembled files vs the git-like chunks that are server side, it backs up the flat files to my iDrive account without pulling every single file down to the Windows client. Note I’m not trying to convince you, just letting it be known there are options and they work. I did have a cronjob that was using rclone to mount and then back up the data from the server running Seafile to my Backblaze buckets, but I want to revisit that and look at something like Borg first. My hope is to take up less space on the B2 side of things.
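    What I have in mind for the Borg version is roughly the following (everything here is a placeholder sketch, not something I’m running yet): mount the Seafile remote, point borg create at it so only deduplicated chunks get stored, then sync the Borg repo up to B2.

        #!/usr/bin/env bash
        # Sketch only: remote names, repo path, and bucket are placeholders.
        # Assumes the repo was created once with: borg init --encryption=repokey $REPO
        set -euo pipefail

        MOUNT_POINT=/tmp/seafile-flat
        REPO=/srv/backups/seafile-borg

        mkdir -p "$MOUNT_POINT"
        rclone mount seafile-remote: "$MOUNT_POINT" --read-only --daemon

        # Deduplicated, compressed archive of the assembled files
        borg create --compression zstd "$REPO::seafile-{now:%Y-%m-%d}" "$MOUNT_POINT"
        borg prune --keep-daily 7 --keep-weekly 4 "$REPO"

        fusermount -u "$MOUNT_POINT"

        # Push the already-deduplicated Borg repo to B2
        rclone sync "$REPO" b2-remote:my-backup-bucket/borg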

    EDIT: I just had a look again because I started doubting myself that rclone mount worked for this purpose. I have a bit of a bad memory and apparently didn’t write this down. But yes, it does work. Rclone config is pointed at your Seafile domain (even on the same server, as is the case with mine), then it’s just rclone mount <remote-name>: /path/to/mount/location. I’ll have to double check once I get more than a few gigs in my Seafile libraries, but it works so nicely in this case. Kinda defeats the purpose of the chunking though, doesn’t it? My understanding is that it’s there for effective deduplication.
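    For anyone trying it, the setup looks roughly like this (the remote name and mount point are placeholders; rclone walks you through the URL and credentials when you create the remote):

        # One-time: create a remote of type "seafile" pointing at your domain
        rclone config          # -> new remote "seafile-remote", storage type "seafile"

        # Then the mount itself
        mkdir -p /mnt/seafile
        rclone mount seafile-remote: /mnt/seafile --read-only --daemon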