Error while backing up to StorJ

Hi,

I just installed Duplicacy on my Unraid server (ver 7.1.3). I tried a small backup (i.e., three files < 10 MB) and it worked fine. I then tried my full backup (1.8 TB) and it failed. The log contents are pasted below. I noticed the log mentions not being able to reach us1.storj.io, but I checked and can ping it just fine. Unfortunately, that’s the only information in the logs I could understand, as I am brand new to Duplicacy.

Is there something obvious that I’m missing?

Thanks!

Running backup command from /cache/localhost/0 to back up /data
Options: [-log backup -storage StorJ -threads 1 -stats]
2025-06-16 23:30:02.678 INFO REPOSITORY_SET Repository set to /data
2025-06-16 23:30:02.679 INFO STORAGE_SET Storage set to storj://[email protected]:7777/unraid/nightly-backup/
2025-06-16 23:30:03.249 INFO BACKUP_START No previous backup found
2025-06-16 23:30:03.250 INFO BACKUP_LIST Listing all chunks
2025-06-16 23:30:03.295 INFO BACKUP_INDEXING Indexing /data
2025-06-16 23:30:03.295 INFO SNAPSHOT_FILTER Parsing filter file /cache/localhost/0/.duplicacy/filters
2025-06-16 23:30:03.296 INFO SNAPSHOT_FILTER Loaded 0 include/exclude pattern(s)
2025-06-17 02:51:12.829 ERROR UPLOAD_CHUNK Failed to upload the chunk e2ac8fb9f1bf9601ae88fc42da21dc683433d54c221f2fba4c64bf949f370706: uplink: failed to upload enough pieces (needed at least 49 but got 15); piece limit exchange failed: metaclient: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io on 1.1.1.1:53: no such host;

This is a DNS issue. Can you resolve us1.storj.io from inside the Docker container? That is, open the Duplicacy container’s console and run this command:

nslookup us1.storj.io 1.1.1.1
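If that resolves correctly but uploads still fail, it may also be worth checking that the satellite port itself is reachable from inside the container (assuming nc is available in the image):

nc -zv us1.storj.io 7777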

Thank you for the quick response! I ran the command and it seems to work. Here is the output:

It’s very possible your gateway is dying under the load: the Storj native client creates quite a number of connections, and unless your network stack is solid, it will not survive.

If this is the case, switch to accessing Storj via S3.
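For reference, a Duplicacy S3 storage URL pointing at Storj’s hosted gateway would look roughly like this; the bucket and path below mirror the ones from your log, and the access keys would come from an S3 credential generated in the Storj console rather than your native access grant (a sketch, not the exact string for your setup):

s3://[email protected]/unraid/nightly-backup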

Hmm. That would explain why it worked with a smaller test sample. I will try that. Thanks!

From your screenshot I see you have Duplicacy in a Linux container, in Docker, which runs in a VM on Windows – each layer comes with its own issues, restrictions, resource constraints, and bugs.

If you want to use native integration – reduce the number of layers; e.g., run Duplicacy directly on your OS.

However, if getting maximum performance is not a priority – after all, it’s a backup application – then S3 will yield a better experience. This is also true if your upstream bandwidth is limited: native integration will consume 2.7x the upload bandwidth compared to S3.
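(For scale, assuming that ~2.7x factor holds: the 1.8 TB backup in question would push roughly 1.8 × 2.7 ≈ 4.9 TB upstream with native integration, versus about 1.8 TB via S3, since the gateway performs the erasure-coding expansion server-side.)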

Generally, native integration is best if you have a (very) beefy CPU and massive upload bandwidth (gigabits), and you need absolutely maximum performance at any cost. Storj provides that.

But if you are on a residential connection (such as cable or DSL), native integration will be counterproductive – most modems/home routers just die, and the 2.7x upstream amplification does not help either.

BTW, when you say, “unless your network stack is solid…”, what do you mean by “solid”? My current configuration is AT&T Fiber with 2.5 Gbps into the house. I then use a Unifi Cloud Fiber Gateway as my router, and the server is wired into the router. Is there something else needed, or something I need to configure in the router? Thanks!

I’m actually running Duplicacy in a Docker container on Unraid, which is running on a physical machine (Ugreen DXP4800+).

That’s pretty solid! I’d say it’s pretty close to perfect :smiley:

So your issues are likely not due to Unifi or AT&T, but some shenanigans within your Windows host/Docker/bridging.

I would try running Duplicacy directly on Windows with native integration. I’m fairly sure this will yield much better results (assuming the NIC can handle it – most Intel ones do, most Realtek can’t, at least in my limited experience).

Ah, got it, it wasn’t obvious from your screenshot.

Try running it natively (with systemd) on Unraid. Duplicacy does not really benefit from containerization – it’s already a self-contained executable without dependencies.

Ok. I’ll give that a try. I’m not totally sure how to do that - yet. But I’ll read through the docs and do some “google-fu” to figure it out. I will also try using the S3 connection. Thanks so much for your help!!

Have a look at this blog post as a starting point: Duplicacy Web on Synology Diskstation without Docker | Trinkets, Odds, and Ends

It describes how to do it with upstart, but if you click “Click here to load legacy comments from Disqus” at the bottom of the page, in the first comment there is a sample systemd configuration.
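Not the exact unit from that comment, but a minimal systemd service for Duplicacy Web would look something like this (the binary path and user are illustrative; point ExecStart at wherever the duplicacy_web executable lives on your system):

[Unit]
Description=Duplicacy Web Edition
After=network-online.target
Wants=network-online.target

[Service]
# assumed install location; adjust to your actual binary path
ExecStart=/usr/local/bin/duplicacy_web
Restart=on-failure
User=root

[Install]
WantedBy=multi-user.target

Save it as /etc/systemd/system/duplicacy-web.service, then run systemctl daemon-reload followed by systemctl enable --now duplicacy-web.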

Got it! Thanks again!!

Another thing – I’m not very familiar with Unraid, but perhaps the default ulimit (set in /boot/config/go) is too low. You can check the system-wide open-file limit with cat /proc/sys/fs/file-max.

Consider increasing it, including for Docker: docker container run | Docker Docs

Each TCP connection counts as an open file, so if that limit is too low, it may be the actual culprit.
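Something along these lines, as a sketch (the values here are illustrative, not tuned):

# check the current system-wide open-file limit
cat /proc/sys/fs/file-max

# raise it for the current boot; add the same line to /boot/config/go to persist
sysctl -w fs.file-max=1048576

# and raise the per-container limit when starting the Duplicacy container
docker run --ulimit nofile=65536:65536 <your usual options and image>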
