this post was submitted on 26 Nov 2023
Arrs Feedback (lemmus.org)
submitted 11 months ago* (last edited 11 months ago) by [email protected] to c/[email protected]
 

Context

Having started out in the world of Napster and LimeWire, I've always relied on public sources. It wasn't until the early '10s that I lucked into a Gazelle-based tracker started by some fellow community members. Unfortunately, I wasn't paying enough attention when they closed up shop and didn't know how to move elsewhere. Combined with some life circumstances, I gave up the pursuit for the time being.

It wasn't until recently that a friend was kind enough to help me get back into it and introduced me to the current state of automation. Over the course of a few months, I've built up the attached systems. I've been having an absolute blast learning and am very impressed with all of the contributions!

After all of the updates from the Black Friday deals, I put together the attached diagram, as it was getting too complex to keep all of the interactions in my head. 😅

Setup

  • All of the services run in Docker containers.
  • Each service has its own Compose file, managed by systemd (see the sketch after this list).
  • The system itself is in a VM running on my home server (both Arch, btw).
  • Tailscale is used for remote access to the local network.
  • ProtonVPN is managed by Gluetun, which uses a separate Docker network to isolate services.
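Roughly, the per-service pattern looks like this (the /opt/stacks layout and the unit name are just illustrative, not my exact files):

```bash
# Illustrative only: a templated systemd unit so each service's Compose file
# can be enabled/disabled independently. Paths and service names are examples.
sudo tee /etc/systemd/system/compose@.service >/dev/null <<'EOF'
[Unit]
Description=Docker Compose stack for %i
Requires=docker.service
After=docker.service

[Service]
WorkingDirectory=/opt/stacks/%i
ExecStart=/usr/bin/docker compose up --remove-orphans
ExecStop=/usr/bin/docker compose down
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now compose@gluetun compose@prowlarr compose@sonarr
```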

Questions

  • What am I missing, or what could be improved?
  • Is there a better way to document?
  • What do you do differently that might be beneficial?

Thoughts

  • I had Calibre set up at one point, but I really don't like how it tracks files by renaming them. I've been considering automating it with the CLI instead (see the sketch after this list), but haven't gotten around to it yet.
  • I've been toying with the idea of creating a file-arr for analyzing disk usage, performing common operations, and exposing a web-based upload/download client so I don't have to mount the volume everywhere.
  • Similarly, I'm interested in a way to aggregate logs/notifications/metrics. I'm aware of Notifiarr, but would prefer a self-hosted version.
  • I just set up Last.fm scrobbling, so I don't have any data yet. I'm hoping to use that for discovery and, if possible, playlist syncing or auto-generation.
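For the Calibre idea above, a rough sketch of what a CLI-driven import could look like (the paths and the watch-folder approach are just illustrative):

```bash
#!/usr/bin/env bash
# Illustrative watch-folder import using Calibre's CLI (calibredb).
# LIBRARY and INBOX are placeholder paths.
LIBRARY="/srv/media/calibre-library"
INBOX="/srv/downloads/books"

# Add each new ebook to the library, then remove the original, so Calibre's
# renaming only ever happens inside its own library directory.
find "$INBOX" -type f \( -name '*.epub' -o -name '*.mobi' \) -print0 |
  while IFS= read -r -d '' book; do
    calibredb add --library-path "$LIBRARY" "$book" && rm -- "$book"
  done
```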

Notes

  • Diagram was made using D2lang.
  • Some of the connections have been simplified to improve readability / routing.
  • Some services have been redacted out of an abundance of caution.
  • I know a VPN isn't necessary with Usenet, but it's easier to keep everything consistent.

Also, thanks for the recommendations to check out deemix/Deezer. That worked really well! 😀

Edit: HQ version of diagram

top 39 comments
[–] [email protected] 13 points 11 months ago (1 children)

Very nice. Can you share a docker-compose.yml for others to replicate this? Also your diagram could be a bit higher quality.

[–] [email protected] 9 points 11 months ago* (last edited 11 months ago) (1 children)

Each service is a separate docker-compose.yml, but they are more-or-less the same as the example configs provided by each service. I did it this way as opposed to a single file to make it easier to add/remove services following this pattern.

I do have a higher quality version of the diagram, but had to downsize it a lot to get pictrs to accept it...

[–] [email protected] 5 points 11 months ago (1 children)

Ah, your instance must be limiting the size. lemmy.dbzer0.com lets you upload anything and just downscales to a 1024px max dimension. You can also host it on Imgur, etc.

[–] [email protected] 4 points 11 months ago

Good point, updated with HQ link.

[–] [email protected] 11 points 11 months ago (1 children)

sheeeeesh.

Reminds me of factorio

[–] [email protected] 8 points 11 months ago

The factory must grow

[–] [email protected] 7 points 11 months ago

Gosh, a dream setup. I'm still so far from it…

[–] [email protected] 7 points 11 months ago (1 children)

#humblebrag lol

Seriously tho, this is super awesome. I was gifted an 8-bay NAS several months ago and caught the bug again too. I've been slowly swapping out the 4TB drives for 16TB IronWolf Pros and downloading all the things. I have Sonarr, Prowlarr, and Syncthing working so far, but I have to say, that was a pretty big pain in my assholes.

I have been running my server from an old 2018 Mac mini that I had laying around and just the other day found a good deal on a nicer NUC for Black Friday. I’d like to take it up a notch when I do the migration & add radarr, overseerr, and it sounds like dockerr and some others as well. This post was just the inspiration I needed!

Do you have any resources you could share that you used, or at least that you wish you would’ve used to educate yourself and/or simplify things? Most of what I’ve accomplished so far has just been through random discoveries in forums & research I’ve done from there. It feels a bit amateur and I’m wondering whether or not I should just start from scratch. I’m assuming there has to be a site where I can read about all my options & how they interact.

Cheers man, thanks!

[–] [email protected] 2 points 11 months ago

The wiki is a great place to start. Also, most of the services have pretty good documentation.

The biggest tip would be to start with Docker. I had originally started running the services directly in the VM, but quickly ran into problems with state getting corrupted somewhere. After enough headaches I switched to Docker, and then had to spend a lot of time remapping all of the files to get things working again. Knowing where the state lives on your filesystem, and that the service will always restart from a known point, is great. It also makes upgrades or swapping components a breeze.
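For example (the image and paths here are just for illustration, not my exact layout), keeping all of a service's state under one predictable directory looks like:

```bash
# Illustrative only: all mutable state for a service lives under /srv/appdata,
# so the container can be destroyed and recreated without losing anything.
docker run -d --name sonarr \
  -v /srv/appdata/sonarr:/config \
  -v /srv/media:/media \
  -p 8989:8989 \
  --restart unless-stopped \
  lscr.io/linuxserver/sonarr:latest
```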

Everyone has to start somewhere. Just take it slow and don't be afraid to make mistakes. Good luck and have fun! 😀

[–] [email protected] 5 points 11 months ago

I don't see Watchtower in there anywhere. Even just used as a simple on-demand updater, it's worth the time to set it up. (Which is pretty minimal anyhow.) But it can also just run automatically and keep things up to date all the time.
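An on-demand check is a single command (using the Watchtower image's documented --run-once mode):

```bash
# Check every running container once, update any with newer images, then exit.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --run-once
```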

[–] [email protected] 5 points 11 months ago (1 children)

I'm a little lost on what each of these components are. I see .sh files so I'm assuming you're mostly writing these with Bash?

With this level of complexity I wonder if you'd benefit from running a k8s server. Just food for thought.

Looks like you're having a good time with it. I always laugh at the similarity between this kind of system building and the main bus designs in Factorio.

[–] [email protected] 3 points 11 months ago* (last edited 11 months ago) (1 children)

The systemd.timers are basically cron jobs for scripts I wrote to address a few of the pain points I've run into with this setup. They're either simple curl/wget and jq calls, or Python for more complex logic. The rest are services that are either part of, or adjacent to, the *arrs.
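A trimmed-down example of the shape these scripts take (the host, API key, and output handling are placeholders rather than anything from my actual setup):

```bash
#!/usr/bin/env bash
# Placeholder example of the curl + jq shape: surface any Sonarr health warnings.
SONARR_URL="http://localhost:8989"
SONARR_API_KEY="changeme"

issues=$(curl -fsS -H "X-Api-Key: $SONARR_API_KEY" \
  "$SONARR_URL/api/v3/health" | jq -r '.[].message')

if [[ -n "$issues" ]]; then
  printf 'Sonarr health warnings:\n%s\n' "$issues"
fi
```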

As for k8s, personally I feel that would add more complexity than it's worth. I'm not looking for a second job. 😛

[–] [email protected] 3 points 11 months ago (1 children)

“But Kubernetes will simplify everything!!!!!1”

[–] [email protected] 5 points 11 months ago (1 children)

🤷‍♂️

I mean all problems are solved with another layer of abstraction right?

[–] [email protected] 2 points 11 months ago

We need to go deeper

[–] [email protected] 4 points 11 months ago (1 children)

While there's nothing particularly wrong with putting everything through a VPN, you could use a qbittorrentvpn Docker image, which runs a WireGuard client with a kill switch that the torrent client tunnels through.

https://github.com/DyonR/docker-qbittorrentvpn

[–] [email protected] 6 points 11 months ago* (last edited 11 months ago)

The problem I've found is that the services query the indexers, and not all of the trackers allow you to use multiple IPs. That's why I found it easier to route all outbound requests through the VPN, so I don't get in trouble. It's also why I have the Firefox container set up inside that network, exposed over the local network as a VNC session, so I can browse the sites while maintaining a single IP.

I do have qbittorrent set up with a kill switch on the VPN interface managed by Gluetun.
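For anyone curious, the basic shape is just attaching the client to Gluetun's network stack (the images are real, but the credentials and ports here are placeholders):

```bash
# Gluetun owns the VPN connection and the kill switch; credentials are placeholders.
# The qBittorrent Web UI port is published on the Gluetun container because the
# two share one network stack.
docker run -d --name gluetun \
  --cap-add NET_ADMIN --device /dev/net/tun \
  -e VPN_SERVICE_PROVIDER=protonvpn \
  -e OPENVPN_USER=changeme -e OPENVPN_PASSWORD=changeme \
  -p 8080:8080 \
  qmcgaw/gluetun

# qBittorrent gets no network of its own; all traffic goes through Gluetun.
docker run -d --name qbittorrent \
  --network container:gluetun \
  lscr.io/linuxserver/qbittorrent:latest
```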

[–] [email protected] 4 points 11 months ago* (last edited 11 months ago) (1 children)

If you don't already, you can set up health checks for your containers; they're especially useful for qBittorrent and Gluetun. That way, you can restart one automatically if any condition fails, using Autoheal.
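For example (the health command and images are illustrative, and it assumes curl exists in the container):

```bash
# A health check that fails when the container loses external connectivity,
# plus Autoheal to restart any container that reports unhealthy.
docker run -d --name qbittorrent \
  --network container:gluetun \
  --health-cmd 'curl -fsS https://api.ipify.org >/dev/null || exit 1' \
  --health-interval 60s --health-retries 3 \
  lscr.io/linuxserver/qbittorrent:latest

docker run -d --name autoheal --restart always \
  -e AUTOHEAL_CONTAINER_LABEL=all \
  -v /var/run/docker.sock:/var/run/docker.sock \
  willfarrell/autoheal
```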

Also check out qbitmanage to set up seeding goals.

And best of all, where is Recyclarr? Sync that bitch right into your arrs to get consistently only the very best out there.

[–] [email protected] 1 points 11 months ago

There's some overlap with my torrrents.py and qbitmanage, but some of its other features sound nice. It also led me to Apprise which might be the notifications solution I've been looking for!
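For anyone else looking, the CLI shape is roughly this (the webhook values are placeholders):

```bash
# One Apprise call can fan out to Discord, Telegram, email, and many other services.
pip install apprise

apprise -t "Import finished" -b "Sonarr grabbed 3 episodes" \
  "discord://webhook_id/webhook_token"
```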

Some of the arr-scripts already handle syncing the settings. I had to turn them off because they kept overwriting mine, but Recyclarr might be more configurable.

Thanks!

[–] [email protected] 3 points 11 months ago

This guy automates.

[–] [email protected] 3 points 11 months ago

If you want something for managing all your containers, consider Portainer. I've been using it with my homelab for a while and it's invaluable for quickly dealing with issues that crop up.
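The standard single-container install (per Portainer's docs) is just:

```bash
docker volume create portainer_data
docker run -d --name portainer --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```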

[–] [email protected] 2 points 10 months ago (1 children)

Is Tailscale private and safe? I would also like to use it for my home server.

[–] [email protected] 1 points 10 months ago

It's based on WireGuard with some added benefits. Free for up to 3 users. I've had no issues with it and even use it for corporate networks. An alternative is ZeroTier; while I haven't used it, I hear a lot of people recommend it too.

[–] [email protected] 2 points 11 months ago* (last edited 11 months ago)

Given what you've got running, I only really recommend Portainer, as others have. It's made my life so much easier. Edited this since I saw you have Homarr; I must've missed it the first time.

[–] [email protected] 1 points 11 months ago (1 children)

Just an FYI: DO NOT put your *arrs behind a VPN, it will cause issues: https://wiki.servarr.com/radarr/faq#vpns-jackett-and-the-arrs

[–] [email protected] 5 points 11 months ago (1 children)

I get what they're saying and it may be 'technically correct', but the issue is more nuanced than that. In my experience, some trackers have strict requirements or restricted auth tokens (e.g. can't browse & download from different IPs). Proxying may be the solution, but I'd have to look at how it decides what traffic gets routed where.

[–] [email protected] 3 points 11 months ago* (last edited 11 months ago)

https://trash-guides.info/Prowlarr/prowlarr-setup-proxy/ is useful when setting up the proxy in Prowlarr for your indexers.

Also, we say don't put the *arrs behind a VPN because Cloudflare likes to just ban IPs at times, which results in the *arrs not being able to access their metadata servers.

[–] [email protected] -1 points 11 months ago (1 children)

You’re running docker inside a vm? Why?

The first thing I would do is learn the 5-layer OSI model for networking (the 7-layer version is more common, but wrong). Start thinking of things in terms of services and layers. Make a diagram for each layer, or just the important ones (layers 3 and up).

If you can stomach it, learn network namespaces. It lets you partition services between network stacks without container overhead.

Using a VM or Docker for isolation is perfectly fine, but don't use both. Either run Docker on your host, or run everything as systemd services in a VM.
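A bare-bones sketch of the namespace idea (the namespace name is arbitrary):

```bash
# A new network namespace is a separate network stack with nothing in it but
# loopback until you explicitly move interfaces or routes into it.
sudo ip netns add vpnspace
sudo ip netns exec vpnspace ip addr   # shows only "lo", isolated from the host
sudo ip netns exec vpnspace ip link set lo up
# Anything launched via "ip netns exec vpnspace ..." can only use whatever
# interfaces (e.g. a WireGuard device) you add to this namespace.
```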

[–] [email protected] 4 points 11 months ago (1 children)

The server itself runs nothing but the hypervisor. I have a few VMs running on it, which makes it easy to provision isolated environments. Additionally, it's made it easy to snapshot a VM before performing maintenance in case I need to roll back. The containers provide isolation from the environment itself in the event of a service going awry.

Coming from cloud environments where everything is a VM, I'm not sure what issues you're referring to. The performance penalty is almost non-existent while the benefits are plenty.

[–] [email protected] 3 points 11 months ago

I recently rebuilt my home server using containers instead of (qemu/KVM) VMs and I notice a performance benefit in some areas. Although I just use systemd-nspawn containers rather than docker as I don't really see the need to install 3rd party software for a feature already installed on my OS.

I handle snapshots by using btrfs. Works great.
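For reference, a pre-maintenance snapshot is a one-liner (paths are placeholders, and the source has to be a btrfs subvolume):

```bash
# Read-only snapshot of the container data subvolume, named with today's date.
sudo btrfs subvolume snapshot -r /srv/containers "/srv/.snapshots/containers-$(date +%F)"
```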