this post was submitted on 13 Jun 2023
Selfhosted

submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

I have played around with YunoHost and other similar tools. I know how to open ports on my router and configure port forwarding. I am also interested in hosting my own stuff for experiments, but I also keep a VPN enabled on my router at all times for privacy reasons. If you haven't guessed already, I am very reserved about revealing my home IP for self-hosting, as contradictory as that sounds.

I am aware that it's better to rent a VPS, not to mention the dynamic IP issues, but here it goes: assuming my VPN provider permits port forwarding, is it possible to self-host anything from behind a VPN, including the virtual machine running all the necessary software?

edit: title

edit2: I just realized my VPN provider is discontinuing port forwarding next month. Why?!

top 14 comments
[–] [email protected] 5 points 1 year ago

At the end of the day, packets need to get from wherever your DNS points to the server that's running. Depending on your tolerance for jank, and as long as a route actually exists, you can run the server anywhere you want. Renting a VPS does offer a lot more freedom in how, and where, you are routing.

[–] [email protected] 4 points 1 year ago

If you haven't checked out Tailscale, you should!

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (1 children)

Absolutely possible.
The key to simple self-hosting is to have a DNS record that points to your externally accessible IP, whether that's your real one or an external one hosted at a VPN provider.
If that IP changes, you'll need to update the record dynamically.

It's increasingly common for this to be a requirement as CGNAT becomes more widespread.

One of the newer ways to do that is with a Cloudflare Tunnel; while it's technically only for web traffic, Cloudflare ignores low-throughput usage for other things like SSH.
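As a quick illustration, an ephemeral tunnel can be spun up with a single command (the local service address is an assumption; a permanent setup would use a named tunnel and a DNS route via `cloudflared tunnel create` / `cloudflared tunnel route dns` instead):

```shell
# Expose a local web service on a random trycloudflare.com URL,
# without opening any router ports (quick/ephemeral tunnel):
cloudflared tunnel --url http://localhost:8080
```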

[–] [email protected] 1 points 1 year ago (1 children)

My knowledge is a little dated; I remember messing around with DynDNS or No-IP to update my IP many years ago. I guess a simple script running on the router or the host should suffice?

[–] [email protected] 2 points 1 year ago

I personally use a bash script, triggered by cron on my server, that first determines my external IP address (curl http://v4.ident.me/) and then, if it differs from the last check, updates one of my DNS entries via the GoDaddy API.

This can be as simple or as complicated as you like.
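A minimal sketch of such a cron-driven updater, assuming bash. The cache path, the environment variable names, and the GoDaddy endpoint details are illustrative placeholders, not the commenter's actual script:

```shell
#!/usr/bin/env bash
# Hypothetical dynamic-DNS updater; run from cron, e.g.:
#   */5 * * * * /usr/local/bin/ddns-update.sh
# CACHE_FILE, the GODADDY_* variables, and the API endpoint are assumptions.

CACHE_FILE="${CACHE_FILE:-/var/tmp/last_ip}"

current_ip() {
  # Ask an external service for our public IPv4 address
  curl -fsS http://v4.ident.me/
}

ip_changed() {
  # True if $1 differs from the cached IP (or no cache exists yet)
  local old_ip=""
  [ -f "$CACHE_FILE" ] && old_ip="$(cat "$CACHE_FILE")"
  [ "$1" != "$old_ip" ]
}

update_dns() {
  # Sketch of a GoDaddy A-record update; export GODADDY_KEY,
  # GODADDY_SECRET and GODADDY_DOMAIN before use.
  curl -fsS -X PUT \
    -H "Authorization: sso-key ${GODADDY_KEY}:${GODADDY_SECRET}" \
    -H "Content-Type: application/json" \
    -d "[{\"data\": \"$1\", \"ttl\": 600}]" \
    "https://api.godaddy.com/v1/domains/${GODADDY_DOMAIN}/records/A/@"
}

main() {
  local ip
  ip="$(current_ip)" || exit 1
  if ip_changed "$ip"; then
    update_dns "$ip" && echo "$ip" > "$CACHE_FILE"
  fi
}

# Uncomment to run directly:
# main
```

Any DNS provider with an HTTP API works the same way; only the update_dns call changes.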

[–] [email protected] 3 points 1 year ago

https://ovpn.se has static IPs and port forwarding.

[–] [email protected] 3 points 1 year ago

Hopefully this will help someone. This seems to work for me: subscribed communities update and I am able to post. I'm the only user on my server right now. NPM took a bit of messing around with the config, and some of this may be redundant or non-functional, but I don't have the will to go line by line to see what more I can take out.

Here is how I have it configured. Note that some things go to the lemmy-ui port and some to the lemmy port; these should be defined in your docker-compose if you're using that (mine is below).

On the first tab in NPM, "Details" I have the following:

Scheme: http
Hostname: <docker ip>
Port: <lemmy-ui port>

Block Common Exploits and Websockets Support are enabled.

On the Custom Locations page, I added 4 locations; you have to add one for each directory even though the IP/ports are the same.

Location: /api
Scheme: http
Hostname: <docker ip>
Port: <lemmy port>

Repeat the above for "/feeds", "/pictrs", and "/nodeinfo". The example file they give also includes "/.well-known", but as far as I know that's just for Let's Encrypt, which NPM should be handling for us.

On the SSL tab, I have a Let's Encrypt certificate set up. Force SSL, HTTP/2 Support, and HSTS Enabled.

On the Advanced tab, I have the following:

location / {
  set $proxpass "http://<docker ip>:<lemmy-ui port>";
  # federation and API requests go to the lemmy backend instead of the UI
  if ($http_accept = "application/activity+json") {
    set $proxpass "http://<docker ip>:<lemmy port>";
  }
  if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
    set $proxpass "http://<docker ip>:<lemmy port>";
  }
  if ($request_method = POST) {
    set $proxpass "http://<docker ip>:<lemmy port>";
  }
  proxy_pass $proxpass;

  rewrite ^(.+)/+$ $1 permanent;

  # Send actual client IP upstream
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header Host $host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

I should probably add my docker-compose file as well... I'm far from a Docker expert, but this is reasonably close to their examples and others I found. I removed nginx from it since we already have a proxy, disabled the debug logging that was eating disk space, and removed all the networking lines because I'm not smart enough to figure them all out right now. If you use this, look out for the < > sections; you need to set your own domain/hostname and postgres user/password.

version: "3.3"

services:
  lemmy:
    image: dessalines/lemmy:0.17.3
    hostname: lemmy
    restart: always
    ports:
      - 8536:8536
    environment:
      - RUST_LOG="warn"
      - RUST_BACKTRACE=full
    volumes:
      - ./lemmy.hjson:/config/config.hjson:Z
    depends_on:
      - postgres
      - pictrs

  lemmy-ui:
    image: dessalines/lemmy-ui:0.17.4
    # use this to build your local lemmy ui image for development
    # run docker compose up --build
    # assuming lemmy-ui is cloned besides lemmy directory
    # build:
    #   context: ../../lemmy-ui
    #   dockerfile: dev.dockerfile
    ports:
      - 1234:1234
    environment:
      # this needs to match the hostname defined in the lemmy service
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      # set the outside hostname here
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=< domain name>
      - LEMMY_HTTPS=false
      - LEMMY_UI_DEBUG=true
    depends_on:
      - lemmy
    restart: always

  pictrs:
    image: asonix/pictrs:0.4.0-beta.19
    # this needs to match the pictrs url in lemmy.hjson
    hostname: pictrs
    # we can set options to pictrs like this, here we set max. image size and forced format for conversion
    # entrypoint: /sbin/tini -- /usr/local/bin/pict-rs -p /mnt -m 4 --image-format webp
    environment:
      - PICTRS_OPENTELEMETRY_URL=http://otel:4137
      - PICTRS__API_KEY=API_KEY
      - RUST_LOG=debug
      - RUST_BACKTRACE=full
      - PICTRS__MEDIA__VIDEO_CODEC=vp9
      - PICTRS__MEDIA__GIF__MAX_WIDTH=256
      - PICTRS__MEDIA__GIF__MAX_HEIGHT=256
      - PICTRS__MEDIA__GIF__MAX_AREA=65536
      - PICTRS__MEDIA__GIF__MAX_FRAME_COUNT=400
    user: 991:991
    volumes:
      - ./volumes/pictrs:/mnt:Z
    restart: always

  postgres:
    image: postgres:15-alpine
    # this needs to match the database host in lemmy.hjson
    # Tune your settings via
    # https://pgtune.leopard.in.ua/#/
    # You can use this technique to add them here
    # https://stackoverflow.com/a/30850095/1655478
    hostname: postgres
    command:
      [
        "postgres",
        "-c",
        "session_preload_libraries=auto_explain",
        "-c",
        "auto_explain.log_min_duration=5ms",
        "-c",
        "auto_explain.log_analyze=true",
        "-c",
        "track_activity_query_size=1048576",
      ]
    ports:
      # use a different port so it doesn't conflict with a potential postgres db running on the host
      - "5433:5432"
    environment:
      - POSTGRES_USER=< dbuser >
      - POSTGRES_PASSWORD=< dbpassword>
      - POSTGRES_DB=lemmy
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data:Z
    restart: always

[–] [email protected] 2 points 1 year ago

Re your VPN question: I have a number of services on my home server, some of which are exposed via reverse proxy (e.g. Nextcloud) and others which are only accessible internally or via my WireGuard VPN. Setting up a dedicated VPN server on a Raspberry Pi is very simple to do.
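A rough sketch of what that Pi setup involves, assuming the stock wireguard-tools; the interface name and paths are illustrative, and a helper like PiVPN automates essentially all of this:

```shell
# Generate a server keypair (one pair per peer as well)
wg genkey | tee server.key | wg pubkey > server.pub

# After writing /etc/wireguard/wg0.conf with the key, subnet and peers:
sudo wg-quick up wg0                  # bring the tunnel up now
sudo systemctl enable wg-quick@wg0    # and at every boot
```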

[–] [email protected] 2 points 1 year ago (1 children)

PureVPN supports port-forwarding over the VPN and is frequently used for making port-forwarding work through janky ISPs that don't support it properly. Some details at: https://www.purevpn.com/blog/how-to-port-forward-your-web-development-server-remotely/

There are many other ways to do similar things, but this is one possible approach.

[–] [email protected] 2 points 1 year ago

Nice to know there's an actual justifiable use case for what I am describing!

[–] [email protected] 2 points 1 year ago

I just use WireGuard with VyOS. Simple and efficient.

[–] [email protected] 2 points 1 year ago

I currently host a few services (including the Lemmy instance I am replying from) behind a commodity $5 VPS, while the services are actually hosted locally. I set up WireGuard for a simple hard-coded peer-to-peer VPN connection from my local machine to the remote VPS, and then added some iptables rules on the VPS to redirect traffic into the VPN network. This lets me host behind a NAT (my biggest issue), but it also handles IP changes and hides my home's public IP. I am no network engineer, so I'm not sure how safe this is; manually routing packets can be tricky.
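A rough sketch of that VPS-side redirect, assuming the home server's WireGuard address is 10.0.0.2, the tunnel interface is wg0, and the VPS's public interface is eth0 (all assumptions; run as root):

```shell
# Allow the VPS kernel to forward packets at all
sysctl -w net.ipv4.ip_forward=1

# Rewrite inbound TCP 443 on the public interface to the WireGuard peer
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 \
  -j DNAT --to-destination 10.0.0.2:443

# Masquerade out the tunnel so replies return via the VPS; this is also
# why the home server no longer sees the original client IPs
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
```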

There are also a few services this does not work for. So far I've found that CS:GO dedicated servers (if the public IP of the local machine differs from the VPS) and email servers cannot function properly behind a NAT; others likely exist, but you'll be able to run most services. You do lose the originating IP addresses in this setup, which can complicate things (that's the problem for email servers).

This process is explained in detail on wickedyoda.com, along with a video tutorial.

[–] lodion 2 points 1 year ago

At home I'm running Proxmox on a Ryzen 9 3900X with 96GB of RAM and 4TB of NVMe storage. I have VMs for a bunch of stuff, most importantly Unraid, which is passed a SATA controller with 8 drives. Storage from Unraid is then mounted via NFS/SMB shares on various VMs.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

What hardware do you run on? Or do you use a data center/cloud?

I have 2 home servers: an Intel NUC running Ubuntu and a Raspberry Pi running Raspberry Pi OS. The NUC is my main server and the Pi is a dedicated WireGuard/PiVPN box.

Do you use containers or plain packages?

On the main server I use docker containers almost exclusively. I find them easier to stand up and tear down, particularly using scripts, without worrying about the broader OS.

I have the following services on the NUC -

  • Nginx Proxy Manager (for https proxy)
  • Nextcloud
  • Airsonic
  • Calibre-web
  • Invidious
  • h5ai
  • transmission

I did play around with my own Lemmy instance but that was not successful and I found beehaw :-)

Orchestration tools like K8s or Docker Swarm?

No

How do you handle logs?

Badly. I lost a server due to root filling up a couple years back. Now I monitor disk space (see below) and prune logs as required.

How about updates?

OS updates I push daily. I don't regularly update my docker containers. I did use Watchtower for a while but found it broke stuff a little too often.

Do you have any monitoring tools you love?

Just some custom scripts (disk space, backups, etc.) which send me regular emails. I also have conky running on a small screen 24x7.
