this post was submitted on 14 Jun 2024
36 points (100.0% liked)

Selfhosted


I am planning to build a multipurpose home server. It will be a NAS, a virtualization host, and run the typical self-hosted services. I want all of these services to have high uptime and be protected from power surges/blackouts, so I will put the server on a UPS.

I also want to run an LLM server on this machine, so I plan to add one or more GPUs and pass them through to a VM. I do not care about high uptime on the LLM server. However, this of course means that I will need a more powerful UPS, which I do not have the space for.

My plan is to get a second power supply to power only the GPUs. I do not want to put this PSU on the UPS. I will turn on the second PSU via an Add2PSU.

In the event of a blackout, the base system will still get full power and the GPUs will still get power through the PCIe slot, but they will lose power from their dedicated power connectors.

Obviously this will slow down or kill the LLM server, but will this have an effect on the rest of the system?

[–] [email protected] 5 points 4 months ago (1 children)

Nope. I actually did that unintentionally on a PC I built. I only used one power cable when the GPU needed two, so it couldn't draw all the power it needed when running at 100%. My understanding was that PCI doesn't support disconnecting devices, so the system expects every component it starts up with to be available all the time. Lose one and the system goes down.

[–] [email protected] 6 points 4 months ago* (last edited 4 months ago) (3 children)

PCIe absolutely does support disconnecting devices. It is a hot-swap bus; that's how ExpressCard works. But that doesn't mean the board/UEFI implements it correctly.
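If you want to see whether a given board even advertises the capability, one rough way (a sketch, assuming a Linux host with lspci installed and run with enough privileges to read the full capability dump) is to look for HotPlug+ in the slot capabilities of the PCIe bridges. Whether the firmware actually handles a surprise removal cleanly is still a separate question.

```python
import subprocess

def hotplug_capable_bridges() -> list[str]:
    """List PCI bridges whose slot capabilities advertise HotPlug+ (per `lspci -vv`)."""
    # Run as root, otherwise lspci may hide the capability sections.
    out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True, check=True).stdout
    bridges, current = [], None
    for line in out.splitlines():
        if line and not line[0].isspace():          # device header, e.g. "00:1c.4 PCI bridge: ..."
            current = line.split(maxsplit=1)[0]
        elif "SltCap:" in line and "HotPlug+" in line and current:
            bridges.append(current)
    return bridges

if __name__ == "__main__":
    found = hotplug_capable_bridges()
    print("Bridges advertising hot-plug:", ", ".join(found) if found else "none found")
```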

[–] [email protected] 3 points 4 months ago

In other words: OP either needs to get a Thunderbolt dock or straight up have two computers. The latter shouldn't even consume that much more power if the PC gets shut down in the evening and woken up with Wake-on-LAN in the morning.
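For anyone curious what the wakeonlan step actually does: the magic packet is just 6 bytes of 0xFF followed by the target NIC's MAC address repeated 16 times, sent as a UDP broadcast. A minimal sketch (the MAC below is a placeholder, and WOL still has to be enabled in the BIOS/NIC):

```python
import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: 6 x 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError(f"expected a 6-byte MAC address, got {mac!r}")
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

if __name__ == "__main__":
    send_magic_packet("aa:bb:cc:dd:ee:ff")  # placeholder MAC; replace with the target machine's
```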

[–] [email protected] 1 points 4 months ago

Oh nice! I knew hot swapping was supported by plenty of other devices, but I didn't realize PCIe itself supported it. Still feels wrong to rip a card out while the system is powered up.

[–] [email protected] 1 points 4 months ago

Also, some GPUs (though not all of them) can run without the external power connectors. My old GTX 1080 ran for about three months off just the PCIe slot's power because I forgot to plug them in. Newer GPUs are FAR more power hungry, though, and not all newer cards support that. Plus I've never tried yoinking the power cables while it's on. That can't be good.
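If you do end up running a card on slot power only, one way to see whether it is actually being held back is to compare its reported power draw against its enforced limit. A rough sketch, assuming an NVIDIA card and the nvidia-ml-py (pynvml) package; AMD cards would need a different route:

```python
import pynvml  # pip install nvidia-ml-py

def report_gpu_power() -> None:
    """Print current power draw vs. the enforced power limit for each NVIDIA GPU."""
    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(handle)
            name = name.decode() if isinstance(name, bytes) else name  # older bindings return bytes
            draw_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000          # milliwatts -> watts
            limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000
            print(f"GPU {i} ({name}): {draw_w:.1f} W of {limit_w:.1f} W limit")
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    report_gpu_power()
```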