unRAID

A community for unRAID users to discuss their projects.


Hi guys,

I run Immich in Docker on Unraid and absolutely love it. I am a bit confused about the updates, though. I always check before updating to see whether there are any breaking changes involved. Most of the time when an update shows up in Docker, however, there is no new release on the GitHub releases page: https://github.com/immich-app/immich/releases Does anyone know what that is about?
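For reference, the quickest way I know to see the newest tagged release is a small check against the public GitHub API, something like the sketch below (it only prints the latest tag and date so I can compare it with the version the container reports):

```python
#!/usr/bin/env python3
# Sketch: fetch the latest tagged release of Immich from the public GitHub
# REST API so it can be compared with what the running container reports.
import json
import urllib.request

URL = "https://api.github.com/repos/immich-app/immich/releases/latest"

with urllib.request.urlopen(URL, timeout=10) as resp:
    release = json.load(resp)

print("Latest GitHub release:", release["tag_name"])
print("Published at:", release["published_at"])
```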

Thanks for your help.


My setup: I have a PC (Linux) that I also use for gaming, but I want to connect it to a TV that's in a different room. Now, I know I could just get something like a Shield TV or Apple TV and use Moonlight and be done, but I'd have to buy that.

Since I also have a server that's basically right next to the TV, I was thinking I might be able to (a) stream Steam games from my PC to my server (and from there to the TV) and (b) send the input signals of my peripherals back to my PC. I have two controllers, a Bluetooth one and one with a USB dongle. It would be nice to connect both, but that's really not a priority for a start.

Asking here because I'm new to unRAID and was hoping someone has experience with game streaming and can tell me whether this would work and is worth it, or whether I should just buy a dedicated device for this. If it's the latter, do you have any recommendations?


No issues upgrading either my main server or my backup (both are Supermicro platforms).

submitted 9 months ago* (last edited 9 months ago) by [email protected] to c/[email protected]

I got sick of having a way-too-big server chassis in my short-depth rack, so I decided to split my server into two short-depth chassis. I put the motherboard in a Sliger 17" 3U case, then built up this 2U 17" case I got off of Amazon.


JBOD chassis
Here is the case with some Noctua fans, a power supply, a cable adapter card, and some LED strips I had from a different build that I thought might be cool to include.


8x 6TB drives
These are the 8 drives in my array; the two parity drives stayed in the motherboard chassis.


Rear view of the JBOD.


Front of the rack
Here is how my rack looks now, with the JBOD sitting above the Sliger motherboard case so that the top has some space for the PSU to get some air, since there is a patch panel above it.


Top shelf
The top shelf of the rack, with three fans I added at the rear to help keep everything cool and keep the air moving from front to back. I had to remove the back plate of the rack because the large chassis I was using before stuck out, and now the UPS, JBOD, and server sit too close to the back, so I just left it off.


3U case before being built up
I forgot to take more photos of the main server build.

submitted 9 months ago* (last edited 9 months ago) by [email protected] to c/[email protected]

Kind of a dumb question, but I'm genuinely curious: is this a worthwhile upgrade? I run a server with a single VM and a few dozen Docker containers. I'm intrigued by the performance and efficiency cores and whether there would be a noticeable effect on both energy efficiency and performance versus the 12th gen. I've looked at the CPU benchmark comparison, and the numbers show a big increase in performance with little downside other than the cost of doing it. Any input is appreciated.


Hi All,

I was following this guide by SpaceInvader One on how to set up Guacamole to remotely access VMs. When I got to the section about wake-on-LAN, he advised using the Virtual Machine Wake on Lan plugin. That plugin appears to be deprecated and requires Python 2 going forward.

Can anyone suggest an alternative to the Virtual Machine Wake on Lan plugin?
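For what it's worth, I'd even settle for a small user script along these lines, if that is the sanest alternative. This is only a rough sketch: it assumes Python 3 is available on the host (e.g. via a plugin), that the script runs as root, and the MAC-to-VM mapping is made up.

```python
#!/usr/bin/env python3
# Rough sketch of a user-script replacement: listen for wake-on-LAN magic
# packets and start the matching libvirt VM with virsh.
import socket
import subprocess

# Made-up mapping: the MAC you send the magic packet to -> libvirt VM name.
MAC_TO_VM = {
    "52:54:00:aa:bb:cc": "Windows 10",
}

def mac_from_magic_packet(data):
    """Return the target MAC if data is a WOL magic packet, else None."""
    # A magic packet is 6 bytes of 0xFF followed by the MAC repeated 16 times.
    if len(data) < 102 or data[:6] != b"\xff" * 6:
        return None
    mac = data[6:12]
    if data[6:102] != mac * 16:
        return None
    return ":".join("%02x" % b for b in mac)

def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", 9))  # standard WOL port; some senders use 7 instead
    while True:
        data, _addr = sock.recvfrom(1024)
        vm = MAC_TO_VM.get(mac_from_magic_packet(data) or "")
        if vm:
            # Harmless if the VM is already running; virsh just reports an error.
            subprocess.run(["virsh", "start", vm], check=False)

if __name__ == "__main__":
    main()
```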

submitted 10 months ago* (last edited 10 months ago) by [email protected] to c/[email protected]

Hey all, season's greetings to you all!

So I've just got hold of one of these from a friend who was going to throw it away, and I thought it would be better than the i5-based rig I'm using at the minute.

The spec is: HP Z800
Processor #1: 2.66 GHz Hex-Core Xeon (X5650) - 6.40 GT/12MB
Processor #2: 2.66 GHz Hex-Core Xeon (X5650) - 6.40 GT/12MB
Memory: 24GB - (6 x 4GB) - DDR3 ECC Reg - 1333MHz
Hard Drive #1: ATA Crucial_CT250mX2 SCSI Solid State Disk - Actual size 231GB
Hard Drive #2: ATA CT1000MX500SSD1 SCSI Solid State Disk - Actual size 931GB
Hard Drive #3: No Hard Drive #3
Hard Drive #4: No Hard Drive #4
Graphics Card: nVidia Quadro 4000 - 2GB GDDR5 - 1 x Dual DVI & 2 x DisplayPort
Optical Drive: 5.25" DVD-RW
Operating System: Windows 10 Pro
WiFi Adapter: TP-Link 300Mbps Wireless N Adapter

I'm not really worried about the SSD, the optical drive, or the WiFi card, so they will likely go.

My thoughts were: yes, it's got a few years on it, but it's a lot more expandable than what I'm currently using from 2015, which has an i5 CPU and 16GB of RAM.

Since it has two NICs on it, I was thinking of running Frigate as a Docker container, which I'm guessing I could direct to use one NIC while everything else uses the other. Does that sound like a reasonable plan?

So, is this a worthy replacement for my old server or should I send it to server heaven?

I'm not likely to start any work on it until the new year, so I have a little time, and I was considering buying one of these anyway to take the drives off the onboard SATA ports on my current server. https://www.ebay.co.uk/itm/325677573475?_trkparms=amclksrc%3DITM%26aid%3D777008%26algo%3DPERSONAL.TOPIC%26ao%3D1%26asc%3D20221115143302%26meid%3D831118de48074e6baa2613a2932530b2%26pid%3D101613%26rk%3D1%26rkt%3D1%26mehot%3Dag%26itm%3D325677573475%26pmt%3D1%26noa%3D1%26pg%3D4375194%26algv%3DRecentlyViewedItemsV2Mobile%26brand%3DDell&_trksid=p4375194.c101613.m146925&_trkparms=parentrq%3Aca49357918b0a8d34d37089efffff9e6%7Cpageci%3Adf4c5619-825f-11ee-9639-aefcf921261f%7Ciid%3A1%7Cvlpname%3Avlp_homepage

Thanks everyone and Happy Christmas to you all.


The drive passes all SMART checks and has nothing flagged as a warning or fault, yet when I do a read test, I get all the errors.


So I've recently had a power cut that caused a few issues, though I had just bought a new 8TB IronWolf to replace a couple of older 1TB drives from the dawn of time.

Typically, though, this drive was DOA, and the power issues have taken out one of the 1TB drives.

I've since bought a new 8TB WD Red that works great. I've used unbalance to migrate the old drives' data to the new drive so I can get rid of them, as they were starting to show SMART errors.

My question is this.

The seemingly dead 1TB drive looks like it's working through a USB-to-SATA adaptor in Unassigned Devices, and I would like to get its data back onto my array, but I'm unsure of the rsync command to copy it from the old drive to the new array drive.

Obviously unbalance can't see it as it's an unassigned device.
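To make it concrete, is it something along the lines of the sketch below (assuming Python 3 is available to run it; otherwise the rsync line itself is the part I mean)? Both paths are placeholders: the old drive as Unassigned Devices mounts it, and a folder on the new array disk.

```python
#!/usr/bin/env python3
# Sketch of the copy I have in mind. Both paths below are placeholders.
import subprocess

SRC = "/mnt/disks/OLD_1TB/"   # trailing slash = copy the contents, not the folder itself
DST = "/mnt/disk2/restored/"  # copying to /mnt/user/<share>/ should also work

subprocess.run(
    [
        "rsync",
        "-avh",        # archive mode, verbose, human-readable sizes
        "--progress",
        "--dry-run",   # drop this flag once the file list looks right
        SRC,
        DST,
    ],
    check=True,
)
```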

Any help would be really appreciated.

Thanks

submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]

Appdata, isos, system, and domains are the only things on the cache drive; the mover has been run, and there is nothing on the cache drive that shouldn't be there. I only have one Docker container installed currently (Plex) and no VMs or ISOs downloaded. I have tried to delete old files that Docker apps left behind, but some of the folders won't delete. (I also have a share that's empty, but it contains folders I can't delete, with no files in them that I can see, and therefore I can't remove the share.)

I'm at the point where I want to rebuild my server from scratch, but I don't want to lose the data on the drives in the array. Can this be done? And if so, how?


Has anyone successfully deployed something like Subs AI within Unraid?

Basically I'd like to use this to grab all the missing subtitles that Bazarr isn't able to grab.

PS: If anyone knows of a similar app with a scheduler built into the web UI, please let me know.


As my bug report explains: the video guide and script from SpaceinvaderOne for ZFS snapshot backups cannot safely back up live databases; it will corrupt them. As long as you use this script, please make sure that you either stop all Docker containers that use a database before running it, or dump the live database to a db dump beforehand.
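As a rough sketch of what I mean, driven through the docker CLI (the container names, the Postgres user, and the dump directory are placeholders you would swap for your own):

```python
#!/usr/bin/env python3
# Sketch of "dump and/or stop before you snapshot", driven through the docker CLI.
import os
import subprocess
from datetime import datetime

DB_CONTAINERS = ["immich_postgres"]              # placeholder database container(s)
APP_CONTAINERS = ["immich_server", "nextcloud"]  # placeholder apps that use them
DUMP_DIR = "/mnt/user/appdata/db-dumps"          # placeholder dump location

os.makedirs(DUMP_DIR, exist_ok=True)
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")

# 1) Dump each live database so the snapshot contains a consistent copy.
for name in DB_CONTAINERS:
    dump_path = os.path.join(DUMP_DIR, f"{name}-{stamp}.sql")
    with open(dump_path, "w") as f:
        subprocess.run(["docker", "exec", name, "pg_dumpall", "-U", "postgres"],
                       stdout=f, check=True)

# 2) Or stop everything that touches a database, snapshot, then start it again.
for name in APP_CONTAINERS + DB_CONTAINERS:
    subprocess.run(["docker", "stop", name], check=True)
# ... run the ZFS snapshot / backup script here ...
for name in DB_CONTAINERS + APP_CONTAINERS:
    subprocess.run(["docker", "start", name], check=True)
```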

submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]

So I'm trying to get Unraid set up for the first time. I'm still running the free trial, and assuming I can get this set up, I do plan on purchasing it, but I'm starting to get frustrated and could use some help.

I previously had a Drobo, but that company went under, and I decided to switch to an Unraid box because, as far as I can tell, it's the only other NAS solution that will let me upgrade my array over time with mixed drive capacities.

Initially everything was fine: I popped in all my empty drives, set up the array with one 20TB drive as parity and one 20TB plus five 14TB drives as data disks, and started to move stuff over from the backups I had from the previous Drobo NAS, using the Unassigned Devices plugin and a USB 3 hard drive dock.

Well, what has happened twice now is that a device will randomly go disabled with no indication of what's wrong. The blank drives are only about a year old and had shown absolutely no signs of failing before this. They are also still passing the SMART tests with no errors. The first time this happened it was the brand new 20TB drive in the parity slot. I was able to resolve it by unassigning that drive, starting the array, and then stopping it to reassign the drive. That's what a forum post I found suggested, and it seemed to have worked, but it started a whole new parity sync that was estimated to take a whole week. The thing is, I don't have a week to waste, so I went ahead and started moving files back onto the system again, but now the same thing has happened to disk 3, one of the 14TB drives.

I'm at my wits' end. The first time it happened I couldn't figure it out, so I just wiped the array, reinstalled Unraid, and tried again. Are there any common pitfalls anyone could recommend I check, or is anyone just willing to help?

My hardware: Ryzen 7 5700G, 64GB of 3200 ECC DDR4 (2x16GB is currently installed, but I've got another 2x16GB that arrived yesterday after being shipped late and still needs to be installed), and a case with 8 NAS bays in the front.

Blank drives: 2x20TB, 5x14TB, 1x8TB
Drives with data on them: 2x20TB, 1x10TB, 1x8TB (totaling around 40TB of data across them)

Once the data is moved off the drives that have data on them, I do intend to add them to the array. My NAS case has eight bays, plus two internal SSDs that are separate from the NAS bays: one SATA SSD set up as a cache, and one NVMe M.2 set up as space for appdata.

As of last night before I went to bed, I had about 3 terabytes of data moved onto the array, but during the overnight copy something happened to disk 3 and the device was marked as offline. I couldn't find any error messages telling me why.

The Parity drive was already invalid because I was copying data in while the parity sync was happening, and now I can't get the array to start at all.

I tried doing something a forum post recommended, which was to start the array in maintenance mode with the disk unassigned, then stop the array and restart it with disk 3 reassigned to the correct drive, but it refuses to do so. It tells me that there are too many wrong or missing disks.

The weirdest part is that I know disk 3 still has the data on it, because if I unassign it and mount the drive using the Unassigned Devices plugin, I can see all the data is still there and browsable.

I'm starting to feel really dumb because I don't know what I'm doing wrong. I feel like there's got to be something simple I'm missing, and I just can't figure out what it is.


Hello!

I’ve been running unRAID for about two years now, and recently had a thought to use some spare parts and separate my server into two based on use. The server was used for personal photos, videos, documents, general storage, projects, AI art, media, multitude of docker containers, etc. But I was thinking, it’s a bit wasteful to run parts that I use once or twice a week or less 24/7, there is just no need for the power use and wear and tear on the components. So I was thinking to separate this into a server for storage of photos, videos and documents powered on when needed, and then a second server for the media which can be accessed 24/7.

Server 1 (photos, videos, documents, AI experiments): 1 x 16TB parity, 2 x 14TB array. i7-6700K, 16GB RAM.

Server 2 (media, Docker): 1 x 10TB parity, 1 x 10TB and 2 x 6TB array. Cheap 2-core Skylake CPU from spare parts, 8GB RAM.

With some testing, server 2 only pulls about 10W while streaming media locally, which is a huge drop from the 90+ watts at idle it was drawing when I had everything combined.

I was hoping to use an old laptop I have lying around for the second server instead; it has an 8-core CPU and 16GB of RAM and runs at 5W idle. I have a little NVMe-to-SATA adapter that works well, but the trouble is powering the drives reliably.

Anyway, the pros of separating it out: lower power usage and less wear and tear on the HDDs, so I will have to replace them less frequently.

The cons: running and managing two servers.

Ideally, I'd like to run server 1 on the cheap 2-core Skylake CPU (it's only serving some files, after all) and server 2 on the laptop with 8 cores (though that still leaves the issue of powering the drives), and then take the i7-6700K for a spare gaming PC for the family.

The alternative would be to just combine everything back into one server, manage the shares better, have drives spun up only when needed, etc. But I had issues with this and would sometimes log into the web UI to find all drives spun up even though nothing was being accessed.

Anyways, I hope all of that makes sense. Any insight or thoughts would be appreciated!

submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]

So my SWAG Docker container can't see the other containers on the same Docker network by name; all the conf files need the IP and port to work.

The other containers can see each other (Sonarr and SAB, for example), and they are all on the same network.

Anyone know why?

Found the fix:


Anyone know of a USB 3.2 Gen 2 (or lower) 2.5GbE adapter that works well with Unraid? I know it's my old friend Slackware under the hood, but I'm not sure how far that will get me, since it seems to be pretty well stripped down. I'm fresh out of PCIe slots in my little box but have plenty of USB 3.2 Gen 2 ports to go around, since the only USB device I use is my UPS.


Hi guys, it looks like the used Dell 2080 Ti I bought off Reddit died after a couple of months of life.

I have been throwing some AI workloads at it (image generation, speech-to-text, etc.), and it looks like the NVIDIA driver randomly stopped seeing it. I tried downgrading the driver version and rebooting, but as soon as I started throwing AI workloads at it again, the same thing happened.

Can anyone suggest a good dual-slot GPU? It doesn't really need to be one of the consumer cards, as I'll only be using it for AI workloads and transcoding via Tdarr and Plex.

Thank you!


As the title says, I updated to 6.12 and suddenly a new share called "appdata_cache" appeared. I have my appdata share living on the cache (primary: cache, secondary: array, mover: cache <-- array).

Anyone know what that is and why?


I’m new to the Unraid scene, after putting off doing something other than Windows-based serving and sharing for about.. oh, about 14 years. By “new to the scene”, I mean: “Trial expires in 28 days 22 hours 6 minutes” :-)

Anywho, I ran into an issue with a disabled drive. The solution was to rebuild it, which I figured out thanks to a post by u/Medical_Shame4079 on Reddit.

That made me think about the whole “losing stuff on Reddit” problem we may face in the future. While this post isn’t much, maybe it will be helpful to someone else sometime.

The issue? A drive had a status of disabled, with the message “device is disabled, contents emulated”.

The fix:

Stop the array, unassign the disk, start the array in maintenance mode, stop it again, reassign the drive to the same slot, and then start the array so the disk rebuilds. The idea is to start the array temporarily with the drive “missing” so it changes from “disabled” to “emulated” status, then to stop it and “replace” the drive to get it back to “active” status.

Looking forward to more time with Unraid. It’s been easy to pick up so far.
