corroded

joined 1 year ago
 

At least in this post, I'm not advocating for any particular political position; I mean for this to be a more generalized discussion.

I have never understood what prompts people to attend political rallies. None of the current US political candidates 100% align with my views, but I am very confident that I made the right choice in who I voted for. That is to say, I'd consider myself a strong supporter of [name here].

To me, it feels like attending a political rally is like attending a college lecture. You have a person giving you information, but you don't gain anything by hearing it in person as opposed to reading it or watching a recording. If I want to learn something, it's much more comfortable for me to read an article or watch a video in the comfort of my own home. If I want to understand what a political candidate stands for, I'd much rather watch a recording of a town-hall meeting or read something she (oops) wrote than take the time to drive to a rally, get packed in with a bunch of other people, and simply stand and listen.

I understand concerts. Hearing live music is vastly different from listening to a recording. Same with movies; most of us don't have an IMAX theater at home. When you're trying to gather information, though, what's the draw in standing outside in a crowd and listening to it in person?

[–] [email protected] 16 points 5 days ago (9 children)

Why is kernel-level anti-cheat even a thing?

If I were trying to prevent cheating, I'd hash the relevant game files, encrypt the values, and hard-code them into the executable. Then, when the game is launched, calculate the hash of the existing files and compare it to the saved values.
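A minimal sketch of the userspace check I'm describing, with FNV-1a standing in for a real hash and made-up file names and values (a real implementation would obviously want a stronger hash plus the encryption step):

    #include <cstdint>
    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <string>
    #include <vector>

    // FNV-1a: a simple stand-in for whatever hash you'd actually use.
    std::uint64_t fnv1a(const std::vector<char>& data) {
        std::uint64_t hash = 1469598103934665603ull;
        for (char c : data) {
            hash ^= static_cast<unsigned char>(c);
            hash *= 1099511628211ull;
        }
        return hash;
    }

    // Read a file as raw bytes and compare its hash to the baked-in value.
    bool file_matches(const std::string& path, std::uint64_t expected) {
        std::ifstream in(path, std::ios::binary);
        if (!in) return false;
        std::vector<char> data((std::istreambuf_iterator<char>(in)),
                               std::istreambuf_iterator<char>());
        return fnv1a(data) == expected;
    }

    int main() {
        // Hypothetical game file and hard-coded expected hash.
        if (!file_matches("data/weapons.dat", 0x9e3779b97f4a7c15ull)) {
            std::cerr << "Integrity check failed.\n";
            return 1;
        }
        std::cout << "Files OK.\n";
    }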

What is gained by running anti-cheat in kernel mode? I only play single-player games, so I assume I'm missing something.

[–] [email protected] 21 points 1 week ago* (last edited 1 week ago) (2 children)

I use Jellyfin heavily, and it's a fantastic project, but I really wish they would address the issues with transcoding, specifically the ability to force it on.

My library contains a decent amount of HDR content (lots of DV). On my TVs (using Nvidia Shield), it will direct-play the DV content, resulting in a green picture. If I turn on burned-in subtitles or drop the bitrate and FORCE it to transcode, it looks perfect. I've resorted to just setting a low bitrate on clients so it always transcodes.

I'm really hoping a future version gives us the ability to set more fine-grained transcoding settings per-client. Even the ability to disable direct-play completely would be fantastic.

[–] [email protected] 55 points 1 week ago (8 children)

Something isn't adding up here. The first article I read about this said that there were employees nearby who saw her but were unable to open the door. If I see someone being literally cooked, I'm going to grab the closest metal object and smash the fuck out of the door. I would imagine most people would have the same reaction. Even if it's a metal door, 4 or 5 people could almost certainly pry it open.

[–] [email protected] -4 points 2 weeks ago* (last edited 2 weeks ago) (13 children)

If you're sick with something that's non-transmissible, then it's on you to decide if you want to go to work or not.

If you're sick with something contagious, then I don't care who you are, you're a horrible excuse for a human being if you go to work.

[–] [email protected] 8 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Maybe I'm totally wrong, but doesn't EGR stand for EXHAUST Gas Recirculation? Is the Volt a hybrid? I thought it was an EV and thus had no exhaust.

Edit: This was a joke, wasn't it?

[–] [email protected] 50 points 2 weeks ago (6 children)

I'm going to go ahead and say convicted felons probably shouldn't be eligible for the country's highest office, either.

[–] [email protected] 73 points 2 weeks ago (5 children)

I really believe that in order to subscribe to the SovCit bullshit, you have to be mentally challenged to some degree. None of these idiots have ever won a court case or a civil suit against a bank. Even if everything they said were true, it's a blatantly obvious fact that our legal system doesn't agree.

Suspend disbelief for a moment and give them the benefit of the doubt. They're still clinging to beliefs that have been proven time and again not to work with our modern legal and financial systems. Nobody with a properly-functioning brain could possibly still think any of this is a good idea.

[–] [email protected] 4 points 2 weeks ago (1 children)

At least for me, the whole "made by devs for devs" thing isn't really the major downfall. It's the fact that it can't be trusted to remain functional in a dynamic environment. I like using the command line, but sometimes that's just not enough.

If I need a specific software package, I can download the source, compile it along with the hundred or so libraries that weren't included in the .tar.gz, and eventually get it running.

However, when an "apt upgrade" changes enough, the binary I compiled earlier stops working. Then I spend hours trying to recompile it along with its dependencies, only to find that it doesn't support some obscure sub-version of a package that got installed with the latest security updates.

In a static environment, where I will never change settings or install software (like my NAS), it's perfect. On my desktop PC, I just want it to work well enough so I can tinker with other things. I don't want to have to troubleshoot why Gnome or KDE isn't working with my video drivers when all I want to do is launch remote desktop so I can tinker with stuff on a server that I actually want to tinker with.

[–] [email protected] 14 points 2 weeks ago* (last edited 2 weeks ago) (14 children)

I can only speak for myself, but I have always had bad luck with Linux on desktop. Something always breaks, isn't compatible, or requires a lengthy installation process involving compiling multiple libraries because no .deb or .rpm is available.

On servers, it's fantastic. If you count VMs, I have far more Linux installations than Windows. In general, I use Win10 LTSC for anything that requires a GUI and Ubuntu Server for anything that only needs CLI or hosts a web interface.

[–] [email protected] 25 points 3 weeks ago (34 children)

Win10 LTSC still has quite a few years left.

[–] [email protected] 4 points 3 weeks ago

I was born in the 1980s. I remember growing up, I always had the impression that by this time in the 21st century, we'd have figured out some way to break the established laws of physics. Maybe it was because of watching so much sci-fi, but I feel like I'm not alone in this. The media seemed to reflect the same line of thinking. "Back to the Future 2" with its hoverboards and flying cars is now set several years in the past.

Be it anti-gravity, interstellar travel, teleportation, whatever, I always kind of assumed that by now, we'd at least have a working theory of how we might implement it in the next few decades. I think a lot of that has to do with the start of the "information age." Computers and the way they could connect us were so revolutionary, it seemed like "magic" to the layperson. More "magic" would only be a few years away, right? If we could fit all this power into a box that sits on your desk, then it wasn't beyond the scope of reason to think that anything was possible; it'd just take a few more years for us to figure it out, then we'd be planning the first NASA mission to another solar system.

What I never would have predicted is just how rapidly computer technology would advance. We now have supercomputers in our pockets, powered by CPUs that are well into the realm of nanotechnology and are now starting to run into limitations imposed by quantum physics. As a technological society, we've probably progressed farther than I would have ever imagined, just not in the way I expected.

[–] [email protected] 1 points 3 weeks ago

If I want to connect my laptop to a code reader for my car, I just plug my OBD2 reader into one of the USB Type-A ports. If I want to connect it to a monitor, I have USB-C, DP, and HDMI ports available, depending on what the monitor supports. If I want to transfer some movies from my NAS, I can plug the ethernet port right into a switch or a wall jack.

I use every single port on my laptop, and I don't have to buy an adapter to connect to anything.

 

This is more "home networking" than "homelab," but I imagine the people here might be familiar with what I'm talking about.

I'm trying to understand the logic behind ISPs offering asymmetrical connections. From a usage standpoint, the vast majority of traffic goes to the end-user instead of from the end-user. From a technical standpoint, though, it seems like it would be more difficult and more expensive to offer an asymmetrical connection.

While consumers may be connected via fiber, cable, DSL, etc., I assume the ISP has a number of fiber links to "the internet." Those links are almost surely some symmetrical standard (maybe 40 or 100Gbps). So if they assume they can support 1,000 users at a certain download speed, what is the advantage of limiting the upload? If their incoming trunks can support 1,000 users at 100Mbps download, shouldn't they also support 1,000 users at 100Mbps upload, since the trunks themselves are symmetrical?

Limiting the upload speed to a different rate than download seems like it would just add a layer of complexity. I don't see a financial benefit either; if their links are already saturated for download, reducing upload speed doesn't help them add additional users. Upload bandwidth doesn't magically turn into download bandwidth.

Obviously there's some reason for this, but I can't think of one.

 

I generally try to stay informed on current events. With the exception of what gets posted here, I normally get my news from CNN. I tend to lean left politically, but not always.

The problem I always run into is that every news site I read, regardless of where they stand on the political spectrum, is always filled with pointless bullshit. Specifically, sports, celebrity news, and product placement. "Some shitty pop singer is dating some shitty actor" or "These are our recommendations for the best mass-produced garbage-quality fast fashion from Temu" or "Some overpaid dickhead threw a ball faster than some other overpaid dickhead."

What I'd love to find is a news source that's just news that matters. No celebrity gossip, sports, opinion pieces, etc. Just real events that have an impact on some part of the world. Legislation, natural events, economic changes, wars, political changes, that kind of thing.

Does this exist, or is all journalism just entertainment?

 

A few months ago, I upgraded all my network switches. I have a 16-port SFP+ switch and a 1Gb switch (LAGGed to the SFP+ switch with two DACs). These work perfectly, and I'm really happy with the setup so far.

My main switch ties into a remote switch in another building over a 10Gb fiber line, and this switch ties into another switch of the same model (on a different floor) over a Cat6e cable. These switches are absolute garbage: https://www.amazon.com/gp/product/B084MH9P8Q

I should have known better than to buy a cheap off-brand switch, but I had hoped that Zyxel was a decent enough brand that I'd be okay. Well, you get what you pay for, and that's $360 down the toilet. I constantly have dropped connections, generally resulting in attached devices completely losing network connectivity, or if I'm lucky, dropping down to dial-up speeds (I'm not exaggerating). The only way to fix it is to pull the power cable to the switch. Even under virtually no load, the switch gets so hot that it's painful to touch. Judging from the fact that my connection is far more stable when the switch is sitting directly in front of an air conditioner, that tells me just about all I need to know.

I'm trying to find a pair of replacement switches, but I'm really striking out. I have two ancient Dell PowerConnect switches that are rock solid, but they're massive, they sound like jet engines, and they use a huge amount of power. Since these are remote from my homelab and live in occupied areas, they just won't work. All I need is a switch that has:

  • At least 2 SFP+ ports (or 1 SFP+ port for fiber and a 10Gb copper port)
  • At least 4 1Gb ports (or SFP ports; I have a pile of old 1Gb SFP adapters)
  • Management/VLAN capability

Everything I find online is either Chinese white-label junk or much larger than what I need. A 16-port SFP+ switch would work, but I'd never use most of the ports, and I'd be wasting a lot of money on overkill hardware. As an example, one of these switches is in my home office; it exists solely to connect my server rack, two PCs, and a single WAP. I am never going to need another LAN connection in my home office; any new hardware is going to go in the server rack, but I do need 10Gb connectivity on at least one of those PCs.

Does anyone have a suggestion for a small reliable switch that has a few SFP+ ports, is made by a reputable brand, and isn't a fire hazard?

 

I have been using the BlueIris NVR integration (from HACS) for quite some time, and it works great for triggering BI from HA. I'm trying to do the opposite now: fire off automations in HA whenever BI detects motion on one of my cameras.

I've never used MQTT before, so I'm learning as I go, but I think I have most of my setup configured properly. I've installed Mosquitto and the MQTT integration in HA. I've configured BI to connect to HA, and running "Test" in the "Edit MQTT Server" menu in BI shows a good connection and no errors. I've set my cameras to post an MQTT event when the alert is triggered (and I've verified that the alerts are in fact being triggered).

Nothing happens in HA, though. The "Motion" sensor for my camera in HA stays at "Clear." In fact, the history shows no change at all, ever.

I have the events in BI set up as follows:

On Alert:
    MQTT Topic: BlueIris/&CAM/Status
    Payload: { "type": "&TYPE", "trigger": "ON" }

On Reset: exactly the same, but with ON changed to OFF.

I've tried changing the MQTT autodiscovery prefix in HA from "homeassistant" to "BlueIris," and it made no difference. The Mosquitto logs show a login from HA, so I feel like I'm close, but I'm not sure where else to look.

Edit: I installed MQTT Explorer, and I've verified that the messages are making it to Mosquitto and appear to be correctly formatted.

UPDATE: I set the MQTT integration to listen to the MQTT messages coming from BI, and sure enough, they were coming through just fine. For some reason, the BI integration just wasn't seeing them. Digging through the system logs, I saw some errors about "creating a binary sensor" coming from the BI integration. The only thing I can think of is that because I didn't have MQTT set up when I first installed the BI integration, something went wrong with the config (although I had already rebooted the system several times). I re-downloaded the BI integration and re-installed it, and now everything works perfectly.

 

This isn't strictly "homelab" related, but I'm not sure if there's a better community to post it.

I'm curious what kind of real-world speeds everyone is getting over their wireless network. I was testing tonight, and I'm getting a max of 250Mbps down/up on my laptop. I have 4 Unifi APs, each set to 802.11ac with 80MHz channels, and my laptop supports 2x2 MIMO. Testing on my phone (Galaxy S23) gives basically the exact same result.

The radio spectrum around me is ideal for WiFi; on 5GHz, there is no other AP close enough for me to detect. With an 80MHz channel width, I can space all 4 of my APs so that there's no interference (using a non-DFS channel for testing, btw).
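For reference, the theoretical ceiling I should be comparing against, assuming a short guard interval and the usual rule of thumb that real-world TCP throughput lands around half the PHY rate:

\[
866.7\ \text{Mbps}\ (2{\times}2\ \text{MIMO},\ 80\,\text{MHz},\ \text{VHT MCS 9}) \times 0.5 \approx 433\ \text{Mbps}
\]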

Am I wasting my time trying to chase higher speeds with my current setup? What kind of speeds are you getting on your WiFi network?

22
submitted 5 months ago* (last edited 5 months ago) by [email protected] to c/[email protected]
 

I have been programming in C++ for a very long time, and like a lot of us, I have an established workflow that hasn't really changed much over time. With the exception of bare-metal programming for embedded systems, though, I have been developing for Windows that entire time. With the recent "enshittification" of Windows 11, I'm starting to realize that it's going to be time to make the switch to Linux in the very near future. I've become very accustomed to (spoiled by?) Visual Studio, though, and I'm wondering about the Linux equivalent of features I probably take for granted.

  • Debugging: In VS, I can set breakpoints, step through my code line by line, pause and inspect the contents of variables on the fly, switch between threads, etc. My understanding of Linux programming is that it's mostly done in a code editor, then compiled on the command line. How exactly do you debug code when your build process is separate from your code editor? Having to compile my code, run it until I find a bug, then open it up in a debugger and start all over sounds extremely inefficient (see the sketch after this list).
  • Build System: I'm aware that CMake exists, and I've used it a bit, but I don't like it. VS lets me just drop a .h and .cpp file into the Solution Explorer, and I'm good to go. Is there really no graphical alternative for Linux?
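For concreteness, here's the basic gdb workflow as I understand it so far, sketched as comments on a throwaway program (the file and variable names are made up):

    // demo.cpp -- hypothetical throwaway program.
    // Build with debug symbols and optimization off:
    //   g++ -g -O0 -o demo demo.cpp
    // Then run it under gdb:
    //   gdb ./demo
    //   (gdb) break main       -- stop at the start of main()
    //   (gdb) run              -- launch the program under the debugger
    //   (gdb) next             -- step over the current line
    //   (gdb) step             -- step into a function call
    //   (gdb) print total      -- inspect a variable on the fly
    //   (gdb) info threads     -- list all threads
    //   (gdb) thread 2         -- switch to thread 2
    //   (gdb) backtrace        -- show the call stack
    //   (gdb) continue         -- resume execution
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> values{1, 2, 3, 4};
        int total = 0;
        for (int v : values) {
            total += v; // a convenient line for a breakpoint
        }
        std::cout << total << '\n';
        return 0;
    }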

It seems like Linux development is very modular; each piece of the development process exists in its own application, many of which are command-line only. Part of what I like about VS is that it ties this all together into a nice package and allows interoperability between the functions. I can create a new header or source file, add some code, build it, run it, and debug it, all within the same IDE.

This might come across as a rant against Linux programming, but I don't intend it to. I guess what I'm really looking for is suggestions on how to make the transition from a Visual Studio user to a Linux programmer. How can I transition to Linux and still maintain an efficient workflow?

As a note, I am not new to Linux; I have used it extensively. However, the only programming I've done on Linux is bash scripting.

 

I've noticed recently that my network speed isn't what I would expect from a 10Gb network. For reference, I have a Proxmox server and a TrueNAS server, both connected to my primary switch with DAC. I've tested the speed by transferring files from the NAS with SMB and by using OpenSpeedTest running on a VM in Proxmox.

So far, this is what my testing has shown:

  • Using a Windows PC connected directly to my primary switch with CAT6: OpenSpeedTest shows around 2.5-3Gbps to Proxmox, which is much slower than I'd expect. Transferring a file from my NAS hits a max of around 700-800MB/s (bytes, not bits), which is about what I'd expect given hard drive speed and overhead.
  • Using a Windows VM on Proxmox: OpenSpeedTest shows around 1.5-2Gbps, which is much slower than I would expect. I'm using VirtIO network drivers, so I should realistically only be limited by CPU; it's all running internally in Proxmox. Transferring a file from my NAS hits a max of around 200-300MB/s, which is still unacceptably slow, even given the HDD bottleneck and SMB overhead.

The summary I get from this is:

  • The slowest transfer rate is between two VMs on my Proxmox server. This should be the fastest transfer rate.
  • Transferring from a VM to a bare-metal PC is significantly slower than expected, but better than between VMs.
  • Transferring from my NAS to a VM is faster than between two VMs, but still slower than it should be.
  • Transferring from my NAS to a bare-metal PC gives me the speeds I would expect.

Ultimately, this shows that the bottleneck is Proxmox. The more VMs involved in the transfer, the slower it gets. I'm not really sure where to look next, though. Is there a setting in Proxmox I should be looking at? My server is old (dual Xeon E5-2650 v2); is it just too slow to pass the data across the Linux network bridge at an acceptable rate? CPU usage on the VMs themselves doesn't get past 60% or so, but maybe Proxmox itself is CPU-bound?

The bulk of my network traffic is coming in-and-out of the VMs on Proxmox, so it's important that I figure this out. Any suggestions for testing or for a fix are very much appreciated.

 

In C++17, std::any was added to the standard library. Boost had its own version of "any" for quite some time before that.

I've been trying to think of a case where std::any is the best solution, and I honestly can't think of one. std::any can hold a variable of any type at runtime, which seems incredibly useful until you consider that at some point, you will need to actually use the data in std::any. This is accomplished by calling std::any_cast with a template argument that corresponds to the correct type held in the std::any object.

That means that although std::any can hold an object of any type, the set of valid types must be known at the point where the value is any_cast out of the std::any object. While the list of types that can be assigned to the object is unlimited, the list of types that can be extracted from it is still finite.

That being said, why not just use a std::variant that can hold all the possible types that could be any_cast out of the object? Set a type alias for the std::variant, and there is no more boilerplate code than you would have otherwise. As an added benefit, you ensure type safety.
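A minimal sketch of the comparison, with int and std::string standing in for whatever types would actually be stored:

    #include <any>
    #include <iostream>
    #include <string>
    #include <typeinfo>
    #include <variant>

    // With std::any, the consumer still has to enumerate the types it
    // understands; anything else silently falls through at runtime.
    void print_any(const std::any& value) {
        if (value.type() == typeid(int)) {
            std::cout << std::any_cast<int>(value) << '\n';
        } else if (value.type() == typeid(std::string)) {
            std::cout << std::any_cast<std::string>(value) << '\n';
        }
    }

    // With std::variant, the full set of alternatives is spelled out once
    // in a type alias, and std::visit is checked at compile time: forget
    // an alternative and the code fails to build instead of failing at runtime.
    using Value = std::variant<int, std::string>;

    void print_variant(const Value& value) {
        std::visit([](const auto& v) { std::cout << v << '\n'; }, value);
    }

    int main() {
        print_any(std::any{42});
        print_any(std::any{std::string{"hello"}});
        print_variant(Value{42});
        print_variant(Value{std::string{"hello"}});
    }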

 

I'm looking for a portable air conditioner (the kind with 1 or 2 hoses that go to outside air). The problem I'm running into is that every single one I find has some kind of "smart" controller built in. The ones with no WiFi connectivity still have buttons to start/stop the AC, meaning that a simple Zigbee outlet switch won't work. I could switch the AC off, but it would require a button-press to switch it back on. The ones with WiFi connectivity all require "cloud" access; my IoT devices all connect to a VLAN with no internet access, and I plan to keep it that way.

I suppose I could hack a relay in place of the "start" button, but I'd really rather just have something I can plug in and use.

I can't use a window AC; the room has no windows. I'll need to route intake/exhaust through the wall. So far, I can't find any "portable" AC that will work for me.

What I'm looking for is a portable AC that either:

  • Connects to WiFi and integrates with HA locally.
  • Has no connectivity but uses "dumb" controls so I can switch it with a Zigbee outlet switch.

Any ideas?

 

Yesterday, Brian Dorsey was executed for a crime he committed in 2006. By all accounts, during his time in prison, he became remorseful for his actions and was a "model prisoner," to the point that multiple corrections officers backed his petition for clemency.

https://www.cnn.com/2024/04/09/us/brian-dorsey-missouri-execution-tuesday/index.html

In general, the media is painting him as the victim of a justice system that fails to recognize rehabilitation. I find this idea disgusting. Brian Dorsey, in a drug-induced stupor, murdered the people who gave him shelter. He brutally ended the lives of a woman and her husband, and (allegedly) sexually assaulted her corpse. There is an argument that he had ineffective legal representation, but that doesn't negate the fact that he is guilty.

While I do believe that he could have been released or had his sentence commuted to life in prison, and that he could potentially have been a model citizen, this would have been a perversion of justice. Actions that someone takes after committing a barbaric act do not undo the damage that was done. Those two people are still dead, and he needed to face the ramifications of his actions.

Rehabilitation should not be an option for someone who committed crimes as depraved as he did. Quite frankly, a lethal injection was far less than what he deserved, given the horror he inflicted on others. If the punishment should fit the crime, then he was given far more leniency than was warranted.

 

I just set up a local instance of Invidious. I created an account, exported my YouTube subscriptions, and imported them into Invidious. The first time I tried, it imported 5 subscriptions out of 50 or so. The second time I tried, it imported 9.

Thinking there might be a problem with the import function, I decided to manually add each subscription. Every time I click "Subscribe," the button switches to "Unsubscribe," then immediately switches back to "Subscribe." If I look at my subscriptions, the channel was never added.

My first thought was a problem with the PostgreSQL database, but that wouldn't explain why some subscriptions work when I import them.

I tried rebooting the container, and it made no difference. I'm running Invidious in an Ubuntu 22.04 LXC container in Proxmox. I installed it manually (not with Docker). The container has 100GB of disk space, 4 CPU cores, and 8GB of memory.

What the hell is going on?
