I haven't gone through all their work, but some of the delisted maintainers were working on driver support for Baikal, a Russia-based electronics company whose products include semiconductors and ARM processors. Given the sanctions against Russia, especially around dual-use technology like domestic semiconductors, I would expect that Linus and the other maintainers were told, or concluded on their own, that by signing off on and merging that code they'd personally be violating sanctions.
I recently removed in-editor AI because I noticed it was becoming muscle memory for my brain: I'd stop thinking once I'd written the start of a snippet, just enough to get the LLM to autocomplete the rest. I'm still using LLMs, particularly for languages and libraries I'm not familiar with, but through the artifact editors in ChatGPT and Claude instead.
I really don't blame them; security- and privacy-minded folk are more likely to use niche configs. For Linux support, companies may be better served by publishing APIs and letting the community handle the clients. Rclone, for example, implements a bunch of storage backends, and last I knew it had an unstable Proton plugin.
The comments on that article are some of the most vitriolic I've ever seen on a technical issue. That only goes to prove the maintainer's point, though.
Some are good for a laugh though, like assertions that Rust in the kernel is a Microsoft sabotage op or LLVM is for grifters and thieves.
FOSS in general needs better means of financial support. While the software is free and libre, developer time is not, and ultimately they gotta eat and pay bills. I hope they get positive results and don't catch much unnecessary flak.
Given the ease of implementing end-to-end encryption now, it's a reasonable assumption that anything not E2EE is being data mined. E2EE has extensive security benefits; for example, even if your data is dumped, the contents are useless to whoever has them. So there has to be a compelling reason not to use it.
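Here's a minimal sketch of that "a dump is useless" property, assuming the RustCrypto chacha20poly1305 crate; this only shows client-held-key encryption, not a full E2EE key-exchange protocol, and the names are illustrative:

```rust
use chacha20poly1305::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    ChaCha20Poly1305,
};

fn main() {
    // The key is generated and kept on the client; the provider never sees it.
    let key = ChaCha20Poly1305::generate_key(&mut OsRng);
    let cipher = ChaCha20Poly1305::new(&key);

    // A fresh nonce per message; nonce + ciphertext is all the server stores.
    let nonce = ChaCha20Poly1305::generate_nonce(&mut OsRng);
    let ciphertext = cipher
        .encrypt(&nonce, b"meet at noon".as_ref())
        .expect("encryption failed");

    // Anyone who dumps the server's storage gets only (nonce, ciphertext),
    // which is useless without the client's key.
    let plaintext = cipher
        .decrypt(&nonce, ciphertext.as_ref())
        .expect("decryption failed");
    assert_eq!(&plaintext, b"meet at noon");
}
```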
People haven't really changed. As always, power corrupts. When the rewards are great enough, it seems people are often enough willing to compromise their integrity.
My first programming experience, an online class, was in a Linux VM. Linux made programming easy and delightful, Windows always made it a huge pain. As time went on, more of what I did was easier on Linux, and now everything is.
The key detail in the actual memo is that they're not just using an LLM. "Wallach anticipates proposals that include novel combinations of software analysis, such as static and dynamic analysis, and large language models."
They're also clearly aware of the scope limitations. They explicitly call out some software, like entire kernels or pointer-arithmetic-heavy code, as being out of scope, and they don't seem to anticipate 100% automation.
So with that context, they seem open to any solution to "how can we convert legacy C to Rust?" Obviously LLMs and machine learning are attractive avenues of investigation; current models are demonstrably able to write some valid Rust and transliterate some code. I use them, and they work more often than not for simpler tasks.
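As an illustration of the kind of transliteration involved (my own toy example, not one from the memo): a C function taking a pointer and a length maps naturally onto a safe Rust slice.

```rust
// C original:
//   int sum(const int *xs, size_t n) {
//       int total = 0;
//       for (size_t i = 0; i < n; i++) total += xs[i];
//       return total;
//   }
//
// A straightforward Rust transliteration: the pointer/length pair becomes a
// bounds-checked slice, and the manual index loop becomes an iterator.
fn sum(xs: &[i32]) -> i32 {
    xs.iter().sum()
}

fn main() {
    assert_eq!(sum(&[1, 2, 3, 4]), 10);
}
```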
TL;DR: they want to accelerate converting C to Rust. LLMs and machine learning are some techniques they're investigating as components.
I have the LTS and zen kernels installed in addition to the default Arch one; that should prevent this, yes?
What do you mean by "this stuff"? Machine learning models have been a fundamental part of spam prevention for years. The concept here is just flipping it around for use by the individual rather than the platform.
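To make the "flipped around" idea concrete, here's a toy sketch of a filter a user could train locally on their own mail; a hand-rolled naive Bayes classifier stands in for whatever model a real client would actually use:

```rust
use std::collections::{HashMap, HashSet};

/// Tiny naive Bayes spam classifier trained entirely on the user's own machine.
struct NaiveBayes {
    word_counts: [HashMap<String, usize>; 2], // per-class word counts: [ham, spam]
    doc_counts: [usize; 2],                   // number of training messages per class
    total_words: [usize; 2],                  // total word count per class
    vocab: HashSet<String>,                   // distinct words seen across both classes
}

impl NaiveBayes {
    fn new() -> Self {
        Self {
            word_counts: [HashMap::new(), HashMap::new()],
            doc_counts: [0; 2],
            total_words: [0; 2],
            vocab: HashSet::new(),
        }
    }

    fn train(&mut self, text: &str, spam: bool) {
        let class = spam as usize;
        self.doc_counts[class] += 1;
        for word in text.to_lowercase().split_whitespace() {
            *self.word_counts[class].entry(word.to_string()).or_insert(0) += 1;
            self.total_words[class] += 1;
            self.vocab.insert(word.to_string());
        }
    }

    fn is_spam(&self, text: &str) -> bool {
        let vocab_size = self.vocab.len() as f64;
        let total_docs = (self.doc_counts[0] + self.doc_counts[1]) as f64;
        let mut scores = [0.0f64; 2];
        for class in 0..2 {
            // Log prior plus Laplace-smoothed log likelihood of each word.
            scores[class] = (self.doc_counts[class] as f64 / total_docs).ln();
            for word in text.to_lowercase().split_whitespace() {
                let count = *self.word_counts[class].get(word).unwrap_or(&0) as f64;
                scores[class] +=
                    ((count + 1.0) / (self.total_words[class] as f64 + vocab_size)).ln();
            }
        }
        scores[1] > scores[0]
    }
}

fn main() {
    let mut filter = NaiveBayes::new();
    filter.train("cheap pills buy now limited offer", true);
    filter.train("win a free prize click here", true);
    filter.train("meeting notes for tomorrow attached", false);
    filter.train("lunch on friday with the team", false);
    println!("{}", filter.is_spam("free pills click now"));      // likely true
    println!("{}", filter.is_spam("notes from friday meeting")); // likely false
}
```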
Simply become older. The older I get, the less I want to do hours-long sessions. More practically, I have a hard bedtime alarm on my phone, which works because being sleep deprived at work sucks.