Actually Useful AI
Welcome! 🤖
Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, "actually useful" for developers and enthusiasts alike.
Be an active member!
We highly value participation in our community. Whether it's asking questions, sharing insights, or sparking new discussions, your engagement helps us all grow.
What can I post?
In general, anything related to AI is acceptable. However, we encourage you to strive for high-quality content.
What is not allowed? 🚫
- Sensationalism: "How I made $1000 in 30 minutes using ChatGPT - the answer will surprise you!"
- ♻️ Recycled Content: "Ultimate ChatGPT Prompting Guide" that is the 10,000th variation on "As a (role), explain (thing) in (style)"
- Blogspam: Anything the mods consider crypto/AI-bro success-porn sigma-grindset blogspam
General Rules
Members are expected to engage in on-topic discussions and to behave in a mature, respectful manner. Those who fail to uphold these standards may have their posts or comments removed, and repeat offenders may face a permanent ban.
While we appreciate focus, a little humor and off-topic banter, when tasteful and relevant, can also add flavor to our discussions.
Related Communities
General
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
Chat
Image
Open Source
Please message @[email protected] if you would like us to add a community to this list.
Icon base by Lord Berandas under CC BY 3.0 with modifications to add a gradient
TL;DR: (AI-generated 🤖)
This article examines the security and trustworthiness of large language models (LLMs). It demonstrates how an open-source model, GPT-J-6B, can be surgically modified to spread misinformation while retaining its performance on other tasks, and highlights the risks of deploying such maliciously modified models in applications like education, including the spread of fake news at scale. Because it is hard to determine an LLM's origin from its weights alone, and safety benchmarks cannot reliably detect tampering, the author argues for a secure LLM supply chain in which models can be traced back to their training algorithms and datasets. As a potential solution, Mithril Security announces AICert, an upcoming open-source tool that provides cryptographic proof of model provenance.
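AICert's exact mechanism isn't described here, but the simplest building block of model provenance is verifying that a downloaded weights file matches a digest published by the model's author. A minimal sketch (function names and the digest workflow are illustrative, not AICert's API):

```python
import hashlib


def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file (e.g. model weights), read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_provenance(weights_path: str, published_digest: str) -> bool:
    """Return True if the local weights file matches the publisher's digest."""
    return file_sha256(weights_path) == published_digest
```

A plain hash only proves the file is unmodified since publication; it says nothing about *how* the model was trained, which is why the article calls for cryptographic proofs binding a model to its training code and data.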
Under the Hood
I used the gpt-3.5-turbo model from OpenAI to generate this summary using the prompt "Summarize this text in one paragraph. Include all important points."
How to Use AutoTLDR