behohippy

joined 1 year ago
[–] [email protected] 1 points 1 year ago (1 children)

Ahh that sucks. It's been a very mild summer up here with almost no "hot" days. I think 28C is about as much as we're seeing lately.

 
[–] [email protected] 2 points 1 year ago

The advancements in this space have moved so fast that it's hard to build a predictive model of where we'll end up, or how fast we'll get there.

Meta releasing LLaMA produced a ton of open-source innovation that showed you could run models nearly at ChatGPT's level with fewer parameters, on smaller and smaller hardware. At the same time, almost every large company you can think of has made integrating generative AI a top strategic priority, with blank-cheque budgets. Whole industries (also deeply funded) are popping up around solving the context window memory deficiencies, prompt stuffing for better steerability, and better summarization and embedding of your personal or corporate data.

We're going to see LLM tech everywhere in everything, even if it makes no sense and becomes annoying. After a few years, maybe it'll seem normal to have a conversation with your shoes?

[–] [email protected] 2 points 1 year ago

I'm not sure either, Win 10/11 are pretty quick to get going and Ubuntu is not much longer than that. If I have to hard reset the mbp for work, it's a nice block of slacker time :)

[–] [email protected] 2 points 1 year ago

Halls of Torment. A $5 game on Steam that's basically a Vampire Survivors clone, but with more RPG elements to it.

[–] [email protected] 4 points 1 year ago

These are amazing. Dell, Lenovo and I think HP made these tiny things, and they were so much easier to get than Pis during the shortage. Plus they're incredibly fast in comparison.

[–] [email protected] 4 points 1 year ago

I've got a background in deep learning and I still struggle to understand the attention mechanism. I know it's a key/value store but I'm not sure what it's doing to the tensor when it passes through different layers.
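The part I can at least write down is the scaled dot-product step: each position's query is scored against every key, and the softmax-ed scores are used to mix the value vectors. A minimal numpy sketch (the shapes and the self-attention usage are just illustrative, not from any particular model):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (seq_len, d) matrices. Each query row is scored against
    # every key row; the softmax-ed scores become mixing weights over
    # the value rows, so each output position is a weighted blend of values.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                    # (seq, seq) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v                               # blend of value vectors

seq, d = 4, 8
rng = np.random.default_rng(0)
x = rng.normal(size=(seq, d))
out = scaled_dot_product_attention(x, x, x)          # self-attention: q = k = v
print(out.shape)                                     # (4, 8): shape is preserved
```

So the tensor keeps its shape through each layer; what changes is that every position's vector gets rewritten as a mixture of the other positions' values.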

[–] [email protected] 4 points 1 year ago

Subscribed. That last episode of AAA was heartbreaking.

 

We used to ride the heavy dual sports through pretty much everything, but this mud hole got him good. He ended up trying to wedge out with a dead tree, but it knocked his chain off, making the situation much worse. Eventually we pulled it out with a z-line and got the chain back on.

If you're in a situation like this and shit ain't moving no matter what you do, lay the bike over on its side (yes, in the mud) and pull the front and rear until you're on something more solid. Your paint will not thank you, but it's better than leaving it there while you go get recovery tools.

[–] [email protected] 2 points 1 year ago (1 children)

I'm on lemmy.world and the sidebar shows 401 subscribers. Is that just a sub count from the local instance or global?

[–] [email protected] 4 points 1 year ago (2 children)

Also not sure how that would be helpful. If every prompt needs to rip through those tokens first, before predicting a response, it'll be stupid slow. Even now with llama.cpp, it's annoying when it pauses to do the context window shuffle thing.
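For anyone who hasn't hit it: the "shuffle" (as I understand llama.cpp's behaviour, so treat the exact split as an assumption) is what happens when the context fills up: it keeps the first chunk of the prompt, throws away the older half of everything after it, and then has to re-evaluate, which is the pause. Roughly:

```python
def shift_context(tokens, n_ctx, n_keep):
    # Sketch of a llama.cpp-style context shift (my understanding, not
    # the actual implementation): once the window is full, keep the
    # first n_keep prompt tokens, drop the older half of the rest, and
    # the model must re-evaluate from the seam -- hence the pause.
    if len(tokens) < n_ctx:
        return tokens
    rest = tokens[n_keep:]
    n_discard = len(rest) // 2
    return tokens[:n_keep] + rest[n_discard:]

ctx = list(range(16))
print(shift_context(ctx, n_ctx=16, n_keep=4))
# first 4 tokens kept, oldest half of the remainder dropped
```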

[–] [email protected] 3 points 1 year ago (1 children)

Same. I loved the idea of what VE does but playing the game was just a confusing mess for me. I stick to the same 8 mods I always use.

 

Any warm day in the winter, I'll hit the trails behind my house.

[–] [email protected] 3 points 1 year ago (1 children)

Still had some reasoning issues, but looking forward to the fine tunes!

[–] [email protected] 1 points 1 year ago

Bad article title. This is the "Textbooks are all you need" paper from a few days ago. It's programming focused and I think Python only. For general purpose LLM use, LLaMA is still better.

 

I host a ton of services running behind my nginx reverse proxy (basic auth + Let's Encrypt). On the whole it works really well with nearly everything I throw at it. Lately, there's been a lot of gradio/websocket/python stuff coming from the AI community, like the local llama and stable diffusion stuff. Not sure what's causing it, but there are always weird issues when I try to reverse proxy them.

Does anyone have some magic settings that "just work" with these weirdo web apps?
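For reference, this is roughly the location block I've been trying; the websocket upgrade headers seem to be the part these apps care about (the port is just gradio's default and the paths are placeholders, not my actual config):

```nginx
location / {
    proxy_pass http://127.0.0.1:7860;        # gradio's default port (placeholder)
    proxy_http_version 1.1;

    # websocket upgrade headers -- gradio/streamlit-style apps
    # tend to break without these
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;

    # long-running generation requests need generous timeouts
    proxy_read_timeout 300s;
}
```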

 

He's 5 today

 

Ryzen 5900X, 64 GB DDR4-3200, 2 TB SSD, 10 TB HDD, and an RTX 2070. Hosting Stable Diffusion, various llama.cpp instances with Python bindings, Jellyfin, Sonarr, multiple modded Minecraft servers, and a network file share.

 

They look kinda weird at this age. The blue egg under them eventually hatched.

 

She's mostly good. Mostly.
