[–] [email protected] 49 points 5 months ago (4 children)

Who's ignoring hallucinations? It gets brought up in basically every conversation about LLMs.

[–] [email protected] 62 points 5 months ago (2 children)

People who suggest, let's say, firing the employees of a crisis intervention hotline and replacing them with LLMs...

[–] [email protected] 12 points 5 months ago

"Have you considered doing a flip as you leap off the building? That way your death is super memorable and cool, even if your life wasn't."

-Crisis hotline LLM, probably.

[–] [email protected] 10 points 5 months ago (1 children)

Less horrifying conceptually, but in Canada a major airline tried to replace their support services with a chatbot. The chatbot then invented discounts that didn't actually exist, and the courts ruled that the airline had to honour them. The chatbot was, for all intents and purposes, no more or less official a source of data than any other information they put out, such as their website and other documentation.

[–] [email protected] 1 points 5 months ago

I approve of that. It's funny, and there's no harm to anyone other than the shareholders, so... 😆

[–] [email protected] 5 points 5 months ago

What's being ignored isn't that hallucinations exist; it's that they're a problem. Right now a lot of enthusiasts just brush them off with the equivalent of ~~boys will be boys~~ AIs will be AIs, which is fine until an AI, say, gets someone jailed by providing garbage case law citations.

And, um, you're greatly overestimating what someone like my technophobic mother knows about AI (xkcd 2501: Average Familiarity seems apropos). There are a lot of people out there who never get into a conversation about LLMs.

[–] [email protected] 3 points 5 months ago

Hallucination really needs to be a disqualifying factor for generative AI. Even using it for my hobbies is pointless when I can't trust it to know dick about fuck. Every time I test the new version, it gets things so blatantly wrong and contradictory that I give up; it's not worth the effort. It's no surprise that everywhere I've worked has outright banned its use for official work.

[–] [email protected] 1 points 5 months ago* (last edited 5 months ago)

Maybe on Lemmy and in some pockets of social media. Elsewhere it definitely doesn't.

EDIT: Also, I regularly talk about AI with non-tech people IRL, just to see how they feel about it. So far, absolutely no one has known what hallucinations are.