this post was submitted on 15 Sep 2024
666 points (98.3% liked)

Technology

59091 readers
4849 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; to ask if your bot can be added, please contact us.
  9. Check for duplicates before posting; duplicates may be removed.

Approved Bots


founded 1 year ago
[–] [email protected] -3 points 1 month ago* (last edited 1 month ago) (3 children)

Where are the articles about humans doing the exact same shit for the last 40-50 fucking years while no one bats an eye? Look at the prompts from people complaining about AI responses and you'll see they don't know how to use this shit any better than my grandparents can use a touchtone phone.

“Build an app”

Fails

“This AI is shit.”

Just like every other piece of technology: garbage in, garbage out. If you can't reliably describe what you want, then no one is going to be able to do it. AI just blatantly points out your descriptive failures.

[–] [email protected] -4 points 1 month ago (2 children)

I've yet to see generative AI make an error that a human couldn't make. Maybe that's why people seem so hateful of it; they were expecting it to be superhuman but instead it's too much like us.

[–] [email protected] 10 points 1 month ago

AI LLMs have learned from us, good and bad. An LLM doesn't know the difference between good and bad unless you tell it.

So you have to know what's good or bad from the get-go before using it and trusting it.

And some people blindly trust AI already... even though it's far from earning that level of trust.
