this post was submitted on 19 Mar 2024
348 points (97.5% liked)


In particular, know how to identify the common and deadly species (eg: much of the genus Amanita) yourself, and get multiple trustworthy field guides for your part of the world.

[–] [email protected] 0 points 7 months ago (1 children)

what’s wrong with lumping a lot of things with different substrates together if, as you admit yourself, there’s still no evidence any of them work well?

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago) (2 children)

LLMs are the current big buzzword and the main ones that "don't work", because people expect them to be intelligent, to actually know and understand things, which they simply do not. Their purpose is to generate text the way a human would, and at that they work remarkably well: give a competent LLM and a human the same writing prompt, and you are very unlikely to spot which one is the machine unless you catch it lying, and even then it might just be a clueless human talking about things he kind of understands but isn't an expert in. Like me.
But they are constantly being used for all kinds of purposes they don't yet fit well, because you can't actually trust anything they say.

Image generation mainly has issues with hands and fingers, so it isn't bulletproof at faking realistic imagery, but for many subjects and styles it can create images that are pretty much impossible to identify as generated. Civit.ai is full of examples. Most people think it doesn't work yet because they mostly see someone throw a simple prompt into Midjourney and take the first thing it generates for an article thumbnail.

And image identification definitely works, but it's... quirky. I said it can't be used to identify mushrooms, because nothing can distinguish two things that look exactly alike. But feed a model enough photos of every single Hot Wheels car in existence, and you can get one that will perfectly recognize which one you have. It will also tell you that a shoe or a tree is one of them, though, because it only knows about Hot Wheels cars.
A model that tries to identify absolutely everything from a photo, like Google Lens, will still misidentify some things because the dataset is so enormous, but so would a human. The difference is that for an AI, "I don't know" is never an option: it always gives whatever answer it thinks is most likely.
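That "never says I don't know" behavior falls straight out of how closed-set classifiers are built. A minimal sketch (the labels and scores below are made up for illustration, not from any real model): the final softmax layer always produces probabilities over the known classes that sum to 1, so even an unrelated input gets mapped to a "least-bad" known label.

```python
import numpy as np

# Hypothetical label set: a classifier trained only on these cars
# has no "none of the above" output to fall back on.
LABELS = ["Twin Mill", "Bone Shaker", "Deora II"]

def softmax(logits):
    e = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return e / e.sum()                   # probabilities always sum to 1

def classify(logits):
    """Return the most likely known label and its probability."""
    probs = softmax(np.asarray(logits, dtype=float))
    return LABELS[int(np.argmax(probs))], float(probs.max())

# A photo of an actual Hot Wheels car: one logit dominates,
# so the model answers with high confidence.
print(classify([8.0, 1.0, 0.5]))

# A photo of a shoe: all logits are low and nearly equal, yet the
# softmax still yields a "winner" -- the model names a car anyway.
print(classify([0.4, 0.3, 0.2]))
```

Real systems can bolt on a confidence threshold or open-set detection to approximate "I don't know", but the base architecture itself only ever ranks the classes it was trained on.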

[–] [email protected] 0 points 7 months ago* (last edited 7 months ago)

edit: oops accidental post

[–] [email protected] 0 points 7 months ago

okay? so i am quite aware of all of this already; none of this info is new.

my question is still, “what’s wrong with lumping all of these technologies together as ‘AI’ when all of them are ineffective at identifying mushrooms (and certain other tasks)?”