Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis
(www.theverge.com)
I don't know how you'd solve the problem of making a generative AI create a slate of images that a) inclusively depicts people with diverse characteristics and b) accurately reflects the context of which characteristics could feasibly appear.
But that's because the AI doesn't know how to solve the problem.
Because the AI doesn't know anything.
Real intelligence simply doesn't work like this, and every time you point it out someone shouts "but it'll get better". It still won't understand anything unless you teach it exactly what the solution to a prompt is. It won't, for example, interpolate its knowledge of what US senators look like with the knowledge that all of them were white men for a long period of American history.
I'll get the usual downvotes for this, but:
"Because the AI doesn't know anything."
is untrue, because current AI fundamentally is knowledge. Intelligence fundamentally is compression, and that's what the training process does - it compresses large amounts of data into a smaller size (and of course loses many details in the process).
But there's no way to argue that AI doesn't know anything when you look at its ability to recreate a great number of facts from a small number of activations. Yes, not everything is accurate, and it might never be perfect. I'm not trying to argue that "it will necessarily get better". But there's no argument that labels current AI technology as "not understanding" without resorting to some "special human sauce", because the fundamental compression mechanisms behind it are the same as the ones behind our intelligence.
Edit: yeah, this went about as expected. I don't know why the Lemmy community has so many weird opinions on AI topics.
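To make the "training is lossy compression" point concrete, here is a toy sketch - purely illustrative, not anyone's actual method, and the corpus is made up: a character-level bigram model squeezes a repeated text into a small table of counts and then regenerates plausible but lossy text from those counts.

```python
# Toy illustration of "training as lossy compression" (hypothetical example):
# a character-level bigram model compresses a corpus into a small count table,
# then regenerates plausible but imperfect text from it.
import random
from collections import defaultdict

corpus = "the senate confirmed the nominee. the senators voted along party lines. " * 50

# "Training": count character bigrams (the table is far smaller than the corpus).
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

print("corpus size:", len(corpus), "chars")
print("model size:", sum(len(v) for v in counts.values()), "bigram entries")

# "Inference": sample from the compressed statistics; most detail is lost,
# so the output only approximates the original data.
random.seed(0)
ch, out = "t", ["t"]
for _ in range(80):
    nxt, weights = zip(*counts[ch].items())
    ch = random.choices(nxt, weights=weights)[0]
    out.append(ch)
print("".join(out))
```

The count table stands in for model weights: almost all of the corpus is thrown away, yet recognisable structure can be regenerated from what remains.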
I think you might be confusing intelligence with memory. Memory is compressed knowledge; intelligence is the ability to decompress and interpret that knowledge.
You mean like create world representations from it?
https://arxiv.org/abs/2210.13382
(Though later research found this is actually a linear representation)
Or combine skills and concepts in unique ways?
https://arxiv.org/abs/2310.17567
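As a rough illustration of what "linear representation" means in the follow-up to the first paper: a plain linear probe can read a world-state feature straight out of the hidden activations. The activations and labels below are random stand-ins, not real Othello-GPT data.

```python
# Sketch of the "linear representation" idea: if a world-state property is
# linearly decodable from a model's hidden activations, a simple linear probe
# recovers it. Activations and labels here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_model, n_samples = 128, 2000

# Pretend hidden states: a binary "board feature" is embedded along one direction.
direction = rng.normal(size=d_model)
labels = rng.integers(0, 2, size=n_samples)                 # hypothetical feature
acts = rng.normal(size=(n_samples, d_model)) + np.outer(labels, direction)

# Train the linear probe on part of the data, test on the rest.
probe = LogisticRegression(max_iter=1000).fit(acts[:1500], labels[:1500])
print("probe accuracy:", probe.score(acts[1500:], labels[1500:]))
```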
No. On a fundamental level, the idea of "making connections between subjects" and applying already available knowledge to new topics is compression - representing more data with the same amount of storage. These are characteristics of intelligence, not of memory.
You can't decompress something if you haven't previously compressed the data.
Our current AI systems are T2, and T1 during inference. They can't decide how they represent data; that would require T3 (like us), which puts them, in your terms, at the level of memory, not intelligence.
Actually, it's quite intuitive: ask Stable Diffusion to draw a picture of an accident and it will hallucinate just as wildly as a human asked to describe an accident they witnessed ten minutes ago. It takes active engagement with that kind of memory to sort the wheat from the chaff.
Where do you get this? What kind of data requires a T3 system to be representable?
I don't think I've made any claims that are related to T2 or T3 systems, and I haven't defined "memory", so I'm not sure how you're trying to put it in my terms. I wouldn't define memory as an adaptable system, so T2 would by my definition be intelligence as well.
I just did this:
Where do you see "wild hallucination"? Yeah, it's not perfect, but I also didn't do any kind of tuning - no negative prompt, positive prompt is literally just "accident".
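For context, a minimal sketch of how a generation like that could be reproduced with the Hugging Face diffusers library; the checkpoint and settings below are assumptions, not necessarily what was used here.

```python
# Minimal text-to-image sketch with Hugging Face diffusers.
# Checkpoint and settings are assumptions; the prompt is just "accident",
# with no negative prompt, as described above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("accident", num_inference_steps=30).images[0]
image.save("accident.png")
```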
It's not about the type of data but about how the data is organised and what operations run on it. I already gave you a link to Nikolić's site; feel free to read it in its entirety, but this paper has a short and sweet information-theoretic argument.
I'm trying to map your fuzzy terms to something concrete.
My mattress is an adaptable system.
All of it. Not in the AI sense but in the conventional one: none of it ever happened, and none of the details make sense. When humans are asked to recall an accident they witnessed, they report something like 10% fact (what they saw) and 90% bullshit (what their brain hallucinates to make sense of what happened). Just like human memory, the AI takes a bit of information and combines it with wild speculation into something that looks plausible, but which, if reasoning is applied, quickly falls apart.