this post was submitted on 25 Oct 2024
77 points (100.0% liked)
TechTakes
1373 readers
120 users here now
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
founded 1 year ago
you are viewing a single comment's thread
view the rest of the comments
I don't understand the title. LLM hallucinations have nothing to do with JAQing off.
Problem is, it wasn't a hallucination - it was referencing a paper that has been debunked. These aren't made-up numbers; they're VERY specific numbers that come from a VERY specific paper.
This one, if I'm not mistaken: https://www.sciencedirect.com/science/article/abs/pii/S0160289610000450 -- created by Richard Lynn, a Nazi sympathizer, and the Pioneer Fund.
The problem is that this paper also managed to get cited more than 22,000 times, creating a feedback effect that reinforced the AI's training.
Okay, but it's still got nothing to do with the dishonest rhetorical technique called "JAQing off" (a.k.a. "Just Asking Questions," a.k.a. "sealioning").
It's kind of a ... symptom ... of the community we're in. I wouldn't read into it too deeply.
I think the usual output from the AI Overview (or at least the goal) is to give a long and ostensibly Fair and Balanced summary. So in this case it would be expected to throw out "some say that people from Australia are extra dumb because of these studies, but others contend that those studies were badly performed" or whatever. Restating the question in more words to represent both sides so that it can pretend not to be partisan.
Let me be more clear about this: an LLM trying to answer a question (successfully or otherwise) is doing basically the opposite of a human asking questions (disingenuously, as in "JAQing off," or otherwise).
I wasn't trying to solicit comments trying to explain what the LLM was doing; my point was simply that OP is confused and used a term incorrectly in the title.
i like turtles
It's a reference to the fact that the kind of person who would try to justify this sort of race science is also the kind of person who is "just asking questions." Combined with the tech industry's tepid "it's just a tool, it's not inherently evil" bullshit, I think OP's point is obvious to anyone who isn't a pedant deliberately acting in bad faith.
you may wish to read the sidebar