jasperty

joined 1 year ago
[–] [email protected] 12 points 1 year ago (2 children)

wrong place for this. joint probabilities joke was kinda fire though

1. Before 2030, do you consider it more likely than not that current AI techniques will scale to human level in at least 25% of the domains that humans can do, to average human level.

There is no set of domains over which we can quantify to make statements like this. "at least 25% of the domains that humans can do" is meaningless unless you willfully adopt a painfully modernist view that we really can talk about human ability in such stunningly universalist terms, one that inherits a lot of racist, ableist, eugenicist, white supremacist, ... history. Unfortunately, understanding this does not come down to sitting down and trying to reason about intelligence from techbro first principles. Good luck escaping though.

Rest of the questions are deeply uninteresting and only become minimally interesting once you're already lost in the AI religion.

149 points · what (i.imgur.com)
[–] [email protected] 6 points 1 year ago (2 children)

I am just now learning about Urbit 🤔

[–] [email protected] 5 points 1 year ago (3 children)

Omfg I have a coworker who writes stuff like this it's actually uncanny

[–] [email protected] 8 points 1 year ago

not gonna lie i clicked to see if they mentioned any games im good at. ended up both disappointed they didn't and disappointed at wtf im even reading.

i propose we measure people based on how much they are able to enjoy sims 4.

[–] [email protected] 9 points 1 year ago (2 children)

The next comment is so peak tech hubris to me.

It's "just" predicting the next token so it means nothing

This form of argument should raise red flags for everyone. It is an argument against the possibility of emergence, that a sufficient number of simple systems cannot give rise to more complex ones. Human beings are “just” a collection of cells. Calculators are “just” a stupid electric circuit.

The fact is, putting basic components together is the only way we know how to make things. We can use those smaller components to make a more complex thing to accomplish a more complex task. And emergence is everywhere in nature as well.

This is the part of the AGI Discourse I hate, because anyone can approach it with aesthetics and analogies from any field at all to make any argument about AI, and it's just mind-grating.

This form of argument should raise red flags for everyone. It is an argument against the possibility of emergence, that a sufficient number of simple systems cannot give rise to more complex ones. Human beings are “just” a collection of cells. Calculators are “just” a stupid electric circuit.

I've never seen a non-sequitur more non. The argument is that predicting the next term is categorically not what language is. That is, it's not that there is nothing emerging, but that what is emerging is just straight up not language.

The fact is, putting basic components together is the only way we know how to make things. We can use those smaller components to make a more complex thing to accomplish a more complex task. And emergence is everywhere in nature as well.

"Look! This person thinks predicting the next token is not consciousness. I bet they must also not believe that humans are made of cells, or that many small things can make complex thing. I bet they also believe the soul exists and lives in the pineal gland just like old NON-SCIENCE PEOPLE."

[–] [email protected] 9 points 1 year ago (4 children)

There isn’t a testable definition of GI that GPT-4 fails that a significant chunk of humans wouldn’t also fail

Man it's so sad how this is so so so so close to the point: they could have correctly concluded that this means GI as a concept is meaningless. But no, they have to maintain their sci-fi web of belief, so they choose to believe LLMs Really Do Have A Cognitive Quality.

[–] [email protected] 5 points 1 year ago

boooooo fucking hooooooooooooooooo

[–] [email protected] 6 points 1 year ago

I'm just as clueless. I think there are three syllogisms that tech brains orbit around.

  • AI will improve society, a better society would face climate change more effectively, so AI will help us face climate change.
    • Basically an extension of the milder claims with regards to AI improving education, health care, research, the economy, etc.
    • Very vague and feel-good and I think more a reaction to distrust of AI than an assertion of anything.
  • AI will help us better understand complex systems like climate, understanding the complexity of climate change helps us, so AI will help us face climate change.
    • Stemming from skepticism of current climate science methodology that doesn't fit with what they think science should be.
    • Secretly hoping that AI will show us climate change isn't actually even real, and that our belief in it is some byproduct of our feeble minds trying to understand something so dynamic and complex.
  • There's some magic bullet technological solution to climate change that is outside of current human ability space to invent. AI can potentially eclipse these limits and invent things we can't. So AI can invent this magic solution.
    • Hardcore AI singularity takeoff yadda yadda folks. Goes hand in hand with ideas like AI inventing microorganisms or nanobots that will take over the entire biosphere, or finding a new theory of physics that lets it teleport places, or shit like that.
    • In this POV climate is even a non-issue since AI could easily solve it but we can't easily solve how to not make this AI kill us.
 

this almost deserves to be on that website with graphs of funny correlations

[–] [email protected] 11 points 1 year ago (2 children)

dont worry once we get AGI it'll figure out how to run itself on an intel 8080 trust me i thought about it really hard
