LittleLordLimerick

joined 1 year ago
[–] [email protected] 1 points 1 year ago (1 children)

You seem to have the assumption that they’re not. And that “helping society” is anything more than a happy accident that results from “making big profits”.

It's not an assumption. There's academic researchers at universities working on developing these kinds of models as we speak.

Are you asking me whether it’s a good idea to give up the concept of “Privacy” in return for an image classifier that detects how much film grain there is in a given image?

I'm not wasting time responding to straw men.

[–] [email protected] 2 points 1 year ago

It’s a statistical model. Given a sequence of words, there’s a set of probabilities for what the next word will be.

That is a gross oversimplification. LLMs operate on much more than raw statistical probabilities. It's true that they predict the next word based on probabilities learned from training data, but they also have transformer layers that process the context provided in a prompt to tease out meaningful relationships between words and phrases.

For example: Imagine you give an LLM the prompt, "Dumbledore went to the store to get ice cream and passed his friend Sam along the way. At the store, he got chocolate ice cream." Now, if you ask the model, "who got chocolate ice cream from the store?" it doesn't just blindly rely on statistical likelihood. There's no way you could argue that "Dumbledore" is a statistically likely word to follow the text "who got chocolate ice cream from the store?" Instead, it uses its understanding of the specific context to determine that "Dumbledore" is the one who got chocolate ice cream from the store.

So, it's not just statistical probabilities; the models have an ability to comprehend context and generate meaningful responses based on that context.
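To make that concrete, here's a minimal sketch of scaled dot-product attention, the mechanism inside each transformer layer that lets a query like "who got chocolate ice cream?" weight tokens from earlier in the prompt. All the vectors below are made-up toy values chosen for illustration, not anything a real model learned:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # scaled dot-product attention: each query mixes the values of
    # every context token, weighted by query-key similarity
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores)
    return weights @ V, weights

# toy 2-d key vectors for three prompt tokens (invented values)
tokens = ["Dumbledore", "Sam", "chocolate"]
K = np.array([[1.0, 0.2],
              [0.1, 1.0],
              [0.5, 0.5]])
V = np.eye(3)  # identity values, so the output equals the attention mix

# a query vector standing in for "who got chocolate ice cream?"
q = np.array([[0.95, 0.3]])

out, w = attention(q, K, V)
# the query attends most strongly to the token whose key it matches
print(tokens[int(w.argmax())])  # → Dumbledore
```

The point isn't that a real LLM stores one vector per character name; it's that the attention weights are computed from the *specific prompt*, so "Dumbledore" can win out even though it's a rare word in the training distribution.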

[–] [email protected] 0 points 1 year ago

If enforcement means big tech companies have to throw out models because they used personal information without knowledge or consent, boo fucking hoo

A) this article isn't about a big tech company, it's about an academic researcher. B) he had consent to use the data when he trained the model. The participants later revoked their consent to have their data used.

[–] [email protected] 1 points 1 year ago (1 children)

How is “don’t rely on content you have no right to use” literally impossible?

At the time they used the data, they had a right to use it. The participants later revoked their consent for their data to be used, after the model was already trained at an enormous cost.

[–] [email protected] 0 points 1 year ago (3 children)

ok i guess you don’t get to use private data in your models too bad so sad

You seem to have an assumption that all AI models are intended for the sole benefit of corporations. What about medical models that can predict disease more accurately and more quickly than human doctors? Something like that could be hugely beneficial for society as a whole. Do you think we should just not do it because someone doesn't like that their data was used to train the model?

[–] [email protected] 1 points 1 year ago (3 children)

There’s nothing that says AI has to exist in a form created from harvesting massive user data in a way that can’t be reversed or retracted. It’s not technically impossible to do that at all, we just haven’t done it because it’s inconvenient and more work.

What if you want to create a model that predicts, say, diseases or medical conditions? You have to train that on medical data or you can't train it at all. There's simply no way that such a model could be created without using private data. Are you suggesting that we simply not build models like that? What if they can save lives and massively reduce medical costs? Should we scrap a massively expensive and successful medical AI model just because one person whose data was used in training wants their data removed?

[–] [email protected] 1 points 1 year ago (1 children)

A ghoul isn’t “an attractive and intelligent person” you absolute pumpkin.

[–] [email protected] 1 points 1 year ago (3 children)

That's not how the world works in the year 2023. Isolationism just isn't a conceivable possibility. All countries are interconnected, and what's happening in one country influences what's happening in other countries in major ways.

[–] [email protected] 1 points 1 year ago

I don't think you said anything meaningfully different from what I already said.

You do not consider the abhorrent unethical nature of certain actions as being a valid argument against taking those actions in the pursuit of establishing a communist society. The only criticism you'll entertain is that certain actions may be ineffective or inefficient at accomplishing that goal.

[–] [email protected] 1 points 1 year ago (5 children)

No worries, that’s not what makes someone a tankie

[–] [email protected] 4 points 1 year ago (7 children)

I don’t call someone a tankie based on where they post, but on what they post. If you don’t want to be called a tankie, then don’t post tankie shit.

[–] [email protected] 2 points 1 year ago (1 children)

Kind of rude of you to agree with me
