They don't discuss it here, but it's most likely a reinforcement learning model that compares successive generations of learned behavior to decide whether it's improving or not.
It would know that the ball going in the hole is "bad", and then try to avoid that happening. Each move that is "good" is then kept in a list of moves it should perform in the next generation of its plan to avoid the "bad" things. Loop -> fail -> logic build -> retry. After 6 hours, it has mapped a complete list of "good" moves to reach its final outcome.
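Roughly like this minimal tabular Q-learning sketch (the 4x4 grid, rewards, and hyperparameters here are made up for illustration; the actual project almost certainly uses something fancier):

```python
# Minimal tabular Q-learning sketch of the "loop -> fail -> logic build -> retry"
# idea. The 4x4 grid, rewards, and hyperparameters are illustrative assumptions.
import random

GRID = [
    "S...",
    ".H..",  # H = a hole ("bad"): falling in ends the attempt
    "..H.",
    "...G",  # G = the goal ("good")
]
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(pos, move):
    """Apply a move; walls just keep the agent in place."""
    r = min(max(pos[0] + move[0], 0), 3)
    c = min(max(pos[1] + move[1], 0), 3)
    cell = GRID[r][c]
    if cell == "H":
        return (r, c), -1.0, True   # "bad" outcome: penalize and retry
    if cell == "G":
        return (r, c), +1.0, True   # "good" outcome: reward it
    return (r, c), -0.01, False     # small step cost favors short paths

Q = {}  # (state, action index) -> learned value of that move

def train(episodes=5000, alpha=0.1, gamma=0.95, epsilon=0.1):
    for _ in range(episodes):            # loop
        pos = (0, 0)
        for _ in range(100):             # cap runaway episodes
            if random.random() < epsilon:
                a = random.randrange(4)  # explore a random move
            else:                        # otherwise exploit the "good" list
                a = max(range(4), key=lambda i: Q.get((pos, i), 0.0))
            nxt, reward, done = step(pos, ACTIONS[a])
            best_next = max(Q.get((nxt, i), 0.0) for i in range(4))
            old = Q.get((pos, a), 0.0)
            # logic build: nudge the move's value toward what it actually earned
            Q[(pos, a)] = old + alpha * (reward + gamma * best_next - old)
            pos = nxt
            if done:
                break                    # fail (or succeed) -> retry

train()
```

After enough retries, the table of high-value moves is effectively that "complete list of good moves".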
To answer your question: no, it would not be able to use what it learned here on a different board layout. It's building reactions to events on this one board, bound by its rules. You could use the same ruleset with another board, but it would have to learn it all over again, just as a human would.
The thing about these models is less whether they will work (it's assumed they eventually will, through trial and error) than how efficiently they will work. The number of generational cycles and retries is usually the benchmark in reinforcement learning, but they don't discuss that data here either.
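As a toy illustration of that benchmark: count how many attempts an epsilon-greedy learner needs before its recent success rate stabilizes. A three-option bandit stands in for the game here, and the payoff probabilities, epsilon, window, and threshold are all made-up assumptions:

```python
# Toy sample-efficiency benchmark: count retries until the learner is reliable.
# The bandit payoffs, epsilon, window, and threshold are illustrative assumptions.
import random

def trials_until_reliable(success_probs, epsilon=0.1, window=100, threshold=0.85):
    """Return how many attempts it took to reach a stable success rate."""
    counts = [0] * len(success_probs)
    values = [0.0] * len(success_probs)
    recent = []
    for t in range(1, 100_000):
        if random.random() < epsilon:
            arm = random.randrange(len(success_probs))   # explore
        else:                                            # exploit the best guess
            arm = max(range(len(success_probs)), key=lambda i: values[i])
        reward = 1.0 if random.random() < success_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
        recent.append(reward)
        if len(recent) > window:
            recent.pop(0)
        if len(recent) == window and sum(recent) / window >= threshold:
            return t   # this is the "number of retries" benchmark
    return None        # never got reliable within the budget

print(trials_until_reliable([0.2, 0.5, 0.95]))
```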
Yes, but that's kind of my point
We see it learn something with insane precision, but most often that is an effect of over-training. It would probably need less time to learn another layout, but it's not learning the general rules (can't go through walls, holes are bad, we want to get to X); it learns the specific layout. Each time the layout changes, it has to re-learn it.
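A toy demonstration of that, reusing the same gridworld idea as the sketch upthread (the layouts and numbers are invented for illustration): train a tabular policy on one layout, then run it unchanged on a layout where the holes have moved.

```python
# Sketch of the re-learning point: a policy trained on LAYOUT_A is run,
# unchanged, on LAYOUT_B where the holes have moved. All values here are
# illustrative assumptions, not details from the project being discussed.
import random

LAYOUT_A = ["S...", ".H..", "..H.", "...G"]
LAYOUT_B = ["S.H.", "....", "....", ".H.G"]  # same rules, holes moved
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step(grid, pos, move):
    """Apply a move on the given layout; walls keep the agent in place."""
    r = min(max(pos[0] + move[0], 0), 3)
    c = min(max(pos[1] + move[1], 0), 3)
    cell = grid[r][c]
    if cell == "H":
        return (r, c), -1.0, True   # hole: fail
    if cell == "G":
        return (r, c), +1.0, True   # goal: success
    return (r, c), -0.01, False     # step cost favors short paths

def train(grid, episodes=5000, alpha=0.1, gamma=0.95, epsilon=0.1):
    Q = {}
    for _ in range(episodes):
        pos = (0, 0)
        for _ in range(100):
            a = (random.randrange(4) if random.random() < epsilon
                 else max(range(4), key=lambda i: Q.get((pos, i), 0.0)))
            nxt, reward, done = step(grid, pos, ACTIONS[a])
            best = max(Q.get((nxt, i), 0.0) for i in range(4))
            old = Q.get((pos, a), 0.0)
            Q[(pos, a)] = old + alpha * (reward + gamma * best - old)
            pos = nxt
            if done:
                break
    return Q

def run_greedy(Q, grid):
    """Follow the learned moves with no exploration; return the final reward."""
    pos, reward = (0, 0), 0.0
    for _ in range(100):
        a = max(range(4), key=lambda i: Q.get((pos, i), 0.0))
        pos, reward, done = step(grid, pos, ACTIONS[a])
        if done:
            return reward
    return reward  # timed out wandering the new layout

Q = train(LAYOUT_A)
print("trained layout:", run_greedy(Q, LAYOUT_A))  # +1.0: reaches the goal
print("moved holes:  ", run_greedy(Q, LAYOUT_B))   # typically -1.0: falls in
```

Retraining from scratch with train(LAYOUT_B) recovers it, which is exactly the re-learning cost described above.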
It is impressive and enables automation in a lot of areas, but in the end it is still only machine learning, adapting weights to a specific scenario.