this post was submitted on 07 Dec 2023
620 points (96.1% liked)
Technology
This (sensor fusion) is a valid issue in mobile robotics. Adding more sensors doesn't necessarily improve stability or reliability.
After a point, yes. But that point only comes once you're adding more than a second sensor type to the system. The right answer is to build a weighting scheme into your algorithm so the car can decide which sensor it trusts not to kill the driver, e.g. if the LIDAR sees the broadside of a trailer and the camera doesn't, the car should believe the LIDAR over the camera, since applying the brakes is likely the safer option compared to plowing into the obstacle at 60 mph.
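A toy sketch of that weighting idea (every name, weight, and threshold here is hypothetical, not anything a real car uses):

```python
# Toy sketch of safety-weighted sensor fusion. Each sensor reports a
# probability that an obstacle is ahead; the fused estimate is biased
# toward the LIDAR, whose missed detections are the most dangerous.

def fuse_obstacle_estimates(lidar_p, camera_p, lidar_weight=0.7, camera_weight=0.3):
    """Return a fused obstacle probability, trusting LIDAR more."""
    return lidar_weight * lidar_p + camera_weight * camera_p

def should_brake(lidar_p, camera_p, threshold=0.5):
    # Asymmetric rule: a confident LIDAR detection alone triggers braking,
    # since braking unnecessarily is safer than hitting a trailer at 60 mph.
    if lidar_p > 0.9:
        return True
    return fuse_obstacle_estimates(lidar_p, camera_p) > threshold

# LIDAR sees the broadside of a trailer, camera sees nothing:
print(should_brake(lidar_p=0.95, camera_p=0.05))  # True
```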
Yes, the solution is fairly simple in theory, but implementing it is significantly harder, which is why this is not a trivial issue to solve in robotics.
I'm not saying their decision was the right one, just that the argument that multiple sensors create noise in the decision-making is a completely valid one.
Doesn't seem too complicated... if ANY of the sensors sees something in the way that the system can't resolve, it should stop the vehicle/force the driver to take over.
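That naive "stop if any sensor reports anything" policy can be sketched in a couple of lines (the interface is hypothetical):

```python
# Sketch of the naive OR-policy: brake if ANY sensor reports an
# unresolved obstacle. `detections` is one boolean per sensor.

def naive_policy(detections):
    """Return True (brake) if any sensor flags an obstacle."""
    return any(detections)

# A single false positive from one sensor is enough to halt the car:
print(naive_policy([False, True, False]))  # True
```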
Then you have a very unreliable system that stops for no real reason all the time, causing immense frustration for the user. Is it safe? I guess, cars that don't move generally are. Is it functional? No, not at all.
I'm not advocating unsafe implementations here, I'm just pointing out that your suggestion doesn't actually solve the issue: it leaves you with a solution that isn't functional.
If they're using such unreliable sensors that they're getting false positives all the time, the system isn't going to be functional in the first place.
All sensors throw a shitload of false positives (and negatives) when used in the real world. That's why filtering and unifying data across sensors is so important, and also really hard to do while still getting a consistent and reliable result.
"Seeing an obstacle" is a high-level abstraction; sensor fusion is a lower-level problem. It's fundamentally tricky to get coherent information out of multiple sensors looking partially at the same thing in different ways. Not impossible, but the basic model is less "just check each camera" and more sheaves.
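To make the "lower-level problem" concrete, here's a minimal sketch of fusing two noisy range readings of the same object by inverse-variance weighting (the scalar Kalman update); the variances are illustrative, not real sensor specs:

```python
# Minimal low-level sensor fusion: combine two noisy measurements of the
# same distance, weighting each by the inverse of its noise variance.

def fuse_ranges(z1, var1, z2, var2):
    """Return (fused_estimate, fused_variance) for two noisy readings."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # always smaller than either input variance
    return fused, fused_var

# LIDAR: 40.0 m with low noise; camera depth estimate: 43.0 m with high noise.
d, v = fuse_ranges(40.0, 0.1, 43.0, 1.0)
print(round(d, 2), round(v, 3))  # 40.27 0.091
```

The fused estimate sits much closer to the low-noise LIDAR reading, which is the quantitative version of "believe the LIDAR over the camera" from upthread.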