- cross-posted to:
- energia
WaPo journalist verifies that robotaxis fail to stop for pedestrians in a marked crosswalk 7 out of 10 times. Waymo admitted that it follows “social norms” rather than the law.
The likely reason is competing with Uber 🤦
WaPo article: https://www.washingtonpost.com/technology/2024/12/30/waymo-pedestrians-robotaxi-crosswalks/
Cross-posted from: https://mastodon.uno/users/rivoluzioneurbanamobilita/statuses/113746178244368036
Training self-driving cars that way would be irresponsible, because the car would behave unpredictably and could be really dangerous. In reality, self-driving cars use AI only for the tasks it is really good at, like object recognition (e.g. recognizing traffic signs, pedestrians and other vehicles). The car uses all this data to build a map of its surroundings and tries to predict what the other participants are going to do. Then it decides whether it’s safe to move the vehicle, and the path it should take. All of these things can be done algorithmically; AI is only necessary for object recognition.
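Roughly what I mean, as a toy sketch (all names, thresholds and canned detections are made up for illustration, not anyone's actual stack): the ML stage only attaches labels to objects, and everything after that is plain rule-based logic.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str         # e.g. "pedestrian", "vehicle", "traffic_sign"
    confidence: float  # classifier confidence from the ML stage
    distance_m: float  # distance along the planned path
    moving: bool

def ml_object_recognition(sensor_frame) -> list[DetectedObject]:
    """Stand-in for the learned perception stage (camera/lidar -> labeled objects)."""
    # In a real system this is a neural network; here we just return canned data.
    return [
        DetectedObject("pedestrian", 0.93, distance_m=12.0, moving=True),
        DetectedObject("vehicle", 0.99, distance_m=40.0, moving=True),
    ]

def plan_action(objects: list[DetectedObject]) -> str:
    """Deterministic, rule-based planning on top of the ML detections."""
    for obj in objects:
        # Anything close on the path is a reason to stop, regardless of label.
        if obj.distance_m < 15.0:
            return "stop"
        # Pedestrians get a larger safety margin because they have right of way.
        if obj.label == "pedestrian" and obj.distance_m < 30.0:
            return "slow_and_yield"
    return "proceed"

if __name__ == "__main__":
    detections = ml_object_recognition(sensor_frame=None)
    print(plan_action(detections))  # -> "stop"
```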
In cases such as this, just follow the money to find the incentives. Waymo wants to maximize their profits. This means maximizing how many customers they can serve as well as minimizing driving time to save on gas. How do you do that? Program their cars to be a bit more aggressive: don’t stop on yellow, don’t stop at crosswalks except to avoid a collision, drive slightly over the speed limit. And of course, lobby the shit out of every politician to pass laws allowing them to get away with breaking these rules.
According to some cursory research (read: Google), obstacle avoidance uses ML to identify objects, and uses those identities to predict their behavior. That stage leaves room for the same unpredictability, doesn’t it? Say you only have 51% confidence that a “thing” is a pedestrian walking a bike, 49% that it’s a bike on the move. The former has right of way and the latter doesn’t. Or even 70/30. 90/10.
There’s some level where you have to set the confidence threshold to choose a course of action and you’ll be subject to some ML-derived unpredictability as confidence fluctuates around it… right?
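Something like this (threshold value is arbitrary, just to show the flip): the decision changes as the classifier output wobbles around the cut-off, even though the scene hasn't changed.

```python
YIELD_THRESHOLD = 0.5  # hypothetical cut-off for "treat it as a pedestrian"

def should_yield(p_pedestrian: float) -> bool:
    """Yield only if the classifier is confident enough it's a pedestrian."""
    return p_pedestrian >= YIELD_THRESHOLD

# Confidence fluctuating frame to frame around the threshold:
for p in (0.51, 0.49, 0.70, 0.30, 0.90):
    print(f"p(pedestrian)={p:.2f} -> yield={should_yield(p)}")
# 0.51 yields, 0.49 doesn't: same scene, opposite behavior.
```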
In such situations, the car should take the safest action and assume it’s a pedestrian.
But mechanically that’s just moving the confidence threshold to 100%, which isn’t achievable as far as I can tell. It quickly reduces to “all objects are pedestrians”, which halts traffic.
This would only apply in ambiguous situations, when the confidence levels for “pedestrian” and “cyclist” are close to each other. If there’s an object with a 20% confidence level that it’s a pedestrian, it’s probably not one. But we’re talking about the situation where you have to decide whether to yield or not, which isn’t really safety-critical.
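In other words, something like this (the margin is an arbitrary illustration value): break near-ties in favour of the safer interpretation, otherwise just take the more likely label.

```python
AMBIGUITY_MARGIN = 0.15  # hypothetical: how close counts as "ambiguous"

def effective_label(p_pedestrian: float, p_cyclist: float) -> str:
    """Pick the more likely label, but break near-ties in favour of 'pedestrian'."""
    if abs(p_pedestrian - p_cyclist) <= AMBIGUITY_MARGIN:
        return "pedestrian"  # close call: assume the safer case
    return "pedestrian" if p_pedestrian > p_cyclist else "cyclist"

print(effective_label(0.51, 0.49))  # ambiguous -> "pedestrian"
print(effective_label(0.20, 0.80))  # clearly a cyclist -> "cyclist"
```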
The car should avoid any collisions with any object regardless of whether it’s a pedestrian, cyclist, cat, box, fallen tree or any other object, moving or not.