Okay? It's not like these systems are actually intelligent. Anything that differs from the majority of cases is going to be at an inherent disadvantage in being detected, right? At the volume of data these models are trained on, surely it's just a matter of statistics.
Maybe I'm wrong (and I'm surely using the wrong terminology), but it seems like that must be the case. It's not some issue of human racial bias, just a bias based on relative population sizes (what I gather is called class imbalance). Or is my understanding that flawed?
Mind you, I'm not saying it doesn't need to be remedied posthaste.
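To sketch the statistical effect I mean, here's a toy simulation (pure made-up numbers, nothing to do with any real system): two overlapping groups, one 19x bigger than the other, and a single detection threshold chosen to maximize overall accuracy. The accuracy-optimal threshold ends up sacrificing the minority group.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 1-D "detection score": majority group (n=950, label 0) vs
# minority group (n=50, label 1). The distributions genuinely overlap,
# so no threshold can separate them perfectly.
majority = rng.normal(loc=0.0, scale=1.0, size=950)
minority = rng.normal(loc=2.0, scale=1.0, size=50)

scores = np.concatenate([majority, minority])
labels = np.concatenate([np.zeros(950), np.ones(50)])

# Pick the threshold that maximizes *overall* accuracy -- roughly what a
# model trained to minimize average error does.
candidates = np.sort(scores)
accuracies = [np.mean((scores > t) == labels) for t in candidates]
best_t = candidates[int(np.argmax(accuracies))]

pred = scores > best_t
recall_majority = np.mean(pred[labels == 0] == 0)  # how often majority is handled correctly
recall_minority = np.mean(pred[labels == 1] == 1)  # how often minority is handled correctly

print(f"threshold = {best_t:.2f}")
print(f"majority recall = {recall_majority:.2f}, minority recall = {recall_minority:.2f}")
```

Because errors on the minority group barely move the overall accuracy, the chosen threshold shifts against them, and their recall comes out far lower than the majority's, with no "racial" variable anywhere in the model.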