- cross-posted to:
- technology@lemmy.world
AI-infused hiring programs have drawn scrutiny, most notably over whether they end up exhibiting biases based on the data they’re trained on.
Isn’t the whole point of AI decision-making to provide plausible deniability for this sort of thing?
Yes, but if you train an AI on racist/sexist data, it will naturally reproduce those same biases.
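For illustration, here’s a minimal sketch of that effect (entirely synthetic data, made-up feature names, and scikit-learn as the toolkit — all assumptions of mine, not anything from the article). A model trained on historical decisions that favored one group learns to favor that group even when candidates are equally qualified:

```python
# Sketch: a classifier trained on biased historical hiring decisions
# reproduces that bias. Synthetic data and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Candidates with identical skill distributions; "group" is a protected
# attribute (0 or 1) that should be irrelevant to hiring.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical labels: past decisions favored group 0 regardless of skill.
hired = (skill + 1.5 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.75

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Equally skilled test candidates get different predictions depending
# on group membership alone -- the model has learned the bias.
test_skill = np.zeros(100)
for g in (0, 1):
    X_test = np.column_stack([test_skill, np.full(100, g)])
    print(f"group {g}: predicted hire rate = {model.predict(X_test).mean():.2f}")
```

Run it and group 0 gets hired at a far higher predicted rate than group 1 for identical skill, which is exactly the plausible-deniability problem: the bias is now laundered through “the model said so.”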
Depends on how the law is applied…
Kinda like when a self-driving car kills someone: who is liable — the driver, the manufacturer, or the seller?
I guess another option is that you pay for insurance and the insurer takes on the liability.