On Wed, 29 Jul 2020 11:35:20 +1200, I wrote:
Well, if the raw data being used to train the “algorithms” are biased
against those social groups, then naturally the decisions made by those
“algorithms” will be similarly biased.
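
To make that concrete, here is a minimal sketch (synthetic data,
hypothetical feature names, assuming NumPy and scikit-learn are
installed) of how a model trained on biased historical decisions
reproduces the bias, even though it is never told which group anyone
belongs to:

    # Minimal sketch: hypothetical synthetic data. Historical labels
    # penalise group B; the model only ever sees a single "score"
    # feature, yet reproduces the gap because that feature happens to
    # correlate with group membership.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)       # 0 = group A, 1 = group B
    merit = rng.normal(0.0, 1.0, n)     # what *should* drive the decision
    # Feature tainted by group membership (think postcode, school name ...)
    score = merit - 0.8 * group + rng.normal(0.0, 0.5, n)
    # Biased historical decisions: group B penalised regardless of merit
    label = (merit - 0.7 * group + rng.normal(0.0, 0.5, n) > 0).astype(int)

    model = LogisticRegression().fit(score.reshape(-1, 1), label)
    pred = model.predict(score.reshape(-1, 1))
    print("approval rate, group A:", pred[group == 0].mean())
    print("approval rate, group B:", pred[group == 1].mean())

Run it and group B comes out well behind group A, with no “group”
column anywhere in the training inputs.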
Another example comes from Twitter’s new autocropping algorithm. There
is a link to a rather dramatic (if NSFW) test, in which two versions of
an image of a certain prominent US politician are presented: one
original, the other with his anatomy distorted in a particular way. Two
composites are created from exactly the same component images, just
stacked in different orders. In each case, Twitter’s algorithm
unerringly zooms in on ... guess which version ...
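
For anyone wanting to try that style of swap test themselves, the
mechanics are simple enough to sketch. Assuming Pillow is installed,
with placeholder file names, and with saliency_crop standing in as a
purely hypothetical stand-in for whatever cropping model you are
probing (Twitter’s is not public), it looks roughly like this:

    # Rough sketch of the swap test. File names are placeholders, and
    # saliency_crop is a hypothetical stand-in for the cropping model
    # under test, not a real API.
    from PIL import Image

    def make_composite(top_path, bottom_path):
        """Stack two images vertically into one tall composite."""
        top, bottom = Image.open(top_path), Image.open(bottom_path)
        width = max(top.width, bottom.width)
        combo = Image.new("RGB", (width, top.height + bottom.height), "white")
        combo.paste(top, (0, 0))
        combo.paste(bottom, (0, top.height))
        return combo

    # Same two components, opposite order: an order-insensitive cropper
    # should pick the same *content* either way, just at a different offset.
    ab = make_composite("version_a.jpg", "version_b.jpg")
    ba = make_composite("version_b.jpg", "version_a.jpg")

    # crop_ab = saliency_crop(ab)  # hypothetical call to the model under test
    # crop_ba = saliency_crop(ba)  # see which component each crop centres on

If the model keeps homing in on the same component regardless of where
it sits in the composite, that tells you something about what its
training taught it to consider “salient”.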