How Can the ‘Pandemic of Racism’ Be Minimized in Facial Recognition Systems?

Facial recognition is a form of artificial intelligence. Facial recognition algorithms measure the geometry of someone’s face, compare those unique measurements to a database of faces and return potential matches with varying degrees of certainty.
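The measure-compare-rank loop described above can be sketched in a few lines. This is a toy illustration, not any vendor’s actual pipeline: the 3-element lists stand in for real face embeddings (typically 128+ dimensions produced by a neural network), and the names, threshold and `find_matches` helper are all hypothetical.

```python
import math

def cosine_similarity(a, b):
    # How close two face embeddings point in the same direction (1.0 = identical).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def find_matches(probe, database, threshold=0.8):
    """Compare a probe face against every enrolled face and return
    (name, similarity) pairs above the threshold, most confident first."""
    scored = [(name, cosine_similarity(probe, emb))
              for name, emb in database.items()]
    matches = [(n, s) for n, s in scored if s >= threshold]
    return sorted(matches, key=lambda m: m[1], reverse=True)

# Toy 3-dimensional "embeddings" standing in for real face descriptors.
database = {
    "alice": [0.9, 0.1, 0.0],
    "bob":   [0.0, 1.0, 0.2],
}
probe = [0.88, 0.12, 0.01]
print(find_matches(probe, database))
```

The threshold is the critical knob: set it too low and the system emits the kind of confident false matches described below; and because similarity scores are never exactly 1.0, a “100% confidence” claim should always be treated with suspicion.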
                                            
Facial recognition can offer convenience, as it does for the millions of people who use their face to unlock their iPhones. It can also be used as a surveillance tool and in retail shops to recommend products based on demographic profiles. But one false match can lead to missed flights, lengthy interrogations, tense police encounters, false arrests or worse.
As AI embeds itself into daily technological life, privacy activists and technology enthusiasts agree that this powerful tool is here to stay, but its implementation raises complex problems as well as disagreements.

Facial recognition systems tend to exhibit the same prejudices and misperceptions held by their human programmers. Many studies have revealed that algorithms are no less biased than humans.
Privacy activists and artificial intelligence experts across the world are developing solutions to address racial and gender bias in facial recognition surveillance.

Some of the problems with facial recognition systems prevailing across the world:
· An African-American man, George Floyd, was pinned down by a police officer’s knee on his neck for several minutes because he was suspected of paying for cigarettes with a counterfeit $20 bill, and he eventually died. He was suspected because he was black. Racism is intolerable.
· A biased facial recognition system disproportionately labelled minority UCLA students and faculty as criminals.
· According to a press release, ‘The vast majority of incorrect matches were of people of colour. In many cases, the software matched two individuals who had almost nothing in common beyond their race and claimed they were the same person with 100% confidence.’
· One of the obvious problems with facial recognition systems is the tendency of the algorithms to exhibit the same prejudices and misperceptions held by human programmers.
· Facial-recognition systems misidentify people of colour more often than white people. Asian and African-American people are up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search.
· Native Americans have the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.
                                            
How Did Facial Recognition Acquire Its Racial Bias?
· Oftentimes, researchers who create facial recognition models only have access to open-source collections of images, since it can be time-consuming and costly to create their own. But open-source collections are often limited in diversity, so researchers have few diverse datasets on which to train and test their models.
· When the distribution of the training data is dissimilar to the testing or real-world data distribution, the model won’t give results as accurate as one trained on a more diverse dataset that better represents real-world conditions.
· Lack of training data diversity has led to facial recognition models overfitting and building in racial biases.
· Model bias can mean non-offending dark-complexioned citizens will be incorrectly classified and possibly arrested.
· Police officers increasingly use facial recognition for surveillance and identification to apprehend people with outstanding arrest warrants. But police facial recognition systems don’t work as well on dark-complexioned people’s faces. Police departments also have access to DMV images, so their software can also try to match against non-offending citizens.
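The distribution-mismatch problem described in the list above only becomes visible when error rates are measured per demographic group rather than in aggregate. Below is a minimal auditing sketch; the group labels, the `per_group_error_rate` helper and the toy records are all hypothetical, standing in for a real test set where each record carries a demographic tag.

```python
from collections import defaultdict

def per_group_error_rate(records):
    """records: list of (group, predicted_id, true_id) triples.
    Returns the misidentification rate per group, exposing any group
    the model performs worse on even when overall accuracy looks fine."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy audit: the model errs far more often on group "B" than group "A".
records = [
    ("A", "p1", "p1"), ("A", "p2", "p2"), ("A", "p3", "p9"), ("A", "p4", "p4"),
    ("B", "p5", "p8"), ("B", "p6", "p6"), ("B", "p7", "p0"), ("B", "p8", "p3"),
]
print(per_group_error_rate(records))  # group B's rate is 3x group A's
```

A system with 50% overall error could hide a 3x disparity like this one, which is exactly the pattern the NIST-style studies cited earlier reported.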
  
The technology has been continuously accused of being biased against certain groups of people. It's important to define mechanisms to make sure that if the technology is going to be used, it’s used fairly and accurately.

Resolving the Bias in the Technology:
The facial recognition technology currently available has racial and gender bias, as it is often trained and tested on non-diverse datasets. By following the methods below, bias can be minimized to an extent:
· Use diverse training sets: A skin-type classification system should be used rather than racial and ethnic labels, because the wide variety of skin types within those labels would otherwise not be accounted for. Collect images that are more diverse in gender and skin type, and use this dataset to test the models for racial bias.
· Create your own training sets: Researchers can minimize bias by carefully building their own training sets: thinking through the extraction methods to minimize the potential for homogeneous datasets, and evaluating how the training dataset is collected.
· Boost available databases to counter the bias: Identify the types of data for which training examples are limited, and boost their weights or their number of occurrences. Additional perturbations or noise can be added to the rarer types of data to account for the variations that may exist within them.
· Build diverse teams: Oftentimes research teams and developers test on their own images, so having a diverse team will allow racial bias to be detected early on. Having a diverse team also increases the unique perspectives in the room to combine and have better-informed applications and models. 

Key Takeaways:
· The racial bias that exists in facial recognition negatively impacts people of colour and could potentially lead to arrests of law-abiding black citizens. The ‘Pandemic of Racism’ led to George Floyd’s death in Minneapolis, USA, just because he was black and was suspected of paying with a counterfeit $20 bill.
· Racial bias in facial recognition software is largely due to non-diverse training datasets. Researchers can combat this by being intentional about diversity in the training sets they use.
· Existing open-source datasets and facial recognition models are usually not diverse, or are implemented using non-diverse datasets, so they should be evaluated carefully.
· One key learning is being able to identify that a system is biased. This requires understanding what the sources of bias can be, then validating whether the trained system actually has that bias, and deciding whether the bias is socially acceptable. For example, bias based on ethnicity is unwanted, while sensitivity to hair colour is needed to identify individuals more accurately.

The technology’s flaws are only one concern. Face recognition technology, accurate or not, can enable undetectable, persistent and suspicionless surveillance on an unprecedented scale. None of the leading companies’ systems performs with 100% accuracy. All are experimenting in real time with real humans, and so are we. Our systems have been deployed at various stores across the country and we have seen good results so far, but we are working continuously to improve the algorithms, reduce bias in recognition and be more inclusive.
