Ever watch a movie or show where robbers have just pulled off one of the biggest heists in history and are on the run? The scene cuts to the command centre, where officers are mashing their keyboards, scrambling to pinpoint where the crew could be hiding, usually by running a “deep” scan with their state-of-the-art facial recognition software. We all know that is entertainment, and that a real program would never work that effortlessly. Or so we think. I strongly believe that even the most up-to-date facial recognition software still fails to do the job it was assigned.
A facial recognition program aims to track or gather information about a person, whether to monitor minorities or groups for advertising, to inform lawmaking, or to assist the police. It serves different purposes depending on the holder’s motives and who they work for. For something like this to function properly, it has to have enormous amounts of data stored in its servers and databases. The more data it has, the stronger the results, but for the data to be reliable it must cover every category that describes a person. To get the best results from these data sets, you have to include factors such as gender, sexuality, and race.
Programs are only as reliable as the data they are given, and when that data falls short, more should be added. The study conducted by Wenying Wu, Pavlos Protopapas, Zheng Yang, and Panagiotis Michalatos, titled “Gender Classification and Bias Mitigation in Facial Recognition,” tested an existing database for gender classification and facial recognition. The purpose of the testing was to determine whether facial recognition programs can recognize these kinds of factors, so that officials would not offend minority groups or other parties. Their solution was simply to add more information to the data set, such as more people of color and other groups that were poorly represented; they even included a category for other genders, covering gender-fluid, non-binary, and genderqueer people. To no one’s surprise, the results showed that the program with more information classified people into their respective groups with sharper accuracy.
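To make that idea concrete, here is a minimal sketch (in Python, and not the authors’ actual code) of how accuracy could be checked group by group: you score the classifier separately for each demographic group, and a group that is poorly represented in the training data will typically come out with a lower score. The field names and sample records below are hypothetical placeholders.

```python
# Minimal sketch of per-group accuracy, assuming records that carry a group label,
# a true gender label, and the classifier's prediction. Not the study's real data.
from collections import defaultdict

def per_group_accuracy(records):
    """records: list of dicts with keys 'group', 'true_gender', 'predicted_gender'."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["predicted_gender"] == r["true_gender"]:
            hits[r["group"]] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Toy example: the classifier does noticeably worse on the under-represented group.
sample = [
    {"group": "well_represented", "true_gender": "woman", "predicted_gender": "woman"},
    {"group": "well_represented", "true_gender": "man", "predicted_gender": "man"},
    {"group": "under_represented", "true_gender": "non-binary", "predicted_gender": "man"},
    {"group": "under_represented", "true_gender": "woman", "predicted_gender": "man"},
]

print(per_group_accuracy(sample))
# {'well_represented': 1.0, 'under_represented': 0.0}
```

A gap like the one in this toy output is exactly what adding better-represented data is meant to close.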
Why is this a problem? Government agencies rely on these kinds of programs, and a program that is also a liability could endanger someone’s life. Say a killer is on the loose and the FBI is using facial recognition to find them, scanning the whole city for anyone who resembles the suspect’s photo. They find a match, only to realize they have the wrong person; now they have not only frightened or ruined someone, they have misidentified a person who had nothing to do with the crime. The data fed into these servers is for the most part the same kinds of pictures; it rarely has diversity, and when diverse faces are included, it is only by a small percentage.
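To see why even a tiny error rate matters at city scale, here is a rough back-of-the-envelope sketch; the numbers (one million scanned faces and a 0.1% false-match rate) are hypothetical illustrations, not figures from any agency or vendor.

```python
# Back-of-the-envelope sketch with hypothetical numbers, not real agency figures.
city_faces = 1_000_000        # faces scanned across the city
false_match_rate = 0.001      # assume a 0.1% chance an innocent person is wrongly matched

expected_false_matches = city_faces * false_match_rate
print(f"Expected innocent people flagged: {expected_false_matches:.0f}")
# Expected innocent people flagged: 1000
```

Even under these generous assumptions, a single city-wide sweep could flag a thousand people who had nothing to do with the crime.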
You would think an invention that can find you just from your face in a database would protect you, when in reality it can do more harm than good, especially if you are part of the LGBTQ+ community and are being misidentified or misgendered. An error like this can be an insult to an entire group, one that could spark a revolt over gender equality or the right to one’s identity. It is the little errors that can set off a domino effect of people being treated as inferior, or looked at as trash.
Say we do add more information about every kind of person across races, sexualities, and genders: will this solve the lack of data? Perhaps, but it won’t stop other factors such as bias toward a race or gender; there is always someone who thinks whatever they are is superior to everyone else. This is a big problem in facial recognition because personal opinions can slip into the programming. If someone builds on what they believe rather than what is right, they can skew the results the system produces.
Daniel C. Wood’s article “Facial Recognition, Racial Recognition, and Clear and Present Issues With AI Bias” claims that “if society works together to work on race and equality then the steps towards AI bias could be changed.” The author raises the issue of how apps such as Twitter reveal a form of racial bias. In one case, an African American man noticed that he and a coworker kept having their faces cropped out of photos, and when they brought it to Twitter, the company tried to talk its way out of it while offering no evidence that there was no racial bias. A powerful company like the one that runs Twitter holds these data sets, yet it allows these kinds of problems to occur and gives no response that owns up to its mistakes or offers a solution that would satisfy both parties. Some have said the best way around facial and racial bias would be to take better pictures, since poor lighting confuses these programs when it is time to record a person’s information. This would be a decent idea except that if the lighting is too bright, a person’s true skin color is not captured; they appear as someone of a lighter tone. Can there truly be a solution to end AI bias, or is it something that will always remain?
Programs that promise so much usually fail to do what they are meant to do, so we are left to live with them and hope that one day there will be a solution. For now, that means dealing with the conflicts of being misidentified or misgendered due to the lack of data fed into these programs’ databases, which makes them less reliable by the second.
References
Wu, W., Protopapas, P., Yang, Z., & Michalatos, P. Gender Classification and Bias Mitigation in Facial Recognition. Published online July 6, 2020.
Wood, Daniel C. Facial Recognition, Racial Recognition, and Clear and Present Issues With AI Bias. Robotics, Artificial Intelligence & Law, May–June 2021, Vol. 4, No. 3.