What went wrong
Ever watch a movie or show where robbers have pulled off one of the biggest heists in history and are on the run, and then we cut to the command centre where the operation is taking place and officers are mashing keyboards, scrambling to pinpoint where the thieves could be hiding? They usually run a "deep" scan using their state-of-the-art facial recognition software. We all know that is for entertainment purposes, and that no real program would work that easily. Or so we think; I strongly believe that even the latest, most up-to-date facial recognition software still fails to do the job it was assigned.
A facial recognition program aims to track or gather information about a person, whether to monitor minorities or other groups for advertising, to shape laws, or to assist the police. Facial recognition serves multiple purposes depending on the holder's motives and who they work for. For something like this to fully function, it has to have enormous amounts of data stored in its servers and databases. The more data it has, the stronger the results, but for the data to be reliable it must cover every aspect or category that could describe a person. To achieve maximum results, these data sets have to include personal attributes such as gender, sexuality, and race.
Programs rely on the data they are given, but that data is rarely complete; more of it should be added. The study conducted by Wenying Wu, Pavlos Protopapas, Zheng Yang, and Panagiotis Michalatos, titled "Gender Classification and Bias Mitigation in Facial Images," tested an existing database for gender classification and facial recognition. The purpose of the testing was to determine whether facial recognition programs can recognize these kinds of factors, so that officials would not offend minority groups or other parties. The solution was simply to add more information to the data set, such as more people of color and other groups that were poorly represented; they even included a category for people of other genders, including gender-fluid, non-binary, and genderqueer people. To no surprise, the results showed that the program with more information was sharper at placing a person in their respective group.
Why is this a problem? The problem is that government agencies rely on these kinds of programs, and a tool that is also a liability can endanger someone's life. Let's say a killer is on the loose and the FBI is using facial recognition to find them, scanning the killer's picture against cameras all over the city for anyone who resembles them. They find a match, only to realize they have the wrong person; now they have not only frightened someone, they have misidentified a person who had nothing to do with the crime. The data fed into these servers is, for the most part, the same kinds of pictures; diversity is rare, and when diverse faces are included it is only by a small percentage.
You would think an invention that could find you just by having your face in a database would protect you, when in reality it could do more harm than good, especially if you are part of the LGBTQ+ community and are being misidentified or misgendered. An error like this can be an offense or insult to the group in question, the kind that could spark a revolt for gender equality or equal rights for one's identity. It's the little errors that can start an entire domino effect of people being treated as inferior or looked at as trash.
Say we add more information about every kind of person across races, sexualities, and genders; will this solve the lack of data? Perhaps, but it won't stop other factors such as bias toward a race or gender; there is always someone who thinks whatever they are is superior to everyone else. This is a big problem in facial recognition, because personal opinions can slip into the programming: if someone builds on what they believe rather than what is right, the results can be contradicted when the system is used.
Daniel C. Wood's article "Facial Recognition, Racial Recognition, and Clear and Present Issues With AI Bias" claims that "if society works together to work on race and equality then the steps towards AI bias could be changed". The author opens with how apps such as Twitter reveal some form of racial bias. In one case, an African American man noticed that he and a coworker kept getting their faces cropped out of pictures, and when the issue was taken to Twitter, the company tried to talk its way out of it while providing no evidence that there was no racial bias. A powerful company like Twitter holds these data sets, yet it allows these kinds of problems to occur and offers no response that owns up to its mistakes or proposes a solution that would satisfy both parties. Some said the best way around facial and racial bias would be to take better pictures, since poor lighting confuses this software when it records what a person's information may be. That would be a decent idea, except that if the lighting is too bright a person's true skin tone would not be captured; they would appear as someone of a lighter tone. Can there truly be a solution that ends AI bias, or is it something that will always remain?
Programs that carry high expectations usually fail to do what they are meant to do, so we are left to deal with it while hoping that one day there will be a solution. For now, that means living with the conflicts of being misidentified or misgendered due to the lack of data fed into these programs' databases, which makes them less reliable by the second.
It's the year 2023, and the number of people coming out, wanting to be identified as something other than a man or a woman, or choosing to transition has increased. It is said that about 5 percent of young adults, ages 18 to 29, are transgender or of a different sexual orientation. As society changes, new communities form and the range of sex and gender identities continues to grow, but with these changes you have to be careful when identifying someone. You don't want to offend someone by getting their gender or sexual orientation wrong; that is why facial recognition, and the advancement of this tool, can not only threaten someone's identity but change a life for the worse.
In Wenying Wu, Pavlos Protopapas, and Zheng Yang's "Gender Classification and Bias Mitigation in Facial Images" study, the authors thoroughly explain an experiment conducted in 2017 in which deep neural networks were used to detect the sexuality of white men. This controversial research implied that the facial images of the LGBTQ population had distinct characteristics compared to heterosexual groups. The authors argue that misgendering or misidentifying people can deepen their sense of being socially marginalized. To be misidentified by facial recognition or by an AGRS (Automated Gender Recognition System) not only follows the stigma that there are only two genders, it reinforces gender and sexuality standards overall.
Seeing that these programs were faulty, the researchers trained a biased binary gender classifier as a baseline on several different datasets, then built an ensemble transfer-learning model using logistic regression and AdaBoost. The results were striking: the ensemble mitigated the algorithmic biases of the baseline and achieved a selection rate of 98.46%. The work showed that facial recognition can't be 100% accurate and will have limitations when trying to guess someone's orientation, but it can be improved, and if more data is fed into the database the results will get stronger.
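To make the approach concrete, here is a minimal sketch of an ensemble classifier in that spirit: logistic regression and AdaBoost combined by soft voting, plus a per-group selection-rate check. This is not the authors' actual pipeline; the embeddings, labels, and demographic groups below are random placeholders standing in for features that would normally come from a pretrained face network.

```python
# Minimal sketch of an ensemble gender classifier: logistic regression +
# AdaBoost combined by soft voting. All data here is synthetic placeholder
# data, not the study's datasets or features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 128))         # placeholder face embeddings
y = rng.integers(0, 2, size=2000)        # placeholder binary gender labels
groups = rng.integers(0, 2, size=2000)   # placeholder demographic groups

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, groups, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("adaboost", AdaBoostClassifier(n_estimators=100)),
    ],
    voting="soft",                       # average the predicted probabilities
)
ensemble.fit(X_tr, y_tr)
preds = ensemble.predict(X_te)
print("accuracy:", round(accuracy_score(y_te, preds), 3))

# Selection rate = fraction of a group assigned to the positive class;
# large gaps between groups are one common signal of algorithmic bias.
for g in (0, 1):
    print(f"selection rate, group {g}: {preds[g_te == g].mean():.3f}")
```

The point of reporting a selection rate per group is that a classifier can look accurate overall while still treating one group much worse than another, which is exactly the kind of bias the study set out to mitigate.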
The most common scenario where facial recognition is put to the test, to see whether it can accurately find someone from their face or photo, is a crime scene. Police use the software to go through surveillance footage and scan their database of mugshots to pinpoint a prime suspect. Even though this can cut hours of combing through evidence, it can create more problems by matching the wrong person simply because of the lighting at the scene or the quality of the picture captured by the camera. Something like this can put someone through trauma when officers show up at their doorstep even though they were innocent the whole time; misidentification causes real harm just because the software assumed the suspect's facial features.
When you look at facial recognition, there are two common errors: the false positive and the false negative. According to Brian E. Finch's "Addressing Legitimate Concerns About Government Use of Facial Recognition Technologies," "A false negative occurs when an algorithm fails to return a matching image despite being in the defined set…. The rate of false negatives varies greatly among proprietary algorithms." Imagine officials relying on such programs and making terrible calls based on what the computer said; this opens the door to miscarriages of justice. The other error is the false positive: "A "false positive" occurs when the image of one individual is matched to the biometric characteristics of an entirely different person, resulting in a misidentification. The consequences of a false positive in a one-to-many system can be especially serious, including leading to the mistaken arrest of an innocent person based largely, if not entirely, on the misidentification." What needs to be done so we don't keep having the same problems when using advanced technology?
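To show where those two errors come from, here is a small illustrative sketch of one-to-many matching with a similarity threshold. The embeddings, names, and threshold are invented stand-ins, not output from any real face recognition system.

```python
# Illustrative sketch of one-to-many face matching: a probe face is compared
# against a gallery, and any score above a threshold counts as a "match".
# Everything here is synthetic placeholder data.
import numpy as np

rng = np.random.default_rng(1)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Gallery of enrolled identities (e.g., a mugshot database).
gallery = {f"person_{i}": rng.normal(size=64) for i in range(100)}

# A probe image of person_7, degraded by noise (poor lighting, low resolution).
true_identity = "person_7"
probe = gallery[true_identity] + rng.normal(scale=2.0, size=64)

THRESHOLD = 0.5
scores = {name: cosine(probe, emb) for name, emb in gallery.items()}
returned = [name for name, s in scores.items() if s >= THRESHOLD]

# False negative: the true identity scores below the threshold and is missed.
if true_identity not in returned:
    print("false negative: the true identity was not returned")
# False positive: someone other than the true identity is returned as a match.
for name in returned:
    if name != true_identity:
        print("false positive: wrong person returned:", name)
print("score for the true identity:", round(scores[true_identity], 2))
```

Raising the threshold trades false positives for false negatives and vice versa, which is part of why error rates vary so widely among proprietary algorithms.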
Could the errors of faulty facial recognition software be purely technological, or is something deeper going on? Halley Sutton reported on police officers in NYC who tampered with facial recognition inputs to get their suspect. In the paper, Sutton explains, "The report found that the department was editing photographs and uploading photographs of celebrity look-alikes into the software in order to find suspects." The case further revealed that "The report also found that police officers edited photographs to make them appear more like a mugshot by replacing facial features with those of a model found during a Google search." Not only is this unethical, it is unlawful. Imagine being photoshopped to look like a criminal, then finding out the officers used unreliable references to make the search seem more accurate. The worst part is yet to come: even if you are misidentified by facial recognition and get in trouble, there is almost no way to get help.
In a text from Kaitlin Jackson of the NACDL, it is said, "The police could rely on a psychic, take tips from unreliable informants, or pull photos out of mug shot books at random. All of those methods would pass constitutional muster because a defendant has no legal right to keep his likeness out of an identification procedure." If you go to court over a misidentification, you simply start at a disadvantage, because under the law the conclusions the officers draw are weighed more heavily than your word.
The procedure to challenge this would be a huge hassle, not only for the person's sake but for the case itself. The procedure goes along the lines of, "the court would need to test the scientific validity of FRS at a hearing. At the end of the hearing, if the court found FRS to be scientifically reliable, then the eyewitness identification should be admitted…. the outcome of the hearing might be that FRS is unreliable. If FRS frequently selects look-alikes instead of the true perpetrator, then a real danger of misidentification exists in presenting those look-alikes to human eyewitnesses for identification. In that scenario, the remedy the defense should seek is suppression of the eyewitness identification because the risk of misidentification is so great." There is so much to follow up on in this situation: first you are misidentified by faulty technology with too little data behind it, then you are wrongfully jailed and sent to court to appeal and regain your innocence, and along the way you have to comply with every procedure to prove you weren't at the crime scene. We should not rely on technology that has only recently been introduced to the field, and if it is going to be used, its databases should be filled with real information, not just pictures of the same people with similar features, and should include categories that recognize different identities and orientations.
What if facial recognition actually worked and could properly record or relay information without misidentifying what is in front of the camera? Would this be beneficial, or would nothing change? Facial recognition has a constant problem of not being able to make out what it's looking at, so when it tries to give an answer it tends to give a response that does not match what the person actually goes by. One could claim that facial recognition is in fact fairly stable, reliable, and safe; famous companies such as Amazon support this idea.
Amazon has a section of its website dedicated to facial recognition, covering how it works and how the programs and devices are used. There it states, "Facial recognition algorithms have near-perfect accuracy in ideal conditions. There is a higher success rate in controlled settings but generally a lower performance rate in the real world. It is difficult to accurately predict the success rate of this technology, as no single measure provides a complete picture." There is a flaw in this statement: they claim facial recognition has near-perfect accuracy, then undercut that claim by saying it has a lower performance rate in the real world. Does this imply that it only works in certain settings and could be a liability anywhere else? For example, if it were used on security cameras to find anyone who sped or was involved in a crime, would the result just be a blur because of bad lighting and resolution, or would it capture a clear picture of the person? A flaw like this defeats the purpose of having facial recognition; it could be something incredible for the future, but not if a simple problem such as poor lighting or misread facial features is detrimental to safety or to data collection.
Developers say they have made changes and improvements to these kinds of programs, but could they be lying, or simply trying to impress the companies that use them? An article from Innovatrics titled "How the accuracy of facial recognition technology has improved over time" claims, "The first is "education" – the better the dataset that the neural network is trained on, the better are the results. To help the machine learn better, the datasets have to be labeled correctly and checked for mistakes. The other avenue is to rely on improving computing power. Neural networks can be more precise and their outputs can be tested repeatedly to find and repair their blind spots." This makes sense to a certain degree: adding more diverse pictures of people from all parts of the world to a dataset should help. They mention education, but they never say which groups or what other kinds of information they would add to strengthen accuracy. One main problem with facial recognition and identifying a person is that the data has no groups reflecting societal change, such as people who are part of the LGBTQ community or other groups. To be accurate you must include everything, and inclusion brings its own problems, such as gender or racial bias, which have been shown to plague these datasets when the pictures are of people with similar features and no diversity. It has even sparked talk of banning these kinds of programs.
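One small, concrete piece of that "education" step is simply auditing how a labeled training set is spread across demographic groups before training. The sketch below does only that; the file names and group labels are invented for illustration, and real audits would use whatever annotation scheme the dataset actually provides.

```python
# Sketch of a dataset balance audit: count images per demographic group.
# The entries are hypothetical placeholders, not a real dataset.
from collections import Counter

dataset = [                                   # (image, demographic label) pairs
    ("img_0001.jpg", "darker-skinned female"),
    ("img_0002.jpg", "lighter-skinned male"),
    ("img_0003.jpg", "lighter-skinned male"),
    ("img_0004.jpg", "lighter-skinned male"),
    ("img_0005.jpg", "darker-skinned male"),
    ("img_0006.jpg", "lighter-skinned female"),
]

counts = Counter(label for _, label in dataset)
total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label}: {n} images ({n / total:.0%})")
# Groups that make up only a sliver of the data are the ones a trained model
# is most likely to get wrong; rebalancing them or collecting more images for
# them is the "better dataset" the article describes.
```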
Activists have been demanding that lawmakers ban facial recognition. They feel it should be removed because, they claim, photo-matching technology is inaccurate on photos of women and minorities. An argumentative article by the Security Industry Association titled "What Science Really Says About Facial Recognition Accuracy and Bias Concerns" tries to push back on a 2018 paper by a graduate researcher at the MIT Media Lab which found that the technology mainly misidentifies people of color. The article tries to disprove that claim by saying, "the paper is frequently cited as showing that facial recognition software "misidentifies" dark-skinned women nearly 35% of the time. But there's a problem: Gender Shades evaluated demographic-labeling algorithms, not facial recognition." Yet race has plenty to do with facial recognition: these systems cannot go off facial features alone, they also have to account for skin color, and people with similar shades of skin can come from any part of the world, so information such as region is crucial when determining someone's identity. Identity isn't just gender or height, it is everything used to categorize a person, including sex, race, orientation, and more.
Furthermore, the author tries to provide more information about how facial recognition has improved as a whole, and about the flaws in the research paper's experiment on the software's accuracy. The American Civil Liberties Union ran a test of Amazon's Rekognition program on members of Congress back in 2018: "it created a database of 25,000 publicly available images and ran a search against official photos of the 535 members of Congress, returning "false matches" for 28 of them. The ACLU claimed that since 11 of these matches, or 40%, were people of color, and only 20% of Congress overall are people of color, this is evidence of racial bias in facial recognition systems". This shows facial recognition can be unreliable in some instances, but after the experiment was published Amazon reran the test, this time against 850,000 images, and found zero "false matches". That simply means that with more images, the system can pinpoint an accurate match.
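The arithmetic behind the ACLU's argument is just a comparison of proportions, shown in the short sketch below. Nothing here is new data; it only restates the figures quoted above.

```python
# Restating the quoted ACLU figures as proportions: compare the share of
# false matches that fell on people of color with their share of Congress.
members_of_congress = 535
false_matches_total = 28
false_matches_people_of_color = 11
share_of_congress_people_of_color = 0.20      # per the ACLU, as quoted above

share_of_false_matches = false_matches_people_of_color / false_matches_total
print(f"people of color among false matches: {share_of_false_matches:.0%}")   # ~39%
print(f"people of color in Congress overall: {share_of_congress_people_of_color:.0%}")
print(f"overall false-match rate: {false_matches_total / members_of_congress:.1%}")
```

That gap between the two shares is what the ACLU pointed to as evidence of bias; rerunning the search against a larger gallery changed the raw counts, but not the underlying question of whether errors fall evenly across groups.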
Finally, in 2019, new statistics on facial recognition came out, and they looked different from before. The statistics declared that "According to data from the most recent evaluation from June 28, each of the top 150 algorithms are over 99% accurate across Black male, white male, Black female and white female demographics. For the top 20 algorithms, the accuracy of the highest performing demographic versus the lowest varies only between 99.7% and 99.8%". This would be encouraging, but if you think about it, other groups were never included or mentioned; they fail to say how people from Asia or South America fare when put through these facial recognition programs. We are left with the idea that facial recognition could be something if it had more information, or it could just be that facial recognition isn't trustworthy and has too many flaws to correct.
References
Wu W., Protopapas P., Yang Z., Michalatos P. | Gender Classification and Bias Mitigation in Facial Images | Published online July 6, 2020
Daniel C. Wood | Facial Recognition, Racial Recognition, and Clear and Present Issues With AI Bias | Robotics, Artificial Intelligence & Law | May–June 2021, Vol. 4, No. 3
Brian E. Finch | Addressing Legitimate Concerns About Government Use of Facial Recognition Technologies | Published October 30, 2020 | Via The Heritage Foundation
Halley Sutton | Report finds department abused facial recognition software | Published 2019 | Wiley Periodicals, Inc., A Wiley Company
Kaitlin Jackson | Challenging Facial Recognition Software in Criminal Court | July 2019 | Provided by NACDL
Amazon | What is Facial Recognition? | Published online | Amazon
Barbara Rusnáková | How the accuracy of facial recognition technology has improved over time | Published online | Innovatrics
Jake P., Senior Director, Government Relations, SIA, and David R., Chief Operating Officer & General Counsel, Rank One Computing | What Science Really Says About Facial Recognition Accuracy and Bias Concerns | July 23, 2022 | Security Industry Association