Causal Claim – Water

You got it wrong

It’s 2023, and more people than ever are coming out, identifying as something other than a man or a woman, or choosing to transition. It is said that about 5 percent of young adults ages 18 to 29 are either transgender or of a different sexual orientation. As society changes, new communities continue to form and grow around sex and gender, and with those changes comes the responsibility to be careful when identifying someone. You don’t want to offend a person by getting their gender or sexual orientation wrong, which is why facial recognition, and the advancement of this tool, can not only threaten someone’s identity but change a life for the worse.

In Wenying Wu, Pavlos Protopapas and Zheng Yang’s study “Gender Classification and Bias Mitigation in Facial Images,” the authors thoroughly explain an experiment conducted in 2017 in which deep neural networks were used to predict the sexual orientation of white men from facial images. This controversial research implied that facial images of the LGBTQ population had distinct characteristics when compared to heterosexual groups. The authors argue that misgendering or misidentifying someone can heighten that person’s sense of being socially marginalized. Being misidentified by facial recognition or by an Automated Gender Recognition System (AGRS) not only follows the stigma that there are only two genders but also reinforces narrow gender and sexuality standards.

Seeing that these programs were faulty, the researchers trained a biased binary gender classifier as a baseline on several different datasets, then ensembled a transfer-learning model using logistic regression and AdaBoost. The results were striking: the ensemble mitigated the algorithmic biases of the baseline and achieved a selection rate of 98.46%. The work shows that facial recognition cannot be 100% accurate and will have limitations when trying to guess someone’s gender or orientation, but it can be improved; if more diverse data is fed into the database, stronger results will follow.
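To make the ensemble idea concrete, here is a minimal sketch, assuming scikit-learn, of how a logistic regression and an AdaBoost classifier might be combined and how a selection rate could be measured. This is not the authors’ actual code; the feature matrix X (standing in for face images or embeddings) and the binary labels y are invented placeholders.

# Minimal sketch of an ensemble binary gender classifier (assumes scikit-learn).
# X and y are random stand-ins for face features and binary labels.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))      # placeholder feature matrix
y = rng.integers(0, 2, size=1000)     # placeholder binary labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("adaboost", AdaBoostClassifier(n_estimators=100)),
    ],
    voting="soft",                    # average the two models' predicted probabilities
)
ensemble.fit(X_train, y_train)

# Selection rate: the share of test samples the ensemble assigns to the positive class.
selection_rate = ensemble.predict(X_test).mean()
print(f"Selection rate: {selection_rate:.4f}")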

The most common scenario where facial recognition gets tested, to see whether it can accurately find someone based on their face or photo, is a crime scene. Police use the software to go through surveillance footage and scan their database of mugshots to pinpoint a prime suspect. Even though that option could cut down hours of sifting through evidence, it can create more problems by matching the wrong person simply because of the lighting at the crime scene or the quality of the picture captured by the surveillance camera. Something like this can put an innocent person through trauma when police officers show up at their doorstep. Misidentification can cause lasting harm just because a machine assumed someone had the suspect’s facial features.
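As a rough illustration, using made-up data rather than any police system’s actual code, a one-to-many search typically compares an embedding of the probe photo against every embedding in a mugshot gallery and returns the closest match above a similarity threshold; a blurry or badly lit probe shifts its embedding and can either push the wrong person to the top or drop the true match below the threshold.

# Hypothetical sketch of a one-to-many face search using cosine similarity.
# The "embeddings" are random vectors; a real system would use a face-embedding network.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(42)
gallery = {f"mugshot_{i}": rng.normal(size=128) for i in range(500)}  # enrolled database

def search(probe, gallery, threshold=0.6):
    """Return the best-matching identity and score, or None if nothing clears the threshold."""
    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    best_name, best_score = max(scores.items(), key=lambda kv: kv[1])
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Model a degraded probe (bad lighting, low resolution) as the true embedding plus noise:
# the heavier the noise, the likelier the search misses the true match or picks someone else.
true_embedding = gallery["mugshot_17"]
degraded_probe = true_embedding + rng.normal(scale=1.5, size=128)
print(search(degraded_probe, gallery))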

When you look at facial recognition, you run into two common errors: the false negative and the false positive. According to Brian E. Finch’s “Addressing Legitimate Concerns About Government Use of Facial Recognition Technologies,” “A false negative occurs when an algorithm fails to return a matching image despite being in the defined set…. The rate of false negatives varies greatly among proprietary algorithms.” Imagine officials relying on programs like these and making terrible calls based on what the computer said; that alone opens the window to miscarriages of justice. The other error is the false positive: “A “false positive” occurs when the image of one individual is matched to the biometric characteristics of an entirely different person, resulting in a misidentification. The consequences of a false positive in a one-to-many system can be especially serious, including leading to the mistaken arrest of an innocent person based largely, if not entirely, on the misidentification.” What needs to be done so we don’t have the same problems when using advanced technology?
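To make the two error types concrete, here is a small sketch with invented counts, not figures from Finch’s article, showing how a false negative rate and a false positive rate would be computed from a set of labeled match attempts.

# Sketch: computing false negative and false positive rates from labeled comparisons.
# Each trial is (same_person, system_said_match); the counts below are purely illustrative.
trials = [
    (True, True), (True, False), (True, True), (True, True),        # genuine comparisons
    (False, False), (False, True), (False, False), (False, False),  # impostor comparisons
]

genuine = [said for same, said in trials if same]
impostor = [said for same, said in trials if not same]

# False negative: the same person, but the algorithm failed to return the match.
false_negative_rate = sum(1 for said in genuine if not said) / len(genuine)

# False positive: two different people, but the algorithm declared a match.
false_positive_rate = sum(1 for said in impostor if said) / len(impostor)

print(f"False negative rate: {false_negative_rate:.2f}")  # 0.25 with these toy counts
print(f"False positive rate: {false_positive_rate:.2f}")  # 0.25 with these toy counts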

Could the errors of faulty facial recognition software all be technological issues, or is there a deeper problem? Halley Sutton reported on NYC police officers who would tamper with facial recognition to get their suspect. In the paper, Sutton explains, “The report found that the department was editing photographs and uploading photographs of celebrity look-alikes into the software in order to find suspects.” The case further revealed that “The report also found that police officers edited photographs to make them appear more like a mugshot by replacing facial features with those of a model found during a Google search.” Not only is this unethical, it is unlawful. Imagine being photoshopped to look like a criminal, then finding out the officers used unreliable references to make the search look more accurate. The worst part is yet to come: even if you get misidentified by facial recognition and get in trouble, there is almost no way to be helped.

In a text from Kaitlin Jackson of the NACDL, it is said, “The police could rely on a psychic, take tips from unreliable informants, or pull photos out of mug shot books at random. All of those methods would pass constitutional muster because a defendant has no legal right to keep his likeness out of an identification procedure.” If you were to go to court due to a misidentification, you would simply start off at a disadvantage, because under the law anything the officers conclude is weighed more heavily than your word.

The procedure to challenge this would be a hassle, not only for the person’s sake but for the case itself. The procedure goes along the lines of, “the court would need to test the scientific validity of FRS at a hearing. At the end of the hearing, if the court found FRS to be scientifically reliable, then the eyewitness identification should be admitted…. the outcome of the hearing might be that FRS is unreliable. If FRS frequently selects look-alikes instead of the true perpetrator, then a real danger of misidentification exists in presenting those look-alikes to human eyewitnesses for identification. In that scenario, the remedy the defense should seek is suppression of the eyewitness identification because the risk of misidentification is so great.” There are so many things to follow up on when put in this situation: first you get misidentified by faulty technology working from too little data, then you get wrongfully put in jail, then you get sent to court to appeal and regain your innocence, and along the way you have to comply with every procedure to prove you weren’t at the crime scene. We should not rely on technology that has only recently been introduced to the field, and if it is being added, then its databases should be filled with information, not just pictures of the same people with similar features, and should have a category identifying different orientations.

References

Wu, W., Protopapas, P., Yang, Z., & Michalatos, P. | Gender Classification and Bias Mitigation in Facial Images | Published online July 6, 2020

Finch, B. E. | Addressing Legitimate Concerns About Government Use of Facial Recognition Technologies | Published October 30, 2020 | Via The Heritage Foundation

Sutton, H. | Report Finds Department Abused Facial Recognition Software | Published 2019 | Wiley Periodicals, Inc., A Wiley Company

Jackson, K. | Challenging Facial Recognition Software in Criminal Court | Published July 2019 | Provided by NACDL
