Causal Rewrite – Water

You got it wrong

It is 2023, and more people than ever are coming out, identifying as something other than a man or a woman, or choosing to transition. It is said that about 5 percent of young adults ages 18-29 are transgender or of a sexual orientation other than heterosexual. As society changes and these communities continue to grow, you have to be careful in how you identify someone. Nobody wants to offend a person by getting their gender or sexual orientation wrong, which is why facial recognition, and the advancement of that tool, is not only a threat to a person's identity but could change a life for the worse.

In Wenying Wu, Pavlos Protopapas, and Zheng Yang's study "Gender Classification and Bias Mitigation in Facial Images," the authors thoroughly examine an experiment conducted in 2017 in which deep neural networks were used to detect the sexuality of white men. That controversial research implied that facial images of the LGBTQ population had distinct characteristics when compared to heterosexual groups. The authors argue that misgendering or misidentifying a person can increase that person's perception of being socially marginalized. To be misidentified by facial recognition or an Automated Gender Recognition System (AGRS) not only follows the stigma that there are only two genders but also reinforces rigid gender and sexuality standards.

Seeing that these programs were faulty, the researchers trained a biased binary gender classifier as a baseline using several different datasets, then built an ensemble transfer-learning model using logistic regression and AdaBoost. The results were striking: the ensemble mitigated the algorithmic biases of the baseline and achieved a selection rate of 98.46%. The work showed that facial recognition cannot be 100% accurate and will have limitations when trying to guess a person's orientation, but it can be improved; the more data is fed into the system, the stronger the results become.
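The ensemble approach the study describes can be sketched very roughly with scikit-learn. This is an illustrative stand-in, not the authors' actual code: the synthetic feature vectors below replace the facial-image embeddings the real study used, and the exact model settings are assumptions.

```python
# Illustrative sketch (not the study's actual code): a binary gender
# classifier built as a soft-voting ensemble of logistic regression
# and AdaBoost, in the spirit of Wu, Protopapas, and Yang.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for face-image feature vectors; the real study used
# features extracted from facial images via transfer learning.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("ada", AdaBoostClassifier(random_state=0)),
    ],
    voting="soft",  # average the two models' predicted probabilities
)
ensemble.fit(X_train, y_train)
print(f"held-out accuracy: {ensemble.score(X_test, y_test):.2f}")
```

Even on this toy data the point of the essay holds: the score is high but never guaranteed to be 100%, and it depends entirely on how representative the training data is.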

The most common scenario for testing whether facial recognition can accurately find someone based on their face or photo is a crime scene. Police use the software to go through surveillance footage and scan their database of mugshots to pinpoint a prime suspect. Even though this can cut down hours of sifting through evidence, it can create new problems by matching the wrong person because of poor lighting at the crime scene or the low quality of the picture captured by the surveillance camera. Something like this can be traumatic: police officers show up at someone's doorstep when that person was innocent the whole time, all because an algorithm assumed the facial features of the suspect.

When you look at facial recognition, there are two common errors: the false positive and the false negative. According to Brian E. Finch's "Addressing Legitimate Concerns About Government Use of Facial Recognition Technologies," "A false negative occurs when an algorithm fails to return a matching image despite being in the defined set…. The rate of false negatives varies greatly among proprietary algorithms." Imagine officials relying on such programs and making terrible calls based on what the computer said; this opens the door to miscarriages of justice. The other error is the false positive: "A 'false positive' occurs when the image of one individual is matched to the biometric characteristics of an entirely different person, resulting in a misidentification. The consequences of a false positive in a one-to-many system can be especially serious, including leading to the mistaken arrest of an innocent person based largely, if not entirely, on the misidentification." What needs to be done so we don't keep having the same problems when using advanced technology?
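The two error types in a one-to-many search can be made concrete with a small sketch. The similarity scores and the threshold below are invented for illustration; real systems compute similarity from facial-feature embeddings, but the logic of when each error occurs is the same.

```python
# Hypothetical toy example: how a one-to-many face search turns
# similarity scores into a false positive or a false negative,
# depending on the decision threshold.

def match_errors(scores, true_match_index, threshold):
    """Return (false_positive, false_negative) for one gallery search.

    scores: similarity of the probe photo to each gallery mugshot.
    true_match_index: which gallery entry is actually the same person,
        or None if the person is not in the gallery at all.
    """
    best = max(range(len(scores)), key=lambda i: scores[i])
    if scores[best] < threshold:
        # No match returned: a false negative if the person WAS in the gallery.
        return False, true_match_index is not None
    # A match was returned: a false positive if it names the wrong person.
    return best != true_match_index, False

# A blurry surveillance photo: the wrong mugshot scores highest.
scores = [0.41, 0.78, 0.55]  # similarity to mugshots 0, 1, 2
fp, fn = match_errors(scores, true_match_index=2, threshold=0.6)
print(fp, fn)  # True False -> an innocent person (mugshot 1) is "matched"
```

Raising the threshold trades false positives for false negatives and vice versa, which is why, as Finch notes, the rates vary so much between proprietary algorithms.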

Could the errors of faulty facial recognition software all be technological, or is there a deeper problem? Halley Sutton reported on police officers in NYC who would tamper with facial recognition to get their suspect. In the paper Sutton explains, "The report found that the department was editing photographs and uploading photographs of celebrity look-alikes into the software in order to find suspects." The case further revealed that "The report also found that police officers edited photographs to make them appear more like a mugshot by replacing facial features with those of a model found during a Google search." Not only is this unethical, it is unlawful. Imagine being photoshopped to look like a criminal, then finding out the officers used unreliable references to make the search seem more accurate. The worst part is yet to come: even if you are misidentified by facial recognition and get in trouble, there is almost no way to get help.

In a text from Kaitlin Jackson of the NACDL, it is said, "The police could rely on a psychic, take tips from unreliable informants, or pull photos out of mug shot books at random. All of those methods would pass constitutional muster because a defendant has no legal right to keep his likeness out of an identification procedure." If you go to court over a misidentification, you start at a disadvantage, because under the law the officers' conclusions are weighed more heavily than your word.

The procedure to contest this would be a hassle for both the accused and the case itself. It goes along these lines: "the court would need to test the scientific validity of FRS at a hearing. At the end of the hearing, if the court found FRS to be scientifically reliable, then the eyewitness identification should be admitted…. the outcome of the hearing might be that FRS is unreliable. If FRS frequently selects look-alikes instead of the true perpetrator, then a real danger of misidentification exists in presenting those look-alikes to human eyewitnesses for identification. In that scenario, the remedy the defense should seek is suppression of the eyewitness identification because the risk of misidentification is so great." There are so many things to follow up on in this situation: first you are misidentified by faulty technology working from too little data, then you are wrongfully jailed, then you are sent to court to appeal and regain your innocence, and along the way you must comply with every procedure to prove you were not at the crime scene. We should not rely on technology that has only recently been introduced to the field, and if such systems are adopted, their databases should be filled with diverse information, not just pictures of the same people with similar features, and should account for different identities and orientations.

References

Wu, W., Protopapas, P., Yang, Z., & Michalatos, P. | Gender Classification and Bias Mitigation in Facial Images | Published online July 6, 2020

Brian E. Finch | Addressing Legitimate Concerns About Government Use of Facial Recognition Technologies | Published October 30, 2020 | Via The Heritage Foundation

Halley Sutton | Report Finds Department Abused Facial Recognition Software | Published 2019 | Wiley Periodicals, Inc., A Wiley Company

Kaitlin Jackson | Challenging Facial Recognition Software in Criminal Court | July 2019 | Provided by NACDL

This entry was posted in Causal Argument, Portfolio WaterDrop, Waterdrop.

5 Responses to Causal Rewrite – Water

  1. davidbdale says:

    I can gather the point of your Introduction, Water, but you don’t express it clearly.

    It’s the year 2023 and the idea of people coming out or wanting to be identified as something else other than a man or woman to even transition has increased.
    —You mention an INCREASE in people “wanting to come out.” I don’t know why the increase is important, but maybe you’re trying to build urgency.

    It is said that about 5 percent of young adults ages ranging from 18-29 are either transgender or of a different sexual orientation.
    —Since you don’t specify, we have to imagine that 5% is an INCREASE over some other number. This number is about BEING transgender or of a “different” sexual orientation. Your first claim was about WANTING TO BE IDENTIFIED. Those are different groups.

    This means that with changes being made around society, different parties are being created and communities continue to grow in the sex/gender party, but with these changes you have to be careful when identifying one.
    —Huh?
    —”changes being made around society” is completely meaningless
    —”parties being created” would mean types of people who didn’t exist before
    —”communities continuing to grow” MIGHT MEAN groups with different sexual orientations, or larger number of transgender individuals, or groups of people who “want to be identified.” All very confusing.

    You don’t want to offend someone by getting either gender wrong or sexual orientation, that’s why facial recognition and the advancement of this tool can not only be threatening towards one identity but it could change a life for the worse.
    —Of course, you and I personally don’t want to needlessly offend someone by “wrong-gendering” them. But that’s VERY different from law enforcement or government agencies having a legitimate interest in identifying crime suspects or terror suspects or immigration smugglers by gender, isn’t it? Which one matters to you here?


  2. davidbdale says:

    Build a little faith with me here, Water. Respond to this initial feedback by making substantial changes to your Introduction (and any other paragraph you think could benefit from a similar improvement in clarity), and I will return then with another round of feedback if you put this post back into Feedback Please.


  3. davidbdale says:

    P4.
    Very unclear language here, Water.
    You didn’t transition from the “gender identification” topic to the new one of “crime scene” ID matching. I had to make the leap without your help.

    There’s SUCH A BIG DIFFERENCE between using Face-Recognition to match a PERSON with HIMSELF and matching a PERSON to A GENDER. You really need to acknowledge that crucial difference and object to whatever you object to, but on a firm basis.


  4. Water says:

    – “You mention an INCREASE in people “wanting to come out.” I don’t know why the increase is important, but maybe you’re trying to build urgency.” I understand the comment about the 5 percent of young adults being either transgender or of a different sexual orientation, I could make changes whether how much the increase means. I’m confused about the comment about me mentioning “wanting to be identified differently ” and “the 5 percent increase” These correlate with one another, being identified includes sexuality orientation and gender, what was wrong with the sentence structure there?

    You appear to be unaware that you’re describing two distinct groups, WaterDrop. One group might welcome being identified as gay or trans or female; another group might deeply resent or resist being so identified.
    Do you see the difference between:
    —”people coming out or wanting to be identified as something else other than a man or woman”
    AND
    — “5 percent of young adults ages ranging from 18-29 are either transgender or of a different sexual orientation.”

    If you’re concerned that AI falsely or even accurately identifies individuals as “gay” or “trans” or “of a different sexual orientation,” then you have to acknowledge that the two groups might respond differently. You appear to want to protect SOMEONE from being CORRECTLY IDENTIFIED or FALSELY IDENTIFIED as sexually- or gender-disparate. But WHO? Are you advocating for the rights of closeted homosexuals or sexually-transient individuals to stay closeted? Or do you have different concerns, about the inaccuracies in “matching” that occur when AI gets its categories wrong?

    – “This means that with changes being made around society, different parties are being created and communities continue to grow in the sex/gender party, but with these changes, you have to be careful when identifying one. ” The five percent increase in people changing their identity build-up/have more members than previously before. Hence, if you have more people transition or identify differently you should respect their decision and make sure you don’t fail to disrespect them by calling them something that’s not associated with their preference.

    —First of all, I wonder whether there’s an ACTUAL increase in the numbers of people with complex gender identities. They might just be more willing to declare it in a more permissive or accepting social atmosphere.
    —You haven’t established the stakes here, WaterDrop.
    —Do AI face-recognition software programs ACTUALLY TRY to identify sexual orientation?
    —By any stretch, do they try to distinguish “males transitioning to or presenting as females” or the obverse?
    —Do the “subjects” of those categorizations ever FIND OUT that they’ve been correctly or incorrectly identified?
    —Is your bigger concern that they might be CORRECTLY identified, or that they might be MISIDENTIFIED?
    —What’s your biggest worry? That a straight CIS individual might be incorrectly identified as a “male transitioning to female” and denied a job out of prejudice against “deviant” sexuality or gender preference?
    —If so, you might need to “envision” that society.
    —If not, then what REALLY IS your primary concern?

    “changes being made around society” I tried to say things aren’t the same as they were 5 years ago, now we have more communities being welcomed/built on. The sentence “parties being created” I realized the sentence was meaningless so that phrase will be deleted and only consist of “This means that with changes being made around society, communities continue to grow in the sex/gender party, but with these changes, you have to be careful when identifying someone of these associated parties.” Do you have any suggestions for rewriting the paragraph?

    —Stop being a wimp and say what you mean.

    – Communities continuing to grow” MIGHT MEAN groups with different sexual orientations, or larger number of transgender individuals, or groups of people who “want to be identified.” All very confusing. what’s confusing about this?

    We don’t know who’s in the “communities” you reference. Openly gay, openly trans, or wishing to conceal those identities?

    – “But that’s VERY different from law enforcement or government agencies having a legitimate interest in identifying crime suspects or terror suspects or immigration smugglers by gender, isn’t it? Which one matters to you here?” I took the approach to show what misidentifying can do to one, the casual purpose is to show facial recognition leads to misidentification cases which can cause one to be put in jail all because a program couldn’t identify or match the face of the suspect. I do not only include the aspect in society but in the judicial environment.

    Understood, BUT what you HAVE NOT distinguished is WHAT KIND OF MISIDENTIFICATION concerns you.
    —Take law enforcement. Police enter a blurry image of a suspect into the face-recognition program looking for a match. Does that software go looking for “a gay man whose face resembles this face?” or “a person whose face resembles this face?” In other words, does “misidentifying someone” as gay put him at risk of misidentification?
    —Take employment. Human Resources runs the picture of a job applicant through a face-recognition program. Are they scanning for a match from the Justice Department looking for mug shots, prior convictions? Or are they scanning the database of “known homosexuals” looking for a judgment on whether this picture is of a gay man or not?

    Believe it or not, from your explanations so far, readers CANNOT TELL which sort of misidentification you’re worried about. And until you’re VERY CLEAR about what AI does, how the programs work, what parameters they follow in their matching, etc., we have NO IDEA what you’re objecting to.

    – How can I change the language to be clear and understandable? What could I do to transition identification to the topic of crime scene id scenario? I thought that the statement facial programs are unreliable and misidentification was clear and the shift of misidentification to being involved in a situation was kind of straightforward, It may just be me but I’m confused about what could be done to make it comprehensible for others. What’s wrong with trying to include being misgendered and misidentified, you need gender to know what a person is.

    —No, you don’t. That’s exactly the problem and the reason your claims are not clear.
I have a picture of A on the left. Match it to the same person, from 1-7, taken at a different time:

—Unless I very much misunderstand the sort of matching you're discussing, the program compares facial features of the "test" photo to the features of the random photos on the right.
—It does not seek: men who resemble the test photo, or gay men who resemble the photo, or transitioning women who resemble the photo.
—How can the program "mis-identify"? Does it say: the test subject is gay? Or does it say: the test subject matches photo 5 from the random set?


  5. Water says:

Sorry for the inactive conversation between us; I will try to keep up with the comments and ask more questions going forward.

    The help was useful and well needed

