Definition Rewrite – Water

Needs a Title

Ever watch a movie or show where robbers have pulled off one of the biggest heists in history and are on the run? We then get a transition to the command centre where the operation is taking place, and the officers are at their computers, scrambling and mashing the keyboard, trying to pinpoint the locations where the robbers could be hiding out. They usually run a “deep” scan using their state-of-the-art facial recognition software. We all know that, for entertainment purposes, the program is made to look like it works far more easily than it actually could. Or so we think; I strongly believe that even the latest, most up-to-date facial recognition software still fails to do the job it was assigned.

A facial recognition program aims to track or obtain information about a person, whether to keep track of minorities or groups for advertising, to shape laws, or to help the police force. Facial recognition serves multiple purposes depending on the holder’s motives and who they work for. For something like this to fully function, it has to have enormous amounts of data stored in its servers and databases. The more data it has, the stronger the results, but for the data to be reliable it must include data from every category that would describe a person. To achieve maximum results, these data sets have to include personal factors such as gender, sexuality, and race.

Programs rely on the data they are given, but that data is often not enough; more should be added. The study conducted by Wenying Wu, Pavlos Protopapas, Zheng Yang, and Panagiotis Michalatos, “Gender Classification and Bias Mitigation in Facial Recognition,” tested an existing database for gender classification and facial recognition. The purpose of the testing was to determine whether facial recognition programs can identify these kinds of factors accurately, so that officials would not offend members of minority groups or other parties. The solution was simply to add more information to the data set, such as more people of color and other groups that were poorly represented; the researchers even created a separate category for other gender identities, including gender-fluid, non-binary, and genderqueer people. To no surprise, the results showed that the program with more information identified a person’s respective group with sharper accuracy.
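
To picture what the researchers were measuring, here is a minimal, hypothetical sketch (my own illustration in Python, not the study’s actual code or data) of how a classifier’s accuracy can be compared across demographic groups:

from collections import defaultdict

def accuracy_by_group(records):
    # records: a list of (group, true_label, predicted_label) tuples
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, prediction in records:
        total[group] += 1
        if prediction == truth:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Made-up predictions: the under-represented group collects more wrong answers.
sample = [
    ("well_represented", "female", "female"),
    ("well_represented", "male", "male"),
    ("well_represented", "female", "female"),
    ("under_represented", "non-binary", "male"),        # misclassified
    ("under_represented", "non-binary", "non-binary"),
    ("under_represented", "female", "male"),            # misclassified
]

print(accuracy_by_group(sample))
# e.g. {'well_represented': 1.0, 'under_represented': 0.33}

A gap like that between the two groups is exactly what adding more diverse images to a data set is meant to close.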

Why is this a problem? The problem is that government forces rely on these kinds of programs, and for them to be a liability at the same time could endanger someone’s life. Let’s say a killer is on the loose and the FBI is using facial recognition to find them, so they use the killer’s picture to scan all over the city for anyone who resembles it. They find a match, but then they come to the realization that they have the wrong person; now they have not only frightened or harmed someone, they have also misidentified a person who had nothing to do with the crime. The data fed into these servers is, for the most part, the same kinds of pictures; it rarely has diversity, and when other groups are included it is only by a small percentage.

You would think an invention that could find you just by having your face in a database would protect you, when in reality it could do more harm than good, especially if you are someone in the LGBTQ+ community who is being misidentified or misgendered. An error like this can be an offence or insult to an entire group, one that could spark a revolt for gender equality or for equal rights to one’s identity. It’s the little errors that could start an entire domino effect of people being treated as inferior or looked at as worthless.

Say we do add more information about every kind of person of different races, sexualities, or genders. Will this solve the lack of data? Perhaps, but it won’t stop other factors such as bias toward a race or gender; there’s always that one person who thinks whatever they are is superior compared to others. This is a big problem in facial recognition because a programmer’s personal opinions could slip into the code: if someone builds from what they believe rather than what is right, they could skew the results when the program is used.

Daniel C. Wood’s article “Facial Recognition, Racial Recognition, and Clear and Present Issues With AI Bias” claims that if society works together on race and equality, then steps toward reducing AI bias could be taken. The author opens up the topic of how apps such as Twitter reveal some form of bias toward race. In one case, an African American man noticed that he and a coworker kept getting their faces cropped out of pictures, and when they took the issue to Twitter, the company tried to talk its way out of it while providing no evidence that there was no racial bias. The powerful company that runs Twitter holds these data sets, allows these kinds of problems to occur, and offers no response in which it owns up to its mistakes or provides a solution that would satisfy both parties. Some said the best route around facial and racial bias would be to take better pictures, since poor lighting confuses the software when it is time to record what a person’s information may be. This would be a decent idea, except that if the lighting is too bright then a person’s true skin color would not be captured; they would appear as someone of a lighter tone. Can there truly be a solution to end AI bias, or is it something that will always remain?

Programs that come with high expectations usually fail to do what they are meant to do, so we usually have to live with them, hoping that one day there will be a solution. With this conclusion, it only means that we have to deal with the conflicts of being misidentified or misgendered, because the lack of data fed into these programs’ databases makes them less reliable by the second.

References

Wu, W., Protopapas, P., Yang, Z., & Michalatos, P. (2020). Gender Classification and Bias Mitigation in Facial Recognition. Published online July 6, 2020.

Wood, D. C. (2021). Facial Recognition, Racial Recognition, and Clear and Present Issues With AI Bias. Robotics, Artificial Intelligence & Law, 4(3), May–June 2021.


5 Responses to Definition Rewrite – Water

  1. davidbdale says:

    Please don’t use Tags, Waterdrop.
    Only through the sheerest chance did I discover that you wanted feedback for this post. There’s a Category for that. I’ve removed your tags here and on your Definition argument.
    Thanks!


  2. davidbdale says:

    P1. Ever watch a movie or show where robbers had pulled off one of the biggest heists in history and they’re on the run, then we get a transition to the command centre where the operation is taking place and the officers are on the computers scrambling/mashing the keyboard trying to pinpoint the locations of where they could be hiding out. They usually go through a “deep” scan using their state-of-the-art facial recognition software. We all know that for entertainment purposes, that would be too easy for the program to actually work. Or so we think, I strongly believe that even the latest up-to-date facial recognition software still fails to do the job it was assigned.
    —Don’t phrase your illustration as a RHETORICAL QUESTION!
    —I will lecture you endlessly about the perils of the RQ if you wish. But you can avoid the lecture by eliminating all rhetorical questions.
    —You set up a very confusing scenario. When you instruct us to imagine officers trying to pinpoint “a location,” the last thing we imagine is facial recognition. We’re all thinking tracking device or using the suspect’s phone, etc.
    —You then use another sentence to surmise that “we all know . . . THAT WOULD BE TOO EASY,” which is very confusing. I think you mean it would be BEYOND THE CAPABILITIES of existing software. In other words, TOO COMPLICATED!
    —You try to clarify what you mean in your last sentence, but it’s too late to undo the confusion. We end the paragraph not understanding why you took us on this journey.

    So, let’s simplify:

    Ever watch a movie or show where robbers had pulled off one of the biggest heists in history and they’re on the run, then we get a transition to the command centre where the operation is taking place and the officers are on the computers scrambling/mashing the keyboard trying to pinpoint the locations of where they could be hiding out.

    —Hollywood loves facial recognition. Movies depict the technology to convince us that blurry images of faces from odd angles can be “enhanced” to provide unerring recognition of any suspect caught on camera.

    They usually go through a “deep” scan using their state-of-the-art facial recognition software. We all know that for entertainment purposes, that would be too easy for the program to actually work.

    —I don’t know what to do with this.

    Or so we think, I strongly believe that even the latest up-to-date facial recognition software still fails to do the job it was assigned.

    —In reality, the officers would probably not be able to trust whatever match the software offered.

    The paragraph:

    Hollywood loves facial recognition. Movies depict the technology to convince us that blurry images of faces from odd angles can be “enhanced” to provide unerring recognition of any suspect caught on camera. In reality, the officers would probably not be able to trust whatever match the software offered.

    If you want to introduce objections to FR tech that you’ll describe later:

    Hollywood loves facial recognition. In movies, apparently, all criminals are in the database. Innocent people have nothing to fear. Every race, gender, ethnicity is painstakingly represented to avoid prejudice. And every match is a good match. Blurry images can be “enhanced” to faithfully identify anyone. In reality, the officers would probably not be able to trust whatever match the software offered.

    Does this help, Waterdrop?
    Would you like more of the same?

    Please, WaterDrop, if you value feedback, always reply. The opportunity to revise with help is the primary value of this course, and I love the conversations, but I tire of them quickly when they become one-sided. Thanks!

    You have a lot of good material here, and I’d love to help you refine it, but you need to engage in a feedback loop to get the best of what the course can offer. Thanks!


  3. davidbdale says:

    Paragraph 2.
    A facial recognition program aims to track or attain information about a person to keep track of minorities or groups for advertising, creating laws, or helping the police force. Facial recognition serves multiple purposes depending on the holder’s motives and who they work for. For something like this to fully function it has to have enormous amounts of data stored in its servers/database. The more data it has, the stronger the results come out, but for the data to be reliable it must have data from all aspects/categories that would describe a person. With these external data sets to achieve maximum results you have to include internal factors such as gender, sexuality, and race.
    —There’s no Main Idea here. We don’t know why you transition from minority groups to holder’s motives to database size to gender, sexuality and race.
    —Remember, you got here by way of a Hollywood movie analogy. We’re all lost.

    P3. Programs rely on the data they are given but they don’t, they should add. The study conducted by Wenying Wu, Pavlos Patropapas, Zheng Yang, and Panagiotis Michalatos titled “Gender Classification and Bias Mitigation in Facial Recognition” was the testing of an existing database for gender classifications and facial recognition. The purpose of the testing was to determine whether facial recognition programs can determine these kinds of factors so that officials would not offend any of these minority groups’ other parties. The solution to this was to simply add more information to the data set such as adding more people of color and other groups that were represented poorly, they even had their own category for those of other genders, these included gender fluid people, non-binary and gender queer. To no surprise the results showed that the program with more information had sharper accuracy to what a person’s respected group would have them fall under.
    Your citation is too long. You don’t need to name BOTH the Authors and the Title.
    —Soooooo lost.
    —Your introduction had NOTHING to do with identifying a person’s gender or race. It was about matching an image TO A PARTICULAR PERSON.
    —Your source material is useful, but for an entirely different argument.

    P4. Why is this a problem? The problem is that government forces rely on these kinds of programs and for it to be a liability at the same time is a thing that could endanger one’s life. Let’s say that a killer is on the loose and the FBI is using facial recognition to find the person, so they use the person’s picture to try to scan all over the city for anyone that resembles them. They find a match but then they come to the realization of them getting the wrong person, now they had not only ruined or scared someone but they had also misidentified a person who had nothing to do with the crime. Lack of data that goes into these servers are for the most part the same pictures, they rarely have diversity and when they are included it is by a small percent.
    Same objection. Why would a facial recognition program match an image TO THE WRONG PERSON because of its lack of adequate representation of a particular group? I know the answer, but you can’t take for granted that readers will follow you here.
    —Does under-representation of a particular demographic in a visual database produce BAD MATCHES to particular individuals? You haven’t earned that position yet.

    P5. You would think an invention that could find you just by having your face on the database would protect you. When in reality it could do more harm than protect. Especially if you are someone of the lgbtq+ community where you are being misidentified or misgendered. An error like this can be an offence/insult to one respected group, that could start a revolt for gender equality or equal rights for one’s identity. it’s the little errors that could start an entire domino effect of people being treated like they are either inferior or looked at as trash.
    I have to pull you back, WaterDrop.
    —There’s a VERY RESPECTABLE argument to be made that facial-recognition software might misidentify a person’s sexuality. But that’s not the argument you started. If that’s where you want to go, begin with a different illustration.

    P6. Say we do add more information about every kind of person from different races, sexualities or gender, will this solve the lack of data? Perhaps, but this won’t stop other factors such as bias to a race or gender, there’s always that one person who thinks whatever they are is superior when compared to others. This is a big problem in facial recognition because one’s personal opinions could slip into programming if someone goes based on what they believe and not what’s right then they could contradict results when used.
    You’re off the deep end here, WaterDrop.
    —Are you trying to argue that programmers are making DECISIONS about the sexuality of persons from their photographs?
    —The idea is compelling, and would certainly be a problem in places where sexual preference is criminalized, but how did we get here? Your argument keeps morphing.

    P7. Daniel C Wood’s article “Facial Recognition, Racial Recognition, and Clear and Present Issues With AI Bias” claims that “if society works together to work on race and equality then the steps towards AI bias could be changed”. The author opens up the topic of how apps such as Twitter reveal some form of bias towards race. In one case an African American was noticing that he and a coworker were getting their faces cropped out of the picture and when they took it to Twitter, they tried to talk their way out of it along with providing no evidence of there being no racial bias. A power company that runs twitter have these data sets and they allow these kinds of problems to occur and provide no response where they own up to their mistakes, and to also provide a solution that would satisfy both parties. some said the best route to take around facial/race bias would be to take better pictures where the lighting confuses these softwares when its time to record what the persons information may be. This would be a decent idea except for the fact that if the lighting is too bright then their true skin color would not be revealed, except it would be someone of a brighter tone. Can there truly be a solution to end AI bias or is it something that will always remain.
    This is a fascinating anecdote, WaterDrop, but it contributes nothing to the very strong case you can probably build about the fallibility of facial recognition IN LAW ENFORCEMENT.
    —I’m going out on a limb here, but I imagine the faces of the coworkers who got cropped were not recognized as faces by Twitter’s filters. I can corroborate that when I photographed my students against a blackboard, my African immigrant students virtually disappeared against the background. Facebook did not “see” them as faces. I couldn’t either. I don’t see how that contributes to your VERY VALID complaint that FR programs can result in FALSE MATCHES!
    —Am I misunderstanding that your primary objection to FR at the moment is false matches?
    —If so, this anecdote does not contribute.

    P8. Programs that provide high expectations usually fail, to do what it’s meant to do, so we usually have to deal with it hoping that one day there would be a solution. With this conclusion, it only means that we have to deal with the conflicts of being misidentified or misgendered due to the lack of data feed in these programs’ databases making these programs less reliable by the second.
    You need to choose between “being misidentified” and “being mis-gendered,” WaterDrop. They’re both admirable choices, but you can’t cover both in 3000 words.
    —And you need to think hard about whether you mean “misgendered” or “wrongly classified by sexual preference.”

    Does this help?


    • Water says:

      The purpose of the introduction was to show how facial recognition in the entertainment industry can either be portrayed as accurate or downplay how even the best piece of technology can be useless despite its high price. The second paragraph is meant to explain what facial recognition can do for various fields, ranging from marketing to law enforcement. I’ll rewrite the paragraphs with the feedback in mind; the path I’m going to take is to talk about being misidentified and how serious and disrespectful facial recognition can be. I appreciate all the suggestions and hope for more in the future as I build my 3000-word paper.


  4. Water says:

    Thank you, professor, for the advice on reformatting my first paragraph, making it clearer in introducing the topic of facial recognition, how it’s not as reliable as it seems, and how it’s more of a problem in use. I have a small question about the order in which to present my claims. I want to downplay the reliability of facial recognition, so I was thinking of using my source about the experiment of adding new sex/gender group information to preexisting programs, then continuing with citations from the article about how facial recognition targets certain races for advertising tactics and how many see this as an issue. Do you have any advice on how I should approach facial recognition being an issue in the aspect of wrongly classifying those of different sexual preferences?

