Rebuttal Rewrite – Water

Yes, but NO

What if facial recognition actually worked and could properly record or relay information without misidentifying what's in front of the camera? Would this be beneficial, or would nothing change? Facial recognition has a persistent problem of failing to make out what it's looking at, so when it produces an answer, that answer often does not match the person's actual identity. One could claim that facial recognition is in fact stable, reliable, and safe; prominent companies such as Amazon support this idea.

Amazon has a subsection of its website dedicated to facial recognition, explaining how it works and how the programs and devices are used. There it states, "Facial recognition algorithms have near-perfect accuracy in ideal conditions. There is a higher success rate in controlled settings but generally a lower performance rate in the real world. It is difficult to accurately predict the success rate of this technology, as no single measure provides a complete picture." There is a flaw in this statement: not only does Amazon claim facial recognition has near-perfect accuracy, it then undercuts that claim by admitting the technology performs poorly in the real world. Does this imply that it only works in certain locations and could be a liability everywhere else? For example, if security cameras used facial recognition to find anyone who broke the speed limit or was involved in a crime, would the footage be a blur because of bad lighting and low resolution, or would it capture a clear picture of the person? A flaw like this defeats the purpose of having facial recognition. It could be something incredible for the future, but a simple problem such as poor lighting or a misread facial feature can be detrimental to safety and data collection.

Developers say that they have made changes and improvements to these kinds of programs, but could they be lying, or simply trying to impress the companies that use them? An article from Innovatrics titled "How the accuracy of facial recognition technology has improved over time" claims there are two avenues: "The first is "education" – the better the dataset that the neural network is trained on, the better are the results. To help the machine learn better, the datasets have to be labeled correctly and checked for mistakes. The other avenue is to rely on improving computing power. Neural networks can be more precise and their outputs can be tested repeatedly to find and repair their blind spots." This makes sense to a certain degree: adding more diverse pictures of people from all parts of the world to a dataset should help. But while they mention education, they never specify which groups or what other kinds of information they would add to strengthen accuracy. One main problem with facial recognition and identifying a person is that the datasets do not account for societal changes, such as people who are part of the LGBTQ community or other respected groups. To be accurate, a system must include everything, and inclusion presents its own problems, such as gender or racial bias, which have been proven to be an issue for these datasets since some contained pictures of people with similar features and no diversity. This has even raised talk of banning these kinds of programs outright.

Activists have been demanding that lawmakers ban facial recognition, on the grounds that photo-matching technology is inaccurate for photos of women and minorities. An argumentative article by the Security Industry Association titled "What Science Really Says About Facial Recognition Accuracy and Bias Concerns" tried to rebut a paper published in 2018 by a graduate researcher at MIT Media Lab, which challenged the reliability of facial recognition and argued that it misidentifies mainly people of color. The article responds, "the paper is frequently cited as showing that facial recognition software "misidentifies" dark-skinned women nearly 35% of the time. But there's a problem: Gender Shades evaluated demographic-labeling algorithms, not facial recognition." Race still has much to do with facial recognition: such systems cannot rely on facial features alone and end up leaning on skin color, yet people with similar shades of skin could come from any part of the world, so information such as region is crucial when determining one's identity. Identity isn't just gender or height; it is everything used to categorize a person, including sex, race, orientation, and more.

Furthermore, in the article, the author provides more information about facial recognition's improvement as a whole and about flaws in the research paper's experiment on the software's accuracy. The American Civil Liberties Union ran a test of Amazon's Rekognition program on the members of Congress back in 2018: "it created a database of 25,000 publicly available images and ran a search against official photos of the 535 members of Congress, returning "false matches" for 28 of them. The ACLU claimed that since 11 of these matches, or 40%, were people of color, and only 20% of Congress overall are people of color, this is evidence of racial bias in facial recognition systems." This shows facial recognition is unreliable in some instances, but after the experiment was published, Amazon reran the test, this time with 850,000 images to match against, and found zero false matches. This suggests that with more images, the system can pinpoint an accurate match.
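The ACLU's percentages can be checked with simple arithmetic; this short sketch uses only the figures quoted above (28 false matches, 11 of them people of color, roughly 20% of Congress being people of color):

```python
# Figures quoted from the ACLU's 2018 test of Amazon's program.
false_matches = 28
false_matches_poc = 11
poc_share_of_congress = 0.20  # roughly 20% of Congress overall

# Share of the false matches that hit people of color.
poc_share_of_errors = false_matches_poc / false_matches
print(round(poc_share_of_errors * 100, 1))  # ≈ 39.3, the "40%" the ACLU cites

# The bias claim in ratio form: errors hit people of color at about
# twice their share of Congress overall.
print(round(poc_share_of_errors / poc_share_of_congress, 1))  # ≈ 2.0
```

The 40% figure is a rounding of 11/28; the argument rests on that share being roughly double the 20% baseline.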

Finally, in 2019, new statistics regarding facial recognition came out, and they differed from earlier figures. The statistics declared that "According to data from the most recent evaluation from June 28, each of the top 150 algorithms are over 99% accurate across Black male, white male, Black female and white female demographics. For the top 20 algorithms, the accuracy of the highest performing demographic versus the lowest varies only between 99.7% and 99.8%." This would be interesting, but the statistics never include or mention other groups; they fail to say how people from Asia or South America fare when put into these facial recognition programs. Overall, this leaves the impression that facial recognition could become something remarkable with more information, or it could be that facial recognition isn't trustworthy and has too many flaws to correct.

Sources

Amazon | What Is Facial Recognition? | Published online | Amazon

Innovatrics | How the Accuracy of Facial Recognition Technology Has Improved Over Time | Author: Barbara Rusnáková | Published online

Jake P., Senior Director, Government Relations, SIA, and David R., Chief Operating Officer & General Counsel, Rank One Computing | What Science Really Says About Facial Recognition Accuracy and Bias Concerns | July 23, 2022 | Security Industry Association


1 Response to Rebuttal Rewrite – Water

  1. davidbdale says:

    P4.
    The quote in the middle of this paragraph is a good example of the question I’ve been trying to get you to answer since the first time we talked about your topic, Water.

    "the paper is frequently cited as showing that facial recognition software "misidentifies" dark-skinned women nearly 35% of the time. But there's a problem: Gender Shades evaluated demographic-labeling algorithms, not facial recognition."

    You don’t seem to discriminate between the two yourself, either, Water, but you need to be clear whether you’re objecting to failures of “Demographic-Labeling Algorithms” (DLAs) or Facial Recognition (FR).

    Following the quote, you seem to be objecting to failures to take note of skin color, but why? Do you think it would create the failure of calling an African man an Asian man but not know who he is? (DLA) Or do you think it would prevent a program from matching your own face with your own face? (FR)

    P5.
    As with much of this essay, I had a hard time figuring out what you're REFUTING in your Rebuttal argument, Water. In P5, I think you're encouraged that what sounded like a very serious accuracy issue was solved by adding massive numbers of images to the database. Is that right? Do you mean to endorse the program with reservations? Or do you still mean to object to it completely because it will never be 100% accurate? I can't tell.
