Bibliography – Water

1. Daniel C. Wood | Facial Recognition, Racial Recognition, and Clear and Present Issues With AI Bias | Robotics, Artificial Intelligence & Law | May–June 2021, Vol. 4, No. 3

Background: In “Facial Recognition, Racial Recognition, and the Clear and Present Issues with AI Bias,” the author argues that facial recognition has many flaws and that, rather than helping the user, it can create conflict and ruin someone’s life. Whenever the software runs, its racial classifications are not always correct, and factors such as the image background can distort the final result.

How I Used It: As an introduction to my hypothesis, to share the opinions of others who may think similarly. The goal was to downplay the power and credibility of facial recognition and to show that even an expensive program may not be useful or reliable as a source of information.

2. Mandis | IJHSR, Vol. 3, Issue 6, 2021 | https://terra-docs.s3.us-east-2.amazonaws.com/IJHSR/Articles/volume3-issue6/2021_36_p17_Mandis.pdf

Background: After the errors of facial recognition programs are realized, the urge to fix these issues immediately follows. Given the amounts of money poured into programs that detect faces and determine a person’s race, one would expect them to be reliable and accurate. This article gives percentages comparing changes in accuracy in relation to race, skin color, or hobbies.

How I Intend to Use It: I plan to use the statistics and the percent change observed after programmers try to fix the program to be responsive to culture and background, and to express how the more data the program is fed, the more error-prone it becomes. It can also serve as a counterintuitive source, since in some instances the change in accuracy does bring the program closer to what the person actually is.
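
As a quick sketch of the percent-change math I plan to cite (the before/after accuracy figures below are invented for illustration, not taken from the article):

```python
# Hypothetical accuracy figures, invented for illustration only.
def percent_change(old: float, new: float) -> float:
    """Percent change from old to new."""
    return (new - old) / old * 100

accuracy_before = 0.78  # before the developers' fix (assumed)
accuracy_after = 0.84   # after retraining on more data (assumed)

print(f"{percent_change(accuracy_before, accuracy_after):+.1f}%")  # +7.7%
```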

3. Wu W, Protopapas P, Yang Z, Michalatos P. | Gender Classification and Bias Mitigation in Facial Recognition | Published online July 06, 2020

Background: This source describes how a group of researchers tested a program’s accuracy on gender and sexuality. Grouping LGBTQ members along with nonbinary members, the program was given the task of identifying who belonged to each group, and to everyone’s surprise it got the majority of them correct. This shows that some programs can mislead those who are unaware of possible errors to a degree where a life can be ruined; yet this study proves the technology can be accurate and reliable given enough information.

How I Used It: For the counterintuitive side of my hypothesis, where the program completes its goal and is dependable enough for government or corporate use. The source also makes the argument that bias is a developer issue: once the researchers decided to do something about it and made changes and improvements, they confirmed their hypothesis that facial recognition lacks diversity and that this gap can lead to misidentifications.

4. https://link.springer.com/article/10.1007/s43681-021-00108-6

Background: With advances in technology, more people become dependent on their devices. These programs generalize, treating people with similar features as the same race or ethnicity, and the information gathered from people’s faces continually updates the AI so it grows familiar with behavioral patterns and learns new techniques for predicting a person’s actions. Techniques like these may sound nice on paper, but in real-life situations the program can lead police officers and other users to believe a person is dangerous, or to falsely identify someone.

How I Intend to Use It: People give in to technology and believe whatever it shows without the slightest doubt; they are so confident in the response shown that they never fact-check whether it is right or work it out themselves. If an investigation were to rely on a facial recognition program, the probability of it returning a response that throws the investigators off is moderate to high.

5. https://ai4da.com/ai-racial-equality/

Background: Racial bias is described through an example of a hiring program that screens faces and skews toward middle-aged white applicants. Math alone cannot prevent problems like this; the system also needs to understand culture and the diverse side of the world. In one example where a program was run on 7,000 people, the final finding was that the program was directly racist, possibly because of the natural instincts of a coder who treated his own perspective as the norm.

How I Intend to Use It: The dark side of facial recognition may be that these racial biases come from the programmers themselves rather than from a lack of cultural information. Instances like this are hard to fix, since everyone carries biases around culture and sexuality. Giving someone the job of determining what a person is creates an opportunity to express those biases; therefore, facial recognition programs are simply biased and not fixable.

6. Brian E. Finch | Addressing Legitimate Concerns About Government Use of Facial Recognition Technologies | Published October 30, 2020 | Via The Heritage Foundation

Background: Facial recognition’s problems can be sorted into two types: false negatives and false positives. A false negative occurs when the program’s algorithm fails to return a matching image even though the person is in the searchable database. A false positive occurs when an individual’s image is matched to someone completely different, creating the sense that facial recognition can mislead and misidentify a person even when provided with pictures of multiple people.
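
To make the two outcomes concrete, here is a minimal sketch (the names and logic are hypothetical, not from Finch’s article) of how a single database search gets labeled:

```python
# Hypothetical labeling of one face-database search; the names are
# invented for illustration and do not come from the source article.
def label_outcome(enrolled: bool, returned_identity: str | None,
                  true_identity: str) -> str:
    """Label one search result against a face database."""
    if returned_identity is None:
        # No match returned even though the person is in the database.
        return "false negative" if enrolled else "correct rejection"
    if enrolled and returned_identity == true_identity:
        return "correct match"
    # Matched to a completely different person.
    return "false positive"

print(label_outcome(True, None, "Alice"))     # false negative
print(label_outcome(True, "Bob", "Alice"))    # false positive
print(label_outcome(True, "Alice", "Alice"))  # correct match
```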

How I Used It: I used these definitions of facial recognition’s problems and mixed in scenarios of how either a false negative or a false positive can become a problem that never gets solved, since it is an issue inherent to the technology and somewhat less of a developer problem.

7. Halley Sutton | Report Finds Department Abused Facial Recognition Software | Published 2019 | Wiley Periodicals, Inc., A Wiley Company

Background: A study examined a case in which police officers in New York abused the software, altering mugshots to resemble celebrities and making false bookings to make themselves look better for the media. The study went down the rabbit hole of what the officers did: they would add images of celebrities and try to match them with criminals, going so far as to photoshop and edit the pictures to make the records look good.

How I Intend to Use It: I will show how the program can harm the public when used by the wrong hands. People are willing to manipulate the system to make themselves look better; some would even alter pictures to make their achievements look effortless, and programs like this can endanger a person’s life or even ruin it. We have to recognize this and make the message clear: when records can be rewritten by a regular officer, someone has the capability to manipulate the system undercover for personal gain rather than in the name of justice.

8. Kaitlin Jackson | Challenging Facial Recognition Software in Criminal Court | July 2019 | Provided by NACDL

Background: One issue a person faces after being misidentified and wrongfully put behind bars is facing charges for a crime they may never have committed. Knowing they are innocent and not wanting to face charges, they can take the department to court and try to win back their freedom. This can be tedious, though, since it means going through a process of eyewitness testimony, where witnesses are asked whether they can confirm that the matched picture is telling the truth.

How I Intend to Use It: I will show the importance of a flaw in a device that is supposed to identify a person from a picture; something like this can take away years of someone’s life because the device decided they were the person involved in a crime or conflict. It further proves that the aftermath, trying to clear one’s name of something they didn’t do, is just as bad as the original issue.

9. Amazon | What Is Facial Recognition? | Published online | Amazon

Background: Amazon’s website has a subsection dedicated to the success of its facial recognition software, claiming near-perfect accuracy and attributing its only flaws to external problems that are simple to fix or improve.

How I Used It: I used Amazon’s own words about claiming an accurate facial recognition product, and about how the program’s only issues would be due to third-party factors such as lighting, angles, or the prominence of a person’s features. I used it as the rebuttal argument and raised questions about the program and how the claim can be misleading.

10. Barbara Rusnáková | How the Accuracy of Facial Recognition Technology Has Improved Over Time | Published online | Innovatrics

Background: “the better the dataset that the neural network is trained on, the better are the results. To help the machine learn better, the datasets have to be labeled correctly and checked for mistakes. The other avenue is to rely on improving computing power. Neural networks can be more precise and their outputs can be tested repeatedly to find and repair their blind spots.” This passage presents the claim developers are making: that facial recognition is a work-in-progress technology that, given time, will improve and become more accurate.
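
As a rough sketch of what “testing outputs repeatedly to find blind spots” could look like in practice (the records and group labels below are invented for illustration, not taken from the article):

```python
# Hypothetical per-group audit; the records are invented and do not
# come from the Innovatrics article.
from collections import defaultdict

# Each record: (demographic group, was the prediction correct?)
results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for group, correct in results:
    totals[group][0] += int(correct)
    totals[group][1] += 1

# A large gap in per-group accuracy is one kind of "blind spot"
# that repeated testing is supposed to surface.
for group, (correct, total) in sorted(totals.items()):
    print(f"{group}: {correct / total:.0%} accuracy")
```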

How I Used It: I push back on this by arguing that the machine’s failures are inherent and cannot be fixed immediately. I ask questions as if I were interviewing the developers, trying to understand why this is not something that can be fixed any time soon.

11. Jake P., Senior Director, Government Relations, SIA, and David R., Chief Operating Officer & General Counsel, Rank One Computing | What Science Really Says About Facial Recognition Accuracy and Bias Concerns | July 23, 2022 | Security Industry Association

Background: The article tries to discredit a paper on why the flaws of facial recognition occur and how its datasets lack racial diversity. Its counterclaim reads: “the paper is frequently cited as showing that facial recognition software ‘misidentifies’ dark-skinned women nearly 35% of the time. But there’s a problem: Gender Shades evaluated demographic-labeling algorithms, not facial recognition.” In other words, the authors argue the paper gets the information wrong and was framed to make the program look racially biased against dark-skinned people.
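
To show why that distinction matters, here is a toy contrast between the two kinds of systems the quote separates (the function names, features, and threshold are mine, not from Gender Shades or the SIA article):

```python
# Toy implementations, invented purely to contrast the two tasks.

def demographic_labeler(image_features: list[float]) -> str:
    """What Gender Shades evaluated: one image in, a demographic label out."""
    return "female" if image_features[0] > 0.5 else "male"

def face_matcher(probe: list[float], gallery: list[float],
                 threshold: float = 0.8) -> bool:
    """What facial recognition usually means: compare two images and
    decide whether they show the same person."""
    similarity = sum(p * g for p, g in zip(probe, gallery))
    return similarity >= threshold

# An error rate measured on the labeling task says nothing direct
# about the matching task, which is the core of the counterclaim.
print(demographic_labeler([0.7]))            # -> "female"
print(face_matcher([0.9, 0.3], [0.8, 0.4]))  # -> True (0.84 >= 0.8)
```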

How I Used It: I push back on the claim that the paper is biased and written in spite, as an attempt to get these programs shut down; instead, I find ways to let the paper present itself and show why it is right and not biased. I used facts and turned the counterclaim’s own words against it, asking: if they think the paper is biased, why not use data from the same program to disprove it?
