Hypothesis
Advances in facial recognition remain racially and gender-biased.
1. "Facial recognition, Racial Recognition, and the Clear and Present Issues with AI Bias"
Background: In this article, the author argues that facial recognition software has many flaws and that, rather than helping the user, it can create conflicts that ruin someone's life. Whenever the software is run, its racial determinations are not always right, and factors such as the image background can distort the final observations.
How I Intend to Use It: As an introduction to my hypothesis, this article lets me share the views of others who reach a similar conclusion. The goal is to challenge the perceived power and credibility of facial recognition and to show that an expensive program is not necessarily a useful or reliable source of information.
2. https://terra-docs.s3.us-east-2.amazonaws.com/IJHSR/Articles/volume3-issue6/2021_36_p17_Mandis.pdf
Background: Once the errors in facial recognition programs are recognized, there is an immediate push to fix them. Given the enormous sums of money poured into programs that detect faces and determine race, one would expect them to be reliable and accurate. This article gives percentages comparing changes in accuracy in relation to race, skin color, or hobbies.
How I Intend to Use It: I plan to use the statistics on how accuracy changes after programmers try to make the software more responsive to culture and background, and to show that in some cases feeding the program more data makes it more error-prone. It can also serve as a counterintuitive source, since in some instances the program's accuracy does improve and its output gets closer to what the person actually is. The sketch below shows how per-group accuracy percentages like these are typically computed.
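To make "accuracy by race" concrete, here is a minimal sketch of how such per-group percentages are usually calculated: the same predictions are scored separately for each demographic group instead of as one overall number. The group names and data below are invented for illustration and do not come from the source.

    from collections import defaultdict

    def accuracy_by_group(records):
        """records: list of (group, true_label, predicted_label) tuples."""
        correct = defaultdict(int)
        total = defaultdict(int)
        for group, truth, prediction in records:
            total[group] += 1
            if prediction == truth:
                correct[group] += 1
        # Fraction of correct predictions within each group.
        return {g: correct[g] / total[g] for g in total}

    # Hypothetical example: overall accuracy can look fine while one group lags.
    sample = [
        ("group_a", "match", "match"), ("group_a", "match", "match"),
        ("group_a", "no_match", "no_match"), ("group_a", "match", "match"),
        ("group_b", "match", "no_match"), ("group_b", "match", "match"),
        ("group_b", "no_match", "match"), ("group_b", "match", "match"),
    ]
    print(accuracy_by_group(sample))  # {'group_a': 1.0, 'group_b': 0.5}

A single headline accuracy figure would average these two groups together, which is exactly how the disparities this source measures can stay hidden.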
3. https://dl.acm.org/doi/abs/10.1145/3394231.3397900
Background: This paper describes how a group of researchers tested a program's accuracy on gender and sexuality. Grouping LGBTQ members along with nonbinary members, the program was tasked with identifying who belonged to each group, and to everyone's surprise it got the majority correct. This shows that some programs can mislead users who are unaware of possible errors, to the point where a mistake could ruin a life; yet the study also suggests that, given enough information, such a program can be accurate and reliable.
How I Intend to Use It: On the counterintuitive side of my hypothesis, this source presents a scenario where the program accomplishes its goal and is dependable enough for government or corporate use. An argumentative point could be that failures are the developers' fault, and that programs sold and hyped as the next big thing play a huge role in shaping public understanding. This source can be used either to support or to challenge my hypothesis.
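Because misclassification is what can "ruin a life," one concrete way to audit a classifier like the one in this study is to compare error rates, not just overall accuracy, across groups. Below is a minimal sketch of a per-group false positive rate check; the metric choice, group names, and data are my own illustration, not the paper's method.

    from collections import defaultdict

    def false_positive_rate_by_group(records, positive="flagged"):
        """records: list of (group, true_label, predicted_label) tuples."""
        fp = defaultdict(int)         # negatives wrongly flagged
        negatives = defaultdict(int)  # all truly negative cases
        for group, truth, prediction in records:
            if truth != positive:
                negatives[group] += 1
                if prediction == positive:
                    fp[group] += 1
        return {g: fp[g] / negatives[g] for g in negatives if negatives[g]}

    # Hypothetical: similar accuracy can still hide unequal false positive rates.
    sample = [
        ("group_a", "clear", "clear"), ("group_a", "clear", "flagged"),
        ("group_a", "flagged", "flagged"), ("group_a", "clear", "clear"),
        ("group_b", "clear", "flagged"), ("group_b", "clear", "flagged"),
        ("group_b", "flagged", "flagged"), ("group_b", "clear", "clear"),
    ]
    print(false_positive_rate_by_group(sample))  # {'group_a': 0.33..., 'group_b': 0.66...}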
4. https://link.springer.com/article/10.1007/s43681-021-00108-6
Background: As technology advances, more people become dependent on their devices. These programs generalize, treating people with similar features as members of the same race or ethnicity, while information gathered from people's faces continually updates the AI to recognize behavioral patterns and learn new techniques for predicting a person's actions. These techniques may sound good on paper, but in real-life situations the program can lead police officers or other users to believe a person is dangerous, or to falsely identify someone.
How I Intend to Use It: People give in to technology and believe whatever it shows without a hint of doubt; they are so confident in the displayed response that they never fact-check it or solve the problem themselves. If an investigation were to rely on a facial recognition program, the probability that it returns a response that throws investigators off is moderate to high. The calculation below shows one reason why: searching a large database for a single suspect yields mostly false leads even when the matcher is accurate.
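This is a hedged, back-of-the-envelope calculation with numbers I chose for illustration (the source does not give these figures). It shows how a matcher with only a 0.1% false match rate still buries the one true hit under roughly a thousand false ones when scanning a million faces.

    # All figures below are hypothetical assumptions, not data from the source.
    gallery_size = 1_000_000      # assumed watchlist / database size
    true_matches = 1              # the one person actually being searched for
    true_match_rate = 0.99        # chance the real suspect is correctly matched
    false_match_rate = 0.001      # chance any innocent person is wrongly matched

    expected_true_hits = true_matches * true_match_rate
    expected_false_hits = (gallery_size - true_matches) * false_match_rate
    precision = expected_true_hits / (expected_true_hits + expected_false_hits)

    print(f"expected false hits: {expected_false_hits:.0f}")            # ~1000
    print(f"chance a given hit is the real suspect: {precision:.2%}")   # ~0.10%

Under these assumptions, an investigator looking at any one "hit" is almost always looking at an innocent person, which is exactly how a confident-seeming program throws an investigation off.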
5. https://ai4da.com/ai-racial-equality/
Background: Racial bias is described through an example of a hiring program that judges applicants by their faces and could be biased toward middle-aged white candidates. Ensuring that problems like this do not occur is not something numbers alone can solve; the program needs to understand culture and the diverse side of the world. In one example, where a certain program was run on 7,000 people, the final finding was that the program was directly racist, possibly because of the natural instincts of a coder who treated his own experience as the norm.
How I Intend to Use It: The dark side of facial recognition may be that these racial biases come from the programmers themselves rather than from a lack of cultural information. Instances like this are hard to fix, since everyone carries biases about culture and sexuality, and giving someone the job of determining what a person "is" creates an opportunity to express those biases; therefore, facial recognition programs are inherently biased and not truly fixable. One way an audit can surface this kind of hiring bias is a selection-rate comparison, sketched below.
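This is a minimal sketch, not the article's actual method: it compares hiring selection rates across two groups in the spirit of the four-fifths rule used in employment-discrimination analysis. The group names and numbers are invented for illustration.

    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: list of (group, hired: bool) pairs."""
        hired = defaultdict(int)
        total = defaultdict(int)
        for group, was_hired in decisions:
            total[group] += 1
            hired[group] += int(was_hired)
        # Fraction of applicants hired within each group.
        return {g: hired[g] / total[g] for g in total}

    # Hypothetical audit data for two applicant groups of 100 each.
    decisions = [("group_a", True)] * 40 + [("group_a", False)] * 60 \
              + [("group_b", True)] * 15 + [("group_b", False)] * 85

    rates = selection_rates(decisions)
    ratio = min(rates.values()) / max(rates.values())
    print(rates)                                   # {'group_a': 0.4, 'group_b': 0.15}
    print(f"disparate impact ratio: {ratio:.2f}")  # 0.38, well below the 0.8 guideline

A ratio below 0.8 is the conventional red flag for disparate impact; an audit like this can show that a face-based hiring tool is biased even when its vendor reports high overall accuracy.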
When you first posted it, this was a preliminary assignment. It was among the better first drafts then, but now it’s far behind where it should be, Waterdrop.
Use this White Paper to take notes and record your impressions of your sources AS YOU READ THEM; that is the best way to begin converting your research material into language of your own that you can export to your short arguments when it's time to draft them. You don't appear to have investigated your sources any further than when you first posted them.
This post will be regraded from time to time, or on your specific request.