Five algorithms that demonstrate artificial intelligence bias
Unfortunately, in machine learning, AI bias is systematically skewed output caused by prejudiced assumptions made during the algorithm development process or baked into the training data. AI systems become biased for two main reasons:
- Cognitive biases of the people who design and train the system
- Incomplete or unrepresentative training data
An AI system can only be as sound as the data it is trained on. It is an unfortunate fact that our society is persistently biased: human beings regularly carry prejudices about religion, gender, nationality and minorities, absorbed from family and social surroundings since birth, and those prejudices leak into the data we collect. The technology industry should therefore test its artificial intelligence algorithms for bias before releasing them to the market, and companies can support this by encouraging research on bias-free artificial intelligence.
The following algorithms illustrate artificial intelligence bias in practice. In each case the bias works against already disadvantaged groups such as Black people, women and the poor.
1. COMPAS algorithm biased against Black people
COMPAS stands for Correctional Offender Management Profiling for Alternative Sanctions. It is a case management and decision-support tool developed and owned by Northpointe and used by U.S. courts. The software uses an algorithm to estimate a defendant's risk of recidivism, and judges consult these risk scores when making decisions such as bail and sentencing. ProPublica, an investigative news organisation, analysed COMPAS and found it to be biased. According to ProPublica, COMPAS wrongly flagged Black defendants as likely to reoffend far more often than white defendants, while white defendants were more often rated low risk even when they went on to commit further crimes, including violent ones. In other words, COMPAS had absorbed a prejudice commonly found among human beings: the assumption that Black people are more likely to commit crimes than white people.
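A minimal sketch of the kind of disparity check a ProPublica-style audit performs is shown below: comparing false positive rates (defendants flagged as high risk who did not reoffend) across groups. The column names and file name are assumptions for illustration, not the actual COMPAS data schema.

```python
import pandas as pd

def false_positive_rate(df: pd.DataFrame, group: str) -> float:
    """FPR within `group`: share of non-reoffenders who were flagged high risk."""
    rows = df[(df["race"] == group) & (df["reoffended"] == 0)]
    if len(rows) == 0:
        return float("nan")
    return (rows["predicted_high_risk"] == 1).mean()

# Hypothetical usage on a table with columns: race, reoffended, predicted_high_risk
# scores = pd.read_csv("risk_scores.csv")          # illustrative file name
# print(false_positive_rate(scores, "African-American"))
# print(false_positive_rate(scores, "Caucasian"))
```

A large gap between the two rates is the signal ProPublica reported: the error that harms the defendant falls disproportionately on one group.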
2. PredPol algorithm biased against minorities
PredPol stands for Predictive Policing. It is an artificial intelligence algorithm that aims to predict where crime will occur in the future by learning from crime data such as arrest counts and the number of police calls from specific locations. Police departments in the USA have used the algorithm, and its stated aim is to reduce human bias in policing by handing the prediction task to artificial intelligence. However, researchers in the USA discovered that PredPol is itself biased: it repeatedly sent police officers to neighbourhoods with large minority populations regardless of how much crime actually occurred there. Because patrols generate new records wherever they are sent, the predictions tend to reinforce themselves, which shows that PredPol was biased as well.
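The self-reinforcing loop described above can be sketched with a small simulation. This is not PredPol's actual model; it is a toy illustration under the stated assumption that crime is only recorded where officers are sent, so the district with slightly more historical records keeps attracting patrols.

```python
import random

def simulate_feedback(days: int = 100, seed: int = 0) -> list[int]:
    """Toy feedback loop: patrol the district with the most recorded crime."""
    random.seed(seed)
    true_crime_rate = [0.5, 0.5]   # two districts with identical underlying crime
    recorded = [10, 8]             # district 0 starts with slightly more records
    for _ in range(days):
        patrolled = 0 if recorded[0] >= recorded[1] else 1
        for district in (0, 1):
            if random.random() < true_crime_rate[district]:
                # crime only enters the data if officers are present to record it
                if district == patrolled:
                    recorded[district] += 1
    return recorded

print(simulate_feedback())
# District 0 keeps accumulating records while district 1 stays flat,
# even though both districts have the same underlying crime rate.
```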
3. Amazon's recruiting engine biased against women
Amazon's recruiting engine was an artificial intelligence algorithm built to screen resumes so that promising candidates could be called for interview and selection. It was developed to remove human bias from the job-application screening process, but it turned out to be biased against women. When Amazon's research team examined the algorithm, they found that it systematically downgraded resumes from women candidates, because it had learned from past hiring data that mostly favoured men. Amazon subsequently discarded the algorithm, and it was not used further for evaluating candidates.
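One way such proxy bias can be surfaced is to train a simple text classifier on past hiring decisions and inspect which terms push the score down. The sketch below is not Amazon's system; the function, file and column names are assumptions for illustration.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def most_penalized_terms(resumes: pd.Series, hired: pd.Series, n: int = 20):
    """Fit a simple classifier on historical decisions and return the n terms
    with the most negative weights, i.e. the terms the model penalizes most."""
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(resumes)
    model = LogisticRegression(max_iter=1000).fit(X, hired)
    terms = vectorizer.get_feature_names_out()
    weights = model.coef_[0]
    return [(terms[i], round(float(weights[i]), 3)) for i in weights.argsort()[:n]]

# Hypothetical usage on a table with columns: resume_text, hired (0/1)
# data = pd.read_csv("past_applications.csv")
# print(most_penalized_terms(data["resume_text"], data["hired"]))
# Gendered terms showing up in this list would signal that the model has
# learned gender as a proxy from historically skewed hiring data.
```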
4. Google Photos algorithm biased against Black people
Google Photos uses an artificial intelligence algorithm that groups photos into separate sections according to what is shown in each picture. The algorithm is based on a convolutional neural network (CNN), which tags photos through an image-recognition process. However, the algorithm was found to have labelled a group of Black people as gorillas. Google apologised for the mistake and said it would not be repeated, but automatic image labelling is still not fully accurate.
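To make the tagging process concrete, here is a minimal sketch of CNN-based image tagging using a publicly available pretrained network. This is only illustrative of the general technique; the model choice and file name are assumptions, not Google's actual pipeline.

```python
import torch
from torchvision import models
from PIL import Image

# Load a pretrained convolutional network and its matching preprocessing.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()

def tag_photo(path: str, top_k: int = 3) -> list[tuple[str, float]]:
    """Return the top-k predicted labels and probabilities for an image file."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)          # shape: (1, 3, H, W)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    top = probs.topk(top_k)
    labels = weights.meta["categories"]
    return [(labels[int(i)], float(p)) for p, i in zip(top.values, top.indices)]

# Hypothetical usage:
# print(tag_photo("holiday_photo.jpg"))
```

Whatever labels such a network produces depend entirely on the images and annotations it was trained on, which is exactly how an unrepresentative training set turns into offensive or inaccurate tags.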
5. IDEMIA's facial recognition algorithm biased against Black women
IDEMIA is a company that builds facial recognition systems for police departments in the USA. When the National Institute of Standards and Technology (NIST) tested IDEMIA's facial recognition algorithms, it found that they falsely matched Black women's faces far more often than white women's faces. The algorithm does not treat all faces equally well: the false match rate for Black women was markedly higher than for white women. IDEMIA responded that the algorithms NIST tested had not been released commercially and that its algorithms are steadily improving.
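The metric behind this finding can be sketched as follows: the false match rate (FMR) is the share of impostor comparisons (two different people) whose similarity score still clears the match threshold, computed separately per demographic group. The data layout, group labels and threshold below are illustrative assumptions, not NIST's actual evaluation code.

```python
import pandas as pd

def false_match_rate(pairs: pd.DataFrame, group: str, threshold: float) -> float:
    """FMR for impostor pairs within `group` at the given score threshold."""
    impostors = pairs[(pairs["group"] == group) & (pairs["same_person"] == 0)]
    if len(impostors) == 0:
        return float("nan")
    return (impostors["score"] >= threshold).mean()

# Hypothetical usage on a table with columns: group, same_person, score
# pairs = pd.read_csv("comparison_scores.csv")
# print(false_match_rate(pairs, "Black women", threshold=0.8))
# print(false_match_rate(pairs, "white women", threshold=0.8))
```

A higher FMR for one group means that, at the same operating threshold, innocent people in that group are more likely to be wrongly matched against a watchlist.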