
5 algorithms that demonstrate artificial intelligence bias

In machine learning, AI bias refers to output that reflects prejudiced assumptions introduced during the algorithm's development. AI systems become biased for two main reasons:

  1. Cognitive biases carried over from the people who build and train the system
  2. Incomplete or unrepresentative training data

An AI system can only be as sound as its input data, and it is an unfortunate fact that our society's data reflects long-standing biases. Because people are routinely biased with respect to religion, gender, nationality, and minority status, those biases seep into the data that systems learn from. Technology companies therefore need to make their artificial intelligence algorithms as bias-free as possible before releasing them to the market, and they can support this by encouraging research on bias-free artificial intelligence.

The following algorithms illustrate artificial intelligence bias. In each case, the bias worked against minorities or other disadvantaged groups, such as Black people, women, and the poor.

1. COMPAS algorithm biased against Black people

COMPAS stands for Correctional Offender Management Profiling for Alternative Sanctions. It is a case management and decision-support tool developed and owned by Northpointe and used by U.S. courts. COMPAS uses an algorithm to estimate a defendant's risk of recidivism, and judges consult its scores when making sentencing decisions. The news organization ProPublica investigated COMPAS and found it to be biased: the algorithm labeled Black defendants as high risk far more often than white defendants, while white defendants were treated as less dangerous even in cases involving violent crimes. In effect, COMPAS had inherited a bias commonly found among human beings: the assumption that Black people are more likely to commit crimes than white people.
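The kind of audit ProPublica performed can be sketched by comparing false positive rates across groups, i.e. how often people who did not reoffend were nevertheless flagged as high risk. The records below are entirely made up for illustration; they are not real COMPAS data.

```python
# Toy audit: compare false positive rates of a risk score by group.
# All records here are invented purely for illustration.

def false_positive_rate(records):
    """Share of people who did NOT reoffend but were labeled high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(flagged) / len(non_reoffenders)

records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": True},
]

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(group, round(false_positive_rate(subset), 2))  # A: 0.67, B: 0.33
```

A large gap between the two rates, as in this toy data, is the kind of disparity ProPublica reported: one group pays a much higher price for the model's mistakes.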

2. PredPol algorithm biased against minorities

PredPol stands for Predictive Policing. It is an artificial intelligence algorithm that aims to predict where crime will occur in the future by analyzing crime data such as arrest counts and the number of police calls from specific locations. Police departments in the USA use this algorithm, with the primary aim of reducing human bias by handing the prediction task to artificial intelligence. However, researchers in the USA discovered that PredPol is itself biased: it repeatedly sent police officers to particular minority neighborhoods regardless of how much crime had actually been recorded in those areas.

3. Amazon's recruiting engine biased against women

Amazon's recruiting engine was an artificial intelligence algorithm created to screen resumes and decide which candidates should be called for interview and selection. The algorithm was developed to remove human bias from the job-application screening process, but it turned out to be biased against women when selecting resumes. When Amazon's research team examined the algorithm, they found that it systematically downgraded resumes from women candidates. Amazon subsequently discarded the algorithm and stopped using it to evaluate candidates.
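A resume screener trained on historically skewed hiring outcomes can end up assigning negative weight to words that correlate with a protected group rather than with merit. The sketch below illustrates the effect with hand-invented word weights; it is not Amazon's model, and the words and scores are purely hypothetical.

```python
# Hypothetical sketch: a linear resume scorer whose learned word
# weights (invented here) penalize a gendered term.

LEARNED_WEIGHTS = {
    "engineer": 2.0,
    "python": 1.5,
    "captain": 0.5,
    "women's": -1.8,   # learned from skewed past outcomes, not merit
}

def score_resume(text):
    words = text.lower().split()
    return sum(LEARNED_WEIGHTS.get(w, 0.0) for w in words)

a = score_resume("software engineer python chess captain")
b = score_resume("software engineer python women's chess captain")
print(a, b)  # 4.0 2.2 -- same qualifications, lower score for resume b
```

The two resumes list identical qualifications; the only difference is one word signaling the candidate's gender, yet the second resume scores lower. This is the pattern reporters described in Amazon's case.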

4. Google Photos algorithm biased against Black people

Google Photos is an artificial intelligence feature that automatically groups photos into sections according to what is shown in each picture. The algorithm is based on a Convolutional Neural Network (CNN), which tags photos using image recognition. But the algorithm was found to label a group of Black people as gorillas. Google apologized for the mistake and promised it would not be repeated, yet the image-labeling process is still not perfectly accurate.
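One contributing factor in errors like this is underrepresentation: if a group barely appears in the training data, the model's notion of that group is noisy and typical members get misclassified. The toy classifier below uses one-dimensional features and invented numbers purely to make that effect visible; a real image model is vastly larger, but the data problem is the same.

```python
# Toy nearest-centroid classifier (all numbers invented) showing how
# scarce, unrepresentative training data hurts one group.

def centroid(xs):
    return sum(xs) / len(xs)

# Plentiful data for group A, scarce and atypical data for group B.
train_a = [0.0, 0.1, -0.1, 0.05, -0.05]   # centroid near 0.0
train_b = [1.9, 2.1]                       # true center is 1.0, but the
                                           # only two samples are outliers
c_a, c_b = centroid(train_a), centroid(train_b)

def classify(x):
    return "A" if abs(x - c_a) < abs(x - c_b) else "B"

# A typical group-B example (near its true center of 1.0) is
# misclassified, because the model never saw typical B data.
print(classify(0.9))  # prints "A"
```

The fix is better data, not just a better architecture: once training examples cover the group's typical members, the learned centroid moves to the right place.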

5. IDEMIA's facial recognition algorithm biased against Black women

IDEMIA is a company that builds facial recognition systems for police departments in the USA. When the National Institute of Standards and Technology (NIST) tested IDEMIA's facial recognition algorithm, it found that the algorithm repeatedly flagged Black women as matches more often than white women. The algorithm did not perform equally well on all faces: the false match rate for Black women was considerably higher than for white women. IDEMIA responded that the algorithm NIST tested had not been released commercially and that its algorithms are continuing to improve.