Forward Chaining in AI

Forward chaining is a reasoning process that works forward from the available data. Starting with the known facts, it applies inference rules to derive new facts until the goal is reached. In other words, forward chaining is data-driven: from the available data, it keeps expanding the set of known facts until a decision (the goal) is reached.

In artificial intelligence, forward chaining is used in two settings:

  • Forward Chaining in Propositional Logic
  • Forward Chaining in Predicate Logic (FOPL)

We will discuss each in turn.

Forward Chaining in Propositional Logic

In propositional logic, forward chaining starts from the facts known to be true in the knowledge base. Whenever all the premises of an implication are known, its conclusion is added to the set of known facts.

The forward chaining algorithm for propositional logic is:

 function PL-FC-ENTAILS?(KB, q) returns true or false
   inputs: KB, the knowledge base, a set of propositional definite clauses
           q, the query, a proposition symbol
   count ← a table, where count[c] is the number of symbols in c's premise
   inferred ← a table, where inferred[s] is initially false for all symbols
   agenda ← a queue of symbols, initially symbols known to be true in KB

   while agenda is not empty do
     p ← POP(agenda)
     if p = q then return true
     if inferred[p] = false then
       inferred[p] ← true
       for each clause c in KB where p is in c.PREMISE do
         decrement count[c]
         if count[c] = 0 then add c.CONCLUSION to agenda
   return false
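The pseudocode above translates almost line-for-line into Python. The sketch below assumes each definite clause is represented as a (premises, conclusion) pair, with known facts encoded as clauses with empty premises; the representation and the symbol names in the example are illustrative, not from the text:

```python
from collections import deque

def pl_fc_entails(kb, q):
    """PL-FC-ENTAILS?: does the definite-clause KB entail symbol q?"""
    # count[i]: number of premises of clause i not yet known to be true
    count = {i: len(prem) for i, (prem, _) in enumerate(kb)}
    inferred = set()                                          # symbols already processed
    agenda = deque(concl for prem, concl in kb if not prem)   # symbols known true in KB

    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p not in inferred:
            inferred.add(p)
            for i, (prem, concl) in enumerate(kb):
                if p in prem:
                    count[i] -= 1
                    if count[i] == 0:          # all premises known: fire the clause
                        agenda.append(concl)
    return False

# Example KB: facts A, B; rules A ∧ B ⇒ C and C ⇒ D.
kb = [((), "A"), ((), "B"), (("A", "B"), "C"), (("C",), "D")]
print(pl_fc_entails(kb, "D"))  # True
print(pl_fc_entails(kb, "E"))  # False
```

Each symbol is popped from the agenda at most once, so the loop runs in time linear in the size of the knowledge base.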

Conclusions drawn from the above algorithm:

  • Forward chaining is sound: every inference is an application of Modus Ponens.
  • Forward chaining is complete: every entailed atomic sentence will be derived.

Let’s see an example:

  1. If D barks and D eats bone, then D is a dog.
  2. If V is cold and V is sweet, then V is ice-cream.
  3. If D is a dog, then D is black.
  4. If V is ice-cream, then it is Vanilla.

Use forward chaining with the given known facts to prove that Tomy is black.

  • Tomy barks.
  • Tomy eats bone.

Solution: Given, Tomy barks and Tomy eats bone.

From (1), with D = Tomy: since Tomy barks and Tomy eats bone, Tomy is a dog.

From (3), with D = Tomy: since Tomy is a dog, Tomy is black.

Hence, it is proved that Tomy is black.

Note: An advantage of forward chaining is that new inferences can be drawn from the given facts even before a specific query is posed.
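The two-step derivation above can be checked mechanically with a naive fixpoint loop that repeatedly fires every rule whose premises are all known. The ground atoms below are an illustrative encoding of rules (1) and (3) instantiated for Tomy:

```python
# Rules as (premises, conclusion) pairs over ground atoms.
rules = [
    ({"Barks(Tomy)", "EatsBone(Tomy)"}, "Dog(Tomy)"),   # rule (1) instantiated
    ({"Dog(Tomy)"}, "Black(Tomy)"),                     # rule (3) instantiated
]
facts = {"Barks(Tomy)", "EatsBone(Tomy)"}               # the given known facts

# Fire every applicable rule until no new fact is added (a fixpoint).
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("Black(Tomy)" in facts)  # True
```

After the loop, `facts` contains every atom derivable from the given data, which is exactly the data-driven behaviour described above.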

Forward Chaining in Predicate Logic (FOPL)

Forward chaining in predicate logic differs from forward chaining in propositional logic. In FOPL, forward chaining is efficiently implemented on first-order definite clauses. A first-order definite clause is a disjunction of literals of which exactly one is positive.

For example, ¬Crow(x) ∨ ¬Thrust(x) ∨ Water(x), which is equivalent to the implication Crow(x) ∧ Thrust(x) ⇒ Water(x).

Therefore, a definite clause is either atomic or an implication whose antecedent is a conjunction of positive literals and whose consequent is a single positive literal.

The forward chaining algorithm used in FOPL is:

 function FOL-FC-ASK(KB, α) returns a substitution or false
   inputs: KB, the knowledge base, a set of first-order definite clauses
           α, the query, an atomic sentence
   local variables: new, the new sentences inferred on each iteration

   repeat until new is empty
     new ← {}
     for each rule in KB do
       (p1 ∧ … ∧ pn ⇒ q) ← STANDARDIZE-VARIABLES(rule)
       for each θ such that SUBST(θ, p1 ∧ … ∧ pn) = SUBST(θ, p1′ ∧ … ∧ pn′)
                   for some p1′, …, pn′ in KB
         q′ ← SUBST(θ, q)
         if q′ does not unify with some sentence already in KB or new then
           add q′ to new
           φ ← UNIFY(q′, α)
           if φ is not fail then return φ
     add new to KB
   return false
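The core idea of FOL-FC-ASK can be sketched in Python using simple pattern matching of rule premises against ground facts in place of full unification. Atoms are (predicate, arguments) tuples, variables start with "?", and the kinship predicates in the demo are illustrative, not from the text:

```python
def substitute(atom, theta):
    """Apply substitution theta to an atom's arguments."""
    pred, args = atom
    return (pred, tuple(theta.get(a, a) for a in args))

def match(pattern, fact, theta):
    """Extend theta so pattern matches the ground fact, or return None."""
    if pattern[0] != fact[0] or len(pattern[1]) != len(fact[1]):
        return None
    theta = dict(theta)
    for p, f in zip(pattern[1], fact[1]):
        if p.startswith("?"):
            if theta.setdefault(p, f) != f:   # variable already bound differently
                return None
        elif p != f:                          # constants must agree exactly
            return None
    return theta

def fol_fc(facts, rules):
    """Naive first-order forward chaining to a fixpoint over ground facts."""
    facts = set(facts)
    while True:
        new = set()
        for premises, conclusion in rules:
            thetas = [{}]                     # all substitutions satisfying the premises so far
            for prem in premises:
                thetas = [t2 for t in thetas for f in facts
                          if (t2 := match(prem, f, t)) is not None]
            for theta in thetas:
                q = substitute(conclusion, theta)
                if q not in facts:
                    new.add(q)
        if not new:                           # nothing inferred on this iteration: stop
            return facts
        facts |= new

facts = {("Parent", ("Pam", "Bob")), ("Parent", ("Bob", "Ann"))}
rules = [([("Parent", ("?x", "?y")), ("Parent", ("?y", "?z"))],
          ("Grandparent", ("?x", "?z")))]
print(("Grandparent", ("Pam", "Ann")) in fol_fc(facts, rules))  # True
```

Unlike the full algorithm, this sketch saturates the knowledge base rather than stopping as soon as the query unifies with an inferred sentence, but the rule-firing step is the same.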

Conclusions drawn:

  • Forward chaining in FOPL is sound: each inference is an application of Generalized Modus Ponens.
  • It is complete for definite-clause knowledge bases: every entailed query is answered.

Let’s see an example of Forward Chaining in FOPL

Consider the below axioms:

  1. Gita loves all types of clothes.
  2. Suits are clothes.
  3. Jackets are clothes.
  4. Anything worn by someone who isn't bad is clothes.
  5. Sita wears a skirt and is good (i.e., not bad).
  6. Renu wears anything Sita wears.

Apply forward chaining to prove that Gita loves the skirt.

Solution: Convert the given axioms into FOPL:

  1. ∀x: Clothes(x) ⇒ loves(Gita, x)
  2. ∀x: Suits(x) ⇒ Clothes(x)
  3. ∀x: Jackets(x) ⇒ Clothes(x)
  4. ∀x ∀y: wears(x, y) ∧ ¬bad(x) ⇒ Clothes(y)
  5. wears(Sita, skirt) ∧ ¬bad(Sita)
  6. ∀x: wears(Sita, x) ⇒ wears(Renu, x)

To prove: Gita loves the skirt.

FOPL: loves(Gita, skirt).

Applying forward chaining:

  • From (5): wears(Sita, skirt) and ¬bad(Sita).
  • From (4), with x = Sita and y = skirt: Clothes(skirt).
  • From (1), with x = skirt: loves(Gita, skirt).
  • From (6), with x = skirt: wears(Renu, skirt) also follows.

Thus, it is proved that Gita loves the skirt.