Dynamic Bayesian Networks

A Dynamic Bayesian Network (DBN) is a temporal network model used to relate variables to each other across adjacent time steps. Each slice of a DBN can have any number of state variables Xt and evidence variables Et. A DBN is a type of Bayesian network. Dynamic Bayesian Networks were developed by Paul Dagum at Stanford University in the early 1990s.

How is DBN different from HMM?

A Hidden Markov Model (HMM) can be represented as a Dynamic Bayesian Network with a single state variable and a single evidence variable. Conversely, a discrete-variable DBN can be converted into an HMM by combining all of its state variables into one composite state variable. The difference between the two is that, by decomposing the state of a complex system into its constituent variables, a DBN takes advantage of sparseness in the temporal probability model. Thus, the relationship between DBNs and HMMs is analogous to the relationship between ordinary Bayesian networks and fully tabulated joint distributions.

Drawbacks of Hidden Markov Model

  • An HMM requires much more space, because its single state variable must enumerate every combination of the underlying state values.
  • The transition matrix is huge, which makes the Markov model expensive to store and update.
  • HMMs cannot be used to solve large problems, since it is not feasible to learn so many parameters.
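To make the space problem concrete: if a system's state is described by 20 boolean variables, the equivalent HMM has a single state variable with 2^20 (roughly one million) values, so its transition matrix has about 2^40 (roughly 10^12) entries. A DBN over the same 20 variables, where each variable has, say, three parents in the preceding slice, needs only about 20 × 2^3 = 160 conditional probabilities.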

Building DBN

The following three kinds of information need to be specified when building a Dynamic Bayesian Network:

  1. The prior distribution over the state variables P(X0)
  2. The transition model P(Xt+1|Xt)
  3. The sensor model P(Et|Xt)

In addition to the transition and sensor models themselves, we must specify the topology of the connections between successive slices and between the state and evidence variables. Because the two models are assumed to be stationary (the same for every time step t), it is sufficient to specify them for the first slice.
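As a concrete illustration, here is a minimal sketch of these three pieces in Python, using the classic two-state umbrella world (state variable Rain, evidence variable Umbrella). The tables are represented as plain dictionaries, and the probability values are the usual illustrative ones:

# The three pieces that specify a DBN, for the umbrella world
# (state: Rain_t, evidence: Umbrella_t).

# 1. Prior distribution over the state variable, P(X0)
prior = {True: 0.5, False: 0.5}                 # P(Rain_0)

# 2. Transition model, P(X_{t+1} | X_t) -- stationary, so one table suffices
transition = {
    True:  {True: 0.7, False: 0.3},             # given Rain_t = true
    False: {True: 0.3, False: 0.7},             # given Rain_t = false
}

# 3. Sensor model, P(E_t | X_t) -- also stationary
sensor = {
    True:  {True: 0.9, False: 0.1},             # given Rain_t = true
    False: {True: 0.2, False: 0.8},             # given Rain_t = false
}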

Inference in DBN

There are two types of inference in Dynamic Bayesian Networks:

  • Exact Inference in DBN
  • Approximate inference in DBN

Exact Inference in DBN

A DBN can be turned into a full Bayesian network by replicating slices until the network is large enough to accommodate the observations. This technique is known as unrolling. However, a naive application of unrolling is not efficient, because inference time grows as new observations arrive. Instead, we can use an incremental approach that keeps only two slices in memory at a time.
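The incremental approach amounts to the standard forward filtering update: predict the next state through the transition model, then weight by the sensor model and normalize. A minimal sketch, reusing the umbrella-world tables defined above:

def forward_step(belief, evidence, transition, sensor):
    """One exact filtering update: P(X_{t+1} | e_{1:t+1}) from P(X_t | e_{1:t})."""
    # Prediction: sum out the previous state through the transition model.
    predicted = {x1: sum(transition[x0][x1] * belief[x0] for x0 in belief)
                 for x1 in (True, False)}
    # Update: weight each state by the sensor model, then normalize.
    unnorm = {x1: sensor[x1][evidence] * p for x1, p in predicted.items()}
    z = sum(unnorm.values())
    return {x1: p / z for x1, p in unnorm.items()}

belief = dict(prior)                   # start from P(X0)
for e in [True, True, False]:          # a hypothetical evidence sequence
    belief = forward_step(belief, e, transition, sensor)

Only the current belief state and the next slice are ever in memory, so the cost per update is constant regardless of how many observations have been processed.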

Approximate Inference in DBN

Alongside the exact inference methods, approximate inference methods can be used as well. Approximate methods such as likelihood weighting and Markov chain Monte Carlo are the most easily adapted to the DBN context. Likelihood weighting works by sampling the nonevidence nodes in topological order, weighting each sample by the likelihood it accords to the observed evidence. The method can be applied directly to an unrolled DBN, but this leads to growing time and space requirements, because the standard algorithm runs each sample all the way through the network as new observations arrive. A better approach is to run all N samples together through the DBN, one slice at a time. We therefore use two innovations that yield a practical method:

First, use the samples themselves as an approximate representation of the current state distribution.

Second, focus the set of samples on high-probability regions of the state space.

To do both, we use a family of algorithms known as particle filtering. The algorithm works as follows:

  1. A population of N initial-state samples is created by sampling from the prior distribution P(X0).
  2. The following update cycle is then repeated for each time step:
  3. Each sample is propagated forward by sampling the next state value Xt+1 given the current value Xt, using the transition model.
  4. Each sample is weighted by the likelihood it assigns to the new evidence.
  5. The population is resampled to generate a new population of N samples, where each new sample is selected with probability proportional to its weight.

Let's look at the particle filtering algorithm in pseudocode:

function PARTICLE-FILTER(e, N, dbn) returns a set of samples for the next time step
  inputs: e, the new incoming evidence
          N, the number of samples to be maintained
          dbn, a Dynamic Bayesian Network with prior P(X0), transition model P(X1|X0), and sensor model P(E1|X1)
  persistent: S, a vector of samples of size N, initially generated from P(X0)
  local variables: W, a vector of weights of size N

  for i = 1 to N do
      S[i] ← sample from P(X1 | X0 = S[i])          /* step 1: propagate */
      W[i] ← P(e | X1 = S[i])                       /* step 2: weight */
  S ← WEIGHTED-SAMPLE-WITH-REPLACEMENT(N, S, W)     /* step 3: resample */
  return S
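For concreteness, here is a runnable Python rendering of this pseudocode, again on the umbrella world. The tables are the same illustrative ones used earlier, and random.choices performs the weighted-sample-with-replacement step:

import random

# Umbrella-world tables (illustrative values, as in the earlier sketch).
prior      = {True: 0.5, False: 0.5}
transition = {True: {True: 0.7, False: 0.3}, False: {True: 0.3, False: 0.7}}
sensor     = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}

def particle_filter(e, S):
    """One particle-filtering update for a boolean state variable."""
    # Step 1: propagate each sample forward through the transition model.
    S = [random.random() < transition[x][True] for x in S]
    # Step 2: weight each sample by the likelihood of the new evidence e.
    W = [sensor[x][e] for x in S]
    # Step 3: resample N particles with probability proportional to weight.
    return random.choices(S, weights=W, k=len(S))

N = 10_000
S = [random.random() < prior[True] for _ in range(N)]   # sample from P(X0)
for e in [True, True, False]:                           # hypothetical evidence
    S = particle_filter(e, S)
print("Estimated P(Rain | e_1:3) =", sum(S) / N)

The fraction of particles in a given state estimates that state's posterior probability, and each update touches every particle exactly once.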

Thus, particle filtering is both consistent and efficient: the algorithm maintains a good approximation to the true posterior while using only a constant number of samples, and hence constant time and space per update.