This is a compilation of the projects and model implementations I developed as part of the MSAI program at Northwestern University.
This was an assignment in the class MSAI 348 'Intro to AI' taught by Professor Jason (Willie) Wilson
This is my implementation of a knowledge base that holds facts and rules. When new facts or rules are added to the knowledge base with `student_code.kb_add` in the form of Horn clauses, forward chaining is applied to infer new facts and rules. The knowledge base also handles retraction of facts and rules with `student_code.kb_retract`. Truth maintenance is employed to keep the knowledge base up to date and to ensure all inferred facts and rules remain supported. The knowledge base can also be queried with `student_code.kb_ask` to check whether facts or rules exist in the knowledge base.
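The add/retract/ask cycle can be sketched as follows. This is a toy illustration only, not the assignment code: the real `student_code.py` supports variables and unification, while this version uses ground string facts and a crude rebuild-everything form of truth maintenance.

```python
# Minimal forward-chaining knowledge base sketch (illustrative names/API).
class SimpleKB:
    def __init__(self):
        self.asserted = set()   # facts added directly
        self.rules = []         # (frozenset of premise facts, conclusion)
        self.facts = set()      # asserted + inferred

    def kb_add(self, item):
        if isinstance(item, tuple):            # rule: (premises, conclusion)
            self.rules.append((frozenset(item[0]), item[1]))
        else:                                  # ground fact
            self.asserted.add(item)
        self._rebuild()

    def kb_retract(self, fact):
        # Crude truth maintenance: drop the assertion, then re-infer
        # everything so conclusions that lost their support disappear.
        self.asserted.discard(fact)
        self._rebuild()

    def kb_ask(self, fact):
        return fact in self.facts

    def _rebuild(self):
        self.facts = set(self.asserted)
        changed = True
        while changed:                         # forward chain to a fixpoint
            changed = False
            for premises, conclusion in self.rules:
                if premises <= self.facts and conclusion not in self.facts:
                    self.facts.add(conclusion)
                    changed = True

kb = SimpleKB()
kb.kb_add((["wet", "freezing"], "icy"))  # rule: wet AND freezing -> icy
kb.kb_add("wet")
kb.kb_add("freezing")
print(kb.kb_ask("icy"))      # True: inferred by forward chaining
kb.kb_retract("freezing")
print(kb.kb_ask("icy"))      # False: its support was retracted
```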
This was an assignment in the class MSAI 348 'Intro to AI' taught by Professor Jason (Willie) Wilson
This is my implementation of A* search for pathfinding. The example used for this code finds the fastest path around Northwestern's campus, given travel times and using direct Euclidean distance measurements as the heuristic. `student_code.a_star_search` can be called with the start and end locations to return the optimal path.
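The core of A* looks roughly like this. The function signature, graph, and location names below are illustrative, not the actual assignment API, and the zero heuristic used in the demo is a trivially admissible stand-in for the real straight-line distances.

```python
import heapq

def a_star_search(start, goal, neighbors, heuristic):
    """Generic A*: neighbors(n) yields (next_node, edge_cost) pairs;
    heuristic(n) is an admissible estimate of the remaining cost."""
    frontier = [(heuristic(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, cost in neighbors(node):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):   # found a cheaper route
                best_g[nxt] = new_g
                heapq.heappush(frontier,
                               (new_g + heuristic(nxt), new_g, nxt, path + [nxt]))
    return None  # no path exists

# Toy graph with made-up travel times between hypothetical campus stops.
graph = {"Tech": [("Mudd", 2), ("Norris", 6)],
         "Mudd": [("Norris", 1)],
         "Norris": []}
path = a_star_search("Tech", "Norris", lambda n: graph[n], lambda n: 0)
print(path)  # ['Tech', 'Mudd', 'Norris']
```

With a zero heuristic this degenerates to Dijkstra's algorithm; a Euclidean-distance heuristic prunes the frontier toward the goal without sacrificing optimality.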
This was an assignment in the class MSAI 348 'Intro to AI' taught by Professor Jason (Willie) Wilson
This is my implementation of Minimax adversarial search to play the game Konane. The program is built with varying levels of AI to play against: one version uses only the minimax algorithm to choose the optimal move, and another builds on the minimax algorithm with alpha-beta pruning to increase efficiency. The game can be played by running
python main.py $P1 $P2
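The pruned variant follows the standard alpha-beta pattern sketched below. This is a generic, game-agnostic sketch with hypothetical parameter names; the assignment version is specialized to Konane boards and its own move generator.

```python
def minimax(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Depth-limited minimax with alpha-beta pruning.
    moves(state, maximizing) -> legal moves; apply_move -> successor state;
    evaluate -> static score from the maximizing player's perspective."""
    legal = moves(state, maximizing)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_move = None
    if maximizing:
        value = float("-inf")
        for m in legal:
            score, _ = minimax(apply_move(state, m), depth - 1,
                               alpha, beta, False, moves, apply_move, evaluate)
            if score > value:
                value, best_move = score, m
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizer will avoid this branch
    else:
        value = float("inf")
        for m in legal:
            score, _ = minimax(apply_move(state, m), depth - 1,
                               alpha, beta, True, moves, apply_move, evaluate)
            if score < value:
                value, best_move = score, m
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff: the maximizer will avoid this branch
    return value, best_move

# Tiny hand-built game tree: max picks a branch, min picks a leaf.
tree = {"root": ["L", "R"], "L": ["L1", "L2"], "R": ["R1", "R2"]}
leaf_value = {"L1": 3, "L2": 5, "R1": 2, "R2": 9}
value, move = minimax("root", 2, float("-inf"), float("inf"), True,
                      lambda s, _max: tree.get(s, []),
                      lambda s, m: m,
                      lambda s: leaf_value.get(s, 0))
print(value, move)  # 3 L  (min would hold R to 2, so max prefers L)
```

On the R branch the search cuts off after seeing R1 = 2, since the maximizer already has 3 guaranteed; that cutoff is exactly the efficiency gain over plain minimax.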
This was an assignment in the class MSAI 348 'Intro to AI' taught by Professor Jason (Willie) Wilson
Included are domain and problem files I created for use with the online .pddl editor here to simulate robot pathing within Amazon warehouses. The PDDL domain includes actions {} with the necessary preconditions and effects that can be carried out to fulfill the goal in the chosen problem file.
This was an assignment in the class MSAI 348 'Intro to AI' taught by Professor Jason (Willie) Wilson
This is my implementation of the infamous Earthquake/Burglary Bayesian network: given the probabilities for each individual event, the chain rule is propagated through the network to calculate conditional probabilities P(A|B).
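Inference by enumeration over that network can be sketched as below. The CPT values here are the standard textbook numbers for this network (Russell & Norvig's alarm example), used purely as an assumption; the assignment's own tables may differ.

```python
from itertools import product

# Burglary/Earthquake alarm network, textbook CPT values (assumed).
P_B, P_E = 0.001, 0.002
P_A = {(True, True): 0.95, (True, False): 0.94,    # P(Alarm | B, E)
       (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.90, False: 0.05}                    # P(JohnCalls | A)
P_M = {True: 0.70, False: 0.01}                    # P(MaryCalls | A)

def joint(b, e, a, j, m):
    """Chain rule: P(b,e,a,j,m) = P(b)P(e)P(a|b,e)P(j|a)P(m|a)."""
    p = (P_B if b else 1 - P_B) * (P_E if e else 1 - P_E)
    p *= P_A[(b, e)] if a else 1 - P_A[(b, e)]
    p *= P_J[a] if j else 1 - P_J[a]
    p *= P_M[a] if m else 1 - P_M[a]
    return p

def prob_burglary_given(j, m):
    """P(Burglary | JohnCalls=j, MaryCalls=m) by summing out E and A."""
    num = sum(joint(True, e, a, j, m) for e, a in product([True, False], repeat=2))
    den = sum(joint(b, e, a, j, m) for b, e, a in product([True, False], repeat=3))
    return num / den

print(round(prob_burglary_given(True, True), 4))  # 0.2842
```

Even with both neighbors calling, the posterior stays low because the prior P(Burglary) is tiny, which is the classic takeaway from this network.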
This was an assignment in the class MSAI 348 'Intro to AI' taught by Professor Jason (Willie) Wilson
This is my Naive Bayes Classifier (NBC) implementation used to classify movie reviews as positive or negative. The implementation uses add-one smoothing, removal of stop words, removal of numeric and symbol characters, and lowercasing of all text. My model reaches F-scores of 0.934 and 0.755 for the positive and negative classes, respectively, after 10-fold cross-validation on the data set.
`ID3.py` is my implementation of the ID3 decision tree algorithm, using information-gain calculations to determine branches. Pruning is implemented to combat over-fitting during training; nodes are only pruned if the training accuracy at that node does not decrease after pruning.
A decision tree can be created by calling `ID3.ID3(data, default)`, where `data` is an array of examples, each a dictionary of attribute:value pairs. The target class variable is a special attribute named "Class", any missing attributes are denoted with a value of "?", and `default` is the default value. The tree object is returned.
The tree can then be tested by calling `ID3.test(node, examples)`, where `node` is the trained tree object and `examples` is a test set of examples. This returns the accuracy, i.e. the percentage of the test set that was classified correctly by the trained tree.
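The information-gain calculation that drives ID3's branching can be sketched as follows, using the same example format (dictionaries with a special "Class" attribute). These helper names are illustrative, not the actual contents of `ID3.py`.

```python
import math
from collections import Counter

def entropy(examples):
    """Shannon entropy of the 'Class' attribute over a list of examples."""
    counts = Counter(e["Class"] for e in examples)
    total = len(examples)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def info_gain(examples, attribute):
    """Entropy reduction from splitting the examples on `attribute`.
    ID3 branches on whichever attribute maximizes this quantity."""
    total = len(examples)
    remainder = 0.0
    for value in {e[attribute] for e in examples}:
        subset = [e for e in examples if e[attribute] == value]
        remainder += len(subset) / total * entropy(subset)
    return entropy(examples) - remainder

# Toy data: 'outlook' perfectly predicts 'Class', so the split gains 1 full bit.
toy = [{"outlook": "sunny", "Class": "no"},
       {"outlook": "sunny", "Class": "no"},
       {"outlook": "rain", "Class": "yes"},
       {"outlook": "rain", "Class": "yes"}]
print(entropy(toy))               # 1.0
print(info_gain(toy, "outlook"))  # 1.0
```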