Cross Entropy #26

Open · wants to merge 17 commits into master
README.md: 19 changes (9 additions, 10 deletions)
@@ -1,14 +1,13 @@
# STMO-ZOO
# Cross-Entropy method
By Ceri-Anne Laureyssens

Welcome to the STMO zoo! This is your final assignment for the course Selected Topics in Mathematical Optimization. Your goal is to implement an optimization method in Julia and contribute it to this repository. To pass, you have to:
The notebook gives an introduction to cross-entropy and its use in the cross-entropy method. Cross-entropy is closely related to the Kullback-Leibler (KL) divergence, which measures how much one probability distribution f differs from another distribution g. The cross-entropy method is a Monte Carlo method for importance sampling and optimization: it fits a parameterized distribution g(·; θ) to f by minimizing the KL divergence between f and g, which is equivalent to choosing the θ that minimizes the cross-entropy.
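As a rough illustration of the optimization loop (not the notebook's implementation), the method alternates between sampling candidates from g(·; θ), keeping an elite subset with the best objective values, and refitting θ to those elites. A minimal sketch, assuming a one-dimensional Gaussian family with θ = (μ, σ); all names are illustrative:

```julia
# Minimal cross-entropy maximization sketch with a Gaussian proposal.
# ce_maximize, elite, and iters are illustrative names, not the notebook's API.
using Distributions, Statistics

function ce_maximize(f; μ=0.0, σ=10.0, n=100, elite=10, iters=50)
    for _ in 1:iters
        xs = rand(Normal(μ, σ), n)                       # sample candidates from g(·; θ)
        best = xs[sortperm(f.(xs), rev=true)[1:elite]]   # keep the elite samples
        μ, σ = mean(best), std(best) + 1e-8              # refit θ by maximum likelihood
    end
    return μ
end

ce_maximize(x -> -(x - 3)^2)  # converges near the maximizer x = 3
```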

- fork this repo and create a pull request;
- add a module to `src` with **at least one function**;
- add at least one unit test to the folder `test`;
- document all your functions and add a page to the documentation page;
- make a notebook in [Pluto](https://github.com/fonsp/Pluto.jl) and add it to `notebooks`;
- perform a small code review of two other students.
The notebook provides an implementation of the cross-entropy method for optimizing multivariate time series distributions. Suppose we have a time series `X = {x₁, ..., xₙ}`, where each `xᵢ` is a vector of dimension `m`. The `cross_entropy_method` function can handle two different scenarios (a construction sketch follows the list below):

Depending on the project you choose some of these individual assignments might be really minimalistic, with other parts larger. For example, if you want to develop an application, say solving the graph coloring problem with Tabu Search, you might have only a single function in the source code (e.g., generating an instance) but have a fairly large notebook with a tutorial. On the other hand, if you work on a method, e.g., implementing Bee Colony Optimization, you might have many functions in the source code, while your notebook is only a demonstration on the test functions.
1. The time series is sampled IID from a single distribution `p`: `xᵢ ~ p(x)`. In this case, the distribution is represented as a `Dict{Symbol, Tuple{Sampleable, Int64}}`. The dictionary contains `m` symbols, one for each variable in the series; the `Sampleable` object represents `p` and the integer is the length of the time series (`n`).
2. The time series is sampled from a different distribution at each timestep: `xᵢ ~ pᵢ(x)`. In this case, the distribution is represented as a `Dict{Symbol, Vector{Sampleable}}`, where each vector holds one distribution per timestep.
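For concreteness, here is a hypothetical setup for scenario 1. The dictionary layout follows the description above; the call to `cross_entropy_method` is left commented out because its exact signature and keyword arguments are not specified here and may differ in the notebook:

```julia
using Distributions

n = 10  # length of the time series
# One entry per variable in the series: (sampling distribution p, series length n).
dist = Dict{Symbol, Tuple{Sampleable, Int64}}(:x => (Normal(0.0, 1.0), n))

# A loss over a sampled trajectory s (assumed here to map :x to its sampled
# values); this signature is an assumption for illustration only.
loss(d, s) = sum(abs, s[:x] .- 3.0)

# best_dist = cross_entropy_method(loss, dist)  # hypothetical call; see the notebook
```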

[![Build Status](https://travis-ci.org/MichielStock/STMOZOO.svg?branch=master)](https://travis-ci.org/MichielStock/STMOZOO)[![Coverage Status](https://coveralls.io/repos/github/MichielStock/STMOZOO/badge.svg?branch=master)](https://coveralls.io/github/MichielStock/STMOZOO?branch=master)
To finish off, the notebook provides a fun little example applying the CE method to importance sampling.
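As a taste of that technique (a generic sketch, not the notebook's exact example), importance sampling estimates an expectation under f by reweighting samples drawn from a proposal g:

```julia
# Estimate the rare-event probability P(X > 4) for X ~ Normal(0, 1)
# by sampling from a proposal shifted toward the event region.
using Distributions, Statistics

f, g = Normal(0, 1), Normal(4, 1)
xs = rand(g, 100_000)
w = pdf.(f, xs) ./ pdf.(g, xs)       # importance weights f(x) / g(x)
estimate = mean(w .* (xs .> 4.0))    # ≈ ccdf(f, 4) ≈ 3.2e-5
```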

[![Build Status](https://travis-ci.org/MichielStock/STMOZOO.svg?branch=master)](https://travis-ci.org/MichielStock/STMOZOO)[![Coverage Status](https://coveralls.io/repos/github/MichielStock/STMOZOO/badge.svg?branch=master)](https://coveralls.io/github/MichielStock/STMOZOO?branch=master)