Example: hard core lattice gas #58

Merged · 3 commits · Aug 15, 2023
1 change: 1 addition & 0 deletions docs/Project.toml
@@ -1,5 +1,6 @@
[deps]
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
GenericTensorNetworks = "3521c873-ad32-4bb4-b63d-f4f178f42b49"
Literate = "98b081ad-f1c9-55d3-8b20-4c87d4299306"
LiveServer = "16fef848-5104-11e9-1b77-fb7a48bbb589"
TensorInference = "c2297e78-99bd-40ad-871d-f50e56b81012"
12 changes: 11 additions & 1 deletion docs/make.jl
@@ -2,17 +2,26 @@ using TensorInference
using TensorInference: OMEinsum
using TensorInference.OMEinsum: OMEinsumContractionOrders
using Documenter, Literate
using Pkg

# Literate Examples
const EXAMPLE_DIR = pkgdir(TensorInference, "examples")
const LITERATE_GENERATED_DIR = pkgdir(TensorInference, "docs", "src", "generated")
mkpath(LITERATE_GENERATED_DIR)
for each in readdir(EXAMPLE_DIR)
# setup directory
workdir = joinpath(LITERATE_GENERATED_DIR, each)
cp(joinpath(EXAMPLE_DIR, each), workdir; force=true)
# NOTE: for convenience, we use the `docs` environment
# enter env
# Pkg.activate(workdir)
# Pkg.instantiate()
# build
input_file = joinpath(workdir, "main.jl")
@info "building" input_file
Literate.markdown(input_file, workdir; execute=true)
# restore environment
# Pkg.activate(Pkg.PREV_ENV_PATH[])
end

const EXTRA_JL = ["performance.jl"]
@@ -42,7 +51,8 @@ makedocs(;
"Background" => "background.md",
"Examples" => [
"Overview" => "examples-overview.md",
"Asia network" => "generated/asia/main.md",
"Asia Network" => "generated/asia/main.md",
"Hard-core Lattice Gas" => "generated/hard-core-lattice-gas/main.md",
],
"UAI file formats" => "uai-file-formats.md",
"Performance tips" => "generated/performance.md",
94 changes: 94 additions & 0 deletions examples/hard-core-lattice-gas/main.jl
@@ -0,0 +1,94 @@
# # The hard core lattice gas
# ## Hard-core lattice gas problem
# Hard-core lattice gas refers to a model used in statistical physics to study the behavior of particles on a lattice, where the particles are subject to an exclusion principle known as the "hard-core" interaction, characterized by a blockade radius: the distance between two particles cannot be smaller than this radius.
#
# * Nath T, Rajesh R. Multiple phase transitions in extended hard-core lattice gas models in two dimensions[J]. Physical Review E, 2014, 90(1): 012120.
# * Fernandes H C M, Arenzon J J, Levin Y. Monte Carlo simulations of two-dimensional hard core lattice gases[J]. The Journal of chemical physics, 2007, 126(11).
#
# Let us define a $10 \times 10$ triangular lattice, with unit vectors
# ```math
# \begin{align}
# \vec a &= \left(\begin{matrix}1 \\ 0\end{matrix}\right)\\
# \vec b &= \left(\begin{matrix}\frac{1}{2} \\ \frac{\sqrt{3}}{2}\end{matrix}\right)
# \end{align}
# ```

a, b = (1, 0), (0.5, 0.5*sqrt(3))
Na, Nb = 10, 10
sites = [a .* i .+ b .* j for i=1:Na, j=1:Nb]

# There exist blockade interactions between hard-core particles.
# We connect two lattice sites within the blockade radius by an edge.
# The two ends of an edge cannot both be occupied by particles.
blockade_radius = 1.1
using GenericTensorNetworks: show_graph, unit_disk_graph
using GenericTensorNetworks.Graphs: edges, nv
graph = unit_disk_graph(vec(sites), blockade_radius)
show_graph(graph; locs=sites, texts=fill("", length(sites)))

# These constraints define an independent set problem characterized by the following energy-based model.
# Let $G = (V, E)$ be a graph, where $V$ is the set of vertices and $E$ is the set of edges. The energy model for the hard-core lattice gas problem is
# ```math
# E(\mathbf{n}) = -\sum_{i \in V}w_i n_i + \infty \sum_{(i, j) \in E} n_i n_j
# ```
# where $n_i \in \{0, 1\}$ is the number of particles at site $i$, and $w_i$ is the weight associated with it. For unweighted graphs, the weights are uniform.
# The solution space of the hard-core lattice gas is equivalent to that of an independent set problem. The independent set problem involves finding a set of vertices in a graph such that no two vertices in the set are adjacent (i.e., there is no edge connecting them).
# One can create a tensor-network-based model of an independent set problem with the package [`GenericTensorNetworks.jl`](https://github.com/QuEraComputing/GenericTensorNetworks.jl).
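# As a sanity check, the energy model above can also be evaluated by brute force.
# A minimal sketch using only Base Julia; `hardcore_energy` is a hypothetical helper
# written for illustration, not part of any package:

```julia
# Energy of a configuration `n` under E(n) = -Σ_i w_i n_i + ∞ Σ_{(i,j)∈E} n_i n_j,
# given an explicit edge list.
function hardcore_energy(edgelist::Vector{Tuple{Int,Int}}, n::Vector{Int}, w::Vector{Float64})
    # any edge with both ends occupied violates the blockade constraint
    any(((i, j),) -> n[i] == 1 && n[j] == 1, edgelist) && return Inf
    return -sum(w[i] * n[i] for i in eachindex(n))
end

# A 3-site chain 1—2—3: occupying sites 1 and 3 is allowed, sites 1 and 2 is not.
hardcore_energy([(1, 2), (2, 3)], [1, 0, 1], ones(3))  # -2.0
hardcore_energy([(1, 2), (2, 3)], [1, 1, 0], ones(3))  # Inf
```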
using GenericTensorNetworks
problem = IndependentSet(graph; optimizer=GreedyMethod());

# There have been many discussions of solution-space properties on the `GenericTensorNetworks` [documentation page](https://queracomputing.github.io/GenericTensorNetworks.jl/dev/generated/IndependentSet/).
# In this example, we show how to use `TensorInference` to perform probabilistic inference and understand the finite-temperature properties of this statistical physics model.
# We use [`TensorNetworkModel`](@ref) to convert a combinatorial optimization problem to a probabilistic model.
# Here, we let the inverse temperature be $\beta = 3$.

# ## Probabilistic modeling of correlation functions
using TensorInference
β = 3.0
pmodel = TensorNetworkModel(problem, β)

# The partition function of this statistical model can be computed with the [`probability`](@ref) function.
partition_func = probability(pmodel)

# The default return value is a log-rescaled tensor. Use indexing to get the real value.
partition_func[]

# The marginal probabilities can be computed with the [`marginals`](@ref) function, which measures how likely a site is occupied.
mars = marginals(pmodel)
show_graph(graph; locs=sites, vertex_colors=[(1-b, 1-b, 1-b) for b in getindex.(mars, 2)], texts=fill("", nv(graph)))
# We can see that the sites at the corners are more likely to be occupied.
# To obtain two-site correlations, one can set the variables to query marginal probabilities manually.
pmodel2 = TensorNetworkModel(problem, β; mars=[[e.src, e.dst] for e in edges(graph)])
mars = marginals(pmodel2);

# We show the probability that neither site of an edge is occupied.
show_graph(graph; locs=sites, edge_colors=[(b=mar[1, 1]; (1-b, 1-b, 1-b)) for mar in mars], texts=fill("", nv(graph)), edge_line_width=5)

# ## The most likely configuration
# The MAP and MMAP can be used to get the most likely configuration given evidence.
# The relevant function is [`most_probable_config`](@ref).
# If we fix the configuration of the vertex at one corner to one, we get the most probable configuration below.
pmodel3 = TensorNetworkModel(problem, β; evidence=Dict(1=>1))
mars = marginals(pmodel3)
logp, config = most_probable_config(pmodel3)

# The log probability is 102. Let us visualize the configuration.
show_graph(graph; locs=sites, vertex_colors=[(1-b, 1-b, 1-b) for b in config], texts=fill("", nv(graph)))
# The number of particles is
sum(config)

# Otherwise, we will get a suboptimal configuration.
pmodel3 = TensorNetworkModel(problem, β; evidence=Dict(1=>0))
logp2, config2 = most_probable_config(pmodel3)

# The log probability is 99, which is much smaller.
show_graph(graph; locs=sites, vertex_colors=[(1-b, 1-b, 1-b) for b in config2], texts=fill("", nv(graph)))
# The number of particles is
sum(config2)

# ## Sampling configurations
# One can use [`sample`](@ref) to generate samples from the hard-core lattice gas at finite temperature.
# The return value is a matrix whose columns correspond to different samples.
configs = sample(pmodel3, 1000)
sizes = sum(configs; dims=1)
[count(==(i), sizes) for i=0:34] # counting sizes
7 changes: 7 additions & 0 deletions src/cuda.jl
@@ -8,3 +8,10 @@

# NOTE: this interface should be in OMEinsum
match_arraytype(::Type{<:CuArray{T, N}}, target::AbstractArray{T, N}) where {T, N} = CuArray(target)

function keep_only!(x::CuArray{T}, j) where T
CUDA.@allowscalar hotvalue = x[j]
fill!(x, zero(T))
CUDA.@allowscalar x[j] = hotvalue
return x

end
2 changes: 0 additions & 2 deletions src/generictensornetworks.jl
@@ -1,7 +1,5 @@
using .GenericTensorNetworks: generate_tensors, GraphProblem, flavors, labels

export probabilistic_model

"""
$TYPEDSIGNATURES

12 changes: 10 additions & 2 deletions src/map.jl
@@ -3,7 +3,7 @@
########### Backward tropical tensor contraction ##############
# This part is copied from [`GenericTensorNetworks`](https://github.com/QuEraComputing/GenericTensorNetworks.jl).
function einsum_backward_rule(eins, xs::NTuple{M, AbstractArray{<:Tropical}} where {M}, y, size_dict, dy)
return backward_tropical(OMEinsum.getixs(eins), xs, OMEinsum.getiy(eins), y, dy, size_dict)
return backward_tropical!(OMEinsum.getixs(eins), xs, OMEinsum.getiy(eins), y, dy, size_dict)
end

"""
@@ -16,7 +16,7 @@ The backward rule for tropical einsum.
* `ymask` is the boolean mask for gradients,
* `size_dict` is a key-value map from tensor label to dimension size.
"""
function backward_tropical(ixs, @nospecialize(xs::Tuple), iy, @nospecialize(y), @nospecialize(ymask), size_dict)
function backward_tropical!(ixs, @nospecialize(xs::Tuple), iy, @nospecialize(y), @nospecialize(ymask), size_dict)
y .= masked_inv.(y, ymask)
masks = []
for i in eachindex(ixs)
@@ -28,10 +28,18 @@ function backward_tropical(ixs, @nospecialize(xs::Tuple), iy, @nospecialize(y),
# compute the mask: an entry of `A^{-1}` equal to the corresponding entry in `X` is masked to true.
j = argmax(xs[i] ./ inv.(A))
mask = onehot_like(A, j)
# zero out irrelevant indices; otherwise, the result may be unreliable.
keep_only!(xs[i], j)
push!(masks, mask)
end
return masks
end
function keep_only!(x::AbstractArray{T}, j) where T
hotvalue = x[j]
fill!(x, zero(T))
x[j] = hotvalue
return x
end
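The semantics of `keep_only!` can be illustrated in isolation. A self-contained sketch that copies the CPU method from this diff and applies it to a small vector:

```julia
# Copy of the CPU `keep_only!` from this diff: zero every entry of `x`
# except the `j`-th, keeping that one value in place.
function keep_only!(x::AbstractArray{T}, j) where T
    hotvalue = x[j]
    fill!(x, zero(T))
    x[j] = hotvalue
    return x
end

x = [3.0, 1.0, 4.0, 1.5]
keep_only!(x, 3)  # x becomes [0.0, 0.0, 4.0, 0.0]
```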
masked_inv(x, y) = iszero(y) ? zero(x) : inv(x)
function onehot_like(A::AbstractArray, j)
mask = zero(A)