Tr/clean up docs (#98)
* Fix admonitions in docs

* Fix invalid @ref's in docs
imreddyTeja authored Dec 6, 2024
1 parent 2efcedc commit 7f3a0cd
Showing 2 changed files with 25 additions and 23 deletions.
24 changes: 13 additions & 11 deletions docs/src/index.md
@@ -13,27 +13,28 @@ implemented, the [APIs](@ref) page collects all of them.
## `Device`s and `Context`s

The two most important objects in `ClimaComms.jl` are the [`Device`](@ref
-`AbstractDevice`) and the [`Context`](@ref `AbstractCommsContext`).
+ClimaComms.AbstractDevice) and the [`Context`](@ref ClimaComms.AbstractCommsContext).

A `Device` identifies a computing device, a piece of hardware that will be
executing some code. The `Device`s currently implemented are
-- [`CPUSingleThreaded`](@ref), for a CPU core with a single thread;
-- [`CUDADevice`](@ref), for a single CUDA-enabled GPU.
+- [`CPUSingleThreaded`](@ref ClimaComms.CPUSingleThreaded), for a CPU core with a single thread;
+- [`CUDADevice`](@ref ClimaComms.CUDADevice), for a single CUDA-enabled GPU.

-!!! warn [`CPUMultiThreaded`](@ref) is also available, but this device is not
+!!! warn
+    [`CPUMultiThreaded`](@ref ClimaComms.CPUMultiThreaded) is also available, but this device is not
actively used or developed.

-`Device`s are part of [`Context`](@ref `AbstractCommsContext`)s,
+`Device`s are part of [`Context`](@ref ClimaComms.AbstractCommsContext)s,
objects that contain the information required for multiple `Device`s to communicate.
Implemented `Context`s are
-- [`SingletonCommsContext`](@ref), when there is no parallelism;
-- [`MPICommsContext`](@ref), for a MPI-parallelized runs.
+- [`SingletonCommsContext`](@ref ClimaComms.SingletonCommsContext), when there is no parallelism;
+- [`MPICommsContext`](@ref ClimaComms.MPICommsContext), for MPI-parallelized runs.

To choose a device and a context, most `CliMA` packages use the
-[`device()`](@ref) and [`context()`](@ref) functions. These functions look at
+[`device()`](@ref ClimaComms.device()) and [`context()`](@ref ClimaComms.context()) functions. These functions look at
specific environment variables and set the `device` and `context` accordingly.
-By default, the [`CPUSingleThreaded`](@ref) device is chosen and the context is
-set to [`SingletonCommsContext`](@ref) unless `ClimaComms` detects being run in
+By default, the [`CPUSingleThreaded`](@ref ClimaComms.CPUSingleThreaded) device is chosen and the context is
+set to [`SingletonCommsContext`](@ref ClimaComms.SingletonCommsContext) unless `ClimaComms` detects being run in
a standard MPI launcher (such as `srun` or `mpiexec`).

For example, to run a simulation on a GPU, run `julia` as
@@ -43,7 +44,8 @@
```
export CLIMACOMMS_CONTEXT="SINGLETON"
# call/open julia as usual
```
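As a concrete sketch, selecting a device and context purely through the environment might look like this. The variable names follow the snippet above; the `"CUDA"` and `"MPI"` values and the launch command are assumptions, so check the package documentation for the exact spellings.

```shell
# Select the device and context before launching Julia.
# Values shown here ("CUDA", "MPI") are assumed, not taken from this page.
export CLIMACOMMS_DEVICE="CUDA"
export CLIMACOMMS_CONTEXT="MPI"
echo "device=$CLIMACOMMS_DEVICE context=$CLIMACOMMS_CONTEXT"
# then launch as usual, e.g.: mpiexec -n 4 julia --project simulation.jl
```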

-!!! note There might be other ways to control the device and context. Please,
+!!! note
+    There might be other ways to control the device and context. Please
refer to the documentation of the specific package to learn more.
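A minimal sketch of how a script might pick up the defaults explicitly, using only the `device()` and `context()` functions described above:

```julia
import ClimaComms

# Reads CLIMACOMMS_DEVICE; defaults to CPUSingleThreaded().
device = ClimaComms.device()

# Reads CLIMACOMMS_CONTEXT; defaults to a SingletonCommsContext
# unless a standard MPI launcher is detected.
context = ClimaComms.context()
```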

## Running with MPI/CUDA
24 changes: 12 additions & 12 deletions docs/src/internals.md
@@ -8,8 +8,8 @@ First, we will describe what `Device`s and `Context`s are.

`Device`s identify a specific type of computing hardware (e.g., a CPU/a NVidia
GPU, et cetera). The `Device`s implemented are
-- [`CPUSingleThreaded`](@ref), for a CPU core with a single thread;
-- [`CUDADevice`](@ref), for a single CUDA GPU.
+- [`CPUSingleThreaded`](@ref ClimaComms.CPUSingleThreaded), for a CPU core with a single thread;
+- [`CUDADevice`](@ref ClimaComms.CUDADevice), for a single CUDA GPU.

`Device`s in `ClimaComms` are
[singletons](https://docs.julialang.org/en/v1/manual/types/#man-singleton-types),
@@ -42,7 +42,7 @@ Low-level `CliMA` code often needs to implement different methods for different
this level of specialization is often not required at higher levels.

Higher-level code often interacts with `Device`s through `ClimaComms` functions
-such [`time`](@ref) or [`sync`](@ref). These functions implement device-agnostic
+such as [`time`](@ref ClimaComms.@time) or [`sync`](@ref ClimaComms.@sync). These functions implement device-agnostic
operations. For instance, the proper way to measure how long a given expression takes to run is
```julia
import ClimaComms: @time
# ... (the rest of this example is collapsed in the diff view)
```
@@ -59,15 +59,15 @@
For a complete list of such functions, consult the [APIs](@ref) page.
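Since the body of the example is collapsed above, here is a hedged sketch of the pattern, assuming the `@time device expr` form in which the macro times an expression in a device-aware way:

```julia
import ClimaComms
import ClimaComms: @time

device = ClimaComms.device()
x = rand(10_000)

# Device-aware timing: on a CPUSingleThreaded device this behaves like
# Base.@time; on a CUDADevice it also synchronizes the GPU so the
# reported time covers the actual kernel execution.
@time device sum(abs2, x)
```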
### `Context`s

A `Context` contains the information needed for multiple devices to communicate.
-For simulations with only one device, [`SingletonCommsContext`](@ref) simply
-contains an instance of an [`AbstractDevice`](@ref). For `MPI` simulations, the
+For simulations with only one device, [`SingletonCommsContext`](@ref ClimaComms.SingletonCommsContext) simply
+contains an instance of an [`AbstractDevice`](@ref ClimaComms.AbstractDevice). For `MPI` simulations, the
context contains the MPI communicator as well.

`Context`s specify the devices and the form of parallelism, so they are often passed
around in both low-level and higher-level code.

`ClimaComms` provides functions that are context-agnostic. For instance,
-[`reduce`](@ref) applies a given function to an array across difference
+[`reduce`](@ref ClimaComms.reduce) applies a given function to an array across different
processes and collects the result. Let us see an example
```julia
import ClimaComms
# ... (context and array setup collapsed in this diff view)
reduced_array = ClimaComms.reduce(context, my_array, +)
ClimaComms.iamroot(context) && @show reduced_array
```

-[`@import_required_backends`](@ref) is responsible for loading relevant
+[`@import_required_backends`](@ref ClimaComms.@import_required_backends) is responsible for loading relevant
libraries; for more information, refer to the [Backends and extensions](@ref)
section.

In this snippet, we obtained the default context from environment variables
-using the [`context`](@ref) function. As developers, we do not know whether this
+using the [`context`](@ref ClimaComms.context) function. As developers, we do not know whether this
code is being run on a single process or multiple, so we took the more generic
stance that the code _might_ be run on several processes. When several processes
are being used, the same code is being run by parallel Julia instances, each
@@ -103,9 +103,9 @@ relevant.

In this example, we used `mypid` to set up `my_array` in such a way that it
would be different on different processes. We set up the array with `ArrayType`,
-obtained with [`array_type`](@ref). This function provides the type to allocate
-the array on the correct device (CPU or GPU). Then, we applied [`reduce`](@ref)
-to sum them all. [`reduce`](@ref) collects the result to the _root_ process, the
+obtained with [`array_type`](@ref ClimaComms.array_type). This function provides the type to allocate
+the array on the correct device (CPU or GPU). Then, we applied [`reduce`](@ref ClimaComms.reduce)
+to sum them all. [`reduce`](@ref ClimaComms.reduce) collects the result to the _root_ process, the
one with `pid = 1`. For single-process runs, the only process is also a root
process.
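Putting the steps above together, a hedged reconstruction of the full snippet might read as follows. The setup lines are collapsed in the diff, so the `init` call, the `mypid` accessor, and the array construction are assumptions based on the surrounding prose, not the original code:

```julia
import ClimaComms
ClimaComms.@import_required_backends

context = ClimaComms.context()       # from the CLIMACOMMS_* environment variables
ClimaComms.init(context)             # initialize MPI when relevant (assumed call)

mypid = ClimaComms.mypid(context)    # process id; 1 on the root process (assumed accessor)
ArrayType = ClimaComms.array_type(ClimaComms.device(context))
my_array = ArrayType([mypid, mypid]) # differs across processes

reduced_array = ClimaComms.reduce(context, my_array, +)
ClimaComms.iamroot(context) && @show reduced_array
```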

@@ -130,7 +130,7 @@ package is required but not loaded and warn you about it.
Using non-trivial backends might require you to install `CUDA.jl` and/or
`MPI.jl` in your environment.

-> Note: When using [`context`](@ref) to select the context, it is safe to always
+> Note: When using [`context`](@ref ClimaComms.context) to select the context, it is safe to always
> add [`ClimaComms.@import_required_backends`](@ref) at the top of your scripts.
> *Do not add* [`ClimaComms.@import_required_backends`](@ref) to library code
> (i.e., in `src`) because the macro requires dependencies that should not be
