Merge branch 'gpuNewAPI_backend' into gpuNewAPI_fullSup
LuisAlfredoNu committed Sep 17, 2024
2 parents a734267 + a7c4b09 commit 22af3a0
Showing 33 changed files with 985 additions and 511 deletions.
28 changes: 22 additions & 6 deletions .github/CHANGELOG.md
Original file line number Diff line number Diff line change
@@ -2,8 +2,12 @@

### New features since last release

* Add shot measurement support to `lightning.tensor`.
[(#852)](https://github.com/PennyLaneAI/pennylane-lightning/pull/852)

* Build and upload Lightning-Tensor wheels (x86_64, AARCH64) to PyPI.
[(#862)](https://github.com/PennyLaneAI/pennylane-lightning/pull/862)
[(#905)](https://github.com/PennyLaneAI/pennylane-lightning/pull/905)

* Add `Projector` observable support via diagonalization to Lightning-GPU.
[(#894)](https://github.com/PennyLaneAI/pennylane-lightning/pull/894)
@@ -27,34 +31,46 @@

### Improvements

* Skip the compilation of Lightning simulators and development requirements to boost the build of public docs up to 5x.
[(#904)](https://github.com/PennyLaneAI/pennylane-lightning/pull/904)

* Build Lightning wheels in `Release` mode.
[(#903)](https://github.com/PennyLaneAI/pennylane-lightning/pull/903)

* Update Pybind11 to 2.13.5.
[(#901)](https://github.com/PennyLaneAI/pennylane-lightning/pull/901)

* Migrate wheels artifacts to v4.
[(#893)](https://github.com/PennyLaneAI/pennylane-lightning/pull/893)

* Prefer `tomlkit` over `toml` for building Lightning wheels, and choose `tomli` and `tomllib` over `toml` when installing the package.
[(#857)](https://github.com/PennyLaneAI/pennylane-lightning/pull/857)

* Update GitHub actions in response to a high-severity vulnerability.
[(#887)](https://github.com/PennyLaneAI/pennylane-lightning/pull/887)

* Optimize and simplify controlled kernels in Lightning-Qubit.
[(#882)](https://github.com/PennyLaneAI/pennylane-lightning/pull/882)

* Optimize gate cache recording for `lightning.tensor` C++ layer.
[(#879)](https://github.com/PennyLaneAI/pennylane-lightning/pull/879)

* Unify Lightning-Kokkos device and Lightning-Qubit device under a Lightning Base device.
[(#876)](https://github.com/PennyLaneAI/pennylane-lightning/pull/876)

* Smarter defaults for the `split_obs` argument in the serializer. The serializer splits linear combinations into chunks instead of all their terms.
[(#873)](https://github.com/PennyLaneAI/pennylane-lightning/pull/873/)


* LightningKokkos gains native support for the `PauliRot` gate.
[(#855)](https://github.com/PennyLaneAI/pennylane-lightning/pull/855)
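Of the improvements above, the `split_obs` change (#873) is the most algorithmic: linear combinations are serialized in a few balanced chunks rather than one piece per term. A minimal sketch of that chunking idea — `split_into_chunks` and its inputs are hypothetical names for illustration, not the serializer's actual implementation:

```python
def split_into_chunks(terms, num_chunks):
    """Split a linear combination's terms into at most num_chunks
    roughly equal contiguous chunks, rather than one piece per term."""
    k, m = divmod(len(terms), num_chunks)
    chunks, start = [], 0
    for i in range(num_chunks):
        size = k + (1 if i < m else 0)  # first m chunks get one extra term
        if size:
            chunks.append(terms[start:start + size])
        start += size
    return chunks

# Seven Pauli terms split across 3 chunks instead of 7 single-term pieces:
print(split_into_chunks(["Z0", "Z1", "X0X1", "Y0", "Z2", "X2", "Y1"], 3))
# → [['Z0', 'Z1', 'X0X1'], ['Y0', 'Z2'], ['X2', 'Y1']]
```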

### Documentation

### Bug fixes

* Bug fix for analytic `probs` in the `lightning.tensor` C++ layer.
[(#906)](https://github.com/PennyLaneAI/pennylane-lightning/pull/906)

### Contributors

This release contains contributions from (in alphabetical order):
2 changes: 1 addition & 1 deletion .github/workflows/docker_linux_x86_64.yml
@@ -47,7 +47,7 @@ jobs:
timeout-minutes: 180
name: docker::${{ matrix.os }}::${{ matrix.pl_backend }}::${{ inputs.lightning-version }}
runs-on:
-      group: 'Lightning Additional Runners'
+      group: 'PL Additional Runners'
steps:

- name: Checkout
3 changes: 2 additions & 1 deletion .github/workflows/wheel_linux_aarch64.yml
@@ -145,7 +145,8 @@ jobs:
cat /etc/yum.conf | sed "s/\[main\]/\[main\]\ntimeout=5/g" > /etc/yum.conf
python -m pip install ninja cmake~=3.27.0
-      CIBW_ENVIRONMENT: CMAKE_ARGS="-DENABLE_LAPACK=OFF"
+      CIBW_ENVIRONMENT: |
+        CMAKE_ARGS="-DCMAKE_BUILD_TYPE=Release"
CIBW_MANYLINUX_AARCH64_IMAGE: manylinux_2_28

8 changes: 4 additions & 4 deletions .github/workflows/wheel_linux_aarch64_cuda.yml
@@ -79,7 +79,7 @@ jobs:
PATH=/opt/rh/gcc-toolset-12/root/usr/bin:$PATH:/usr/local/cuda-${{ matrix.cuda_version }}/bin \
LD_LIBRARY_PATH=/opt/rh/gcc-toolset-12/root/usr/lib64:/opt/rh/gcc-toolset-12/root/usr/lib:/opt/rh/gcc-toolset-12/root/usr/lib64/dyninst:/opt/rh/gcc-toolset-12/root/usr/lib/dyninst:$LD_LIBRARY_PATH:/usr/local/cuda-${{ matrix.cuda_version }}/lib64 \
PKG_CONFIG_PATH=/opt/rh/gcc-toolset-12/root/usr/lib64/pkgconfig:$PKG_CONFIG_PATH \
-            CMAKE_ARGS="-DENABLE_LAPACK=OFF"
+            CMAKE_ARGS="-DCMAKE_BUILD_TYPE=Release"
CIBW_REPAIR_WHEEL_COMMAND_LINUX: "./bin/auditwheel repair -w {dest_dir} {wheel}"

@@ -120,6 +120,8 @@ jobs:
cuda_version: ["12"]
cibw_build: ${{ fromJson(needs.set_wheel_build_matrix.outputs.python_version) }}
runs-on: ubuntu-latest
+    permissions:
+      id-token: write
if: |
github.event_name == 'release' ||
github.ref == 'refs/heads/master'
@@ -130,9 +132,7 @@
name: ${{ runner.os }}-wheels-${{ matrix.pl_backend }}-${{ fromJson('{ "cp310-*":"py310","cp311-*":"py311","cp312-*":"py312" }')[matrix.cibw_build] }}-${{ matrix.arch }}-cu${{ matrix.cuda_version }}.zip
path: dist

-      - name: Upload wheels to PyPI
+      - name: Upload wheels to TestPyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
user: __token__
password: ${{ secrets.TEST_PYPI_LGPU_TOKEN }}
repository_url: https://test.pypi.org/legacy/
3 changes: 2 additions & 1 deletion .github/workflows/wheel_linux_x86_64.yml
@@ -164,7 +164,8 @@ jobs:
source /opt/rh/gcc-toolset-13/enable -y
PATH="/opt/rh/gcc-toolset-13/root/usr/bin:$PATH"
-      CIBW_ENVIRONMENT: PATH="/opt/rh/gcc-toolset-13/root/usr/bin:$PATH"
+      CIBW_ENVIRONMENT: |
+        PATH="/opt/rh/gcc-toolset-13/root/usr/bin:$PATH" CMAKE_ARGS="-DCMAKE_BUILD_TYPE=Release"
CIBW_BEFORE_TEST: |
python -m pip install -r requirements-tests.txt
8 changes: 4 additions & 4 deletions .github/workflows/wheel_linux_x86_64_cuda.yml
@@ -96,7 +96,7 @@ jobs:
PATH=/opt/rh/gcc-toolset-12/root/usr/bin:$PATH:/usr/local/cuda-${{ matrix.cuda_version }}/bin \
LD_LIBRARY_PATH=/opt/rh/gcc-toolset-12/root/usr/lib64:/opt/rh/gcc-toolset-12/root/usr/lib:/opt/rh/gcc-toolset-12/root/usr/lib64/dyninst:/opt/rh/gcc-toolset-12/root/usr/lib/dyninst:$LD_LIBRARY_PATH:/usr/local/cuda-${{ matrix.cuda_version }}/lib64 \
PKG_CONFIG_PATH=/opt/rh/gcc-toolset-12/root/usr/lib64/pkgconfig:$PKG_CONFIG_PATH \
-            CMAKE_ARGS="-DENABLE_LAPACK=OFF"
+            CMAKE_ARGS="-DCMAKE_BUILD_TYPE=Release"
CIBW_REPAIR_WHEEL_COMMAND_LINUX: "./bin/auditwheel repair -w {dest_dir} {wheel}"

@@ -138,6 +138,8 @@ jobs:
cuda_version: ["12"]
cibw_build: ${{ fromJson(needs.set_wheel_build_matrix.outputs.python_version) }}
runs-on: ubuntu-latest
+    permissions:
+      id-token: write
if: |
github.event_name == 'release' ||
github.ref == 'refs/heads/master'
@@ -148,9 +150,7 @@
name: ${{ runner.os }}-wheels-${{ matrix.pl_backend }}-${{ fromJson('{ "cp310-*":"py310","cp311-*":"py311","cp312-*":"py312" }')[matrix.cibw_build] }}-${{ matrix.arch }}-cu${{ matrix.cuda_version }}.zip
path: dist

-      - name: Upload wheels to PyPI
+      - name: Upload wheels to TestPyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
user: __token__
password: ${{ secrets.TEST_PYPI_LGPU_TOKEN }}
repository-url: https://test.pypi.org/legacy/
2 changes: 1 addition & 1 deletion .github/workflows/wheel_macos_arm64.yml
@@ -99,7 +99,7 @@ jobs:
python -m pip install pybind11 ninja cmake~=3.27.0 setuptools scipy
CIBW_ENVIRONMENT: |
-        CMAKE_ARGS="-DCMAKE_CXX_COMPILER_TARGET=arm64-apple-macos11 -DCMAKE_SYSTEM_NAME=Darwin -DCMAKE_SYSTEM_PROCESSOR=ARM64 -DENABLE_OPENMP=OFF"
+        CMAKE_ARGS="-DCMAKE_CXX_COMPILER_TARGET=arm64-apple-macos11 -DCMAKE_SYSTEM_NAME=Darwin -DCMAKE_SYSTEM_PROCESSOR=ARM64 -DENABLE_OPENMP=OFF -DCMAKE_BUILD_TYPE=Release"
CIBW_BEFORE_TEST: |
python -m pip install -r requirements-tests.txt
3 changes: 3 additions & 0 deletions .github/workflows/wheel_macos_x86_64.yml
@@ -147,6 +147,9 @@ jobs:
CIBW_BEFORE_BUILD: |
python -m pip install pybind11 ninja cmake~=3.27.0 setuptools scipy
CIBW_ENVIRONMENT: |
CMAKE_ARGS="-DCMAKE_BUILD_TYPE=Release"
PL_BACKEND: ${{ matrix.pl_backend }}

CIBW_BEFORE_TEST: |
10 changes: 5 additions & 5 deletions .github/workflows/wheel_win_x86_64.yml
@@ -55,7 +55,7 @@ jobs:
uses: actions/cache@v4
with:
path: D:\a\install_dir\${{ matrix.exec_model }}
-          key: ${{ matrix.os }}-kokkos${{ matrix.kokkos_version }}-${{ matrix.exec_model }}-RelWithDebInfo
+          key: ${{ matrix.os }}-kokkos${{ matrix.kokkos_version }}-${{ matrix.exec_model }}-Release

- name: Clone Kokkos libs
if: steps.kokkos-cache.outputs.cache-hit != 'true'
@@ -80,10 +80,10 @@
-DKokkos_ENABLE_DEPRECATION_WARNINGS=OFF `
-DCMAKE_CXX_STANDARD=20 `
-DCMAKE_POSITION_INDEPENDENT_CODE=ON `
-            -DCMAKE_BUILD_TYPE=RelWithDebInfo `
+            -DCMAKE_BUILD_TYPE=Release `
-T clangcl
-          cmake --build ./Build --config RelWithDebInfo --verbose
-          cmake --install ./Build --config RelWithDebInfo --verbose
+          cmake --build ./Build --config Release --verbose
+          cmake --install ./Build --config Release --verbose
win-wheels:
needs: [set_wheel_build_matrix, build_dependencies]
@@ -136,7 +136,7 @@ jobs:
python -m pip install pybind11 cmake~=3.27.0 build
CIBW_ENVIRONMENT: |
-        CMAKE_ARGS="-DENABLE_LAPACK=OFF"
+        CMAKE_ARGS="-DCMAKE_BUILD_TYPE=Release"
CIBW_MANYLINUX_X86_64_IMAGE: manylinux2014

25 changes: 5 additions & 20 deletions .readthedocs.yml
@@ -3,32 +3,17 @@ version: 2
sphinx:
configuration: doc/conf.py

python:
install:
- requirements: ci_build_requirements.txt
- requirements: doc/requirements.txt
- requirements: requirements-dev.txt
- method: pip
path: .

build:
os: ubuntu-22.04
tools:
python: "3.10"
apt_packages:
- cmake
- build-essential
- libopenblas-base
- libopenblas-dev
- graphviz
- wget
jobs:
pre_install:
- wget https://developer.download.nvidia.com/compute/cuda/12.3.2/local_installers/cuda_12.3.2_545.23.08_linux.run
- sh cuda_12.3.2_545.23.08_linux.run --silent --toolkit --toolkitpath=${READTHEDOCS_VIRTUALENV_PATH}/cuda-12.3 || cat /tmp/cuda-installer.log
- echo "setuptools~=66.0" >> ci_build_requirements.txt
post_install:
- rm -rf ./build && export PATH=${READTHEDOCS_VIRTUALENV_PATH}/cuda-12.3/bin${PATH:+:${PATH}} && export LD_LIBRARY_PATH=${READTHEDOCS_VIRTUALENV_PATH}/cuda-12.3/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} && PL_BACKEND="lightning_gpu" python scripts/configure_pyproject_toml.py && CMAKE_ARGS="-DPL_DISABLE_CUDA_SAFETY=1" python -m build
- rm -rf ./build && PL_BACKEND="lightning_kokkos" python scripts/configure_pyproject_toml.py && python -m build
- rm -rf ./build && export PATH=${READTHEDOCS_VIRTUALENV_PATH}/cuda-12.3/bin${PATH:+:${PATH}} && export LD_LIBRARY_PATH=${READTHEDOCS_VIRTUALENV_PATH}/cuda-12.3/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} && PL_BACKEND="lightning_tensor" python scripts/configure_pyproject_toml.py && CMAKE_ARGS="-DPL_DISABLE_CUDA_SAFETY=1" python -m build
- python -m pip install --exists-action=w --no-cache-dir -r doc/requirements.txt
- PL_BACKEND="lightning_qubit" python scripts/configure_pyproject_toml.py && SKIP_COMPILATION=True python -m build
- rm -rf ./build && PL_BACKEND="lightning_gpu" python scripts/configure_pyproject_toml.py && SKIP_COMPILATION=True python -m build
- rm -rf ./build && PL_BACKEND="lightning_kokkos" python scripts/configure_pyproject_toml.py && SKIP_COMPILATION=True python -m build
- rm -rf ./build && PL_BACKEND="lightning_tensor" python scripts/configure_pyproject_toml.py && SKIP_COMPILATION=True python -m build
- python -m pip install ./dist/*.whl
6 changes: 5 additions & 1 deletion Makefile
@@ -63,11 +63,15 @@ clean:
rm -rf pennylane_lightning/*_ops*
rm -rf *.egg-info

-.PHONY: python
+.PHONY: python python-skip-compile
python:
PL_BACKEND=$(PL_BACKEND) python scripts/configure_pyproject_toml.py
pip install -e . --config-settings editable_mode=compat -vv

python-skip-compile:
PL_BACKEND=$(PL_BACKEND) python scripts/configure_pyproject_toml.py
SKIP_COMPILATION=True pip install -e . --config-settings editable_mode=compat -vv

.PHONY: wheel
wheel:
PL_BACKEND=$(PL_BACKEND) python scripts/configure_pyproject_toml.py
3 changes: 3 additions & 0 deletions doc/conf.py
@@ -21,6 +21,7 @@
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath(""))
sys.path.insert(0, os.path.abspath("."))
sys.path.insert(0, os.path.abspath("_ext"))
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath("doc")), "doc"))

@@ -189,6 +190,8 @@ def __getattr__(cls, name):
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]

nbsphinx_execute = "never"

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = True
2 changes: 2 additions & 0 deletions doc/requirements.txt
@@ -1,3 +1,5 @@
tomlkit
build
breathe
docutils==0.16
exhale>=0.3.3
22 changes: 21 additions & 1 deletion pennylane_lightning/core/src/bindings/Bindings.hpp
@@ -746,7 +746,27 @@ void registerLightningTensorBackendAgnosticMeasurements(PyClass &pyclass) {
[](MeasurementsT &M, const std::shared_ptr<ObservableT> &ob) {
return M.var(*ob);
},
"Variance of an observable object.");
"Variance of an observable object.")
.def("generate_samples", [](MeasurementsT &M,
const std::vector<std::size_t> &wires,
const std::size_t num_shots) {
constexpr auto sz = sizeof(std::size_t);
const std::size_t num_wires = wires.size();
const std::size_t ndim = 2;
const std::vector<std::size_t> shape{num_shots, num_wires};
auto &&result = M.generate_samples(wires, num_shots);

const std::vector<std::size_t> strides{sz * num_wires, sz};
// return 2-D NumPy array
return py::array(py::buffer_info(
result.data(), /* data as contiguous array */
sz, /* size of one scalar */
py::format_descriptor<std::size_t>::format(), /* data type */
ndim, /* number of dimensions */
shape, /* shape of the matrix */
strides /* strides for each axis */
));
});
}
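The binding above hands the flat row-major result of `generate_samples` to Python as a `(num_shots, num_wires)` array via `py::buffer_info`, with strides `(itemsize * num_wires, itemsize)`. The same shape/stride arithmetic can be checked from the NumPy side — here `flat` is a stand-in for the C++ result vector, not the actual device API:

```python
import numpy as np

num_shots, num_wires = 5, 2
# Stand-in for the flat row-major sample buffer produced in C++:
flat = np.array([0, 1, 1, 0, 0, 0, 1, 1, 0, 1], dtype=np.uintp)

itemsize = flat.itemsize  # plays the role of sizeof(std::size_t)
# Wrap without copying, using the same shape and strides as the binding:
samples = np.lib.stride_tricks.as_strided(
    flat,
    shape=(num_shots, num_wires),
    strides=(itemsize * num_wires, itemsize),
)

assert samples.shape == (num_shots, num_wires)
assert samples[3].tolist() == [1, 1]  # fourth shot's bits on the two wires
```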

/**
@@ -47,4 +47,37 @@ auto generateBitPatterns(const std::vector<std::size_t> &qubitIndices,
}
return indices;
}

/**
* @brief Introduce quantum controls in indices generated by
* generateBitPatterns.
*
* @param indices Indices for the operation.
* @param num_qubits Number of qubits in register.
* @param controlled_wires Control wires.
* @param controlled_values Control values (false or true).
*/
void controlBitPatterns(std::vector<std::size_t> &indices,
const std::size_t num_qubits,
const std::vector<std::size_t> &controlled_wires,
const std::vector<bool> &controlled_values) {
constexpr std::size_t one{1U};
if (controlled_wires.empty()) {
return;
}
std::vector<std::size_t> controlled_values_i(controlled_values.size());
std::transform(controlled_values.begin(), controlled_values.end(),
controlled_values_i.begin(),
[](const bool v) { return static_cast<std::size_t>(v); });
std::for_each(
indices.begin(), indices.end(),
[num_qubits, &controlled_wires, &controlled_values_i](std::size_t &i) {
for (std::size_t k = 0; k < controlled_wires.size(); k++) {
const std::size_t rev_wire =
(num_qubits - 1) - controlled_wires[k];
const std::size_t value = controlled_values_i[k];
i = (i & ~(one << rev_wire)) | (value << rev_wire);
}
});
}
} // namespace Pennylane::LightningQubit::Gates
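The bit arithmetic in `controlBitPatterns` — clear the control bit, then set it to the control value — can be mirrored in a few lines of Python. This is an illustrative port of the loop above, not part of the library's API:

```python
def control_bit_patterns(indices, num_qubits, controlled_wires, controlled_values):
    """Pin each control bit of every index to its control value.

    As in the C++ version, wire w maps to bit position (num_qubits - 1 - w),
    and i = (i & ~(1 << rev_wire)) | (value << rev_wire) forces that bit.
    """
    if not controlled_wires:
        return list(indices)
    out = []
    for i in indices:
        for w, v in zip(controlled_wires, controlled_values):
            rev_wire = (num_qubits - 1) - w
            i = (i & ~(1 << rev_wire)) | (int(v) << rev_wire)
        out.append(i)
    return out

# 3 qubits, control on wire 0 held at |1>: bit 2 is forced high in each index.
print(control_bit_patterns([0b000, 0b011], 3, [0], [True]))  # → [4, 7]
```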
@@ -56,6 +56,19 @@ auto getIndicesAfterExclusion(const std::vector<std::size_t> &indicesToExclude,
auto generateBitPatterns(const std::vector<std::size_t> &qubitIndices,
std::size_t num_qubits) -> std::vector<std::size_t>;

/**
* @brief Introduce quantum controls in indices generated by
* generateBitPatterns.
*
* @param indices Indices for the operation.
* @param num_qubits Number of qubits in register.
* @param controlled_wires Control wires.
* @param controlled_values Control values (false or true).
*/
void controlBitPatterns(std::vector<std::size_t> &indices,
std::size_t num_qubits,
const std::vector<std::size_t> &controlled_wires,
const std::vector<bool> &controlled_values);
/**
* @brief Internal utility struct to track data indices of application for
* operations.
