- Branch main represents 'public-cas-fs-protected', i.e. the code of workers in SCONE Secure Hardware Mode using the public CAS run by the Scontain team.
- Branch no-cas-fs-unprotected represents the code of workers (1) without SCONE, (2) in SCONE Sim Mode, or (3) in SCONE Unsecure Hardware Mode.
- Branch private-cas-fs-protected represents the code of workers in SCONE Secure Hardware Mode with a private CAS in the cluster.
Note: The CAS image is not available in the SCONE free tier, so you must upgrade to the Standard or Business edition if a private CAS is desired in the cluster.
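For example, to use the private CAS variant, check out that branch after cloning (the repository URL and branch names are the ones listed in this document):
git clone https://github.com/T-Systems-MMS/hyperledger-avalon-scone.git
cd hyperledger-avalon-scone
git checkout private-cas-fs-protected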
- In order to build, install, and run Hyperledger Avalon with SCONE, SCONE must be installed and configured.
- The following instructions will guide you through the installation of SCONE.
- After SCONE is installed correctly, access to the SCONE images is required; SCONE uses GitLab as its Docker image registry.
- Register at https://gitlab.scontain.com and then request access to the SCONE community images.
- Log in using your GitLab credentials:
docker login registry.scontain.com:5050
- After a successful login you can access the SCONE images required for Hyperledger Avalon and test the workflows.
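As a quick sanity check that your registry access works, you can try pulling a SCONE base image; the image path below is only an illustration (the exact group, image, and tag depend on your SCONE subscription and on the images referenced by this repository):
# illustrative image path; substitute one of the SCONE images referenced by the Avalon build files
docker pull registry.scontain.com:5050/sconecuratedimages/crosscompilers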
- To run in Secure Hardware Mode, get the latest code from the master branch:
git clone https://github.com/T-Systems-MMS/hyperledger-avalon-scone.git
- To run in Hardware Mode, you can run the scone-demo.sh script from the project root directory:
./scone-demo.sh start
./scone-demo.sh stop
It automatically starts SCONE CAS and LAS, then creates images for the SCONE KME and SCONE Workers. You can change the number of workers in the config/scone_config.toml and docker-compose-scone-avalon.yaml files. In the basic demo there are three SCONE workers, which also come with some pre-existing examples.
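To see how the demo is currently configured before editing, you can search both files mentioned above for their worker-related entries (a plain grep; the exact key and service names inside those files may differ):
# inspect worker-related settings in the demo configuration and compose file
grep -in worker config/scone_config.toml docker-compose-scone-avalon.yaml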
See examples and detailed usage in the Worker README.
In order to build, install, and run Hyperledger Avalon a number of additional components must be installed and configured. The following instructions will guide you through the installation and build process for Hyperledger Avalon.
If you have not done so already, clone the Avalon source repository. Choose whether you want the stable version (recommended) or the most recent version.
- To use the current stable release (recommended), run this command:
git clone https://github.com/hyperledger/avalon -b pre-release-v0.6
- Or, to use the latest branch, run this command:
git clone https://github.com/hyperledger/avalon
You have a choice of Docker-based build or a Standalone-based build. We recommend the Docker-based build since it is automated and requires fewer steps.
Follow the instructions below for a Docker-based build and execution.
- Install Docker Engine and Docker Compose, if not already installed. See the PREREQUISITES document for instructions.
- Build and run the Docker image from the top-level directory of your avalon source repository.
Intel SGX Simulator mode (for hosts without Intel SGX):
- To run in Singleton mode (the same worker handles both keys and workloads):
sudo docker-compose up --build
To start a worker pool (with one Key Management Enclave and one Work order Processing Enclave):
sudo docker-compose -f docker-compose.yaml -f docker-compose-pool.yaml up --build
- For subsequent runs on the same workspace, if you changed a source or configuration file, run the above command again.
- For subsequent runs on the same workspace, if you did not make any changes, startup and build time can be reduced by running:
MAKECLEAN=0 sudo -E docker-compose up
For a worker pool, run:
MAKECLEAN=0 sudo -E docker-compose -f docker-compose.yaml -f docker-compose-pool.yaml up
SGX Hardware mode (for hosts with Intel SGX):
- Refer to the Intel SGX in Hardware-mode section in the PREREQUISITES document to install Intel SGX prerequisites and to configure IAS keys.
- To run in Singleton mode (the same worker handles both keys and workloads), run:
sudo docker-compose -f docker-compose.yaml -f docker-compose-sgx.yaml up --build
For a worker pool, run:
sudo docker-compose -f docker-compose.yaml -f docker-compose-pool.yaml \
     -f docker-compose-pool-sgx.yaml up --build
- For subsequent runs on the same workspace, if you changed a source or configuration file, run the above command again.
- For subsequent runs on the same workspace, if you did not make any changes, startup and build time can be reduced by running:
MAKECLEAN=0 sudo -E docker-compose -f docker-compose.yaml -f docker-compose-sgx.yaml up
For a worker pool, run:
MAKECLEAN=0 sudo -E docker-compose -f docker-compose.yaml -f docker-compose-pool.yaml \
     -f docker-compose-pool-sgx.yaml up
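Before starting in hardware mode, it can help to confirm that an SGX device node is present on the host. This is only a quick sanity check; the device name depends on your driver (/dev/isgx for the out-of-tree driver, /dev/sgx_enclave and /dev/sgx_provision for in-kernel support):
# prints nothing if no SGX device node exists; look for isgx or sgx_enclave in the output
ls -l /dev/isgx /dev/sgx_enclave /dev/sgx_provision 2>/dev/null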
- On a successful run, you should see the message BUILD SUCCESS followed by a repetitive message Enclave manager sleeping for 10 secs.
- Open a Docker container shell using the following command:
sudo docker exec -it avalon-shell bash
- To execute test cases, refer to the Testing section below.
- To exit the Avalon program, press Ctrl-c
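Before opening the container shell, or when debugging a failed startup, you can list the running Avalon containers; this is plain Docker and independent of Avalon itself (the avalon-shell name comes from the compose files used above):
# list running containers and their status
sudo docker ps --format 'table {{.Names}}\t{{.Status}}'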
Running multiple worker pools together
To run multiple worker pools together, modify docker-compose-pool.yaml. It also has a corresponding docker compose file, docker-compose-pool-sgx.yaml, for running in Intel SGX hardware mode. This setup starts a worker pool with the following default configuration:
- One KME (Key Management Enclave)
- One WPE (Work order Processing Enclave) supporting all workloads, viz. echo-result, heart-disease-eval, inside-out-eval, simple-wallet
These docker compose files can be further customized to run multiple worker pools in a single Avalon setup. Points to note when customizing/running multiple pools together using Docker:
- The Docker image name for all WPEs in a pool should be the same, since pools are homogeneous as of now
- All WPEs in a pool should connect to the same KME using the command line arguments --kme_listener_url and --worker_id
- When submitting work orders using the generic client application, the --worker_id argument needs to be passed explicitly to choose one of the workers in the system (note: each pool represents a single worker). For example, as shown below:
./generic_client.py -o --uri "http://avalon-listener:1947" \
--workload_id "echo-result" --in_data "Hello" --worker_id kme-worker-1
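For instance, if your customized compose files define a second pool whose KME worker id is kme-worker-2 (a hypothetical name; it must match the --worker_id configured for that pool's KME), only the --worker_id argument changes when targeting it:
./generic_client.py -o --uri "http://avalon-listener:1947" \
    --workload_id "echo-result" --in_data "Hello" --worker_id kme-worker-2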
Follow the PREREQUISITES document to install and configure components on which Hyperledger Avalon depends.
This section describes how to get started with Avalon quickly using provided scripts to compile and install Avalon. The steps below will set up a Python virtual environment to run Avalon.
- Make sure environment variables are set as described in the PREREQUISITES document.
- Change to your Avalon source repository cloned above:
cd avalon
- Set TCF_HOME to the top level directory of your avalon source repository. You will need these environment variables set in every shell session where you interact with Avalon. Append this line (with pwd expanded) to your login shell script (~/.bashrc or similar):
export TCF_HOME=`pwd`
echo "export TCF_HOME=$TCF_HOME" >> ~/.bashrc
- If you are using Intel SGX hardware, check that SGX_MODE=HW before building the code. If you are not using Intel SGX hardware, check that SGX_MODE is not set or is set to SGX_MODE=SIM. By default SGX_MODE=SIM, indicating use of the Intel SGX simulator.
- If you are not using Intel SGX hardware, go to the next step. Check that TCF_ENCLAVE_CODE_SIGN_PEM is set. Refer to the PREREQUISITES document for more details on these variables. You will also need to obtain an Intel IAS subscription key and SPID from the portal (https://api.portal.trustedservices.intel.com/). Replace the SPID and IAS subscription key values in file $TCF_HOME/config/singleton_enclave_config.toml with the actual hexadecimal values (the IAS key may be either your Primary key or Secondary key):
spid = '<spid obtained from portal>'
ias_api_key = '<ias subscription key obtained from portal>'
- Create a Python virtual environment:
cd $TCF_HOME/tools/build
python3 -m venv _dev
- Activate the new Python virtual environment for the current shell session. You will need to do this in each new shell session (in addition to exporting environment variables).
source _dev/bin/activate
If the virtual environment for the current shell session is activated, you will see this prompt:
(_dev)
- Install PIP3 packages into your Python virtual environment:
pip3 install --upgrade setuptools json-rpc py-solc-x web3 colorlog twisted wheel toml pyzmq pycryptodomex ecdsa
- Build Avalon components:
make clean
make
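Putting these standalone setup steps together, a typical first-time shell session (Intel SGX simulator mode, using only the commands shown above) looks like this:
# run from the directory where you cloned avalon; for SGX hardware also set SGX_MODE=HW
# and fill in the IAS spid/ias_api_key values as described in the steps above
cd avalon
export TCF_HOME=`pwd`
cd $TCF_HOME/tools/build
python3 -m venv _dev
source _dev/bin/activate
pip3 install --upgrade setuptools json-rpc py-solc-x web3 colorlog twisted wheel toml pyzmq pycryptodomex ecdsa
make clean
make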
Once the code is successfully built, run the test suite to check that the installation is working correctly.
Follow these steps to run the Demo.py test case:
NOTE: Skip step 1 in the case of Docker-based builds, since docker-compose.yaml will run the TCS startup script.
- For standalone builds only:
  - Open a new terminal, Terminal 1
    cd $TCF_HOME/scripts
  - Run source $TCF_HOME/tools/build/_dev/bin/activate. You should see the (_dev) prompt
  - Run ./tcs_startup.sh -s
    The -s option starts kv_storage before other Avalon components.
  - Wait for the listener to start. You should see the message TCS Listener started on port 1947, followed by a repetitive message Enclave manager sleeping for 10 secs
  - To run the Demo test case, open a new terminal, Terminal 2
  - In Terminal 2, run source $TCF_HOME/tools/build/_dev/bin/activate. You should see the (_dev) prompt
  - In Terminal 2, cd to $TCF_HOME/tests and type this command to run the Demo.py test:
    cd $TCF_HOME/tests
    python3 Demo.py --input_dir ./json_requests/ \
        --connect_uri "http://localhost:1947" work_orders/output.json
- For Docker-based builds:
  - Follow the steps above for "Docker-based Build and Execution"
  - Terminal 1 is running docker-compose and Terminal 2 is running the "avalon-shell" Docker container shell from the previous build steps
  - In Terminal 2, cd to $TCF_HOME/tests and type this command to run the Demo.py test:
    cd $TCF_HOME/tests
    python3 Demo.py --input_dir ./json_requests/ \
        --connect_uri "http://avalon-listener:1947" work_orders/output.json
- The responses of the Avalon listener and Intel® SGX Enclave Manager can be seen in Terminal 1
- The response to the test case request can be seen in Terminal 2
- If you wish to exit the Avalon program, press Ctrl-c
A GUI is also available to run this demo. See examples/apps/heart_disease_eval
To run lint checks on the codebase, execute the following commands:
cd $TCF_HOME
docker-compose -f docker-compose-lint.yaml up
The steps above run lint on all modules by default.
If you want to run lint on selected modules, you need to pass the modules via LINT_MODULES. For example:
cd $TCF_HOME
LINT_MODULES={sdk,common} docker-compose -f docker-compose-lint.yaml up
Module names can be found here in the codebase.
- If you see the message ModuleNotFoundError: No module named '...', you did not run source _dev/bin/activate or you did not successfully build Avalon.
- If you see the message CMake Error: The current CMakeCache.txt . . . is different than the directory . . . where CMakeCache.txt was created. then the CMakeCache.txt file is out-of-date. Remove the file and rebuild.
- Verify your environment variables are set correctly and the paths exist.
- If the Demo test code breaks due to some error, please perform the following steps before re-running:
  sudo rm $TCF_HOME/config/Kv*
  $TCF_HOME/scripts/tcs_startup.sh -t -s
  - You can re-run the test now
  - If some error still occurs, then run $TCF_HOME/scripts/tcs_startup.sh -f. This forcefully terminates Avalon.
- If you get build errors rerunning make, try sudo make clean first.
- If you see the message No package 'openssl' found, you do not have the OpenSSL libraries or the correct version of the OpenSSL libraries. See PREREQUISITES for installation instructions.
- If you see the message ImportError: ...: cannot open shared object file: No such file or directory, then you need to set LD_LIBRARY_PATH with source /opt/intel/sgxsdk/environment. For details, see PREREQUISITES.