From 54be2d9528a2f8c6300f8d8ff96bc2690c050d94 Mon Sep 17 00:00:00 2001 From: Dembowska Date: Mon, 2 Dec 2024 19:10:49 +0100 Subject: [PATCH 01/19] first attempt at writing the function and stating to work on the notebook --- docs/notebooks/time_varying.ipynb | 429 ++++++++++++++++++++++++++ src/torchsurv/loss/time_covariates.py | 45 +++ 2 files changed, 474 insertions(+) create mode 100644 docs/notebooks/time_varying.ipynb create mode 100644 src/torchsurv/loss/time_covariates.py diff --git a/docs/notebooks/time_varying.ipynb b/docs/notebooks/time_varying.ipynb new file mode 100644 index 0000000..079fa67 --- /dev/null +++ b/docs/notebooks/time_varying.ipynb @@ -0,0 +1,429 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Implementing time-varying covariates\n", + "\n", + "In this notebook, we analyse a simulated dataset with time-varying covariates and survival outcomes. `TorchSurv` is used to train a model that predicts relative risk of subjects based on covariates observed over time. We will attempt to thoroughly explain the necessary elements to understand our implementation, but for a detailed read on time-varying survival models refer to Chapter 6 of [Dynamic Regression Models for Survival Data](https://link.springer.com/book/10.1007/0-387-33960-4). \n", + "\n", + "### Dependencies\n", + "\n", + "To run this notebook, dependencies must be installed. the recommended method is to use our developpment conda environment (**preffered**). Instruction can be found [here](https://opensource.nibr.com/torchsurv/devnotes.html#set-up-a-development-environment-via-conda) to install all optional dependancies. The other method is to install only required packages using the command line below:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Install only required packages (optional)\n", + "# %pip install lifelines\n", + "# %pip install matplotlib\n", + "# %pip install sklearn\n", + "# %pip install pandas" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [], + "source": [ + "import warnings\n", + "\n", + "warnings.filterwarnings(\"ignore\")" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "import pandas as pd\n", + "import numpy as np\n", + "import matplotlib.pyplot as plt\n", + "import torch\n", + "from torch.utils.data import DataLoader\n", + "from sklearn.model_selection import train_test_split\n", + "\n", + "# Our package\n", + "#from torchsurv.loss.time_varying import neg_partial_log_likelihood2\n", + "\n", + "# PyTorch boilerplate - see https://github.com/Novartis/torchsurv/blob/main/docs/notebooks/helpers_introduction.py\n", + "from helpers_introduction import Custom_dataset, plot_losses" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Simulating a dataset\n", + "\n", + "We will simulate a dataset of 100 subjects with 6 follow up times where a covariate is observed. 
The covariates will change over time slightly but will be generated from one random variable per subject so that " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# defining parameters\n", + "sample_size = 100 #number of subjects to generate\n", + "obs_time = 6 #number of observations over time for each subject\n", + "\n", + "# create random variables following a normal distribution N(1,1) for each subject \n", + "mean = 1\n", + "standard_dev = 1\n", + "random_vars = torch.randn(sample_size)*standard_dev + mean\n", + "\n", + "# using the random variables from above, we create a set of covariates for each subject \n", + "t = torch.linspace(0, 2*math.pi, 6) # Generating 6 equidistant time points from 0 to 2*pi\n", + "\n", + "# Creating the matrix\n", + "sample_size = 100 #number of subjects to generate\n", + "matrix = torch.zeros(sample_size, 6)\n", + "\n", + "# Filling the matrix with sin values\n", + "for i in range(6):\n", + " matrix[:, i] = torch.sin(t[i])\n", + "\n", + "# Multiplying with a vector of random variables\n", + "sample_size = 100 #number of subjects to generate\n", + "random_vars = torch.randn(sample_size)\n", + "result = torch.matmul(matrix.T, random_vars.unsqueeze(1))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# make random boolean events\n", + "events = random_vars > 0.5\n", + "print(events) # tensor([ True, False, True, True, False, False, True, False])\n", + "\n", + "# make random positive time to event\n", + "time = random_vars * 100\n", + "print(time) # tensor([32.8563, 38.3207, 24.6015, 72.2986, 19.9004, 65.2180, 73.2083, 21.2663])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Implementing partial log likelihood for time-varying covariates\n", + "\n", + "Let $T*_i$ be the be the failure time of interest for subject $i$ and $C$ be the censoring time. Let $T_i = min(T*, C)$. We use $\\delta_i$ to denote whether $T*_i$ was observed. We will use $Z(t)$ to denote the value of of covariate $Z$ and time $t$. Let $t_k$ for $k \\in \\{1, \\dots, K\\} denote the time points at which the covariates are observed. For the moment, we assume that all subjects have been observed on the same time grid. $R_k$ is the set of individuals who are at risk at $t_k$.\n", + "\n", + "\n", + "Consider a network that outputs a vector $\\theta$ for each observed covariate $Z(t_k)$, which can be denoted as $\\theta(t_k)$. The vector of these values can be written to be $\\theta_K$. Similarly, $Z_K$ can be the vector of the covariate history up until time K. 
\n", + "\n", + "The log likelihood in terms of $\\theta(t_k)$ can be written as follows.\n", + "\n", + "$$ l(\\theta) = \\sum_{i=1}^n \\delta_i \\Big ( \\frac{\\sum_{j \\in R_i} exp(\\theta_K)Z_K Z_K^T}{\\sum_{j \\in R_i} exp(\\theta_K)}-\\frac{[\\sum_{j \\in R_i} exp(\\theta_K)Z_K][\\sum_{j \\in R_i} exp(\\theta_K)Z_K]^T}{\\sum_{j \\in R_i} exp(\\theta_K)}\\Big)$$\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "\n", + "def time_partial_log_likelihood(\n", + " log_hz: torch.Tensor, #nx1 vector\n", + " event: torch.Tensor, #n vector (i think)\n", + " time: torch.Tensor, #n vector (i think)\n", + " covariates: torch.Tensor, #nxp vector, p number of params\n", + ") -> torch.Tensor:\n", + "\n", + " # sort data by time-to-event or censoring\n", + " time_sorted, idx = torch.sort(time)\n", + " log_hz_sorted = log_hz[idx]\n", + " event_sorted = event[idx]\n", + "\n", + " exp_log_hz = torch.exp(log_hz_sorted)\n", + " #need to sort the covariate here as well \n", + " #sort covariates so that the rows match the ordering\n", + " covariates_sorted = covariates[idx, :]\n", + "\n", + " #the left hand side (HS) of the equation\n", + " #below is Z_k Z_k^T - i think it should be a vector matrix dim nxn\n", + " covariate_inner_product = torch.matmul(covariates_sorted, covariates_sorted.T)\n", + " \n", + " #pointwise multiplication of vectors to get the nominator of left HS\n", + " #outcome in a vector of length n\n", + " # Ends up being (1, n)\n", + " log_nominator_left = torch.matmul(exp_log_hz.T, covariate_inner_product)\n", + "\n", + " #right hand size of the equation\n", + " #formulate the brackets \\sum exp(theta)Z_k\n", + " bracket = torch.mul(exp_log_hz, covariates_sorted)\n", + " nominator_right = torch.matmul(bracket, bracket.T) #nxn matrix\n", + " ###not sure if the next line is this\n", + " #log_nominator_right = torch.sum(nominator_right, dim=0).unsqueeze(0)\n", + " ### or this\n", + " log_nominator_right = nominator_right[0,].unsqueeze(0)\n", + " #the denominator is the same on both sides\n", + " log_denominator = torch.logcumsumexp(log_hz_sorted.flip(0), dim=0).flip(0) #dim=0 sums over the oth dimension\n", + " partial_log_likelihood = torch.div(log_nominator_left - log_nominator_right, log_denominator) # (n, n)\n", + " return (partial_log_likelihood)[event_sorted]\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Testing on the old dataset for dimensions sake\n", + "\n", + "using the data from the introduction notebook just to make sure dimensions work, this is not correct implementation" + ] + }, + { + "cell_type": "code", + "execution_count": 37, + "metadata": {}, + "outputs": [], + "source": [ + "import lifelines" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Load GBSG2 dataset\n", + "df = lifelines.datasets.load_gbsg2()\n", + "df.head(5)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Constant parameters accross models\n", + "# Detect available accelerator; Downgrade batch size if only CPU available\n", + "if any([torch.cuda.is_available(), torch.backends.mps.is_available()]):\n", + " print(\"CUDA-enabled GPU/TPU is available.\")\n", + " BATCH_SIZE = 128 # batch size for training\n", + "else:\n", + " print(\"No CUDA-enabled GPU found, using CPU.\")\n", + " BATCH_SIZE = 32 # batch size for training\n", + "\n", + "EPOCHS = 100\n", + 
"LEARNING_RATE = 1e-2" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "df_onehot = pd.get_dummies(df, columns=[\"horTh\", \"menostat\", \"tgrade\"]).astype(\"float\")\n", + "df_onehot.drop(\n", + " [\"horTh_no\", \"menostat_Post\", \"tgrade_I\"],\n", + " axis=1,\n", + " inplace=True,\n", + ")\n", + "df_onehot.head(5)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "df_train, df_test = train_test_split(df_onehot, test_size=0.3)\n", + "df_train, df_val = train_test_split(df_train, test_size=0.3)\n", + "print(\n", + " f\"(Sample size) Training:{len(df_train)} | Validation:{len(df_val)} |Testing:{len(df_test)}\"\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Dataloader\n", + "dataloader_train = DataLoader(\n", + " Custom_dataset(df_train), batch_size=BATCH_SIZE, shuffle=True\n", + ")\n", + "dataloader_val = DataLoader(\n", + " Custom_dataset(df_val), batch_size=len(df_val), shuffle=False\n", + ")\n", + "dataloader_test = DataLoader(\n", + " Custom_dataset(df_test), batch_size=len(df_test), shuffle=False\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "cox_model = torch.nn.Sequential(\n", + " torch.nn.BatchNorm1d(num_features), # Batch normalization\n", + " torch.nn.Linear(num_features, 32),\n", + " torch.nn.ReLU(),\n", + " torch.nn.Dropout(),\n", + " torch.nn.Linear(32, 64),\n", + " torch.nn.ReLU(),\n", + " torch.nn.Dropout(),\n", + " torch.nn.Linear(64, 1), # Estimating log hazards for Cox models\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# This is for testing the loss function\n", + "x_test, (test_event, test_time) = next(iter(dataloader_train))\n", + "\n", + "log_hz = cox_model(x_test)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([1.3927, 1.5773, 0.0192, 0.1983])" + ] + }, + "execution_count": 34, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "print('x_test', x_test.shape)\n", + "print('events', test_event.shape)\n", + "print('times', test_time.shape)\n", + "\n", + "time_sorted, idx = torch.sort(time)\n", + "log_hz_sorted = log_hz[idx]\n", + "event_sorted = event[idx]\n", + "time_unique = torch.unique(time_sorted)\n", + "print('')\n", + "print(\"time_sorted\", time_sorted.shape)\n", + "print('log_hz_sorted', log_hz_sorted.shape)\n", + "print('event_sorted', event_sorted.shape)\n", + "print(\"time_unique\", time_unique.shape)\n", + "\n", + "print('-'*30)\n", + "cov_fake = torch.clone(x_test)\n", + "print('covariates', cov_fake.shape)\n", + "covariates_sorted = cov_fake[idx, :]\n", + "covariate_inner_product = torch.matmul(covariates_sorted, covariates_sorted.T)\n", + "print('cov_inner', covariate_inner_product.shape)\n", + "log_nominator_left = torch.matmul(log_hz_sorted.T, covariate_inner_product)\n", + "print('log_nom_left', log_nominator_left.shape)\n", + "bracket = torch.mul(log_hz_sorted, covariates_sorted)\n", + "print('bracket', bracket.shape)\n", + "log_nominator_right = torch.matmul(bracket, bracket.T)\n", + "print('log_nom_right', log_nominator_right.shape)\n", + "sum_nominator_right = log_nominator_right[0,].unsqueeze(0)\n", + "print('sum_nom', sum_nominator_right.shape)\n", + 
"log_denominator = torch.logcumsumexp(log_hz_sorted.flip(0), dim=0).flip(0).T\n", + "print('log_denom', log_denominator.shape)\n", + "last_bit = torch.div(log_nominator_left - sum_nominator_right, log_denominator)\n", + "print('last_bit', last_bit.shape)\n", + "last_bit\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## RNN Example from Github" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import torch\n", + "from torchsurv.loss import cox\n", + "from torchsurv.metrics.cindex import ConcordanceIndex\n", + "\n", + "# Parameters\n", + "input_size = 10\n", + "output_size = 1\n", + "num_layers = 2\n", + "seq_length = 5\n", + "batch_size = 8\n", + "\n", + "# make random boolean events\n", + "events = torch.rand(batch_size) > 0.5\n", + "print(events) # tensor([ True, False, True, True, False, False, True, False])\n", + "\n", + "# make random positive time to event\n", + "time = torch.rand(batch_size) * 100\n", + "print(time) # tensor([32.8563, 38.3207, 24.6015, 72.2986, 19.9004, 65.2180, 73.2083, 21.2663])\n", + "\n", + "# Create simple RNN model\n", + "rnn = torch.nn.RNN(input_size, output_size, num_layers)\n", + "inputs = torch.randn(seq_length, batch_size, input_size)\n", + "h0 = torch.randn(num_layers, batch_size, output_size)\n", + "\n", + "# Forward pass time series input\n", + "outputs, _ = rnn(inputs, h0)\n", + "estimates = outputs[-1] # Keep only last predictions, many to one approach\n", + "print(estimates.size()) # torch.Size([8, 1])\n", + "print(f\"Estimate shape for {batch_size} samples = {estimates.size()}\") # Estimate shape for 8 samples = torch.Size([8, 1])\n", + "\n", + "\n", + "loss = cox.neg_partial_log_likelihood(estimates, events, time)\n", + "print(f\"loss = {loss}, has gradient = {loss.requires_grad}\") # loss = 1.0389232635498047, has gradient = True\n", + "\n", + "cindex = ConcordanceIndex()\n", + "print(f\"c-index = {cindex(estimates, events, time)}\") # c-index = 0.20000000298023224" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.15" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/src/torchsurv/loss/time_covariates.py b/src/torchsurv/loss/time_covariates.py new file mode 100644 index 0000000..5f6cb59 --- /dev/null +++ b/src/torchsurv/loss/time_covariates.py @@ -0,0 +1,45 @@ +import sys +import warnings + +import torch + +def time_partial_log_likelihood( + log_hz: torch.Tensor, #nx1 vector + event: torch.Tensor, #n vector (i think) + time: torch.Tensor, #n vector (i think) + covariates: torch.Tensor, #nxp vector, p number of params +) -> torch.Tensor: + + # sort data by time-to-event or censoring + time_sorted, idx = torch.sort(time) + log_hz_sorted = log_hz[idx] + event_sorted = event[idx] + + #keep log if we can + exp_log_hz = torch.exp(log_hz_sorted) + #remove mean over time from covariates + #sort covariates so that the rows match the ordering + covariates_sorted = covariates[idx, :] - covariates.mean(dim=0) + + #the left hand side (HS) of the equation + #below is Z_k Z_k^T - i think it should be a vector matrix dim nxn + covariate_inner_product = torch.matmul(covariates_sorted, covariates_sorted.T) + + #pointwise multiplication of 
vectors to get the nominator of left HS + #outcome in a vector of length n + # Ends up being (1, n) + log_nominator_left = torch.matmul(exp_log_hz.T, covariate_inner_product) + + #right hand size of the equation + #formulate the brackets \sum exp(theta)Z_k + bracket = torch.mul(exp_log_hz, covariates_sorted) + nominator_right = torch.matmul(bracket, bracket.T) #nxn matrix + ###not sure if the next line is this + #log_nominator_right = torch.sum(nominator_right, dim=0).unsqueeze(0) + ### or this + log_nominator_right = nominator_right[0,].unsqueeze(0) + #the denominator is the same on both sides + log_denominator = torch.logcumsumexp(log_hz_sorted.flip(0), dim=0).flip(0) #dim=0 sums over the oth dimension + partial_log_likelihood = torch.div(log_nominator_left - log_nominator_right, log_denominator) # (n, n) + + return (partial_log_likelihood)[event_sorted] \ No newline at end of file From fbfdb99fd7a2f048d3cfdd568c19b0202e4117ba Mon Sep 17 00:00:00 2001 From: Dembowska Date: Wed, 11 Dec 2024 23:22:55 +0100 Subject: [PATCH 02/19] moving new files over to otebooks folder, added comments and changed the log partial likelihood function --- docs/notebooks/loss_time_covariates.py | 130 +++++++++++++++++++++++++ 1 file changed, 130 insertions(+) create mode 100644 docs/notebooks/loss_time_covariates.py diff --git a/docs/notebooks/loss_time_covariates.py b/docs/notebooks/loss_time_covariates.py new file mode 100644 index 0000000..c08a14f --- /dev/null +++ b/docs/notebooks/loss_time_covariates.py @@ -0,0 +1,130 @@ +import sys +import warnings + +import torch + + +def neg_partial_time_log_likelihood( + log_hz: torch.Tensor, + event: torch.Tensor, + time: torch.Tensor, + ties_method: str = "efron", + reduction: str = "mean", + checks: bool = True, +) -> torch.Tensor: + ''' + THIS FUNCTION IS NOT DONE, i HAVENT TESTED THE NEGATIVE PART YET + ''' + if checks: + _check_inputs(log_hz, event, time) + + if any([event.sum() == 0, len(log_hz.size()) == 0]): + warnings.warn("No events OR single sample. Returning zero loss for the batch") + return torch.tensor(0.0, requires_grad=True) + + # sort data by time-to-event or censoring + time_sorted, idx = torch.sort(time) + log_hz_sorted = log_hz[idx] + event_sorted = event[idx] + time_unique = torch.unique(time_sorted) # time-to-event or censoring without ties + + # only consider theta at tiem of + pll = _partial_likelihood_time_cox(log_hz_sorted, event_sorted) + + # Negative partial log likelihood + pll = torch.neg(pll) + if reduction.lower() == "mean": + loss = pll.nanmean() + elif reduction.lower() == "sum": + loss = pll.sum() + else: + raise ( + ValueError( + f"Reduction {reduction} is not implemented yet, should be one of ['mean', 'sum']." + ) + ) + return loss + +def _partial_likelihood_time_cox( + log_hz: torch.Tensor, #nxTxp torch tensor, n is batch size, T number of time points, p is number of different covariates over time + event: torch.Tensor, #n length vector, boolean, true or false to determine if someone had an event + time: torch.Tensor, #n length vector, time at which someone experiences event +) -> torch.Tensor: + """Calculate the partial log likelihood for the Cox proportional hazards model + with time-varying covariates and in the absence of ties in event time. + + For time-varying covariates, the haard ratio is no longer assumed to be constant, + but the partial log likelihood only cares about the covariate value at time of death. 
+ + Hence, despite taking in a whole vector of stuff, we only take the last value + into consideration for the partial log likelihood. + + Requirements we want: + - time vector must somehow correspond to the T dimension in the log_hz tensor, i.e. for those who experience an event, + we want to identify the index of the covariate upon failure. We could either consider the last covariate before a series of zeros + (requires special data formatting but could reduce issues as it automatically contains event time information). + - this version doesn't allow for P>1 but it can be considered as an additional dimension and then in the final + step you can take the mean across p + - we want values of the covariate at event time to not be null, maybe there could be some automation function that imputes the latest values if possible + - maybe some guidance can go here on how to format the covariates, right now its just a tensor. + """ + + time_sorted, idx = torch.sort(time) + #sort the output of the RNN by the subjects who have earlier event time + #we want a tensor out + log_hz_sorted = outputs[:,idx,:] + event_sorted = events[idx] + + #format the time so we can use it to index + #in the next step we want to pick out the covariate at event time for each subject for each covariate p + #this line is just to be able to index - can be changed depending on how time is formatted + time_sorted=time_sorted.type(torch.int64) + # below is pseudocode of what to do to geth the log likelihood + #as an outcome we want an nx1xp tensor aka. time is reduced and we only cosnider Z(tau_j) + log_hz_sorted_tj = log_hz_sorted[time_sorted, :, :] + + #same step as in normal cox loss, just again, we consider Z(tau_j) where tau_j denotes event time to subject j + log_denominator_tj = torch.logcumsumexp(log_hz_sorted_tj.flip(0), dim=0).flip(0) + + return (log_hz_sorted_tj - log_denominator_tj)[event_sorted] + + +def _time_varying_covariance( + log_hz: torch.Tensor, #nx1 vector + event: torch.Tensor, #n vector (i think) + time: torch.Tensor, #n vector (i think) + covariates: torch.Tensor, #nxp vector, p number of params +) -> torch.Tensor: + """ Calculate the covariance matrix for the outcome thetas from a network in + in the case of time-varying covariates. 
Returns a nxn matrix with n being the batch size.""" + # sort data by time-to-event or censoring + time_sorted, idx = torch.sort(time) + log_hz_sorted = log_hz[idx] + event_sorted = event[idx] + + #keep log if we can + exp_log_hz = torch.exp(log_hz_sorted) + #remove mean over time from covariates + #sort covariates so that the rows match the ordering + covariates_sorted = covariates[idx, :] - covariates.mean(dim=0) + + #the left hand side (HS) of the equation + #below is Z_k Z_k^T - i think it should be a vector matrix dim nxn + covariate_inner_product = torch.matmul(covariates_sorted, covariates_sorted.T) + + #pointwise multiplication of vectors to get the nominator of left HS + #outcome in a vector of length n + # Ends up being (1, n) + log_nominator_left = torch.matmul(exp_log_hz.T, covariate_inner_product) + + #right hand size of the equation + #formulate the brackets \sum exp(theta)Z_k + bracket = torch.mul(exp_log_hz, covariates_sorted) + covariance_matrix = torch.matmul(bracket, bracket.T) #nxn matrix + # ###nbelow is commented out as it does not apply but I wanted to keep it for the functions + # #log_nominator_right = torch.sum(nominator_right, dim=0).unsqueeze(0) + # log_nominator_right = nominator_right[0,].unsqueeze(0) + # log_denominator = torch.logcumsumexp(log_hz_sorted.flip(0), dim=0).flip(0) #dim=0 sums over the oth dimension + # partial_log_likelihood = torch.div(log_nominator_left - log_nominator_right, log_denominator) # (n, n) + + return (covariance_matrix) \ No newline at end of file From 0d3905bd164e00906f9f6d10ae8dc5b4661e7516 Mon Sep 17 00:00:00 2001 From: Dembowska Date: Fri, 13 Dec 2024 17:58:57 +0100 Subject: [PATCH 03/19] remove unused code, added comments, simulation in lifelines --- docs/notebooks/loss_time_covariates.py | 2 +- docs/notebooks/time_varying.ipynb | 623 +++++++++++++++++++++---- 2 files changed, 546 insertions(+), 79 deletions(-) diff --git a/docs/notebooks/loss_time_covariates.py b/docs/notebooks/loss_time_covariates.py index c08a14f..9e57ccc 100644 --- a/docs/notebooks/loss_time_covariates.py +++ b/docs/notebooks/loss_time_covariates.py @@ -85,7 +85,7 @@ def _partial_likelihood_time_cox( #same step as in normal cox loss, just again, we consider Z(tau_j) where tau_j denotes event time to subject j log_denominator_tj = torch.logcumsumexp(log_hz_sorted_tj.flip(0), dim=0).flip(0) - + return (log_hz_sorted_tj - log_denominator_tj)[event_sorted] diff --git a/docs/notebooks/time_varying.ipynb b/docs/notebooks/time_varying.ipynb index 079fa67..dc05839 100644 --- a/docs/notebooks/time_varying.ipynb +++ b/docs/notebooks/time_varying.ipynb @@ -13,6 +13,55 @@ "To run this notebook, dependencies must be installed. the recommended method is to use our developpment conda environment (**preffered**). Instruction can be found [here](https://opensource.nibr.com/torchsurv/devnotes.html#set-up-a-development-environment-via-conda) to install all optional dependancies. The other method is to install only required packages using the command line below:" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Implementing partial log likelihood for time-varying covariates\n", + "\n", + "### Context and statistical set-up\n", + "\n", + "Let $T^*_i$ be the be the failure time of interest for subject $i$ and $C$ be the censoring time. Let $T_i = min(T^*, C)$. We use $\\delta_i$ to denote whether $T*_i$ was observed. We will use $Z(t)$ to denote the value of of covariate $Z$ and time $t$. 
\n", + "We use $Z(t) to denote the value of Z at time $t$ and $\\overline{Z}(t)$ to denote the set of covariates from the beggining up to time $t$: $ \\overline{Z}(t) = \\{ Z(s): 0 \\leq s \\leq t\\}$.\n", + "Let $t_k$ for $k \\in \\{1, \\dots, K\\} denote the time points at which the covariates are observed. For the moment, we assume that all subjects have been observed on the same time grid. $R_k$ is the set of individuals who are at risk at $t_k$. \n", + "\n", + "The conditional hazard function of $T$ given $\\overline{Z}(t)$ is defined as\n", + "$$ \\lambda(T|\\overline{Z}(t)) = Pr(T \\in [t, t+ dt)|T \\geq t, \\overline{Z}(t)), $$\n", + "in other words, it is the probability that an event will occur in the next time instance if we have observed covariates up to time $t$ and that a subject has not yet experienced an event.\n", + "\n", + "The typical cox proportional hazards model with constant covariates $Z$ assumes a constant hazard ratio: $\\lambda(T|Z)= \\lambda_0(t) exp(\\beta Z)$, where $\\beta$ in an unknown set of regression parameters and $\\lambda_0(t)$ is an unspecified baseline hazard function. In this case $\\frac{\\lambda(T|Z)}{\\lambda_0(t)} = exp(\\beta Z) $. The cumlative hazard ia defined as $\\Lambda(t) = \\int_0^t \\lambda(s)ds$. \n", + "\n", + "In a time varying cox model, the hazard ratio is now dependant on time:\n", + "$$ \\frac{\\lambda(t|Z)}{\\lambda_0(t)} = exp(\\beta Z(t)) $$ \n", + "and the proportinal hazard model specifies:\n", + "$$ \\lambda(t|Z) = \\lambda_0(t)exp(\\beta Z(t)) $$\n", + "\n", + "Let $i_j$ denote the label or identity of the individual who fails at time $\\tau_j$, including the value of their time-varying covariate\n", + "during their time in the study $\\{ Z_{i_j}(t): t \\in [0, \\tau_j] \\}$. The partial likelihood is:\n", + "$$ L (\\beta) = \\prod_j \\Big (\\frac{\\lambda(\\tau_j: Z_i(\\tau_j)))}{\\sum_{l \\in R_i} \\lambda(\\tau_j: Z_l(\\tau_j)))} \\Big),$$\n", + "in terms of the model form:\n", + "$$ L (\\beta) = \\prod_j \\Big (\\frac{\\exp(\\beta Z_i(\\tau_j))}{\\sum_{j \\in R_i} \\exp(\\beta Z_i(\\tau_j))} \\Big).$$\n", + "\n", + "Taking the log on both sides, we get the partial log-likelihood:\n", + "$$ \\log L (\\beta) = \\sum_j \\Big (\\beta Z_i(\\tau_j)) - \\log [\\sum_{j \\in R_i} \\exp(\\beta Z_i(\\tau_j))]\\Big ). 
$$\n", + "\n", + "\n", + "### Extension to neural networks\n", + "\n", + "Consider a more genera form, where we have the cox proportional hazards model:\n", + "$$\\lambda(T|\\overline{Z}(t))= \\lambda_0(t) \\theta(Z(t))$$\n", + "\n", + "Additionally, consider some network that maps the input covariates $Z(t)$ to the log relative hazards: $\\log \\theta(Z(t))$.\n", + "\n", + "The partial likelihood with repsect to $\\theta(Z(\\tau_j))$ is written as:\n", + "$$ \\log L(\\theta) = \\sum_j \\Big( \\log \\theta(Z_i(\\tau_j)) - \\log [\\sum_{j \\in R_i} \\theta (Z_i(\\tau_j))] \\Big).$$\n", + "It onlu considers the covariate values at the time of event or censoring denoted as $\\tau_j$, all prior covariates are not considered.\n", + "\n", + "As the output of the network is set to be $\\log \\theta(Z(t))$, the code is written to account for this, to show this explicitly, set $\\phi(Z(t)) = \\log \\theta(Z(t))$ and write the log likelihood in terms oh $phi$:\n", + "\n", + "$$ \\log L(\\theta) = \\sum_j \\Big( \\phi(Z_i(\\tau_j)) - \\log [\\sum_{j \\in R_i} \\exp \\phi(Z_i(\\tau_j))] \\Big).$$\n" + ] + }, { "cell_type": "code", "execution_count": null, @@ -28,7 +77,7 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": 5, "metadata": {}, "outputs": [], "source": [ @@ -47,6 +96,7 @@ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import torch\n", + "import math\n", "from torch.utils.data import DataLoader\n", "from sklearn.model_selection import train_test_split\n", "\n", @@ -68,142 +118,559 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 6, "metadata": {}, "outputs": [], "source": [ + "torch.manual_seed(123)\n", + "\n", "# defining parameters\n", "sample_size = 100 #number of subjects to generate\n", - "obs_time = 6 #number of observations over time for each subject\n", + "obs_time = 10 #number of observations over time for each subject\n", "\n", "# create random variables following a normal distribution N(1,1) for each subject \n", - "mean = 1\n", - "standard_dev = 1\n", - "random_vars = torch.randn(sample_size)*standard_dev + mean\n", + "mean = 5\n", + "standard_dev = 5\n", + "random_vars = torch.randn(sample_size)#*standard_dev + mean\n", "\n", "# using the random variables from above, we create a set of covariates for each subject \n", - "t = torch.linspace(0, 2*math.pi, 6) # Generating 6 equidistant time points from 0 to 2*pi\n", + "t = torch.linspace(0, 2*math.pi, obs_time) # Generating 6 equidistant time points from 0 to 2*pi\n", "\n", "# Creating the matrix\n", - "sample_size = 100 #number of subjects to generate\n", - "matrix = torch.zeros(sample_size, 6)\n", + "matrix = torch.zeros(sample_size, obs_time)\n", "\n", "# Filling the matrix with sin values\n", - "for i in range(6):\n", - " matrix[:, i] = torch.sin(t[i])\n", + "for i in range(obs_time):\n", + " matrix[:, i] = torch.cos(t[i])\n", "\n", - "# Multiplying with a vector of random variables\n", - "sample_size = 100 #number of subjects to generate\n", - "random_vars = torch.randn(sample_size)\n", - "result = torch.matmul(matrix.T, random_vars.unsqueeze(1))" + "# Multiplying with a vector of random variables, dim sample_size x obs_time\n", + "covars = matrix * random_vars[:, None]\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we create outcome variables for the dataset based on the random variables we generated initially. 
This is so that the observations are related to the outcome in some way so that our network can distinguish some pattern.\n", + "\n", + "We use the random variables ot determine how long someone has been observed and when they experience an event (if they experience one). Then we remove observations for the times beyond their event time.\n", + "\n", + "### Data Format\n", + "\n", + "Here we create a single matrix of data that corresponds to one covariate being observed over time for some dataset.\n", + "The time series is padded with zeros so that each subject has the same legth vector, the vector contains their covariate $Z_i(t)$ up until failure time $\\tau_j$ and then values beyond that are zero.\n", + "\n", + "In general, prior to fitting a survival model or a network, one should consider ohw to handle missing data beforehand. This is most important for covariates that are missing at event time $\\tau_j $. Data imputation methods can vary depending on the use case but some to consider are:\n", + "- use the most recent value (assumes step function),\n", + "- interpolate,\n", + "- impute based on some model." ] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 7, "metadata": {}, "outputs": [], "source": [ - "# make random boolean events\n", - "events = random_vars > 0.5\n", - "print(events) # tensor([ True, False, True, True, False, False, True, False])\n", - "\n", "# make random positive time to event\n", - "time = random_vars * 100\n", - "print(time) # tensor([32.8563, 38.3207, 24.6015, 72.2986, 19.9004, 65.2180, 73.2083, 21.2663])" + "time = torch.floor(random_vars * 10)+4\n", + "time[time>9]=9\n", + "time[time<1]=0\n", + "#print(time) \n", + "# tensor([1.2792e+01, -7.7415e+00, 9.2325e+00, 1.0845e+01, 7.6460e+00, ...\n", + "\n", + "# decide who has an event, here we cosnider those whose time is greater than one (this means some a small subroup has not experienced an event)\n", + "events = time > 1\n", + "# tensor([ True, True, False, False, True, ...\n", + "#print(events)\n", + "\n", + "# remove the covariates for those who have observed an event\n", + "\n", + "for i in range(sample_size):\n", + " if events[i]==True:\n", + " time_cap = int(time[i])\n", + " covars[i, time_cap:] = torch.zeros(obs_time-time_cap)\n", + "\n", + "# covars should be tensor([[ 3.3737e-01, 2.5844e-01, 5.8584e-02, -1.6869e-01, -3.1702e-01, ... 
and zeros after an event occured\n", + "#print(covars)" ] }, { - "cell_type": "markdown", + "cell_type": "code", + "execution_count": 1, "metadata": {}, + "outputs": [], "source": [ - "## Implementing partial log likelihood for time-varying covariates\n", + "from loss_time_covariates import _partial_likelihood_time_cox" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "ename": "ImportError", + "evalue": "cannot import name 'time_covariates' from 'torchsurv.loss' (/home/demboso1/conda-env2/lib/python3.10/site-packages/torchsurv/loss/__init__.py)", + "output_type": "error", + "traceback": [ + "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", + "\u001b[0;31mImportError\u001b[0m Traceback (most recent call last)", + "Cell \u001b[0;32mIn[160], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mtorchsurv\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mloss\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m time_covariates\n\u001b[1;32m 2\u001b[0m \u001b[38;5;66;03m#from torchsurv.metrics.cindex import ConcordanceIndex\u001b[39;00m\n\u001b[1;32m 3\u001b[0m \n\u001b[1;32m 4\u001b[0m \u001b[38;5;66;03m# Parameters\u001b[39;00m\n\u001b[1;32m 5\u001b[0m input_size \u001b[38;5;241m=\u001b[39m \u001b[38;5;241m1\u001b[39m\n", + "\u001b[0;31mImportError\u001b[0m: cannot import name 'time_covariates' from 'torchsurv.loss' (/home/demboso1/conda-env2/lib/python3.10/site-packages/torchsurv/loss/__init__.py)" + ] + } + ], + "source": [ + "#from torchsurv.loss import time_covariates\n", + "#from torchsurv.metrics.cindex import ConcordanceIndex\n", + "\n", + "# Parameters\n", + "input_size = 1\n", + "output_size = 1\n", + "num_layers = 2\n", + "seq_length = obs_time\n", + "batch_size = sample_size\n", + "\n", + "# Create simple RNN model\n", + "rnn = torch.nn.RNN(input_size, output_size, num_layers)\n", + "inputs = torch.randn(seq_length, batch_size, input_size)\n", + "test = covars.T.unsqueeze(2)\n", + "print(test.shape)\n", + "print(inputs.shape)\n", "\n", - "Let $T*_i$ be the be the failure time of interest for subject $i$ and $C$ be the censoring time. Let $T_i = min(T*, C)$. We use $\\delta_i$ to denote whether $T*_i$ was observed. We will use $Z(t)$ to denote the value of of covariate $Z$ and time $t$. Let $t_k$ for $k \\in \\{1, \\dots, K\\} denote the time points at which the covariates are observed. For the moment, we assume that all subjects have been observed on the same time grid. $R_k$ is the set of individuals who are at risk at $t_k$.\n", + "#initializa hidden state\n", + "h0 = torch.randn(num_layers, batch_size, output_size)\n", + "print(h0.shape)\n", + "# Forward pass time series input\n", + "outputs, _ = rnn(test, h0)\n", + "print(outputs.shape)\n", + "# estimates = outputs[-1] # Keep only last predictions, many to one approach\n", + "# print(estimates.size()) # torch.Size([8, 1])\n", + "# print(f\"Estimate shape for {batch_size} samples = {estimates.size()}\") # Estimate shape for 8 samples = torch.Size([8, 1])\n", "\n", "\n", - "Consider a network that outputs a vector $\\theta$ for each observed covariate $Z(t_k)$, which can be denoted as $\\theta(t_k)$. The vector of these values can be written to be $\\theta_K$. Similarly, $Z_K$ can be the vector of the covariate history up until time K. 
\n", + "# loss = cox.neg_partial_log_likelihood(estimates, events, time)\n", + "# print(f\"loss = {loss}, has gradient = {loss.requires_grad}\") # loss = 1.0389232635498047, has gradient = True\n", "\n", - "The log likelihood in terms of $\\theta(t_k)$ can be written as follows.\n", + "# cindex = ConcordanceIndex()\n", + "# print(f\"c-index = {cindex(estimates, events, time)}\") # c-index = 0.20000000298023224" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Comparison to Lifelines package\n", "\n", - "$$ l(\\theta) = \\sum_{i=1}^n \\delta_i \\Big ( \\frac{\\sum_{j \\in R_i} exp(\\theta_K)Z_K Z_K^T}{\\sum_{j \\in R_i} exp(\\theta_K)}-\\frac{[\\sum_{j \\in R_i} exp(\\theta_K)Z_K][\\sum_{j \\in R_i} exp(\\theta_K)Z_K]^T}{\\sum_{j \\in R_i} exp(\\theta_K)}\\Big)$$\n", - "\n" + "Re-format the simulaiton data to fit a normal time-varying cox model in the lifelines package." ] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 9, "metadata": {}, "outputs": [], "source": [ + "import pandas as pd\n", + "\n", + "# as a reminder covars is the matrix of covariates where a row corresponds to a subject and a column corresponds to their observation at some time \n", + "# the columns are padded so if a subject experiences an event, the remaining of the column is zero\n", + "\n", + "# Generating example torch matrix\n", + "torch_matrix = covars\n", + "# Convert torch matrix to pandas dataframe\n", + "\n", + "#set time to integer\n", + "max_time = max(time.type(torch.int64))\n", + "\n", + "vars = []\n", + "#times = []\n", + "start = []\n", + "stop = []\n", + "events = []\n", + "subjs = []\n", + "for i in range(sample_size):\n", + " subj_counter = 0\n", + " for j in range(max_time):\n", + " if torch_matrix[i,j] == 0:\n", + " break\n", + " else:\n", + " vars.append(torch_matrix[i,j].item())\n", + " #times.append(j)\n", + " start.append(j-1)\n", + " stop.append(j)\n", + " events.append(False)\n", + " subj_counter += 1\n", + " subjs.extend([i] * subj_counter)\n", + " events[-1]=True\n", + "\n", + "df = pd.DataFrame({\n", + " \"subj\": subjs,\n", + " #\"times\": times,\n", + " \"start\":start,\n", + " \"stop\": stop,\n", + " \"events\": events,\n", + " \"var\": vars, \n", + "})\n" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Iteration 1: norm_delta = 2.11e-02, step_size = 0.9500, log_lik = -363.73938, newton_decrement = 5.00e-02, seconds_since_start = 0.0\n", + "Iteration 2: norm_delta = 1.05e-03, step_size = 0.9500, log_lik = -363.68954, newton_decrement = 1.24e-04, seconds_since_start = 0.0\n", + "Iteration 3: norm_delta = 5.24e-05, step_size = 0.9500, log_lik = -363.68942, newton_decrement = 3.09e-07, seconds_since_start = 0.0\n", + "Iteration 4: norm_delta = 2.76e-06, step_size = 1.0000, log_lik = -363.68942, newton_decrement = 7.73e-10, seconds_since_start = 0.0\n", + "Convergence completed after 4 iterations.\n" + ] + }, + { + "data": { + "text/html": [ + "
\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
modellifelines.CoxTimeVaryingFitter
event col'events'
penalizer0.1
number of subjects100
number of periods778
number of events100
partial log-likelihood-363.69
time fit was run2024-12-13 16:55:56 UTC
\n", + "
\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
coefexp(coef)se(coef)coef lower 95%coef upper 95%exp(coef) lower 95%exp(coef) upper 95%cmp tozp-log2(p)
var-0.030.970.09-0.210.150.811.160.00-0.320.750.41

\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
Partial AIC729.38
log-likelihood ratio test0.10 on 1 df
-log2(p) of ll-ratio test0.41
\n", + "
" + ], + "text/latex": [ + "\\begin{tabular}{lrrrrrrrrrrr}\n", + " & coef & exp(coef) & se(coef) & coef lower 95% & coef upper 95% & exp(coef) lower 95% & exp(coef) upper 95% & cmp to & z & p & -log2(p) \\\\\n", + "covariate & & & & & & & & & & & \\\\\n", + "var & -0.03 & 0.97 & 0.09 & -0.21 & 0.15 & 0.81 & 1.16 & 0.00 & -0.32 & 0.75 & 0.41 \\\\\n", + "\\end{tabular}\n" + ], + "text/plain": [ + "\n", + " event col = 'events'\n", + " penalizer = 0.1\n", + "number of subjects = 100\n", + " number of periods = 778\n", + " number of events = 100\n", + "partial log-likelihood = -363.69\n", + " time fit was run = 2024-12-13 16:55:56 UTC\n", + "\n", + "---\n", + " coef exp(coef) se(coef) coef lower 95% coef upper 95% exp(coef) lower 95% exp(coef) upper 95%\n", + "covariate \n", + "var -0.03 0.97 0.09 -0.21 0.15 0.81 1.16\n", + "\n", + " cmp to z p -log2(p)\n", + "covariate \n", + "var 0.00 -0.32 0.75 0.41\n", + "---\n", + "Partial AIC = 729.38\n", + "log-likelihood ratio test = 0.10 on 1 df\n", + "-log2(p) of ll-ratio test = 0.41" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/plain": [ + "" + ] + }, + "execution_count": 11, + "metadata": {}, + "output_type": "execute_result" + }, + { + "data": { + "image/png": "iVBORw0KGgoAAAANSUhEUgAAAiQAAAGwCAYAAACZ7H64AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjkuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8hTgPZAAAACXBIWXMAAA9hAAAPYQGoP6dpAAAr1ElEQVR4nO3de1xVdb7/8ffmjgIbEBU01LxrqWM1klrTzXHULp6yGSsd0xrtojk52d3Mxpqc9GhOU1M6JztNTh5nbMrHybLL5JRGNpllqfXQDggYoAjsDQpy+/7+6OH+tRcCgsAXWq/n48EjXXvtxfsTwnqz9lp7eYwxRgAAABaF2A4AAABAIQEAANZRSAAAgHUUEgAAYB2FBAAAWEchAQAA1lFIAACAdWG2A5yKmpoaffvtt4qNjZXH47EdBwAAnAJjjEpKStStWzeFhNR/DKRdFJJvv/1WqamptmMAAIAmyM7O1hlnnFHvOu2ikMTGxkr6bqC4uDjLaQD80GVmZmrx4sV66KGH1KtXL9txgHbL7/crNTU1sB+vT7soJCdepomLi6OQAGhxsbGxCg8PV2xsLD9zgGZwKqdbcFIrAACwjkICAA6hoaGKjY1VaGio7SiAa3jaw91+/X6/vF6vfD4fh08BAGgnGrP/5ggJAACwjkICAA45OTmaN2+ecnJybEcBXINCAgAOlZWVys/PV2Vlpe0ogGtQSAAAgHUUEgAAYB2FBAAAWEchAQCH5ORk3XfffUpOTrYdBXCNdvHW8QDQmqKjozV06FDbMQBX4QgJADgUFxdrw4YNKi4uth0FcA0KCQA4FBUVacOGDSoqKrIdBXANCgkAALCOQgIAAKyjkAAAAOsoJADgEBMTo9GjRysmJsZ2FMA1PMYYYztEQxpz+2IAANA2NGb/zRESAHDg5npA66OQAIBDTk6O5s2bp5ycHNtRANegkAAAAOsoJAAAwDoKCQAAsI5CAgAArOOyXwAA0CK47BcAALQrFBIAcMjNzdXChQuVm5trOwrgGhQSAHAoLy/X/v37VV5ebjsK4BoUEgAAYB2FBAAAWEchAQAA1lFIAMChc+fOuv3229W5c2fbUQDXCLMdAADampiYGF1wwQW2YwCuwhESAHDw+/1666235Pf7bUcBXINCAgAOR44c0QsvvKAjR47YjgK4BoUEAABYRyEBAADWUUgAAIB1FBIAcIiOjtbQoUMVHR1tOwrgGh5jjLEdoiGNuX0xAABoGxqz/+YICQA41NTUqKysTDU1NbajAK5BIQEAhwMHDujmm2/WgQMHbEcBXINCAgAArKOQAAAA6ygkAADAOgoJAACwjrv9AoBDjx499Oyzz6pjx462owCuQSEBAIfQ0FDe8whoZbxkAwAO+fn5WrZsmfLz821HAVyDQgIADseOHdOnn36qY8eO2Y4CuAaFBAAAWEchAQAA1lFIAACAdRQSAHBITEzU1KlTlZiYaDsK4Bpc9gsADl6vVxMmTLAdA3AVjpAAgMPRo0e1fft2HT161HYUwDUoJADgcOjQIa1cuVKHDh2yHQVwDQoJAACwjkICAACso5AAAADrKCQA4BAREaFevXopIiLCdhTANTzGGGM7REP8fr+8Xq98Ph934AQAoJ1ozP6bIyQAAMA6CgkAOGRmZmratGnKzMy0HQVwDQoJADgYY1RVVaV28Io28INBIQEAANZRSAAAgHUUEgAAYB13+wUAh+7du+uJJ55Qly5dbEcBXINCAgAOEREROuOMM2zHAFyFl2wAwKGgoECrVq1SQUGB7SiAa1BIAMChpKREW7ZsUUlJie0ogGtQSAAAgHUUEgAAYB2FBAAAWEchAQAHr9erq666Sl6v13YUwDW47BcAHBITE3XdddfZjgG4CkdIAMChvLxce/bsUXl5ue0ogGtQSADAITc3V48++qhyc3NtRwFcg0ICAACso5AAAADrKCQAAMA6CgkAOISFhSkxMVFhYVyICLQWjzHG2A7REL/fL6/XK5/Pp7i4ONtxAADAKWjM/psjJAAAwDoKCQA4ZGdna86cOcrOzrYdBXANCgkAOFRVVamwsFBVVVW2owCuQSEBAADWUUgAAIB1FBIAAGAdhQQAHFJSUrRgwQKlpKTYjgK4Bu/6AwAOUVFRGjx4sO0YgKtwhAQAHAoLC7Vu3ToVFhbajgK4BoUEABx8Pp82btwon89nOwrgGhQSAABgHYUEAABYRyEBAADWUUgAwCE2NlYXX3yxYmNjbUcBXMNjjDG2QzSkMbcvBgAAbUNj9t8cIQEAh4qKCuXk5Kiio
sJ2FMA1KCQA4HDw4EHdc889OnjwoO0ogGtQSAAAgHUUEgAAYB2FBAAAWEchAQAHj8ejsLAweTwe21EA1+CyXwAA0CK47BcAALQrFBIAcDh48KAeeOABLvsFWhGFBAAcKioqlJmZyRujAa2IQgIAAKyjkAAAAOsoJAAAwDoKCQA4dOnSRb/+9a/VpUsX21EA1wizHQAA2pqOHTsqLS3NdgzAVThCAgAOPp9PmzZtks/nsx0FcA0KCQA4FBYW6qWXXlJhYaHtKIBrUEgAAIB1FBIAAGAdhQQAAFhHIQEAhw4dOuicc85Rhw4dbEcBXMNjjDG2QzSkMbcvBgAAbUNj9t8cIQEAh+rqavn9flVXV9uOArgGhQQAHLKysnTrrbcqKyvLdhTANSgkAADAOgoJAACwjkICAACso5AAAADruOwXABxqamp0/PhxRUZGKiSE39uApmrM/juslTIBQLsREhKi6Oho2zEAV6H6A4BDXl6elixZory8PNtRANegkACAQ1lZmXbt2qWysjLbUQDXoJAAAADrKCQAAMA6CgkAALCOQgIADp06ddL06dPVqVMn21EA1+CyXwBwiIuL09ixY23HAFyFIyQA4FBaWqqtW7eqtLTUdhTANSgkAOBw+PBhPfPMMzp8+LDtKIBrUEgAAIB1FBIAAGAdhQQAAFhHIQEAh6ioKPXt21dRUVG2owCu4THGGNshGtKY2xcDAIC2oTH7b46QAAAA6ygkAOCQkZGhG264QRkZGbajAK5BIQEAANZRSAAAgHUUEgAAYB2FBAAAWMfdfgHA4YwzztCKFSuUmJhoOwrgGhQSAHAIDw9X165dbccAXIWXbADA4fDhw3r66ae52y/QiigkAOBQWlqqbdu2qbS01HYUwDUoJAAAwDoKCQAAsI5CAgAArKOQAIBDQkKCJk2apISEBNtRANfgsl8AcIiPj9ekSZNsxwBchSMkAOBQVlamXbt2qayszHYUwDUoJADgkJeXpyVLligvL892FMA1KCQAAMA6CgkAALCOQgIAAKyjkACAw4mb64WHh9uOAriGxxhjbIdoiN/vl9frlc/nU1xcnO04AADgFDRm/80REgAAYB2FBAAcsrKydMsttygrK8t2FMA1KCQA4FBdXa2SkhJVV1fbjgK4BoUEAABYRyEBAADWUUgAAIB1FBIAcEhJSdEjjzyilJQU21EA1wizHQAA2pqoqCj169fPdgzAVThCAgAOhYWFeumll1RYWGg7CuAaFBIAcPD5fNq0aZN8Pp/tKIBrUEgAAIB1FBIAAGAdJ7UCaFUXXXSRsrOz610nNTVV//rXv1opEYC2wPVHSHr37q3evXvbjgG4RnZ2dr33iMnKymqwsLS02NhY/fSnP1VsbKzVHEBraQv7QqtHSCoqKhQREWEzAgALevToof/7v/876WO2fyhKUlJSkmbMmGE7BuAqp3yEZNWqVerWrZtqamqClk+cOFE33XSTvvnmG02cOFFdu3ZVTEyMfvzjH+udd94JWrdXr15avHixpk2bpri4OM2aNat5pgCAZnT8+HFlZGTo+PHjtqMAruExxphTWbGoqEjJycnatGmTLrvsMknfXaufkpKiTZs2KSkpSR999JFGjx6tyMhIvfjii1q2bJm+/vpr9ejRQ9J3haSoqEgLFy7Uf/zHf0iS+vTpU+tzHT9+POgHgd/vV2pqqnw+n+Li4k535iC9e/dWdna2UlNTm3W7AE7uxPdbfUdIbH9PVlZWqqioSAkJCQoPD7eWA2gtDX1fNpXf75fX6z2l/fcpHyFJSEjQ+PHj9de//jWw7O9//7uSkpJ0ySWXaNiwYbrlllt09tlnq1+/flq8eLH69OmjjRs3Bm3n0ksv1V133aU+ffqctIxI0uOPPy6v1xv4oCwAAPDD1qhzSKZMmaKZM2fqmWeeUWRkpNauXavrrrtOISEhKi0t1aJFi/T6668rNzdXVVVVKisrq3Xy2nnnndfg57n//vv1m9/8JvD3E0dIWkpLtEIAJ3cq54jY/p7MyMjQgw8+qMcee0xnnnmmtRxAa2kL5241qpBceeWVMsbo9ddf149//GN98MEHWrFihSRp/vz5evvtt7Vs2TL17dtX0dHRuvbaa1VRURG0jY4dOzb4eSIjIxUZGdmYaAAAoB1rVCGJiorSNddco7Vr12r//v0aMGCAzjnnHEnStm3bNH36dF199dWSpNLSUmVmZjZ7YADtX1ZWVp2/kWVlZQXOO7MlJCREUVFRCglx/TsjAK2m0Zf9TpkyRVdccYV2796tqVOnBpb369dPr7zyiq688kp5PB499NBDta7IaYt4qQZoXQ29/NqjRw/r54317NlTzz//vNUMQGtqC/vCRheSSy+9VImJifr66691ww03BJYvX75cN910k0aNGqWkpCTde++98vv9zRoWQPvHO7ACOJlTvuzXpsZcNgQAp+vgwYN68skndeedd6p79+624wDtVotc9gsAblFRUaGDBw/WOikfQMuhkAAAAOsoJAAAwDoKCQAAsI5CAgAOXbt21V133aWuXbvajgK4RqMv+wWAH7oOHTro3HPPtR0DcBWOkACAQ3FxsV577TUVFxfbjgK4BoUEAByKior0P//zPyoqKrIdBXANCgkAALCOQgIAAKyjkAAAAOsoJADg0LFjR6Wlpaljx462owCuwc31AABAi+DmegBwGqqqqlRYWKiqqirbUQDXoJAAgEN2drbmzJmj7Oxs21EA16CQAAAA6ygkAADAOgoJAACwjkICAACs47JfAHAwxqiqqkphYWHyeDy24wDtVmP232GtlAkA2g2Px6Pw8HDbMQBX4SUbAHDIzc3V4sWLlZubazsK4BoUEgBwKC8v1969e1VeXm47CuAaFBIAAGAdhQQAAFhHIQEAANZRSADAISkpSTNnzlRSUpLtKIBrcNkvADjExsbqkksusR0DcBWOkACAQ0lJid577z2VlJTYjgK4BoUEABwKCgq0evVqFRQU2I4CuAaFBAAAWEchAQAA1lFIAACAdRQSAHCIiorSoEGDFBUVZTsK4BoeY4yxHaIhjbl9MQAAaBsas//mCAkAOBhjVFlZqXbw+xrwg0EhAQCHzMxM3XjjjcrMzLQdBXANCgkAALCOQgIAAKyjkAAAAOsoJAAAwDru9gsADqmpqfrjH//I2wwArYhCAgAOYWFhSkxMtB0DcBVesgEAh0OHDmnlypU6dOiQ7SiAa1BIAMDh6NGj2r59u44ePWo7CuAaFBIAAGAdhQQAAFhHIQEAANZRSADAISEhQZMnT1ZCQoLtKIBrcNkvADjEx8dr4sSJtmMArsIREgBwOHbsmHbs2KFjx47ZjgK4BoUEABzy8/P1n//5n8rPz7cdBXANCgkAALCOQgIAAKyjkAAAAOsoJADgEBERoe7duysiIsJ2FMA1PMYYYztEQ/x+v7xer3w+H7cDBwCgnWjM/psjJAAAwDoKCQA4HDhwQDfddJMOHDhgOwrgGhQSAHCoqalReXm5ampqbEcBXINCAgAArKOQAAAA6ygkAADAOgoJADh069ZNjz32
mLp162Y7CuAaYbYDAEBbExkZqTPPPNN2DMBVOEICAA4FBQVas2aNCgoKbEcBXINCAgAOJSUlevvtt1VSUmI7CuAaFBIAAGAdhQQAAFhHIQEAANZRSADAwev1asKECfJ6vbajAK7BZb8A4JCYmKipU6fajgG4CkdIAMChvLxc+/btU3l5ue0ogGtQSADAITc3Vw8//LByc3NtRwFcg0ICAACso5AAAADrKCQAAMA6CgkAOISGhio2NlahoaG2owCu4THGGNshGuL3++X1euXz+RQXF2c7DgAAOAWN2X9zhAQAAFhHIQEAh5ycHM2bN085OTm2owCuQSEBAIfKykrl5+ersrLSdhTANSgkAADAOgoJAACwjkICAACso5AAgENycrLuu+8+JScn244CuEaY7QAA0NZER0dr6NChtmMArsIREgBwKC4u1oYNG1RcXGw7CuAaFBIAcCgqKtKGDRtUVFRkOwrgGhQSAABgHYUEAABYRyEBAADWUUgAwCEmJkajR49WTEyM7SiAa3iMMcZ2iIY05vbFAACgbWjM/psjJADgwM31gNZHIQEAh5ycHM2bN085OTm2owCuQSEBAADWUUgAAIB1FBIAAGAdhQQAAFjHZb8AAKBFcNkvAABoVygkAOCQm5urhQsXKjc313YUwDUoJADgUF5erv3796u8vNx2FMA1KCQAAMA6CgkAALCOQgIAAKyjkACAQ+fOnXX77berc+fOtqMArhFmOwAAtDUxMTG64IILbMcAXIUjJADg4Pf79dZbb8nv99uOArgGhQQAHI4cOaIXXnhBR44csR0FcA0KCQAAsI5CAgAArKOQAAAA6ygkAOAQHR2toUOHKjo62nYUwDU8xhhjO0RDGnP7YgAA0DY0Zv/NERIAcKipqVFZWZlqampsRwFcg0ICAA4HDhzQzTffrAMHDtiOArgGhQQAAFhHIQEAANZRSAAAgHUUEgAAYB13+wUAhx49eujZZ59Vx44dbUcBXINCAgAOoaGhvOcR0Mp4yQYAHPLz87Vs2TLl5+fbjgK4BoUEAByOHTumTz/9VMeOHbMdBXANCgkAALCOQgIAAKyjkAAAAOsoJADgkJiYqKlTpyoxMdF2FMA1uOwXABy8Xq8mTJhgOwbgKhwhAQCHo0ePavv27Tp69KjtKIBrUEgAwOHQoUNauXKlDh06ZDsK4BoUEgAAYB2FBAAAWEchAQAA1lFIAMAhIiJCvXr1UkREhO0ogGt4jDHGdoiG+P1+eb1e+Xw+7sAJAEA70Zj9N0dIAACAdRQSAHDIzMzUtGnTlJmZaTsK4BoUEgBwMMaoqqpK7eAVbeAHg0ICAACso5AAAADrKCQAAMA67vYLAA7du3fXE088oS5dutiOArgGhQQAHCIiInTGGWfYjgG4Ci/ZAIBDQUGBVq1apYKCAttRANegkACAQ0lJibZs2aKSkhLbUQDXoJAAAADrKCQAAMA6CgkAALCOQgIADl6vV1dddZW8Xq/tKIBrcNkvADgkJibquuuusx0DcBWOkACAQ3l5ufbs2aPy8nLbUQDXoJAAgENubq4effRR5ebm2o4CuAaFBAAAWEchAQAA1lFIAACAdRQSAHAICwtTYmKiwsK4EBFoLR5jjLEdoiF+v19er1c+n09xcXG24wAAgFPQmP03R0gAAIB1FBIAcMjOztacOXOUnZ1tOwrgGhQSAHCoqqpSYWGhqqqqbEcBXINCAgAArKOQAAAA6ygkAADAOgoJADikpKRowYIFSklJsR0FcA3e9QcAHKKiojR48GDbMQBX4QgJADgUFhZq3bp1KiwstB0FcA0KCQA4+Hw+bdy4UT6fz3YUwDUoJAAAwDoKCQAAsI5CAgAArKOQAIBDbGysLr74YsXGxtqOAriGxxhjbIdoSGNuXwwAANqGxuy/OUICAA4VFRXKyclRRUWF7SiAa1BIAMDh4MGDuueee3Tw4EHbUQDXaBfv1HriVSW/3285CQA3KCkpUWVlpUpKSvi5A5yGE98/p3J2SLs4hyQnJ0epqam2YwAAgCbIzs7WGWecUe867aKQ1NTU6Ntvv1VsbKw8Ho/tOPL7/UpNTVV2drarTrJ169ySe2dnbuZ2A+ZuubmNMSopKVG3bt0UElL/WSLt4iWbkJCQBpuVDXFxca76x3uCW+eW3Ds7c7sLc7tLS8/t9XpPaT1OagUAANZRSAAAgHUUkiaIjIzUww8/rMjISNtRWpVb55bcOztzM7cbMHfbmLtdnNQKAAB+2DhCAgAArKOQAAAA6ygkAADAOgoJAACwjkJSh8LCQk2ZMkVxcXGKj4/XzTffrNLS0nrXv+OOOzRgwABFR0erR48emjt3rnw+X9B6WVlZuvzyy9WhQwd16dJFd999t6qqqlp6nFPW2LkladWqVbr44osVFxcnj8ej4uLiWuv06tVLHo8n6GPJkiUtNEXjtdTcTdlua2pKvvLycs2ePVudOnVSTEyMJk2apPz8/KB1nF9rj8ejdevWteQo9Xr66afVq1cvRUVFKS0tTR9//HG96//tb3/TwIEDFRUVpSFDhmjTpk1BjxtjtHDhQqWkpCg6OlpjxozRvn37WnKEJmnuuadPn17r6zpu3LiWHKFJGjP37t27NWnSpMDPqCeffPK0t2lLc8+9aNGiWl/vgQMHttwABic1btw4M2zYMPPRRx+ZDz74wPTt29dcf/31da7/xRdfmGuuucZs3LjR7N+/37z77rumX79+ZtKkSYF1qqqqzNlnn23GjBljdu7caTZt2mSSkpLM/fff3xojnZLGzm2MMStWrDCPP/64efzxx40kU1RUVGudnj17mt/+9rcmNzc38FFaWtpCUzReS83dlO22pqbku/XWW01qaqp59913zSeffGLOP/98M2rUqKB1JJk1a9YEfb3LyspacpQ6rVu3zkRERJjnn3/e7N6928ycOdPEx8eb/Pz8k66/bds2Exoaap544gmzZ88es2DBAhMeHm6++OKLwDpLliwxXq/XvPrqq+bzzz83V111lTnzzDOtzXgyLTH3jTfeaMaNGxf0dS0sLGytkU5JY+f++OOPzfz5883LL79skpOTzYoVK057mza0xNwPP/ywOeuss4K+3ocPH26xGSgkJ7Fnzx4jyfz73/8OLHvjjTeMx+MxBw8ePOXtrF+/3kRERJjKykpjjDGbNm0yISEhJi8vL7DOn/70JxMXF2eOHz/efAM00enO/d5779VbSE72D74taKm5m+vfUUtpSr7i4mITHh5u/va3vwWW7d2710gy6enpgWWSzD/+8Y8Wy94YI0aMMLNnzw78vbq62nTr1s08/vjjJ13/F7/4hbn88suDlqWlpZlbbrnFGGNMTU2NSU5ONkuXLg08XlxcbCIjI83LL7/cAhM0TXPPbcx3hWTixIktkre5NHbu76vr59TpbLO1tMTcDz/8sBk2bFgzpqwfL9mcRHp6uuLj43XeeecFlo0ZM0YhISHavn37KW/H5/MpLi5OYWFhge0OGTJEXbt2Dazzs5/9TH6/X7t3726+AZqoueauy5IlS9SpUycNHz5cS5c
ubTMvVbXU3C39//N0NSXfjh07VFlZqTFjxgSWDRw4UD169FB6enrQurNnz1ZSUpJGjBih559//pRuP97cKioqtGPHjqC8ISEhGjNmTK28J6SnpwetL333fXpi/YyMDOXl5QWt4/V6lZaWVuc2W1tLzH3Cli1b1KVLFw0YMEC33Xabjhw50vwDNFFT5raxzebWkhn37dunbt26qXfv3poyZYqysrJON26d2sXN9VpbXl6eunTpErQsLCxMiYmJysvLO6VtFBQUaPHixZo1a1bQdr9fRiQF/n6q221JzTF3XebOnatzzjlHiYmJ+vDDD3X//fcrNzdXy5cvP63tNoeWmrsl/382h6bky8vLU0REhOLj44OWd+3aNeg5v/3tb3XppZeqQ4cOeuutt3T77bertLRUc+fObfY56lNQUKDq6uqTft999dVXJ31OXd+nJ+Y78d/61rGtJeaWpHHjxumaa67RmWeeqW+++UYPPPCAxo8fr/T0dIWGhjb/II3UlLltbLO5tVTGtLQ0vfDCCxowYIByc3P1yCOP6MILL9SXX36p2NjY041di6sKyX333aff//739a6zd+/e0/48fr9fl19+uQYPHqxFixad9vZOV2vNXZ/f/OY3gT8PHTpUERERuuWWW/T444+32NsWt4W5bWgLcz/00EOBPw8fPlxHjx7V0qVLW72QoHldd911gT8PGTJEQ4cOVZ8+fbRlyxZddtllFpOhJYwfPz7w56FDhyotLU09e/bU+vXrdfPNNzf753NVIbnrrrs0ffr0etfp3bu3kpOTdejQoaDlVVVVKiwsVHJycr3PLykp0bhx4xQbG6t//OMfCg8PDzyWnJxc66znE1cnNLTd09EaczdWWlqaqqqqlJmZqQEDBjTrtk+wPXdr/v/8vpacOzk5WRUVFSouLg46SpKfn1/vTGlpaVq8eLGOHz/eqvfNSEpKUmhoaK2rgOrLm5ycXO/6J/6bn5+vlJSUoHV+9KMfNWP6pmuJuU+md+/eSkpK0v79+9tEIWnK3Da22dxaK2N8fLz69++v/fv3N9s2g7Ta2SrtyImT/T755JPAss2bNzd4MqLP5zPnn3++ueiii8zRo0drPX7ipNbvn/X83HPPmbi4OFNeXt68QzRBU+c+ob6TWp1eeuklExIS0ibO0G+puU93uy2tKflOnNT697//PbDsq6++qnVSq9Ojjz5qEhISmi98I4wYMcLMmTMn8Pfq6mrTvXv3ek/uvOKKK4KWjRw5stZJrcuWLQs87vP52uRJrc0598lkZ2cbj8djXnvtteYJ3QwaO/f31XdSa1O32VpaYm6nkpISk5CQYFauXHk6UetEIanDuHHjzPDhw8327dvN1q1bTb9+/YIuh8zJyTEDBgww27dvN8Z89wMpLS3NDBkyxOzfvz/oMqmqqipjzP+/7Hfs2LHms88+M2+++abp3Llzm7vstzFzG2NMbm6u2blzp1m9erWRZN5//32zc+dOc+TIEWOMMR9++KFZsWKF+eyzz8w333xjXnrpJdO5c2czbdq0Vp+vLi0x96ls17amzH3rrbeaHj16mH/+85/mk08+MSNHjjQjR44MPL5x40azevVq88UXX5h9+/aZZ555xnTo0MEsXLiwVWc7Yd26dSYyMtK88MILZs+ePWbWrFkmPj4+cLXbL3/5S3PfffcF1t+2bZsJCwszy5YtM3v37jUPP/zwSS/7jY+PN6+99prZtWuXmThxYpu87Lc55y4pKTHz58836enpJiMjw7zzzjvmnHPOMf369WsTv1Cd0Ni5jx8/bnbu3Gl27txpUlJSzPz5883OnTvNvn37TnmbbUFLzH3XXXeZLVu2mIyMDLNt2zYzZswYk5SUZA4dOtQiM1BI6nDkyBFz/fXXm5iYGBMXF2dmzJhhSkpKAo9nZGQYSea9994zxvz/35JP9pGRkRF4XmZmphk/fryJjo42SUlJ5q677gpcFtwWNHZuY767NOxkc69Zs8YYY8yOHTtMWlqa8Xq9JioqygwaNMj87ne/a1M/xFpi7lPZrm1NmbusrMzcfvvtJiEhwXTo0MFcffXVJjc3N/D4G2+8YX70ox+ZmJgY07FjRzNs2DDz7LPPmurq6tYcLchTTz1levToYSIiIsyIESPMRx99FHjsoosuMjfeeGPQ+uvXrzf9+/c3ERER5qyzzjKvv/560OM1NTXmoYceMl27djWRkZHmsssuM19//XVrjNIozTn3sWPHzNixY03nzp1NeHi46dmzp5k5c2ab2imf0Ji5T/wbd35cdNFFp7zNtqK55548ebJJSUkxERERpnv37mby5Mlm//79LZbfY4yFa/EAAAC+h/chAQAA1lFIAACAdRQSAABgHYUEAABYRyEBAADWUUgAAIB1FBIAAGAdhQQAAFhHIQHauIsvvlh33nlni2z7Jz/5if7617+2yLYrKirUq1cvffLJJ6e0/kMPPaRZs2a1SBZbzj//fG3YsMF2DKBdoJAALrVx40bl5+cH3VK+V69eevLJJ2utu2jRoqA72S5atEgej0cej0ehoaFKTU3VrFmzVFhYGFgnIiJC8+fP17333ttglry8PK1cuVIPPvhgYFlJSYnuvPNO9ezZU9HR0Ro1apT+/e9/Bz1v+vTpgRwnPsaNGxd4/Pjx4/rlL3+puLg49e/fX++8807Q85cuXao77rijwXyS5Pf79eCDD2rgwIGKiopScnKyxowZo1deeUUn3vDaWR4XLFig++67TzU1Naf0OQA3o5AALvWHP/xBM2bMUEhI034MnHXWWcrNzVVWVpbWrFmjN998U7fddlvQOlOmTNHWrVu1e/fuerf15z//WaNGjVLPnj0Dy371q1/p7bff1l/+8hd98cUXGjt2rMaMGaODBw8GPXfcuHHKzc0NfLz88suBx1atWqUdO3YoPT1ds2bN0g033BAoDxkZGVq9erUee+yxBmctLi7WqFGj9OKLL+r+++/Xp59+qvfff1+TJ0/WPffcI5/Pd9LnjR8/XiUlJXrjjTca/ByA21FIgHamqKhI06ZNU0JCgjp06KDx48dr3759QeusXr1aqamp6tChg66++motX75c8fHxgccPHz6sf/7zn7ryyiubnCMsLEzJycnq3r27xowZo5///Od6++23g9ZJSEjQ6NGjtW7dunq3tW7duqAsZWVl2rBhg5544gn95Cc/Ud++fbVo0SL17dtXf/rTn4KeGxkZqeTk5MBHQkJC4LG9e/fqqquu0llnnaXZs2fr8OHDKigokCTddttt+v3vf6+4uLgGZ33ggQeUmZmp7du368Ybb9TgwYPVv39/zZw5U5999pliYmJO+rzQ0FBNmDChwfkBUEiAdmf69On65JNPtHHjRqWnp8sYowkTJqiyslKStG3bNt1666369a9/rc8++0w//elPax0F2Lp1qzp06KBBgwY1S6bMzExt3rxZERERtR4bMWKEPvjggzqfW1hYqD179ui8884LLKuqqlJ1dbWioqKC1o2OjtbWrVuDlm3ZskVdunTRgAEDdNttt+nIkSOBx4YNG6atW7
eqrKxMmzdvVkpKipKSkrR27VpFRUXp6quvbnC2mpoarVu3TlOmTFG3bt1qPR4TE6OwsLA6n9/Q/AC+U/d3EYA2Z9++fdq4caO2bdumUaNGSZLWrl2r1NRUvfrqq/r5z3+up556SuPHj9f8+fMlSf3799eHH36o//3f/w1s58CBA+ratetJX6659957tWDBgqBlFRUVGjx4cNCyL774QjExMaqurlZ5ebkkafny5bW2161bNx04cKDOmbKysmSMCdrZx8bGauTIkVq8eLEGDRqkrl276uWXX1Z6err69u0bWG/cuHG65pprdOaZZ+qbb77RAw88oPHjxys9PV2hoaG66aabtGvXLg0ePFhJSUlav369ioqKtHDhQm3ZskULFizQunXr1KdPHz3//PPq3r17rXwFBQUqKirSwIED65yhPt26dVN2drZqamqa/PIY4AYUEqAd2bt3r8LCwpSWlhZY1qlTJw0YMEB79+6VJH399de1fvMfMWJEUCEpKyurdfThhLvvvlvTp08PWvaHP/xB77//ftCyAQMGaOPGjSovL9dLL72kzz777KQniEZHR+vYsWN1zlRWViZJtfL85S9/0U033aTu3bsrNDRU55xzjq6//nrt2LEjsM73T8gdMmSIhg4dqj59+mjLli267LLLFB4erqeffjpouzNmzNDcuXO1c+dOvfrqq/r888/1xBNPaO7cuSe9IubEOSdNFR0drZqaGh0/flzR0dGntS3gh4y6DrhQUlKSioqK6nysb9++QR+JiYm11ouIiFDfvn119tlna8mSJQoNDdUjjzxSa73CwkJ17ty53iySauXp06eP/vWvf6m0tFTZ2dn6+OOPVVlZqd69e9e5rd69eyspKUn79+8/6ePvvfeedu/erTlz5mjLli2aMGGCOnbsqF/84hfasmXLSZ/TuXNnxcfH66uvvqrz89ansLBQHTt2pIwADaCQAO3IoEGDVFVVpe3btweWHTlyRF9//XXgJZUBAwbUujzW+ffhw4crLy+vzlLSFAsWLNCyZcv07bffBi3/8ssvNXz48Dqf16dPH8XFxWnPnj0nfbxjx45KSUlRUVGRNm/erIkTJ9a5rZycHB05ckQpKSm1HisvL9fs2bP13HPPKTQ0VNXV1YHzbiorK1VdXX3SbYaEhOi6667T2rVra80mSaWlpaqqqqozU0PzA/gOhQRoR/r166eJEydq5syZ2rp1qz7//HNNnTpV3bt3D+yo77jjDm3atEnLly/Xvn379Nxzz+mNN96Qx+MJbGf48OFKSkrStm3bmi3byJEjNXToUP3ud78LWv7BBx9o7NixdT4vJCREY8aMqXWy6ubNm/Xmm28qIyNDb7/9ti655BINHDhQM2bMkPRdEbj77rv10UcfKTMzU++++64mTpyovn376mc/+1mtz7N48WJNmDAhUA5Gjx6tV155Rbt27dIf//hHjR49us6Mjz32mFJTU5WWlqYXX3xRe/bs0b59+/T8889r+PDhKi0trfO5Dc0P4DsUEqCdWbNmjc4991xdccUVGjlypIwx2rRpk8LDwyV9t6N99tlntXz5cg0bNkxvvvmm5s2bF3SORmhoqGbMmKG1a9c2a7Z58+bpz3/+s7KzsyVJ6enp8vl8uvbaa+t93q9+9SutW7cu6A3EfD6fZs+erYEDB2ratGm64IILtHnz5sCcoaGh2rVrl6666ir1799fN998s84991x98MEHioyMDNr+l19+qfXr1we9pHTttdfq8ssv14UXXqhdu3Zp5cqVdeZLTEzURx99pKlTp+rRRx/V8OHDdeGFF+rll1/W0qVL5fV6T/q8gwcP6sMPPwyUKAB185jTPWMLQJs3c+ZMffXVV0GXn+bl5emss87Sp59+GvSGZM1p8uTJGjZsmB544IF61zPGKC0tTfPmzdP111/fIllsuPfee1VUVKRVq1bZjgK0eRwhAX6Ali1bps8//1z79+/XU089pf/+7//WjTfeGLROcnKy/uu//ktZWVktkqGiokJDhgzRvHnzGlzX4/Fo1apV9Z6L0R516dJFixcvth0DaBc4QgL8AJ24aqSkpES9e/fWHXfcoVtvvdV2LACoE4UEAABYx0s2AADAOgoJAACwjkICAACso5AAAADrKCQAAMA6CgkAALCOQgIAAKyjkAAAAOv+H8yAnND9Kr3dAAAAAElFTkSuQmCC", + "text/plain": [ + "
" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "from lifelines import CoxTimeVaryingFitter\n", "\n", - "def time_partial_log_likelihood(\n", - " log_hz: torch.Tensor, #nx1 vector\n", - " event: torch.Tensor, #n vector (i think)\n", - " time: torch.Tensor, #n vector (i think)\n", - " covariates: torch.Tensor, #nxp vector, p number of params\n", - ") -> torch.Tensor:\n", - "\n", - " # sort data by time-to-event or censoring\n", - " time_sorted, idx = torch.sort(time)\n", - " log_hz_sorted = log_hz[idx]\n", - " event_sorted = event[idx]\n", - "\n", - " exp_log_hz = torch.exp(log_hz_sorted)\n", - " #need to sort the covariate here as well \n", - " #sort covariates so that the rows match the ordering\n", - " covariates_sorted = covariates[idx, :]\n", - "\n", - " #the left hand side (HS) of the equation\n", - " #below is Z_k Z_k^T - i think it should be a vector matrix dim nxn\n", - " covariate_inner_product = torch.matmul(covariates_sorted, covariates_sorted.T)\n", - " \n", - " #pointwise multiplication of vectors to get the nominator of left HS\n", - " #outcome in a vector of length n\n", - " # Ends up being (1, n)\n", - " log_nominator_left = torch.matmul(exp_log_hz.T, covariate_inner_product)\n", - "\n", - " #right hand size of the equation\n", - " #formulate the brackets \\sum exp(theta)Z_k\n", - " bracket = torch.mul(exp_log_hz, covariates_sorted)\n", - " nominator_right = torch.matmul(bracket, bracket.T) #nxn matrix\n", - " ###not sure if the next line is this\n", - " #log_nominator_right = torch.sum(nominator_right, dim=0).unsqueeze(0)\n", - " ### or this\n", - " log_nominator_right = nominator_right[0,].unsqueeze(0)\n", - " #the denominator is the same on both sides\n", - " log_denominator = torch.logcumsumexp(log_hz_sorted.flip(0), dim=0).flip(0) #dim=0 sums over the oth dimension\n", - " partial_log_likelihood = torch.div(log_nominator_left - log_nominator_right, log_denominator) # (n, n)\n", - " return (partial_log_likelihood)[event_sorted]\n" + "ctv = CoxTimeVaryingFitter(penalizer=0.1)\n", + "ctv.fit(df, id_col=\"subj\", event_col=\"events\", start_col=\"start\", stop_col=\"stop\", show_progress=True)\n", + "ctv.print_summary()\n", + "ctv.plot()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "## Testing on the old dataset for dimensions sake\n", + "## Testing it on the lifelines dataset\n", "\n", - "using the data from the introduction notebook just to make sure dimensions work, this is not correct implementation" + "This is to demonstrate the method with a neural network, example inspired by the [lifelines example](https://lifelines.readthedocs.io/en/latest/Time%20varying%20survival%20regression.html#).\n", + "\n", + "This is a classic dataset for survival regression with time varying covariates. The original dataset is from J Crowley and M Hu. 'Covariance analysis of heart transplant survival data', and this dataset is from R’s survival library.\n" ] }, { "cell_type": "code", - "execution_count": 37, + "execution_count": 12, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "
\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
startstopeventageyearsurgerytransplantid
00.050.01-17.1553730.123203001
10.06.013.8357290.254620002
20.01.006.2970570.265572003
31.016.016.2970570.265572013
40.036.00-7.7371660.490075004
\n", + "
" + ], + "text/plain": [ + " start stop event age year surgery transplant id\n", + "0 0.0 50.0 1 -17.155373 0.123203 0 0 1\n", + "1 0.0 6.0 1 3.835729 0.254620 0 0 2\n", + "2 0.0 1.0 0 6.297057 0.265572 0 0 3\n", + "3 1.0 16.0 1 6.297057 0.265572 0 1 3\n", + "4 0.0 36.0 0 -7.737166 0.490075 0 0 4" + ] + }, + "execution_count": 12, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "import lifelines\n", + "\n", + "df = lifelines.datasets.load_stanford_heart_transplants()\n", + "df.head(5)" + ] + }, + { + "cell_type": "markdown", "metadata": {}, - "outputs": [], "source": [ - "import lifelines" + "The dataset contains the following:\n", + "\n", + "- `start`: entry time,\n", + "- `stop`: exit time,\n", + "- `event`: status for this interval of time,\n", + "- `age`: subjetct's age -48 years,\n", + "- `year`: tyear of acceptance (in years after 1 Nov 1967)\n", + "- `surgery`: prior bypass surgery 1=yes\n", + "- `transplant`: received transplant 1=yes\n", + "- `id`: patient id" ] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 6, "metadata": {}, "outputs": [], "source": [ - "# Load GBSG2 dataset\n", - "df = lifelines.datasets.load_gbsg2()\n", - "df.head(5)" + "from lifelines.utils import to_long_format, add_covariate_to_timeline\n", + "\n", + "base_df = pd.DataFrame([\n", + " {'id': 1, 'duration': 10, 'event': True, 'var1': 0.1},\n", + " {'id': 2, 'duration': 12, 'event': True, 'var1': 0.5}\n", + "])\n", + "\n", + "base_df = to_long_format(base_df, duration_col=\"duration\")" ] }, { @@ -407,7 +874,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": "conda-env2", "language": "python", "name": "python3" }, From dafdb404ee848e366bb22922d5d1a700e4b71293 Mon Sep 17 00:00:00 2001 From: Dembowska Date: Tue, 17 Dec 2024 13:50:11 +0100 Subject: [PATCH 04/19] small changes to both the notebook and loss files --- docs/notebooks/introduction.ipynb | 159 ++++++++++++++++++++----- docs/notebooks/loss_time_covariates.py | 12 +- docs/notebooks/time_varying.ipynb | 76 +++++++----- src/torchsurv/loss/time_covariates.py | 45 ------- 4 files changed, 179 insertions(+), 113 deletions(-) delete mode 100644 src/torchsurv/loss/time_covariates.py diff --git a/docs/notebooks/introduction.ipynb b/docs/notebooks/introduction.ipynb index bdfb74d..b42a79d 100644 --- a/docs/notebooks/introduction.ipynb +++ b/docs/notebooks/introduction.ipynb @@ -33,7 +33,7 @@ }, { "cell_type": "code", - "execution_count": 2, + "execution_count": 1, "id": "013dbcb4", "metadata": {}, "outputs": [], @@ -45,7 +45,7 @@ }, { "cell_type": "code", - "execution_count": 3, + "execution_count": 2, "id": "2601dd00-7bd2-49d5-9bdf-a84205872890", "metadata": {}, "outputs": [], @@ -71,7 +71,7 @@ }, { "cell_type": "code", - "execution_count": 4, + "execution_count": 3, "id": "d7a98ea2-100f-43ef-8c45-c786ddcd313e", "metadata": {}, "outputs": [ @@ -79,7 +79,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "CUDA-enabled GPU/TPU is available.\n" + "No CUDA-enabled GPU found, using CPU.\n" ] } ], @@ -107,7 +107,7 @@ }, { "cell_type": "code", - "execution_count": 5, + "execution_count": 4, "id": "1df49737-dc02-4d6b-acd7-d03b79f18a29", "metadata": { "scrolled": true @@ -225,7 +225,7 @@ "4 no 73 Post 35 II 1 26 65 772 1" ] }, - "execution_count": 5, + "execution_count": 4, "metadata": {}, "output_type": "execute_result" } @@ -270,7 +270,7 @@ }, { "cell_type": "code", - "execution_count": 6, + "execution_count": 5, "id": "7a5fd9ef-2643-46b7-9c98-05ff919026ea", "metadata": 
{}, "outputs": [ @@ -399,7 +399,7 @@ "4 0.0 1.0 0.0 " ] }, - "execution_count": 6, + "execution_count": 5, "metadata": {}, "output_type": "execute_result" } @@ -416,7 +416,7 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": 6, "id": "0f8b7f3b-fb2a-4d74-ac99-8f6390b2f5eb", "metadata": {}, "outputs": [ @@ -446,7 +446,7 @@ }, { "cell_type": "code", - "execution_count": 8, + "execution_count": 7, "id": "326c03fc-91f1-493b-a9ba-820de17fb2f8", "metadata": {}, "outputs": [], @@ -465,7 +465,7 @@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": 8, "id": "570386fb-f0ea-4061-bae2-11b274e7f851", "metadata": {}, "outputs": [ @@ -473,10 +473,10 @@ "name": "stdout", "output_type": "stream", "text": [ - "x (shape) = torch.Size([128, 9])\n", + "x (shape) = torch.Size([32, 9])\n", "num_features = 9\n", - "event = torch.Size([128])\n", - "time = torch.Size([128])\n" + "event = torch.Size([32])\n", + "time = torch.Size([32])\n" ] } ], @@ -517,7 +517,7 @@ }, { "cell_type": "code", - "execution_count": 10, + "execution_count": 9, "id": "9c2bd89a-c90a-4795-aab5-b5c21906a0de", "metadata": {}, "outputs": [], @@ -534,6 +534,109 @@ ")" ] }, + { + "cell_type": "code", + "execution_count": 10, + "id": "7d97e65d", + "metadata": {}, + "outputs": [], + "source": [ + "# This is for testing the loss function\n", + "x_test, (test_event, test_time) = next(iter(dataloader_train))" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "id": "5d102dad", + "metadata": {}, + "outputs": [], + "source": [ + "log_hz = cox_model(x_test)" + ] + }, + { + "cell_type": "code", + "execution_count": 46, + "id": "210e6755", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "x_test torch.Size([32, 9])\n", + "events torch.Size([32])\n", + "times torch.Size([32])\n", + "\n", + "time_sorted torch.Size([32])\n", + "log_hz_sorted torch.Size([32, 1])\n", + "event_sorted torch.Size([32])\n", + "time_unique torch.Size([30])\n", + "------------------------------\n", + "covariates torch.Size([32, 9])\n", + "cov_inner torch.Size([32, 32])\n", + "log_nom_left torch.Size([1, 32])\n", + "bracket torch.Size([32, 9])\n", + "log_nom_right torch.Size([32, 32])\n", + "sum_nom torch.Size([1, 32])\n", + "log_denom torch.Size([1, 32])\n", + "last_bit torch.Size([1, 32])\n" + ] + }, + { + "data": { + "text/plain": [ + "tensor([[-1.4683e+04, -2.9827e+03, -2.9461e+04, -4.0582e+04, -1.7949e+04,\n", + " -1.4714e+05, -1.4940e+03, -7.7085e+04, -5.3855e+04, -9.3090e+03,\n", + " -9.8543e+03, -1.8929e+05, -5.1617e+03, -4.4286e+03, -9.6604e+04,\n", + " -1.5469e+04, -2.7680e+04, -6.3136e+04, -1.2045e+05, -9.3347e+04,\n", + " -1.7911e+05, -1.3205e+05, -1.6203e+05, -3.0884e+04, -2.3050e+03,\n", + " -2.1324e+05, -1.7852e+06, -1.7429e+04, -2.9495e+05, -8.4400e+03,\n", + " -5.5583e+04, 1.2975e+05]], grad_fn=)" + ] + }, + "execution_count": 46, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "print('x_test', x_test.shape)\n", + "print('events', test_event.shape)\n", + "print('times', test_time.shape)\n", + "\n", + "time_sorted, idx = torch.sort(time)\n", + "log_hz_sorted = log_hz[idx]\n", + "event_sorted = event[idx]\n", + "time_unique = torch.unique(time_sorted)\n", + "print('')\n", + "print(\"time_sorted\", time_sorted.shape)\n", + "print('log_hz_sorted', log_hz_sorted.shape)\n", + "print('event_sorted', event_sorted.shape)\n", + "print(\"time_unique\", time_unique.shape)\n", + "\n", + "print('-'*30)\n", + "cov_fake = torch.clone(x_test)\n", + 
"print('covariates', cov_fake.shape)\n", + "covariates_sorted = cov_fake[idx, :]\n", + "covariate_inner_product = torch.matmul(covariates_sorted, covariates_sorted.T)\n", + "print('cov_inner', covariate_inner_product.shape)\n", + "log_nominator_left = torch.matmul(log_hz_sorted.T, covariate_inner_product)\n", + "print('log_nom_left', log_nominator_left.shape)\n", + "bracket = torch.mul(log_hz_sorted, covariates_sorted)\n", + "print('bracket', bracket.shape)\n", + "log_nominator_right = torch.matmul(bracket, bracket.T)\n", + "print('log_nom_right', log_nominator_right.shape)\n", + "sum_nominator_right = torch.sum(log_nominator_right, dim=0).unsqueeze(0)\n", + "print('sum_nom', sum_nominator_right.shape)\n", + "log_denominator = torch.logcumsumexp(log_hz_sorted.flip(0), dim=0).flip(0).T\n", + "print('log_denom', log_denominator.shape)\n", + "last_bit = torch.div(log_nominator_left - sum_nominator_right, log_denominator)\n", + "print('last_bit', last_bit.shape)\n", + "last_bit\n" + ] + }, { "cell_type": "markdown", "id": "97c90244", @@ -544,7 +647,7 @@ }, { "cell_type": "code", - "execution_count": 11, + "execution_count": 15, "id": "d7889dc1-1cfa-424e-a586-481cbc789581", "metadata": {}, "outputs": [ @@ -552,16 +655,16 @@ "name": "stdout", "output_type": "stream", "text": [ - "Epoch: 000, Training loss: 12.75\n", - "Epoch: 010, Training loss: 12.02\n", - "Epoch: 020, Training loss: 11.79\n", - "Epoch: 030, Training loss: 11.84\n", - "Epoch: 040, Training loss: 11.61\n", - "Epoch: 050, Training loss: 11.61\n", - "Epoch: 060, Training loss: 11.46\n", - "Epoch: 070, Training loss: 11.57\n", - "Epoch: 080, Training loss: 11.56\n", - "Epoch: 090, Training loss: 11.20\n" + "Epoch: 000, Training loss: 31.85\n", + "Epoch: 010, Training loss: 30.18\n", + "Epoch: 020, Training loss: 29.73\n", + "Epoch: 030, Training loss: 29.84\n", + "Epoch: 040, Training loss: 29.04\n", + "Epoch: 050, Training loss: 29.61\n", + "Epoch: 060, Training loss: 29.46\n", + "Epoch: 070, Training loss: 28.94\n", + "Epoch: 080, Training loss: 29.31\n", + "Epoch: 090, Training loss: 28.00\n" ] } ], @@ -1183,7 +1286,7 @@ ], "metadata": { "kernelspec": { - "display_name": "torchsurv_env", + "display_name": "Python 3", "language": "python", "name": "python3" }, @@ -1197,7 +1300,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.14" + "version": "3.10.15" } }, "nbformat": 4, diff --git a/docs/notebooks/loss_time_covariates.py b/docs/notebooks/loss_time_covariates.py index 9e57ccc..e54bdef 100644 --- a/docs/notebooks/loss_time_covariates.py +++ b/docs/notebooks/loss_time_covariates.py @@ -9,19 +9,11 @@ def neg_partial_time_log_likelihood( event: torch.Tensor, time: torch.Tensor, ties_method: str = "efron", - reduction: str = "mean", - checks: bool = True, + reduction: str = "mean" ) -> torch.Tensor: ''' - THIS FUNCTION IS NOT DONE, i HAVENT TESTED THE NEGATIVE PART YET + needs further work ''' - if checks: - _check_inputs(log_hz, event, time) - - if any([event.sum() == 0, len(log_hz.size()) == 0]): - warnings.warn("No events OR single sample. 
Returning zero loss for the batch") - return torch.tensor(0.0, requires_grad=True) - # sort data by time-to-event or censoring time_sorted, idx = torch.sort(time) log_hz_sorted = log_hz[idx] diff --git a/docs/notebooks/time_varying.ipynb b/docs/notebooks/time_varying.ipynb index dc05839..4bd6be9 100644 --- a/docs/notebooks/time_varying.ipynb +++ b/docs/notebooks/time_varying.ipynb @@ -21,8 +21,10 @@ "\n", "### Context and statistical set-up\n", "\n", - "Let $T^*_i$ be the be the failure time of interest for subject $i$ and $C$ be the censoring time. Let $T_i = min(T^*, C)$. We use $\\delta_i$ to denote whether $T*_i$ was observed. We will use $Z(t)$ to denote the value of of covariate $Z$ and time $t$. \n", - "We use $Z(t) to denote the value of Z at time $t$ and $\\overline{Z}(t)$ to denote the set of covariates from the beggining up to time $t$: $ \\overline{Z}(t) = \\{ Z(s): 0 \\leq s \\leq t\\}$.\n", + "Let $i$ e the index for some subject $i$ with a failute time denoted as $\\tau^*_i$ and $C$ be the censoring time. For the moment $C$ remains constant but there are extensions that allow for $C$ to vary over $i$. Let $\\tau_i = min(\\tau^*_i, C)$. We use $\\delta_i$ to denote whether $\\tau^*_i$ was observed. \n", + "\n", + "We will use $Z(t)$ to denote the value of of covariate $Z$ and time $t$. \n", + "We use $Z(t)$ to denote the value of Z at time $t$ and $\\overline{Z}(t)$ to denote the set of covariates from the beggining up to time $t$: $ \\overline{Z}(t) = \\{ Z(s): 0 \\leq s \\leq t\\}$.\n", "Let $t_k$ for $k \\in \\{1, \\dots, K\\} denote the time points at which the covariates are observed. For the moment, we assume that all subjects have been observed on the same time grid. $R_k$ is the set of individuals who are at risk at $t_k$. \n", "\n", "The conditional hazard function of $T$ given $\\overline{Z}(t)$ is defined as\n", @@ -64,7 +66,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 1, "metadata": {}, "outputs": [], "source": [ @@ -77,7 +79,7 @@ }, { "cell_type": "code", - "execution_count": 5, + "execution_count": 2, "metadata": {}, "outputs": [], "source": [ @@ -88,7 +90,7 @@ }, { "cell_type": "code", - "execution_count": 4, + "execution_count": 3, "metadata": {}, "outputs": [], "source": [ @@ -113,12 +115,18 @@ "source": [ "## Simulating a dataset\n", "\n", - "We will simulate a dataset of 100 subjects with 6 follow up times where a covariate is observed. The covariates will change over time slightly but will be generated from one random variable per subject so that " + "We will simulate a dataset of 100 subjects with 10 follow up times where a covariate is observed. The covariates will follow a trigonometric function over time and will be dependant on a random variable to differentiate between subjects.\n", + "\n", + "For each $i$ the covariate follows the function:\n", + "\n", + "$$ Z_i(t) = a_i \\cos(2 \\pi t) $$\n", + "\n", + "where $a_i \\sim N(5, 2.5)$." 
] }, { "cell_type": "code", - "execution_count": 6, + "execution_count": 7, "metadata": {}, "outputs": [], "source": [ @@ -130,8 +138,8 @@ "\n", "# create random variables following a normal distribution N(1,1) for each subject \n", "mean = 5\n", - "standard_dev = 5\n", - "random_vars = torch.randn(sample_size)#*standard_dev + mean\n", + "standard_dev = 2.5\n", + "random_vars = torch.randn(sample_size)*standard_dev + mean\n", "\n", "# using the random variables from above, we create a set of covariates for each subject \n", "t = torch.linspace(0, 2*math.pi, obs_time) # Generating 6 equidistant time points from 0 to 2*pi\n", @@ -168,21 +176,19 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# make random positive time to event\n", - "time = torch.floor(random_vars * 10)+4\n", - "time[time>9]=9\n", - "time[time<1]=0\n", - "#print(time) \n", + "time = torch.floor(random_vars)\n", + "# print(time) \n", "# tensor([1.2792e+01, -7.7415e+00, 9.2325e+00, 1.0845e+01, 7.6460e+00, ...\n", "\n", - "# decide who has an event, here we cosnider those whose time is greater than one (this means some a small subroup has not experienced an event)\n", - "events = time > 1\n", + "# decide who has an event, here we cosnider those whose time is greater than one and smaller than 9\n", + "events = (time > 1) & (time < 8)\n", "# tensor([ True, True, False, False, True, ...\n", - "#print(events)\n", + "# print(events)\n", "\n", "# remove the covariates for those who have observed an event\n", "\n", @@ -191,33 +197,43 @@ " time_cap = int(time[i])\n", " covars[i, time_cap:] = torch.zeros(obs_time-time_cap)\n", "\n", - "# covars should be tensor([[ 3.3737e-01, 2.5844e-01, 5.8584e-02, -1.6869e-01, -3.1702e-01, ... and zeros after an event occured\n", - "#print(covars)" + "# covars should be tensor([[ 3.3737e-01, 2.5844e-01, 5.8584e-02, -1.6869e-01, -3.1702e-01, ... \n", + "# and zeros after an event occured\n", + "\n", + "# print(covars)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Training the RNN \n", + "\n", + "Below we will give an example set up of how to use the partial log likelihood in a loss function. We import the python file containg the loss and set up an RNN to work with our simulated data." 
] }, { "cell_type": "code", - "execution_count": 1, + "execution_count": 24, "metadata": {}, "outputs": [], "source": [ - "from loss_time_covariates import _partial_likelihood_time_cox" + "from loss_time_covariates import neg_partial_time_log_likelihood" ] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 22, "metadata": {}, "outputs": [ { - "ename": "ImportError", - "evalue": "cannot import name 'time_covariates' from 'torchsurv.loss' (/home/demboso1/conda-env2/lib/python3.10/site-packages/torchsurv/loss/__init__.py)", - "output_type": "error", - "traceback": [ - "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", - "\u001b[0;31mImportError\u001b[0m Traceback (most recent call last)", - "Cell \u001b[0;32mIn[160], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mtorchsurv\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mloss\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m time_covariates\n\u001b[1;32m 2\u001b[0m \u001b[38;5;66;03m#from torchsurv.metrics.cindex import ConcordanceIndex\u001b[39;00m\n\u001b[1;32m 3\u001b[0m \n\u001b[1;32m 4\u001b[0m \u001b[38;5;66;03m# Parameters\u001b[39;00m\n\u001b[1;32m 5\u001b[0m input_size \u001b[38;5;241m=\u001b[39m \u001b[38;5;241m1\u001b[39m\n", - "\u001b[0;31mImportError\u001b[0m: cannot import name 'time_covariates' from 'torchsurv.loss' (/home/demboso1/conda-env2/lib/python3.10/site-packages/torchsurv/loss/__init__.py)" + "name": "stdout", + "output_type": "stream", + "text": [ + "torch.Size([10, 100, 1])\n", + "torch.Size([10, 100, 1])\n", + "torch.Size([2, 100, 1])\n", + "torch.Size([10, 100, 1])\n" ] } ], diff --git a/src/torchsurv/loss/time_covariates.py b/src/torchsurv/loss/time_covariates.py deleted file mode 100644 index 5f6cb59..0000000 --- a/src/torchsurv/loss/time_covariates.py +++ /dev/null @@ -1,45 +0,0 @@ -import sys -import warnings - -import torch - -def time_partial_log_likelihood( - log_hz: torch.Tensor, #nx1 vector - event: torch.Tensor, #n vector (i think) - time: torch.Tensor, #n vector (i think) - covariates: torch.Tensor, #nxp vector, p number of params -) -> torch.Tensor: - - # sort data by time-to-event or censoring - time_sorted, idx = torch.sort(time) - log_hz_sorted = log_hz[idx] - event_sorted = event[idx] - - #keep log if we can - exp_log_hz = torch.exp(log_hz_sorted) - #remove mean over time from covariates - #sort covariates so that the rows match the ordering - covariates_sorted = covariates[idx, :] - covariates.mean(dim=0) - - #the left hand side (HS) of the equation - #below is Z_k Z_k^T - i think it should be a vector matrix dim nxn - covariate_inner_product = torch.matmul(covariates_sorted, covariates_sorted.T) - - #pointwise multiplication of vectors to get the nominator of left HS - #outcome in a vector of length n - # Ends up being (1, n) - log_nominator_left = torch.matmul(exp_log_hz.T, covariate_inner_product) - - #right hand size of the equation - #formulate the brackets \sum exp(theta)Z_k - bracket = torch.mul(exp_log_hz, covariates_sorted) - nominator_right = torch.matmul(bracket, bracket.T) #nxn matrix - ###not sure if the next line is this - #log_nominator_right = torch.sum(nominator_right, dim=0).unsqueeze(0) - ### or this - log_nominator_right = nominator_right[0,].unsqueeze(0) - #the denominator is the same on both sides - log_denominator = torch.logcumsumexp(log_hz_sorted.flip(0), dim=0).flip(0) #dim=0 sums over the oth dimension - partial_log_likelihood = 
torch.div(log_nominator_left - log_nominator_right, log_denominator) # (n, n) - - return (partial_log_likelihood)[event_sorted] \ No newline at end of file From a141f8d7cea8484f923ca4b7007e30c668252211 Mon Sep 17 00:00:00 2001 From: Dembowska Date: Wed, 18 Dec 2024 14:55:56 +0100 Subject: [PATCH 05/19] loss function works, added function documentation --- docs/notebooks/loss_time_covariates.py | 102 ++++++++++++------- docs/notebooks/time_varying.ipynb | 131 ++++++++++++++----------- 2 files changed, 141 insertions(+), 92 deletions(-) diff --git a/docs/notebooks/loss_time_covariates.py b/docs/notebooks/loss_time_covariates.py index e54bdef..d744431 100644 --- a/docs/notebooks/loss_time_covariates.py +++ b/docs/notebooks/loss_time_covariates.py @@ -5,23 +5,16 @@ def neg_partial_time_log_likelihood( - log_hz: torch.Tensor, - event: torch.Tensor, - time: torch.Tensor, - ties_method: str = "efron", + log_hz: torch.Tensor, #Txnxp torch tensor, n is batch size, T number of time points, p is number of different covariates over time + time: torch.Tensor, #n length vector, time at which someone experiences event + events: torch.Tensor, #n length vector, boolean, true or false to determine if someone had an event reduction: str = "mean" ) -> torch.Tensor: ''' needs further work ''' - # sort data by time-to-event or censoring - time_sorted, idx = torch.sort(time) - log_hz_sorted = log_hz[idx] - event_sorted = event[idx] - time_unique = torch.unique(time_sorted) # time-to-event or censoring without ties - # only consider theta at tiem of - pll = _partial_likelihood_time_cox(log_hz_sorted, event_sorted) + pll = _partial_likelihood_time_cox(log_hz, time, events) # Negative partial log likelihood pll = torch.neg(pll) @@ -38,47 +31,82 @@ def neg_partial_time_log_likelihood( return loss def _partial_likelihood_time_cox( - log_hz: torch.Tensor, #nxTxp torch tensor, n is batch size, T number of time points, p is number of different covariates over time - event: torch.Tensor, #n length vector, boolean, true or false to determine if someone had an event + log_hz: torch.Tensor, #Txnxp torch tensor, n is batch size, T number of time points, p is number of different covariates over time time: torch.Tensor, #n length vector, time at which someone experiences event + events: torch.Tensor, #n length vector, boolean, true or false to determine if someone had an event + ) -> torch.Tensor: - """Calculate the partial log likelihood for the Cox proportional hazards model + """ + Calculate the partial log likelihood for the Cox proportional hazards model with time-varying covariates and in the absence of ties in event time. - For time-varying covariates, the haard ratio is no longer assumed to be constant, - but the partial log likelihood only cares about the covariate value at time of death. - - Hence, despite taking in a whole vector of stuff, we only take the last value - into consideration for the partial log likelihood. - - Requirements we want: - - time vector must somehow correspond to the T dimension in the log_hz tensor, i.e. for those who experience an event, + Args: + log_hz (torch.Tensor, float): + Log relative hazard of dimension T x n_samples x P. + T is the time series dimension, P is the number of parameters observed over time. + event (torch.Tensor, bool): + Event indicator of length n_samples (= True if event occured). + time (torch.Tensor): + Time-to-event or censoring of length n_samples. + + Returns: + (torch.tensor, float): + Vector of the partial log likelihood, length n_samples. 
+ + Note: + For each subject :math:`i \in \{1, \cdots, N\}`, denote :math:`\tau^*_i` as the survival time and :math:`C_i` as the + censoring time. Survival data consist of the event indicator, :math:`\delta_i=1(\tau^*_i\leq C_i)` + (argument ``event``) and the time-to-event or censoring, :math:`\tau_i = \min(\{ \tau^*_i,D_i \})` + (argument ``time``). + + Consider some covariate :math:`Z(t)` with covariate history denoted as :math:`H_Z` and a general form of the cox proportional hazards model: + .. math:: + + \log \lambda_i (t|H_Z) = lambda_0(t) \theta(Z(t)) + + A network that maps the input covariates $Z(t)$ to the log relative hazards: :math:`\log \theta(Z(t))`. + The partial likelihood with repsect to :math:`\log \theta(Z(t))` is written as: + + .. math:: + + \log L(\theta) = \sum_j \Big( \log \theta(Z_i(\tau_j)) - \log [\sum_{j \in R_i} \theta (Z_i(\tau_j))] \Big) + + and it only considers the values of te covariate :math:`Z` at event time :math:`\tau_i` + + Remarks: + - values inside the time vector must be strictly zero or positive as they are used to identify values of + covariates at event time + - the maximum value inside the vector time cannt exceed T-1 for indexing reasons + - this function was not tested for P>1 but it should be possile for an extension + - the values of Z at event time should not be null, a reasonable imputation method should be used, + unless the network fullfills that role + - future formatting: time vector must somehow correspond to the T dimension in the log_hz tensor, i.e. for those who experience an event, we want to identify the index of the covariate upon failure. We could either consider the last covariate before a series of zeros (requires special data formatting but could reduce issues as it automatically contains event time information). - - this version doesn't allow for P>1 but it can be considered as an additional dimension and then in the final - step you can take the mean across p - - we want values of the covariate at event time to not be null, maybe there could be some automation function that imputes the latest values if possible - - maybe some guidance can go here on how to format the covariates, right now its just a tensor. - """ + + """ + # time cannot be smaller than zero, and maximum value cannot exceed the T dimension for this to work + # somehwere here it might be good to make sure maximum values in time do not exceed T and raise a warning time_sorted, idx = torch.sort(time) - #sort the output of the RNN by the subjects who have earlier event time - #we want a tensor out - log_hz_sorted = outputs[:,idx,:] - event_sorted = events[idx] + + # sort the output of the RNN by the subjects who have earlier event time + # we want a tensor out + log_hz_sorted = log_hz[:,idx,:] + events_sorted = events[idx] #format the time so we can use it to index #in the next step we want to pick out the covariate at event time for each subject for each covariate p - #this line is just to be able to index - can be changed depending on how time is formatted time_sorted=time_sorted.type(torch.int64) - # below is pseudocode of what to do to geth the log likelihood - #as an outcome we want an nx1xp tensor aka. time is reduced and we only cosnider Z(tau_j) - log_hz_sorted_tj = log_hz_sorted[time_sorted, :, :] + + #as an outcome we want an 1xnxp tensor aka. 
we only cosnider Z(tau_j), a covariate at time of event + log_hz_sorted_tj = log_hz_sorted.gather(0, time_sorted.unsqueeze(0).unsqueeze(-1)) #same step as in normal cox loss, just again, we consider Z(tau_j) where tau_j denotes event time to subject j log_denominator_tj = torch.logcumsumexp(log_hz_sorted_tj.flip(0), dim=0).flip(0) - - return (log_hz_sorted_tj - log_denominator_tj)[event_sorted] + #give the mask the same dimensions as the log_hz and log_denominator vectors + event_mask = events_sorted.unsqueeze(0).unsqueeze(-1) + return (log_hz_sorted_tj - log_denominator_tj)[event_mask] def _time_varying_covariance( diff --git a/docs/notebooks/time_varying.ipynb b/docs/notebooks/time_varying.ipynb index 4bd6be9..2550a87 100644 --- a/docs/notebooks/time_varying.ipynb +++ b/docs/notebooks/time_varying.ipynb @@ -6,18 +6,14 @@ "source": [ "# Implementing time-varying covariates\n", "\n", - "In this notebook, we analyse a simulated dataset with time-varying covariates and survival outcomes. `TorchSurv` is used to train a model that predicts relative risk of subjects based on covariates observed over time. We will attempt to thoroughly explain the necessary elements to understand our implementation, but for a detailed read on time-varying survival models refer to Chapter 6 of [Dynamic Regression Models for Survival Data](https://link.springer.com/book/10.1007/0-387-33960-4). \n", - "\n", - "### Dependencies\n", - "\n", - "To run this notebook, dependencies must be installed. the recommended method is to use our developpment conda environment (**preffered**). Instruction can be found [here](https://opensource.nibr.com/torchsurv/devnotes.html#set-up-a-development-environment-via-conda) to install all optional dependancies. The other method is to install only required packages using the command line below:" + "In this notebook, we analyse a simulated dataset with time-varying covariates and survival outcomes. `TorchSurv` is used to train a model that predicts relative risk of subjects based on covariates observed over time. We will attempt to thoroughly explain the necessary elements to understand our implementation, but for a detailed read on time-varying survival models refer to Chapter 6 of [Dynamic Regression Models for Survival Data](https://link.springer.com/book/10.1007/0-387-33960-4). For a more brief explanation, please refer to these [slides](https://ms.uky.edu/~mai/sta635/Cox%20model.pdf). Below is a summary of the necessary information." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "## Implementing partial log likelihood for time-varying covariates\n", + "## Partial log likelihood for time-varying covariates\n", "\n", "### Context and statistical set-up\n", "\n", @@ -64,6 +60,15 @@ "$$ \\log L(\\theta) = \\sum_j \\Big( \\phi(Z_i(\\tau_j)) - \\log [\\sum_{j \\in R_i} \\exp \\phi(Z_i(\\tau_j))] \\Big).$$\n" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Dependencies\n", + "\n", + "To run this notebook, dependencies must be installed. the recommended method is to use our developpment conda environment (**preffered**). Instruction can be found [here](https://opensource.nibr.com/torchsurv/devnotes.html#set-up-a-development-environment-via-conda) to install all optional dependancies. 
The other method is to install only required packages using the command line below:" + ] + }, { "cell_type": "code", "execution_count": 1, @@ -79,7 +84,7 @@ }, { "cell_type": "code", - "execution_count": 2, + "execution_count": 1, "metadata": {}, "outputs": [], "source": [ @@ -90,7 +95,7 @@ }, { "cell_type": "code", - "execution_count": 3, + "execution_count": 2, "metadata": {}, "outputs": [], "source": [ @@ -126,7 +131,7 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": 3, "metadata": {}, "outputs": [], "source": [ @@ -182,6 +187,10 @@ "source": [ "# make random positive time to event\n", "time = torch.floor(random_vars)\n", + "# this is a workaround the loss function. This is done so that when we find the right\n", + "# indices in the log_hz we don't try to pick up things that are out of bounds.\n", + "time[time<0] = 0\n", + "time[time>9] = 9\n", "# print(time) \n", "# tensor([1.2792e+01, -7.7415e+00, 9.2325e+00, 1.0845e+01, 7.6460e+00, ...\n", "\n", @@ -214,16 +223,21 @@ }, { "cell_type": "code", - "execution_count": 24, + "execution_count": null, "metadata": {}, "outputs": [], "source": [ - "from loss_time_covariates import neg_partial_time_log_likelihood" + "from importlib import reload\n", + "import loss_time_covariates\n", + "\n", + "reload(loss_time_covariates)\n", + "log_likelihood = loss_time_covariates._partial_likelihood_time_cox\n", + "neg_loss_function = loss_time_covariates.neg_partial_time_log_likelihood" ] }, { "cell_type": "code", - "execution_count": 22, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -266,7 +280,7 @@ "# print(f\"Estimate shape for {batch_size} samples = {estimates.size()}\") # Estimate shape for 8 samples = torch.Size([8, 1])\n", "\n", "\n", - "# loss = cox.neg_partial_log_likelihood(estimates, events, time)\n", + "#loss = neg_loss_function(outputs, events, time)\n", "# print(f\"loss = {loss}, has gradient = {loss.requires_grad}\") # loss = 1.0389232635498047, has gradient = True\n", "\n", "# cindex = ConcordanceIndex()\n", @@ -284,7 +298,7 @@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": 26, "metadata": {}, "outputs": [], "source": [ @@ -304,7 +318,7 @@ "#times = []\n", "start = []\n", "stop = []\n", - "events = []\n", + "event = []\n", "subjs = []\n", "for i in range(sample_size):\n", " subj_counter = 0\n", @@ -316,34 +330,41 @@ " #times.append(j)\n", " start.append(j-1)\n", " stop.append(j)\n", - " events.append(False)\n", + " event.append(False)\n", " subj_counter += 1\n", " subjs.extend([i] * subj_counter)\n", - " events[-1]=True\n", + " if events[i]==True: event[-1]=True\n", "\n", "df = pd.DataFrame({\n", " \"subj\": subjs,\n", " #\"times\": times,\n", " \"start\":start,\n", " \"stop\": stop,\n", - " \"events\": events,\n", + " \"events\": event,\n", " \"var\": vars, \n", "})\n" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Fitting a cox regression model using the lifelines package.\n" + ] + }, { "cell_type": "code", - "execution_count": 11, + "execution_count": 27, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ - "Iteration 1: norm_delta = 2.11e-02, step_size = 0.9500, log_lik = -363.73938, newton_decrement = 5.00e-02, seconds_since_start = 0.0\n", - "Iteration 2: norm_delta = 1.05e-03, step_size = 0.9500, log_lik = -363.68954, newton_decrement = 1.24e-04, seconds_since_start = 0.0\n", - "Iteration 3: norm_delta = 5.24e-05, step_size = 0.9500, log_lik = -363.68942, newton_decrement = 3.09e-07, seconds_since_start = 0.0\n", - 
"Iteration 4: norm_delta = 2.76e-06, step_size = 1.0000, log_lik = -363.68942, newton_decrement = 7.73e-10, seconds_since_start = 0.0\n", + "Iteration 1: norm_delta = 7.81e-03, step_size = 0.9500, log_lik = -309.16572, newton_decrement = 1.92e-03, seconds_since_start = 0.1\n", + "Iteration 2: norm_delta = 3.93e-04, step_size = 0.9500, log_lik = -309.16381, newton_decrement = 4.85e-06, seconds_since_start = 0.1\n", + "Iteration 3: norm_delta = 1.96e-05, step_size = 0.9500, log_lik = -309.16380, newton_decrement = 1.21e-08, seconds_since_start = 0.1\n", + "Iteration 4: norm_delta = 1.03e-06, step_size = 1.0000, log_lik = -309.16380, newton_decrement = 3.03e-11, seconds_since_start = 0.1\n", "Convergence completed after 4 iterations.\n" ] }, @@ -380,23 +401,23 @@ " \n", " \n", " number of subjects\n", - " 100\n", + " 95\n", " \n", " \n", " number of periods\n", - " 778\n", + " 476\n", " \n", " \n", " number of events\n", - " 100\n", + " 80\n", " \n", " \n", " partial log-likelihood\n", - " -363.69\n", + " -309.16\n", " \n", " \n", " time fit was run\n", - " 2024-12-13 16:55:56 UTC\n", + " 2024-12-17 12:40:01 UTC\n", " \n", " \n", "\n", @@ -420,17 +441,17 @@ " \n", " \n", " var\n", - " -0.03\n", - " 0.97\n", - " 0.09\n", - " -0.21\n", - " 0.15\n", - " 0.81\n", - " 1.16\n", + " -0.00\n", + " 1.00\n", + " 0.03\n", + " -0.06\n", + " 0.05\n", + " 0.94\n", + " 1.06\n", " 0.00\n", - " -0.32\n", - " 0.75\n", - " 0.41\n", + " -0.06\n", + " 0.95\n", + " 0.07\n", " \n", " \n", "
\n", @@ -451,15 +472,15 @@ " \n", " \n", " Partial AIC\n", - " 729.38\n", + " 620.33\n", " \n", " \n", " log-likelihood ratio test\n", - " 0.10 on 1 df\n", + " 0.00 on 1 df\n", " \n", " \n", " -log2(p) of ll-ratio test\n", - " 0.41\n", + " 0.07\n", " \n", " \n", "\n", @@ -469,31 +490,31 @@ "\\begin{tabular}{lrrrrrrrrrrr}\n", " & coef & exp(coef) & se(coef) & coef lower 95% & coef upper 95% & exp(coef) lower 95% & exp(coef) upper 95% & cmp to & z & p & -log2(p) \\\\\n", "covariate & & & & & & & & & & & \\\\\n", - "var & -0.03 & 0.97 & 0.09 & -0.21 & 0.15 & 0.81 & 1.16 & 0.00 & -0.32 & 0.75 & 0.41 \\\\\n", + "var & -0.00 & 1.00 & 0.03 & -0.06 & 0.05 & 0.94 & 1.06 & 0.00 & -0.06 & 0.95 & 0.07 \\\\\n", "\\end{tabular}\n" ], "text/plain": [ - "\n", + "\n", " event col = 'events'\n", " penalizer = 0.1\n", - "number of subjects = 100\n", - " number of periods = 778\n", - " number of events = 100\n", - "partial log-likelihood = -363.69\n", - " time fit was run = 2024-12-13 16:55:56 UTC\n", + "number of subjects = 95\n", + " number of periods = 476\n", + " number of events = 80\n", + "partial log-likelihood = -309.16\n", + " time fit was run = 2024-12-17 12:40:01 UTC\n", "\n", "---\n", " coef exp(coef) se(coef) coef lower 95% coef upper 95% exp(coef) lower 95% exp(coef) upper 95%\n", "covariate \n", - "var -0.03 0.97 0.09 -0.21 0.15 0.81 1.16\n", + "var -0.00 1.00 0.03 -0.06 0.05 0.94 1.06\n", "\n", " cmp to z p -log2(p)\n", "covariate \n", - "var 0.00 -0.32 0.75 0.41\n", + "var 0.00 -0.06 0.95 0.07\n", "---\n", - "Partial AIC = 729.38\n", - "log-likelihood ratio test = 0.10 on 1 df\n", - "-log2(p) of ll-ratio test = 0.41" + "Partial AIC = 620.33\n", + "log-likelihood ratio test = 0.00 on 1 df\n", + "-log2(p) of ll-ratio test = 0.07" ] }, "metadata": {}, @@ -505,13 +526,13 @@ "" ] }, - "execution_count": 11, + "execution_count": 27, "metadata": {}, "output_type": "execute_result" }, { "data": { - "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAiQAAAGwCAYAAACZ7H64AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjkuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8hTgPZAAAACXBIWXMAAA9hAAAPYQGoP6dpAAAr1ElEQVR4nO3de1xVdb7/8ffmjgIbEBU01LxrqWM1klrTzXHULp6yGSsd0xrtojk52d3Mxpqc9GhOU1M6JztNTh5nbMrHybLL5JRGNpllqfXQDggYoAjsDQpy+/7+6OH+tRcCgsAXWq/n48EjXXvtxfsTwnqz9lp7eYwxRgAAABaF2A4AAABAIQEAANZRSAAAgHUUEgAAYB2FBAAAWEchAQAA1lFIAACAdWG2A5yKmpoaffvtt4qNjZXH47EdBwAAnAJjjEpKStStWzeFhNR/DKRdFJJvv/1WqamptmMAAIAmyM7O1hlnnFHvOu2ikMTGxkr6bqC4uDjLaQD80GVmZmrx4sV66KGH1KtXL9txgHbL7/crNTU1sB+vT7soJCdepomLi6OQAGhxsbGxCg8PV2xsLD9zgGZwKqdbcFIrAACwjkICAA6hoaGKjY1VaGio7SiAa3jaw91+/X6/vF6vfD4fh08BAGgnGrP/5ggJAACwjkICAA45OTmaN2+ecnJybEcBXINCAgAOlZWVys/PV2Vlpe0ogGtQSAAAgHUUEgAAYB2FBAAAWEchAQCH5ORk3XfffUpOTrYdBXCNdvHW8QDQmqKjozV06FDbMQBX4QgJADgUFxdrw4YNKi4uth0FcA0KCQA4FBUVacOGDSoqKrIdBXANCgkAALCOQgIAAKyjkAAAAOsoJADgEBMTo9GjRysmJsZ2FMA1PMYYYztEQxpz+2IAANA2NGb/zRESAHDg5npA66OQAIBDTk6O5s2bp5ycHNtRANegkAAAAOsoJAAAwDoKCQAAsI5CAgAArOOyXwAA0CK47BcAALQrFBIAcMjNzdXChQuVm5trOwrgGhQSAHAoLy/X/v37VV5ebjsK4BoUEgAAYB2FBAAAWEchAQAA1lFIAMChc+fOuv3229W5c2fbUQDXCLMdAADampiYGF1wwQW2YwCuwhESAHDw+/1666235Pf7bUcBXINCAgAOR44c0QsvvKAjR47YjgK4BoUEAABYRyEBAADWUUgAAIB1FBIAcIiOjtbQoUMVHR1tOwrgGh5jjLEdoiGNuX0xAABoGxqz/+YICQA41NTUqKysTDU1NbajAK5BIQEAhwMHDujmm2/WgQMHbEcBXINCAgAArKOQAAAA6ygkAADAOgoJAACwjrv9AoBDjx499Oyzz6pjx462owCuQSEBAIfQ0FDe8whoZbxkAwAO+fn5WrZsmfLz821HAVyDQgIADseOHdOnn36qY8eO2Y4CuAaFBAAAWEchAQAA1lFIAACAdRQSAHBITEzU1KlTlZiYaDsK4Bpc9gsADl6vVxMmTLAdA3AVjpAAgMPRo0e1fft2HT161HYUwDUoJADgcOjQIa1cuVKHDh2yHQVwDQoJAACwjkICAACso5AAAADrKCQA4BAREaFevXopIiLCdhTANTzGGGM7REP8fr+8Xq98Ph934AQAoJ1ozP6bIyQAAMA6CgkAOGRmZmratGnKzMy0HQVwDQoJADgYY1RVVaV28Io28INBIQEAANZRSAAAgHUUEgAAYB13+wUAh+7du+uJJ55Qly5dbEcBXINCAgAOEREROuOMM2zHAFyFl2wAwKGgoECrVq1SQUGB7SiAa1BIAMChpKREW7ZsUUlJie0ogGtQSAAAgHUUEgAAYB2FBAAAWEchAQAHr9erq666Sl6v13YUwDW47BcAHBITE3XdddfZjgG4CkdIAMChvLxce/bsUXl5ue0ogGtQSADAITc3V48++qhyc3NtRwFcg0ICAACso5AAAADrKCQAAMA6CgkAOISFhSkxMVFhYVyICLQWjzHG2A7REL/fL6/XK5/Pp7i4ONtxAADAKWjM/psjJAAAwDoKCQA4ZGdna86cOcrOzrYdBXANCgkAOFRVVamwsFBVVVW2owCuQSEBAADWUUgAAIB1FBIAAGAdhQQAHFJSUrRgwQKlpKTYjgK4Bu/6AwAOUVFRGjx4sO0YgKtwhAQAHAoLC7Vu3ToVFhbajgK4BoUEABx8Pp82btwon89nOwrgGhQSAABgHYUEAABYRyEBAADWUUgAwCE2NlYXX3yxYmNjbUcBXMNjjDG2QzSkMbcvBgAAbUNj9t8cIQEAh4qKCuXk5KiiosJ2FMA1KCQA4HDw4EHdc889OnjwoO0ogGtQSAAAgHUUEgAAYB2FBAAAWEchAQAHj8ejsLAweTwe21EA1+CyXwAA0CK47BcAALQrFBIAcDh48KAeeOABLvsFWhGFBAAcKioqlJmZyRujAa2IQgIAAKyjkAAAAOsoJAAAwDoKCQA4dOnSRb/+9a/VpUsX21EA1wizHQAA2pqOHTsqLS3NdgzAVThCAgAOPp9PmzZtks/nsx0FcA0KCQA4FBYW6qWXXlJhYaHtKIBrUEgAAIB1FBIAAGAdhQQAAFhHIQEAhw4dOuicc85Rhw4dbEcBXMNjjDG2QzSkMbcvBgAAbUNj9t8cIQEAh+rqavn9flVXV9uOArgGhQQAHLKysnTrrbcqKyvLdhTANSgkAADAOgoJAACwjkICAACso5AAAADruOwXABxqamp0/PhxRUZGKiSE39uApmrM/juslTIBQLsREhKi6Oho2zEAV6H6A4BDXl6elixZory8PNtRANegkACAQ1lZmXbt2qWysjLbUQDXoJAAAADrKCQAAMA6CgkAALCOQgIADp06ddL06dPVqVMn21EA1+CyXwBwiIuL09ixY23HAFyFIyQA4FBaWqqtW7eqtLTUdhTANSgkAOBw+PBhPfPMMzp8+LDtKIBrUEgAAIB1FBIAAGAdhQQAAFhHIQEAh6ioKPXt21dRUVG2owCu4THGGNshGtKY2xcDAIC2oTH7b46QAAAA6ygkAOCQkZGhG264QRkZGbajAK5BIQEAANZRSAAAgHUUEgAAYB2FBAAAWMfdfgHA4YwzztCKFSuUmJhoOwrgGhQSAHAIDw9X165dbccAXIWXbADA4fDhw3r66ae52y/QiigkAOBQWlqqbdu2qbS01HYUwDUoJAAAwDoKCQAAsI5CAgAArKOQAIBDQkKCJk2apISEBNtRANfgsl8AcIiPj9ekSZNsxwBchSMkAOBQVlamXbt2qayszHYUwDUoJADgkJeXpyVLligvL892FMA1KCQAAMA6CgkAALCOQgIAAKyjkACAw4mb64WHh9uOAriGxxhjbIdoiN/vl9frlc/nU1xcnO04AADgFDRm/80REgAAYB2FBAAcsrKydMsttygrK8t2FMA1KCQA4FBdXa2SkhJVV1fbjgK4BoUEAABYRyEBAADWUUgAAIB1FBIAcEhJSdEjjzyilJQU21EA1wizHQAA2pqoqCj169fPdgzAVThCAgAOhYWFeumll1RYWGg7CuAaFBIAcPD5fNq0aZ
N8Pp/tKIBrUEgAAIB1FBIAAGAdJ7UCaFUXXXSRsrOz610nNTVV//rXv1opEYC2wPVHSHr37q3evXvbjgG4RnZ2dr33iMnKymqwsLS02NhY/fSnP1VsbKzVHEBraQv7QqtHSCoqKhQREWEzAgALevToof/7v/876WO2fyhKUlJSkmbMmGE7BuAqp3yEZNWqVerWrZtqamqClk+cOFE33XSTvvnmG02cOFFdu3ZVTEyMfvzjH+udd94JWrdXr15avHixpk2bpri4OM2aNat5pgCAZnT8+HFlZGTo+PHjtqMAruExxphTWbGoqEjJycnatGmTLrvsMknfXaufkpKiTZs2KSkpSR999JFGjx6tyMhIvfjii1q2bJm+/vpr9ejRQ9J3haSoqEgLFy7Uf/zHf0iS+vTpU+tzHT9+POgHgd/vV2pqqnw+n+Li4k535iC9e/dWdna2UlNTm3W7AE7uxPdbfUdIbH9PVlZWqqioSAkJCQoPD7eWA2gtDX1fNpXf75fX6z2l/fcpHyFJSEjQ+PHj9de//jWw7O9//7uSkpJ0ySWXaNiwYbrlllt09tlnq1+/flq8eLH69OmjjRs3Bm3n0ksv1V133aU+ffqctIxI0uOPPy6v1xv4oCwAAPDD1qhzSKZMmaKZM2fqmWeeUWRkpNauXavrrrtOISEhKi0t1aJFi/T6668rNzdXVVVVKisrq3Xy2nnnndfg57n//vv1m9/8JvD3E0dIWkpLtEIAJ3cq54jY/p7MyMjQgw8+qMcee0xnnnmmtRxAa2kL5241qpBceeWVMsbo9ddf149//GN98MEHWrFihSRp/vz5evvtt7Vs2TL17dtX0dHRuvbaa1VRURG0jY4dOzb4eSIjIxUZGdmYaAAAoB1rVCGJiorSNddco7Vr12r//v0aMGCAzjnnHEnStm3bNH36dF199dWSpNLSUmVmZjZ7YADtX1ZWVp2/kWVlZQXOO7MlJCREUVFRCglx/TsjAK2m0Zf9TpkyRVdccYV2796tqVOnBpb369dPr7zyiq688kp5PB499NBDta7IaYt4qQZoXQ29/NqjRw/r54317NlTzz//vNUMQGtqC/vCRheSSy+9VImJifr66691ww03BJYvX75cN910k0aNGqWkpCTde++98vv9zRoWQPvHO7ACOJlTvuzXpsZcNgQAp+vgwYN68skndeedd6p79+624wDtVotc9gsAblFRUaGDBw/WOikfQMuhkAAAAOsoJAAAwDoKCQAAsI5CAgAOXbt21V133aWuXbvajgK4RqMv+wWAH7oOHTro3HPPtR0DcBWOkACAQ3FxsV577TUVFxfbjgK4BoUEAByKior0P//zPyoqKrIdBXANCgkAALCOQgIAAKyjkAAAAOsoJADg0LFjR6Wlpaljx462owCuwc31AABAi+DmegBwGqqqqlRYWKiqqirbUQDXoJAAgEN2drbmzJmj7Oxs21EA16CQAAAA6ygkAADAOgoJAACwjkICAACs47JfAHAwxqiqqkphYWHyeDy24wDtVmP232GtlAkA2g2Px6Pw8HDbMQBX4SUbAHDIzc3V4sWLlZubazsK4BoUEgBwKC8v1969e1VeXm47CuAaFBIAAGAdhQQAAFhHIQEAANZRSADAISkpSTNnzlRSUpLtKIBrcNkvADjExsbqkksusR0DcBWOkACAQ0lJid577z2VlJTYjgK4BoUEABwKCgq0evVqFRQU2I4CuAaFBAAAWEchAQAA1lFIAACAdRQSAHCIiorSoEGDFBUVZTsK4BoeY4yxHaIhjbl9MQAAaBsas//mCAkAOBhjVFlZqXbw+xrwg0EhAQCHzMxM3XjjjcrMzLQdBXANCgkAALCOQgIAAKyjkAAAAOsoJAAAwDru9gsADqmpqfrjH//I2wwArYhCAgAOYWFhSkxMtB0DcBVesgEAh0OHDmnlypU6dOiQ7SiAa1BIAMDh6NGj2r59u44ePWo7CuAaFBIAAGAdhQQAAFhHIQEAANZRSADAISEhQZMnT1ZCQoLtKIBrcNkvADjEx8dr4sSJtmMArsIREgBwOHbsmHbs2KFjx47ZjgK4BoUEABzy8/P1n//5n8rPz7cdBXANCgkAALCOQgIAAKyjkAAAAOsoJADgEBERoe7duysiIsJ2FMA1PMYYYztEQ/x+v7xer3w+H7cDBwCgnWjM/psjJAAAwDoKCQA4HDhwQDfddJMOHDhgOwrgGhQSAHCoqalReXm5ampqbEcBXINCAgAArKOQAAAA6ygkAADAOgoJADh069ZNjz32mLp162Y7CuAaYbYDAEBbExkZqTPPPNN2DMBVOEICAA4FBQVas2aNCgoKbEcBXINCAgAOJSUlevvtt1VSUmI7CuAaFBIAAGAdhQQAAFhHIQEAANZRSADAwev1asKECfJ6vbajAK7BZb8A4JCYmKipU6fajgG4CkdIAMChvLxc+/btU3l5ue0ogGtQSADAITc3Vw8//LByc3NtRwFcg0ICAACso5AAAADrKCQAAMA6CgkAOISGhio2NlahoaG2owCu4THGGNshGuL3++X1euXz+RQXF2c7DgAAOAWN2X9zhAQAAFhHIQEAh5ycHM2bN085OTm2owCuQSEBAIfKykrl5+ersrLSdhTANSgkAADAOgoJAACwjkICAACso5AAgENycrLuu+8+JScn244CuEaY7QAA0NZER0dr6NChtmMArsIREgBwKC4u1oYNG1RcXGw7CuAaFBIAcCgqKtKGDRtUVFRkOwrgGhQSAABgHYUEAABYRyEBAADWUUgAwCEmJkajR49WTEyM7SiAa3iMMcZ2iIY05vbFAACgbWjM/psjJADgwM31gNZHIQEAh5ycHM2bN085OTm2owCuQSEBAADWUUgAAIB1FBIAAGAdhQQAAFjHZb8AAKBFcNkvAABoVygkAOCQm5urhQsXKjc313YUwDUoJADgUF5erv3796u8vNx2FMA1KCQAAMA6CgkAALCOQgIAAKyjkACAQ+fOnXX77berc+fOtqMArhFmOwAAtDUxMTG64IILbMcAXIUjJADg4Pf79dZbb8nv99uOArgGhQQAHI4cOaIXXnhBR44csR0FcA0KCQAAsI5CAgAArKOQAAAA6ygkAOAQHR2toUOHKjo62nYUwDU8xhhjO0RDGnP7YgAA0DY0Zv/NERIAcKipqVFZWZlqampsRwFcg0ICAA4HDhzQzTffrAMHDtiOArgGhQQAAFhHIQEAANZRSAAAgHUUEgAAYB13+wUAhx49eujZZ59Vx44dbUcBXINCAgAOoaGhvOcR0Mp4yQYAHPLz87Vs2TLl5+fbjgK4BoUEAByOHTumTz/9VMeOHbMdBXANCgkAALCOQgIAAKyjkAAAAOsoJADgkJiYqKlTpyoxMdF2FMA1uOwXABy8Xq8mTJhgOwbgKhwhAQCHo0ePavv27Tp69KjtKIBrUEgAwOHQoUNauXKlDh06ZDsK4BoUEgAAYB2FBAAAWEchAQAA1lFIAMAhIiJCvXr1UkREhO0ogGt4jDHGdoiG+P1+eb1e+Xw+7sAJA
EA70Zj9N0dIAACAdRQSAHDIzMzUtGnTlJmZaTsK4BoUEgBwMMaoqqpK7eAVbeAHg0ICAACso5AAAADrKCQAAMA67vYLAA7du3fXE088oS5dutiOArgGhQQAHCIiInTGGWfYjgG4Ci/ZAIBDQUGBVq1apYKCAttRANegkACAQ0lJibZs2aKSkhLbUQDXoJAAAADrKCQAAMA6CgkAALCOQgIADl6vV1dddZW8Xq/tKIBrcNkvADgkJibquuuusx0DcBWOkACAQ3l5ufbs2aPy8nLbUQDXoJAAgENubq4effRR5ebm2o4CuAaFBAAAWEchAQAA1lFIAACAdRQSAHAICwtTYmKiwsK4EBFoLR5jjLEdoiF+v19er1c+n09xcXG24wAAgFPQmP03R0gAAIB1FBIAcMjOztacOXOUnZ1tOwrgGhQSAHCoqqpSYWGhqqqqbEcBXINCAgAArKOQAAAA6ygkAADAOgoJADikpKRowYIFSklJsR0FcA3e9QcAHKKiojR48GDbMQBX4QgJADgUFhZq3bp1KiwstB0FcA0KCQA4+Hw+bdy4UT6fz3YUwDUoJAAAwDoKCQAAsI5CAgAArKOQAIBDbGysLr74YsXGxtqOAriGxxhjbIdoSGNuXwwAANqGxuy/OUICAA4VFRXKyclRRUWF7SiAa1BIAMDh4MGDuueee3Tw4EHbUQDXaBfv1HriVSW/3285CQA3KCkpUWVlpUpKSvi5A5yGE98/p3J2SLs4hyQnJ0epqam2YwAAgCbIzs7WGWecUe867aKQ1NTU6Ntvv1VsbKw8Ho/tOPL7/UpNTVV2drarTrJ169ySe2dnbuZ2A+ZuubmNMSopKVG3bt0UElL/WSLt4iWbkJCQBpuVDXFxca76x3uCW+eW3Ds7c7sLc7tLS8/t9XpPaT1OagUAANZRSAAAgHUUkiaIjIzUww8/rMjISNtRWpVb55bcOztzM7cbMHfbmLtdnNQKAAB+2DhCAgAArKOQAAAA6ygkAADAOgoJAACwjkJSh8LCQk2ZMkVxcXGKj4/XzTffrNLS0nrXv+OOOzRgwABFR0erR48emjt3rnw+X9B6WVlZuvzyy9WhQwd16dJFd999t6qqqlp6nFPW2LkladWqVbr44osVFxcnj8ej4uLiWuv06tVLHo8n6GPJkiUtNEXjtdTcTdlua2pKvvLycs2ePVudOnVSTEyMJk2apPz8/KB1nF9rj8ejdevWteQo9Xr66afVq1cvRUVFKS0tTR9//HG96//tb3/TwIEDFRUVpSFDhmjTpk1BjxtjtHDhQqWkpCg6OlpjxozRvn37WnKEJmnuuadPn17r6zpu3LiWHKFJGjP37t27NWnSpMDPqCeffPK0t2lLc8+9aNGiWl/vgQMHttwABic1btw4M2zYMPPRRx+ZDz74wPTt29dcf/31da7/xRdfmGuuucZs3LjR7N+/37z77rumX79+ZtKkSYF1qqqqzNlnn23GjBljdu7caTZt2mSSkpLM/fff3xojnZLGzm2MMStWrDCPP/64efzxx40kU1RUVGudnj17mt/+9rcmNzc38FFaWtpCUzReS83dlO22pqbku/XWW01qaqp59913zSeffGLOP/98M2rUqKB1JJk1a9YEfb3LyspacpQ6rVu3zkRERJjnn3/e7N6928ycOdPEx8eb/Pz8k66/bds2Exoaap544gmzZ88es2DBAhMeHm6++OKLwDpLliwxXq/XvPrqq+bzzz83V111lTnzzDOtzXgyLTH3jTfeaMaNGxf0dS0sLGytkU5JY+f++OOPzfz5883LL79skpOTzYoVK057mza0xNwPP/ywOeuss4K+3ocPH26xGSgkJ7Fnzx4jyfz73/8OLHvjjTeMx+MxBw8ePOXtrF+/3kRERJjKykpjjDGbNm0yISEhJi8vL7DOn/70JxMXF2eOHz/efAM00enO/d5779VbSE72D74taKm5m+vfUUtpSr7i4mITHh5u/va3vwWW7d2710gy6enpgWWSzD/+8Y8Wy94YI0aMMLNnzw78vbq62nTr1s08/vjjJ13/F7/4hbn88suDlqWlpZlbbrnFGGNMTU2NSU5ONkuXLg08XlxcbCIjI83LL7/cAhM0TXPPbcx3hWTixIktkre5NHbu76vr59TpbLO1tMTcDz/8sBk2bFgzpqwfL9mcRHp6uuLj43XeeecFlo0ZM0YhISHavn37KW/H5/MpLi5OYWFhge0OGTJEXbt2Dazzs5/9TH6/X7t3726+AZqoueauy5IlS9SpUycNHz5cS5cubTMvVbXU3C39//N0NSXfjh07VFlZqTFjxgSWDRw4UD169FB6enrQurNnz1ZSUpJGjBih559//pRuP97cKioqtGPHjqC8ISEhGjNmTK28J6SnpwetL333fXpi/YyMDOXl5QWt4/V6lZaWVuc2W1tLzH3Cli1b1KVLFw0YMEC33Xabjhw50vwDNFFT5raxzebWkhn37dunbt26qXfv3poyZYqysrJON26d2sXN9VpbXl6eunTpErQsLCxMiYmJysvLO6VtFBQUaPHixZo1a1bQdr9fRiQF/n6q221JzTF3XebOnatzzjlHiYmJ+vDDD3X//fcrNzdXy5cvP63tNoeWmrsl/382h6bky8vLU0REhOLj44OWd+3aNeg5v/3tb3XppZeqQ4cOeuutt3T77bertLRUc+fObfY56lNQUKDq6uqTft999dVXJ31OXd+nJ+Y78d/61rGtJeaWpHHjxumaa67RmWeeqW+++UYPPPCAxo8fr/T0dIWGhjb/II3UlLltbLO5tVTGtLQ0vfDCCxowYIByc3P1yCOP6MILL9SXX36p2NjY041di6sKyX333aff//739a6zd+/e0/48fr9fl19+uQYPHqxFixad9vZOV2vNXZ/f/OY3gT8PHTpUERERuuWWW/T444+32NsWt4W5bWgLcz/00EOBPw8fPlxHjx7V0qVLW72QoHldd911gT8PGTJEQ4cOVZ8+fbRlyxZddtllFpOhJYwfPz7w56FDhyotLU09e/bU+vXrdfPNNzf753NVIbnrrrs0ffr0etfp3bu3kpOTdejQoaDlVVVVKiwsVHJycr3PLykp0bhx4xQbG6t//OMfCg8PDzyWnJxc66znE1cnNLTd09EaczdWWlqaqqqqlJmZqQEDBjTrtk+wPXdr/v/8vpacOzk5WRUVFSouLg46SpKfn1/vTGlpaVq8eLGOHz/eqvfNSEpKUmhoaK2rgOrLm5ycXO/6J/6bn5+vlJSUoHV+9KMfNWP6pmuJuU+md+/eSkpK0v79+9tEIWnK3Da22dxaK2N8fLz69++v/fv3N9s2g7Ta2SrtyImT/T755JPAss2bNzd4MqLP5zPnn3++ueiii8zRo0drPX7ipNbvn/X83HPPmbi4OFNeXt68QzRBU+c+ob6TWp1eeuklExIS0ibO0G+puU93uy2tKflOnNT697//PbDsq6++qnVSq9Ojjz5qEhISmi98I4wYMcLMmTMn8Pfq6mrTvXv3ek/uvOKKK4KWjRw5stZJrcuW
LQs87vP52uRJrc0598lkZ2cbj8djXnvtteYJ3QwaO/f31XdSa1O32VpaYm6nkpISk5CQYFauXHk6UetEIanDuHHjzPDhw8327dvN1q1bTb9+/YIuh8zJyTEDBgww27dvN8Z89wMpLS3NDBkyxOzfvz/oMqmqqipjzP+/7Hfs2LHms88+M2+++abp3Llzm7vstzFzG2NMbm6u2blzp1m9erWRZN5//32zc+dOc+TIEWOMMR9++KFZsWKF+eyzz8w333xjXnrpJdO5c2czbdq0Vp+vLi0x96ls17amzH3rrbeaHj16mH/+85/mk08+MSNHjjQjR44MPL5x40azevVq88UXX5h9+/aZZ555xnTo0MEsXLiwVWc7Yd26dSYyMtK88MILZs+ePWbWrFkmPj4+cLXbL3/5S3PfffcF1t+2bZsJCwszy5YtM3v37jUPP/zwSS/7jY+PN6+99prZtWuXmThxYpu87Lc55y4pKTHz58836enpJiMjw7zzzjvmnHPOMf369WsTv1Cd0Ni5jx8/bnbu3Gl27txpUlJSzPz5883OnTvNvn37TnmbbUFLzH3XXXeZLVu2mIyMDLNt2zYzZswYk5SUZA4dOtQiM1BI6nDkyBFz/fXXm5iYGBMXF2dmzJhhSkpKAo9nZGQYSea9994zxvz/35JP9pGRkRF4XmZmphk/fryJjo42SUlJ5q677gpcFtwWNHZuY767NOxkc69Zs8YYY8yOHTtMWlqa8Xq9JioqygwaNMj87ne/a1M/xFpi7lPZrm1NmbusrMzcfvvtJiEhwXTo0MFcffXVJjc3N/D4G2+8YX70ox+ZmJgY07FjRzNs2DDz7LPPmurq6tYcLchTTz1levToYSIiIsyIESPMRx99FHjsoosuMjfeeGPQ+uvXrzf9+/c3ERER5qyzzjKvv/560OM1NTXmoYceMl27djWRkZHmsssuM19//XVrjNIozTn3sWPHzNixY03nzp1NeHi46dmzp5k5c2ab2imf0Ji5T/wbd35cdNFFp7zNtqK55548ebJJSUkxERERpnv37mby5Mlm//79LZbfY4yFa/EAAAC+h/chAQAA1lFIAACAdRQSAABgHYUEAABYRyEBAADWUUgAAIB1FBIAAGAdhQQAAFhHIQHauIsvvlh33nlni2z7Jz/5if7617+2yLYrKirUq1cvffLJJ6e0/kMPPaRZs2a1SBZbzj//fG3YsMF2DKBdoJAALrVx40bl5+cH3VK+V69eevLJJ2utu2jRoqA72S5atEgej0cej0ehoaFKTU3VrFmzVFhYGFgnIiJC8+fP17333ttglry8PK1cuVIPPvhgYFlJSYnuvPNO9ezZU9HR0Ro1apT+/e9/Bz1v+vTpgRwnPsaNGxd4/Pjx4/rlL3+puLg49e/fX++8807Q85cuXao77rijwXyS5Pf79eCDD2rgwIGKiopScnKyxowZo1deeUUn3vDaWR4XLFig++67TzU1Naf0OQA3o5AALvWHP/xBM2bMUEhI034MnHXWWcrNzVVWVpbWrFmjN998U7fddlvQOlOmTNHWrVu1e/fuerf15z//WaNGjVLPnj0Dy371q1/p7bff1l/+8hd98cUXGjt2rMaMGaODBw8GPXfcuHHKzc0NfLz88suBx1atWqUdO3YoPT1ds2bN0g033BAoDxkZGVq9erUee+yxBmctLi7WqFGj9OKLL+r+++/Xp59+qvfff1+TJ0/WPffcI5/Pd9LnjR8/XiUlJXrjjTca/ByA21FIgHamqKhI06ZNU0JCgjp06KDx48dr3759QeusXr1aqamp6tChg66++motX75c8fHxgccPHz6sf/7zn7ryyiubnCMsLEzJycnq3r27xowZo5///Od6++23g9ZJSEjQ6NGjtW7dunq3tW7duqAsZWVl2rBhg5544gn95Cc/Ud++fbVo0SL17dtXf/rTn4KeGxkZqeTk5MBHQkJC4LG9e/fqqquu0llnnaXZs2fr8OHDKigokCTddttt+v3vf6+4uLgGZ33ggQeUmZmp7du368Ybb9TgwYPVv39/zZw5U5999pliYmJO+rzQ0FBNmDChwfkBUEiAdmf69On65JNPtHHjRqWnp8sYowkTJqiyslKStG3bNt1666369a9/rc8++0w//elPax0F2Lp1qzp06KBBgwY1S6bMzExt3rxZERERtR4bMWKEPvjggzqfW1hYqD179ui8884LLKuqqlJ1dbWioqKC1o2OjtbWrVuDlm3ZskVdunTRgAEDdNttt+nIkSOBx4YNG6atW7eqrKxMmzdvVkpKipKSkrR27VpFRUXp6quvbnC2mpoarVu3TlOmTFG3bt1qPR4TE6OwsLA6n9/Q/AC+U/d3EYA2Z9++fdq4caO2bdumUaNGSZLWrl2r1NRUvfrqq/r5z3+up556SuPHj9f8+fMlSf3799eHH36o//3f/w1s58CBA+ratetJX6659957tWDBgqBlFRUVGjx4cNCyL774QjExMaqurlZ5ebkkafny5bW2161bNx04cKDOmbKysmSMCdrZx8bGauTIkVq8eLEGDRqkrl276uWXX1Z6err69u0bWG/cuHG65pprdOaZZ+qbb77RAw88oPHjxys9PV2hoaG66aabtGvXLg0ePFhJSUlav369ioqKtHDhQm3ZskULFizQunXr1KdPHz3//PPq3r17rXwFBQUqKirSwIED65yhPt26dVN2drZqamqa/PIY4AYUEqAd2bt3r8LCwpSWlhZY1qlTJw0YMEB79+6VJH399de1fvMfMWJEUCEpKyurdfThhLvvvlvTp08PWvaHP/xB77//ftCyAQMGaOPGjSovL9dLL72kzz777KQniEZHR+vYsWN1zlRWViZJtfL85S9/0U033aTu3bsrNDRU55xzjq6//nrt2LEjsM73T8gdMmSIhg4dqj59+mjLli267LLLFB4erqeffjpouzNmzNDcuXO1c+dOvfrqq/r888/1xBNPaO7cuSe9IubEOSdNFR0drZqaGh0/flzR0dGntS3gh4y6DrhQUlKSioqK6nysb9++QR+JiYm11ouIiFDfvn119tlna8mSJQoNDdUjjzxSa73CwkJ17ty53iySauXp06eP/vWvf6m0tFTZ2dn6+OOPVVlZqd69e9e5rd69eyspKUn79+8/6ePvvfeedu/erTlz5mjLli2aMGGCOnbsqF/84hfasmXLSZ/TuXNnxcfH66uvvqrz89ansLBQHTt2pIwADaCQAO3IoEGDVFVVpe3btweWHTlyRF9//XXgJZUBAwbUujzW+ffhw4crLy+vzlLSFAsWLNCyZcv07bffBi3/8ssvNXz48Dqf16dPH8XFxWnPnj0nfbxjx45KSUlRUVGRNm/erIkTJ9a5rZycHB05ckQpKSm1HisvL9fs2bP13HPPKTQ0VNXV1YHzbiorK1VdXX3SbYaEhOi6667T2rVra80mSaWlpaqqqqozU0PzA/gOhQRoR/r166eJEydq5syZ2rp1qz7//HNNnTpV3bt3D+yo77jjDm3atEnLly/Xvn379Nx
[notebook cell output omitted: two base64-encoded matplotlib PNG figures with no recoverable caption or axis text]
" ] From 288d41c9b774060e818b1a267aa5b4976f20d155 Mon Sep 17 00:00:00 2001 From: Dembowska Date: Thu, 19 Dec 2024 13:40:57 +0100 Subject: [PATCH 06/19] updating synthetics data generation --- docs/notebooks/time_varying.ipynb | 113 +++++++++++++++++++++++++++++- 1 file changed, 111 insertions(+), 2 deletions(-) diff --git a/docs/notebooks/time_varying.ipynb b/docs/notebooks/time_varying.ipynb index 2550a87..70fe456 100644 --- a/docs/notebooks/time_varying.ipynb +++ b/docs/notebooks/time_varying.ipynb @@ -118,7 +118,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Simulating a dataset\n", + "## Simulating a dataset: first approach to test dimensions but doesn't guarantee meaningful results\n", "\n", "We will simulate a dataset of 100 subjects with 10 follow up times where a covariate is observed. The covariates will follow a trigonometric function over time and will be dependant on a random variable to differentiate between subjects.\n", "\n", @@ -126,7 +126,109 @@ "\n", "$$ Z_i(t) = a_i \\cos(2 \\pi t) $$\n", "\n", - "where $a_i \\sim N(5, 2.5)$." + "where $a_i \\sim N(5, 2.5)$.\n", + "\n", + "## Proper simulation guidance: data that can be interpreted\n", + "\n", + "A good approach for simulating data is described in detail by [Ngwa et al 2020](https://pmc.ncbi.nlm.nih.gov/articles/PMC7731987/). If this is not yet implemented, it would be a good way of starting to ensure that both methods work as expected. There are tow parts in simulating such a dataset. First, simulating the longitudina lobservational data and then the survival data. Below we describe methodologies for both.\n", + "\n", + "### Longitudinal data (covariates)\n", + "\n", + "We use $i \\in \\{1, \\dots, n\\}$ to index subjects and $j \\in \\{1, \\dots, m_i\\}$ to index time points where $m_i$ is the final time point for subject $i$.\n", + "We simulate covariates independantly:\n", + "- age at baseline $Age_i \\sim N(35,5)$\n", + "- sex $\\sim Bernoulli(p=0.54)$\n", + "\n", + "Generate expected longitudinal trajectories $\\varphi_{\\beta}(t_{ij})$:\n", + "\n", + "$$ \\varphi_{\\beta}(t_{ij}) = b_{i1} + b_{i2} \\cdot t_{ij} + \\alpha Age_i, $$\n", + "\n", + "where $b_{i1}, b_{i2}$ are random effects\n", + "\n", + "We will generate $b_{i1}, b_{i2}$ from multivariate normal distribution with a covariance matrix $G = [[0.29, -0.00465],[-0.00465, 0.000320]]$. Sample from this multivariate normal distribution (with mean zero) to get the random intercept and slope.\n", + "\n", + "The observed longitudinal measures measures $Y_{ij}(t_{ij})$ from a multivariate normal distribution with mean $ \\varphi_{\\beta}(t_{ij})$ and variance $V$:\n", + "\n", + "$$ V = Z_i GZ_i ^T + R_i, \\text{ where }Z_i = [[1,1,1,1,1,1]^T, [0,5,10,15,20,25]^T]$$\n", + "\n", + "and $R_i = diag(\\sigma^2)$ and $\\sigma^2$ is set to $0.1161$." 
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 40,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "tensor([[33.7853, 34.1568, 33.7249, 33.9724, 34.4417, 34.4528],\n",
+      "        [33.1087, 33.4781, 32.5054, 33.1090, 32.9212, 33.4908],\n",
+      "        [31.8224, 31.8031, 32.1202, 32.3814, 31.4848, 31.9074],\n",
+      "        [36.1902, 35.9910, 36.4153, 36.2511, 35.8788, 36.4300]])\n"
+     ]
+    }
+   ],
+   "source": [
+    "import torch.distributions as dist\n",
+    "\n",
+    "# Set random seed for reproducibility\n",
+    "torch.manual_seed(123)\n",
+    "\n",
+    "n = 100 # Number of subjects\n",
+    "T = 6 # Number of time points\n",
+    "\n",
+    "# Simulation parameters\n",
+    "age_mean = 35\n",
+    "age_std = 5\n",
+    "sex_prob = 0.54\n",
+    "G = torch.tensor([[0.29, -0.00465],[-0.00465, 0.000320]])\n",
+    "Z = torch.tensor([[1, 1, 1, 1, 1, 1], [0, 5, 10, 15, 20, 25]], dtype=torch.float32).T\n",
+    "sigma = torch.tensor([0.1161])\n",
+    "alpha = 1\n",
+    "\n",
+    "# Simulate age at baseline\n",
+    "age_dist = dist.Normal(age_mean, age_std)\n",
+    "age = age_dist.sample((n,))\n",
+    "\n",
+    "# Simulate sex\n",
+    "sex_dist = dist.Bernoulli(probs=sex_prob)\n",
+    "sex = sex_dist.sample((n,))\n",
+    "\n",
+    "# Simulate random effects\n",
+    "random_effects_dist = dist.MultivariateNormal(torch.zeros(2), G)\n",
+    "random_effects = random_effects_dist.sample((n,))\n",
+    "\n",
+    "# Generate expected longitudinal trajectories\n",
+    "# quite frankly this is useless now - it was based on my bad understanding of the algorithm\n",
+    "trajectories = random_effects[:, 0].unsqueeze(1) + random_effects[:, 1].unsqueeze(1) * Z[:,1] + alpha * age.unsqueeze(1)\n",
+    "\n",
+    "# Simulate observed longitudinal measures\n",
+    "R = torch.diag_embed(sigma.repeat(T))\n",
+    "V = torch.matmul(torch.matmul(Z, G), Z.T) + R\n",
+    "\n",
+    "# get a mean trajectory\n",
+    "b1 = torch.tensor([4.250])\n",
+    "b2 = torch.tensor([0.250])\n",
+    "mean_trajectory = b1.item() + b2.item() * Z[:,1] + alpha * age_mean\n",
+    "\n",
+    "# define the distribution to sample the trajectories from\n",
+    "observed_data_dist = dist.MultivariateNormal(trajectories, V)\n",
+    "\n",
+    "# sample from the distribution to get an n x T matrix of observations/covariates\n",
+    "observed_data = observed_data_dist.sample((1,)).squeeze()\n",
+    "\n",
+    "print(observed_data[1:5, :])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Survival data (outcomes)\n",
+    "\n",
+    "Here we describe how to obtain the survival and censoring times for all the subjects from above, and then code it up in Python."
+   ]
+  },
  {
@@ -160,6 +262,13 @@
    "covars = matrix * random_vars[:, None]\n"
   ]
  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  },
  {
   "cell_type": "markdown",
   "metadata": {},

From 089af63b6846491ea0af1bf4693cc1c89d320831 Mon Sep 17 00:00:00 2001
From: Dembowska
Date: Thu, 2 Jan 2025 15:19:51 +0100
Subject: [PATCH 07/19] outcome simulation described

---
 docs/notebooks/time_varying.ipynb | 64 ++++++++++++++++---------------
 1 file changed, 34 insertions(+), 30 deletions(-)

diff --git a/docs/notebooks/time_varying.ipynb b/docs/notebooks/time_varying.ipynb
index 70fe456..70b7cc2 100644
--- a/docs/notebooks/time_varying.ipynb
+++ b/docs/notebooks/time_varying.ipynb
@@ -84,7 +84,7 @@
  },
  {
   "cell_type": "code",
-   "execution_count": 1,
+   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
@@ -95,7 +95,7 @@
  },
  {
   "cell_type": "code",
-   "execution_count": 2,
+   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
@@ -156,7 +156,7 @@
  },
  {
   "cell_type": "code",
-   "execution_count": 40,
+   "execution_count": 4,
   "metadata": {},
   "outputs": [
@@ -228,38 +228,36 @@
   "source": [
    "### Survival data (outcomes)\n",
    "\n",
-    "Here we describe how to obtain the survival and censoring times for all the subjects from above, and then code it up in Python."
+    "Here we describe how to obtain the survival and censoring times for all the subjects from above, and then code it up in Python.\n",
+    "\n",
+    "Specify (varying) values for the parameter estimates for $Age$, $sex$ and the link parameter $\\gamma$, which measures the strength of the association between the longitudinal measures $Y_{ij}(t_{ij})$ and the time-to-event $\\tau_j$.\n",
+    "\n",
+    "Let $Q \\sim Unif(0,1)$ be a random variable that determines the hazard of a subject. Then, using the time-varying Cox model, it can be expressed as:\n",
+    "\n",
+    "$$ Q(t;X,Y) = \\exp[-H_0(t)\\cdot \\exp(X^T\\alpha + \\gamma (b_{i1} + b_{i2} \\cdot t))],$$\n",
+    "where $X^T$ is a vector of time-invariant covariates and $\\alpha$ a vector of regression coefficients.\n",
+    "\n",
+    "Here $H_0(t) = \\lambda t$, and if $h_0(t)>0$ for all $t$, then $H_0$ can be inverted:\n",
+    "$$-\\log(Q) = \\lambda t \\cdot \\exp[X^T \\alpha + \\gamma (b_{i1} + b_{i2} \\cdot t) ] $$\n",
+    "This expression can be rearranged to generate the times-to-event.\n",
+    "\n",
+    "Generate the time-to-event $\\tau_j$ using the following equation for the Cox exponential model:\n",
+    "$$ t = \\frac{1}{\\gamma \\cdot b_{i2}} W \\Big( \\frac{-\\gamma b_{i2} \\log(Q)}{\\lambda \\exp (X^T \\alpha + \\gamma b_{i1})} \\Big). $$\n",
+    "\n",
+    "Here $W$ is the Lambert W function (LWF), first proposed by [Corless et al. 1996](https://link.springer.com/article/10.1007/BF02124750), who provide a history, theory and applications of the LWF. The LWF, also known as the Omega function, is the inverse of the function $f(p) = p \\cdot \\exp(p)$.\n",
+    "\n",
+    "Generate the censoring variable $C \\sim Unif(25, 30)$ so that censoring occurs later in the study. From the survival and censoring times, we obtain the censoring indicator $\\delta_i$, which is defined as 1 if $\\tau_j < C_i$ and 0 otherwise.\n"
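To make the rearrangement explicit: writing $c = \lambda \exp(X^T \alpha + \gamma b_{i1})$ and $a = \gamma b_{i2}$, the relation $-\log(Q) = c \, t \, e^{at}$ gives $a t \, e^{at} = -a \log(Q)/c$, hence $a t = W(-a \log(Q)/c)$, which is the closed form quoted above. Below is a minimal numerical check of this inversion, assuming arbitrary illustrative values for $\lambda$, $\gamma$, $b_{i1}$, $b_{i2}$ and $X^T\alpha$ (none of them taken from the notebook):

```python
import numpy as np
from scipy.special import lambertw

# Illustrative values only; eta stands in for X^T alpha of a single subject
lam, gamma, b1, b2, eta = 0.05, 0.1, 0.3, 0.2, 0.5
Q = 0.42  # one draw of Q ~ Unif(0, 1)

a = gamma * b2
c = lam * np.exp(eta + gamma * b1)
t = lambertw(-a * np.log(Q) / c).real / a  # closed-form event time

# Plugging t back into Q(t) = exp(-lam * t * exp(eta + gamma * (b1 + b2 * t)))
# should recover the original draw of Q
Q_back = np.exp(-lam * t * np.exp(eta + gamma * (b1 + b2 * t)))
print(np.isclose(Q_back, Q))  # True
```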
   ]
  },
  {
   "cell_type": "code",
-   "execution_count": 3,
+   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
-    "torch.manual_seed(123)\n",
-    "\n",
-    "# defining parameters\n",
-    "sample_size = 100 #number of subjects to generate\n",
-    "obs_time = 10 #number of observations over time for each subject\n",
-    "\n",
-    "# create random variables following a normal distribution N(1,1) for each subject \n",
-    "mean = 5\n",
-    "standard_dev = 2.5\n",
-    "random_vars = torch.randn(sample_size)*standard_dev + mean\n",
-    "\n",
-    "# using the random variables from above, we create a set of covariates for each subject \n",
-    "t = torch.linspace(0, 2*math.pi, obs_time) # Generating 6 equidistant time points from 0 to 2*pi\n",
    "\n",
-    "# Creating the matrix\n",
-    "matrix = torch.zeros(sample_size, obs_time)\n",
-    "\n",
-    "# Filling the matrix with sin values\n",
-    "for i in range(obs_time):\n",
-    "    matrix[:, i] = torch.cos(t[i])\n",
-    "\n",
-    "# Multiplying with a vector of random variables, dim sample_size x obs_time\n",
-    "covars = matrix * random_vars[:, None]\n"
+    "# import the Lambert W function\n",
+    "\n",
+    "from scipy.special import lambertw"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
-   "source": []
+   "source": [
+    "alpha = torch.tensor([0.5, -0.2]) # Time-invariant covariates (e.g., Age, sex)\n",
+    "gamma = torch.tensor(0.3) # Association strength between longitudinal measures and time-to-event\n",
+    "lambda_ = torch.tensor(0.1) # Baseline hazard rate\n",
+    "\n",
+    "# Generate the random variables for hazard of a subject and censoring\n",
+    "Q = dist.Uniform(0, 1).sample() # Random variable for hazard (Q)\n",
+    "C = dist.Uniform(25, 30).sample() # Random variable for censoring"
+   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
-    "Now we create outcome variables for the dataset based on the random variables we generated initially. This is so that the observations are related to the outcome in some way, so that our network can distinguish some pattern.\n",
    "\n",
-    "We use the random variables to determine how long someone has been observed and when they experience an event (if they experience one). Then we remove observations for the times beyond their event time.\n",
    "\n",
    "### Data Format\n",
    "\n",

From 8c754bc2ac4b61475cfee3a0454c96f27be8f54a Mon Sep 17 00:00:00 2001
From: Dembowska
Date: Thu, 2 Jan 2025 16:20:07 +0100
Subject: [PATCH 08/19] outcome simulation coded

---
 docs/notebooks/time_varying.ipynb | 114 ++++++++++++++++++++++--------
 1 file changed, 83 insertions(+), 31 deletions(-)

diff --git a/docs/notebooks/time_varying.ipynb b/docs/notebooks/time_varying.ipynb
index 70b7cc2..582ddb2 100644
--- a/docs/notebooks/time_varying.ipynb
+++ b/docs/notebooks/time_varying.ipynb
@@ -264,15 +264,66 @@
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
-   "outputs": [],
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "tensor([2.6643e-08+0.j, 7.9076e-08+0.j, 1.0829e-07+0.j, 2.2052e-07+0.j, 2.5905e-08+0.j,\n",
+       "        [remaining 95 near-zero complex entries omitted]\n",
+       "        dtype=torch.complex128)"
+      ]
+     },
+     "execution_count": 38,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
   "source": [
-    "alpha = torch.tensor([0.5, -0.2]) # Time-invariant covariates (e.g., Age, sex)\n",
-    "gamma = torch.tensor(0.3) # Association strength between longitudinal measures and time-to-event\n",
-    "lambda_ = torch.tensor(0.1) # Baseline hazard rate\n",
    "\n",
    "# Generate the random variables for hazard of a subject and censoring\n",
    "Q = dist.Uniform(0, 1).sample() # Random variable for hazard (Q)\n",
-    "C = dist.Uniform(25, 30).sample() # Random variable for censoring"
+    "# Specify the values for parameters, generate the random variables and call on relevant variables defined previously\n",
+    "\n",
+    "alpha = torch.tensor([0.5, -0.2]) # regression coefficient for time-invariant covariates\n",
+    "gamma = torch.tensor(0.3) # association strength between longitudinal measures and time-to-event\n",
+    "lambda_0 = torch.tensor(0.1) # baseline hazard rate\n",
+    "\n",
+    "# Generate the random variables for hazard of a subject and censoring\n",
+    "Q = dist.Uniform(0, 1).sample() # Random variable for hazard (Q)\n",
+    "C = dist.Uniform(25, 30).sample() # Random variable for censoring\n",
+    "\n",
+    "# age and sex are the names of variables corresponding to those covariates\n",
+    "# create the X matrix of covariates\n",
+    "XX = torch.stack((age, sex), dim=1)\n",
+    "\n",
+    "# b1 = torch.tensor([4.250]), b2 = torch.tensor([0.250])\n",
+    "\n",
+    "# Generate time to event T using the equation above\n",
+    "log_Q = torch.log(Q)\n",
+    "lambert_W_nominator = gamma*b2*log_Q\n",
+    "lambert_W_denominator = torch.exp(alpha@XX.T + gamma*b1)\n",
+    "# below should give a vector of length sample_size\n",
+    "lambert_W = lambertw(-lambert_W_nominator/(lambda_0*lambert_W_denominator))\n",
+    "time_to_event = lambert_W/(gamma*b2)\n",
+    "\n",
+    "# implement censoring with some level of intensity\n",
+    "time_to_event\n",
+    "# needs to be scaled and floored to be a reasonable time to event, I think"
   ]
  },
@@ -343,40 +349,32 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "# make random positive time to event\n",
-    "time = torch.floor(random_vars)\n",
-    "# this is a workaround for the loss function, done so that when we find the right\n",
-    "# indices in the log_hz we don't try to pick up things that are out of bounds.\n",
-    "time[time<0] = 0\n",
-    "time[time>9] = 9\n",
-    "# print(time) \n",
-    "# tensor([1.2792e+01, -7.7415e+00, 9.2325e+00, 1.0845e+01, 7.6460e+00, ...\n",
-    "\n",
-    "# decide who has an event; here we consider those whose time is greater than one and smaller than 9\n",
-    "events = (time > 1) & (time < 8)\n",
-    "# tensor([ True, True, False, False, True, ...\n",
-    "# print(events)\n",
-    "\n",
-    "# remove the covariates for those who have observed an event\n",
-    "\n",
-    "for i in range(sample_size):\n",
-    "    if events[i]==True:\n",
-    "        time_cap = int(time[i])\n",
-    "        covars[i, time_cap:] = torch.zeros(obs_time-time_cap)\n",
-    "\n",
-    "# covars should be tensor([[ 3.3737e-01, 2.5844e-01, 5.8584e-02, -1.6869e-01, -3.1702e-01, ...\n",
-    "# and zeros after an event occurred\n",
-    "\n",
-    "# print(covars)"
+    "# CODE BELOW IS OLD, WILL REMOVE SOON\n",
+    "# # make random positive time to event\n",
+    "# time = torch.floor(random_vars)\n",
+    "# # this is a workaround for the loss function, done so that when we find the right\n",
+    "# # indices in the log_hz we don't try to pick up things that are out of bounds.\n",
+    "# time[time<0] = 0\n",
+    "# time[time>9] = 9\n",
+    "# # print(time) \n",
+    "# # tensor([1.2792e+01, -7.7415e+00, 9.2325e+00, 1.0845e+01, 7.6460e+00, ...\n",
+    "\n",
+    "# # decide who has an event; here we consider those whose time is greater than one and smaller than 9\n",
+    "# events = (time > 1) & (time < 8)\n",
+    "# # tensor([ True, True, False, False, True, ...\n",
+    "# # print(events)\n",
+    "\n",
+    "# # remove the covariates for those who have observed an event\n",
+    "\n",
+    "# for i in range(sample_size):\n",
+    "#     if events[i]==True:\n",
+    "#         time_cap = int(time[i])\n",
+    "#         covars[i, time_cap:] = torch.zeros(obs_time-time_cap)\n",
+    "\n",
+    "# # covars should be tensor([[ 3.3737e-01, 2.5844e-01, 5.8584e-02, -1.6869e-01, -3.1702e-01, ...\n",
\n", + "# # and zeros after an event occured\n", + "\n", + "# # print(covars)" ] }, { @@ -350,7 +402,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 37, "metadata": {}, "outputs": [ { From 8a81ae3dfe8932be0c4001352e00af6b9ad73194 Mon Sep 17 00:00:00 2001 From: Dembowska Date: Fri, 3 Jan 2025 14:54:44 +0100 Subject: [PATCH 09/19] outcome simulation parameters improved --- docs/notebooks/introduction.ipynb | 2 +- docs/notebooks/time_varying.ipynb | 196 +++++++++++++++++------------- 2 files changed, 115 insertions(+), 83 deletions(-) diff --git a/docs/notebooks/introduction.ipynb b/docs/notebooks/introduction.ipynb index b42a79d..6f3a731 100644 --- a/docs/notebooks/introduction.ipynb +++ b/docs/notebooks/introduction.ipynb @@ -1286,7 +1286,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": "conda-env2", "language": "python", "name": "python3" }, diff --git a/docs/notebooks/time_varying.ipynb b/docs/notebooks/time_varying.ipynb index 582ddb2..5ad97cb 100644 --- a/docs/notebooks/time_varying.ipynb +++ b/docs/notebooks/time_varying.ipynb @@ -71,7 +71,7 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": 2, "metadata": {}, "outputs": [], "source": [ @@ -84,7 +84,7 @@ }, { "cell_type": "code", - "execution_count": 2, + "execution_count": 3, "metadata": {}, "outputs": [], "source": [ @@ -95,7 +95,7 @@ }, { "cell_type": "code", - "execution_count": 3, + "execution_count": 4, "metadata": {}, "outputs": [], "source": [ @@ -118,17 +118,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Simulating a dataset: first approach to test dimensions but doesn't guarantee meaningful results\n", - "\n", - "We will simulate a dataset of 100 subjects with 10 follow up times where a covariate is observed. The covariates will follow a trigonometric function over time and will be dependant on a random variable to differentiate between subjects.\n", - "\n", - "For each $i$ the covariate follows the function:\n", - "\n", - "$$ Z_i(t) = a_i \\cos(2 \\pi t) $$\n", - "\n", - "where $a_i \\sim N(5, 2.5)$.\n", - "\n", - "## Proper simulation guidance: data that can be interpreted\n", + "## Simulating realistic data\n", "\n", "A good approach for simulating data is described in detail by [Ngwa et al 2020](https://pmc.ncbi.nlm.nih.gov/articles/PMC7731987/). If this is not yet implemented, it would be a good way of starting to ensure that both methods work as expected. There are tow parts in simulating such a dataset. First, simulating the longitudina lobservational data and then the survival data. Below we describe methodologies for both.\n", "\n", @@ -151,12 +141,71 @@ "\n", "$$ V = Z_i GZ_i ^T + R_i, \\text{ where }Z_i = [[1,1,1,1,1,1]^T, [0,5,10,15,20,25]^T]$$\n", "\n", - "and $R_i = diag(\\sigma^2)$ and $\\sigma^2$ is set to $0.1161$." + "and $R_i = diag(\\sigma^2)$ and $\\sigma^2$ is set to $0.1161$.\n", + "\n", + "Note: Compared to the paper, we slightly adjust steps 3 and 4 from the simulation algorithm section (6.1) to avoid fitting a random effects model which adds more complexity in terms of data formatting. 
" ] }, { "cell_type": "code", - "execution_count": 4, + "execution_count": 9, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "tensor([[34.2016, 34.2866, 34.3716, 34.4566, 34.5416, 34.6266],\n", + " [33.4380, 33.4018, 33.3657, 33.3295, 33.2933, 33.2572],\n", + " [31.5581, 31.5498, 31.5415, 31.5332, 31.5248, 31.5165],\n", + " [35.7813, 35.8513, 35.9212, 35.9912, 36.0611, 36.1310]])\n" + ] + } + ], + "source": [ + "import torch.distributions as dist\n", + "\n", + "# Set random seed for reproducibility\n", + "torch.manual_seed(123)\n", + "\n", + "n = 100 # Number of subjects\n", + "T = 6 # Number of time points\n", + "time_vec = torch.tensor([0, 5, 10, 15, 20, 25])\n", + "\n", + "# Simulation parameters\n", + "age_mean = 35\n", + "age_std = 5\n", + "sex_prob = 0.54\n", + "G = torch.tensor([[0.29, -0.00465],[-0.00465, 0.000320]])\n", + "Z = torch.tensor([[1, 1, 1, 1, 1, 1], time_vec], dtype=torch.float32).T\n", + "sigma = torch.tensor([0.1161])\n", + "alpha = 1\n", + "\n", + "# Simulate age at baseline\n", + "age_dist = dist.Normal(age_mean, age_std)\n", + "age = age_dist.sample((n,))\n", + "\n", + "# Simulate sex\n", + "sex_dist = dist.Bernoulli(probs=sex_prob)\n", + "sex = sex_dist.sample((n,))\n", + "\n", + "# Simulate random effects\n", + "random_effects_dist = dist.MultivariateNormal(torch.zeros(2), G)\n", + "random_effects = random_effects_dist.sample((n,))\n", + "\n", + "# sample random error\n", + "error_sample = dist.Normal(0, sigma).sample((n,))\n", + "\n", + "# Generate expected longitudinal trajectories\n", + "# quite frakly this is useless now - it was based on my bad understanding of the algorithm\n", + "trajectories = random_effects[:, 0].unsqueeze(1) + random_effects[:, 1].unsqueeze(1) * Z[:,1] + alpha * age.unsqueeze(1) + error_sample\n", + "\n", + "print(trajectories[1:5, :])" + ] + }, + { + "cell_type": "code", + "execution_count": 8, "metadata": {}, "outputs": [ { @@ -178,13 +227,14 @@ "\n", "n = 100 # Number of subjects\n", "T = 6 # Number of time points\n", + "time_vec = torch.tensor([0, 5, 10, 15, 20, 25])\n", "\n", "# Simulation parameters\n", "age_mean = 35\n", "age_std = 5\n", "sex_prob = 0.54\n", "G = torch.tensor([[0.29, -0.00465],[-0.00465, 0.000320]])\n", - "Z = torch.tensor([[1, 1, 1, 1, 1, 1], [0, 5, 10, 15, 20, 25]], dtype=torch.float32).T\n", + "Z = torch.tensor([[1, 1, 1, 1, 1, 1], time_vec], dtype=torch.float32).T\n", "sigma = torch.tensor([0.1161])\n", "alpha = 1\n", "\n", @@ -244,14 +294,14 @@ "Generate the time-to-event $\\tau_j$ using the following equations for the Cox Exponential model:\n", "$$ t = \\frac{1}{\\gamma \\cdot b_{i2}} W \\Big( \\frac{-\\gamma(b_{i2}) \\log(Q)}{\\lambda \\exp (X^T \\alpha + \\gamma(b_{i1}))} \\Big). $$\n", "\n", - "Where $W$ is the Lambert W function (LWF) first proposed by [Corless et al. 1996](https://link.springer.com/article/10.1007/BF02124750) provide a history, theory and applications of the LWF. The LWF also known as Omega function is the inverse of the function $f(p) = p \\cdot \\exp(p) $.\n", + "Where $W$ is the Lambert W function (LWF) first proposed by [Corless et al. 1996](https://link.springer.com/article/10.1007/BF02124750) provide a history, theory and applications of the LWF. The LWF is the inverse of the function $f(p) = p \\cdot \\exp(p) $.\n", "\n", "Generate the censoring variable $C \\sim Unif⁡(25, 30)$ for censoring to occur later in study. 
   ]
  },
  {
   "cell_type": "code",
-   "execution_count": 6,
+   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "# import the Lambert W function\n",
    "\n",
    "from scipy.special import lambertw"
   ]
  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Note: pre-determined parameters such as $\\alpha, \\gamma, \\lambda_0$ have a large effect on the event-time outcomes; the values used here are:\n",
+    "- $\\alpha_{age} = 0.05$,\n",
+    "- $\\alpha_{sex} = -0.5$,\n",
+    "- $\\gamma = 0.1$,\n",
+    "- $\\lambda_0 = 0.05$\n"
+   ]
+  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
+       "tensor([ 9.6428,  6.8019,  7.1837,  8.1690, 10.6510,  5.2226,  7.0858, 11.3846,\n",
+       "         5.4684, 14.3864,  5.1831,  9.3314,  6.3816,  3.8954, 10.0959,  6.0119,\n",
+       "        11.7046, 12.7777, 10.7462,  8.4370,  4.6285,  4.8617,  4.3450, 10.8670,\n",
+       "        10.1935, 16.2546,  5.4758,  8.5248,  9.2135,  9.4407, 11.7310, 21.0234,\n",
+       "        14.0767,  9.1752, 18.8326,  8.9085, 11.2594,  8.9873,  7.5456,  8.4984,\n",
+       "         9.0333,  4.8472,  9.4688,  7.7191,  6.2192,  6.4989,  9.8902,  7.8185,\n",
+       "         5.2405,  4.2516,  9.3067,  5.0147,  8.3767,  4.9315,  8.5749, 11.3669,\n",
+       "         6.0864,  7.5788, 11.8391,  8.8440, 12.2118, 13.6110,  6.2863,  5.8571,\n",
+       "         9.5126,  8.6607,  6.8886, 15.5586, 10.6941,  7.2345, 18.2753,  5.4170,\n",
+       "         5.2679,  9.0509, 12.9154, 11.2252,  7.4939,  6.5494, 10.3731, 14.2850,\n",
+       "         5.7533, 12.2423,  5.6055,  5.2892, 11.0855, 11.8667,  5.5114, 11.4350,\n",
+       "        10.3182, 12.8253, 14.6775, 19.0688, 17.0049,  6.3822, 14.5267,  8.7058,\n",
+       "         8.2680, 10.7909,  5.2648, 12.7710], dtype=torch.float64)"
      ]
     },
-     "execution_count": 38,
"execution_count": 38, + "execution_count": 23, "metadata": {}, "output_type": "execute_result" } @@ -299,19 +352,21 @@ "source": [ "# Specify the values for parameters, generate the random variables and call on relevant variables defined previously\n", "\n", - "alpha = torch.tensor([0.5, -0.2]) # regression coefficient for time-invariant covariates\n", - "gamma = torch.tensor(0.3) # association strength between longitudinal measures and time-to-event\n", - "lambda_0 = torch.tensor(0.1) # baseline hazard rate\n", + "alpha = torch.tensor([0.05, -0.5]) # regression coefficient for time-invariant covariates\n", + "gamma = torch.tensor(0.1) # association strength between longitudinal measures and time-to-event\n", + "lambda_0 = torch.tensor(0.05) # baseline hazard rate\n", "\n", "# Generate the random variables for hazard of a subject and censoring\n", "Q = dist.Uniform(0, 1).sample() # Random variable for hazard (Q)\n", - "C = dist.Uniform(25, 30).sample() # Random variable for censoring\n", + "C = dist.Uniform(20, 30).sample() # Random variable for censoring\n", "\n", "# age and sex are the names of variables corresponding to those covariates\n", "# create the X matrix of covariates\n", "XX = torch.stack((age, sex), dim=1)\n", "\n", - "# b1 = torch.tensor([4.250]), b2 = torch.tensor([0.250])\n", + "# get b1 and b2 from the random sample we made before\n", + "b1 = random_effects[:, 0]\n", + "b2 = random_effects[:, 1]\n", "\n", "# Generate time to event T using the equation above\n", "log_Q = torch.log(Q)\n", @@ -321,9 +376,20 @@ "lambert_W = lambertw(-lambert_W_nominator/(lambda_0*lambert_W_denominator))\n", "time_to_event = lambert_W/(gamma*b2)\n", "\n", + "#take the real part of the LBF, the complex part is =0\n", + "outcome_LWF = time_to_event.real\n", + "\n", "# implement censoring with some level of intensity\n", - "time_to_event\n", - "#needs to be scaled and floored to be a reasonable time to event I think " + "outcome_LWF\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "A simpler method for generating the time-to-event where the covariate is assumed to have a more straightforward relation in time $Z(t) = kt$ for some $k>0$. This approach is sugested by [Peter C. Austin 2012](https://pmc.ncbi.nlm.nih.gov/articles/PMC3546387/pdf/sim0031-3946.pdf) and here \n", + "$$ t = \\frac{1}{\\gamma k} \\log \\Big ( 1 + \\frac{\\gamma k (-log(u))}{\\lambda \\exp(\\alpha X)}\\Big). $$\n", + "The above equation has been adapted to remain consistent with the parameters defined before. In our case, $k$ could be replaced with $b_{i2}$ if $b_{i2}$ would be sampled such that it is strictly positive. In the above configuration that is not the case." ] }, { @@ -343,40 +409,6 @@ "- impute based on some model." ] }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# CODE BELOW IS OLD, WILL REMOVE SOON\n", - "# # make random positive time to event\n", - "# time = torch.floor(random_vars)\n", - "# # this is a workaround the loss function. 
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- impute based on some model."
   ]
  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# CODE BELOW IS OLD, WILL REMOVE SOON\n",
-    "# # make random positive time to event\n",
-    "# time = torch.floor(random_vars)\n",
-    "# # this is a workaround for the loss function, done so that when we find the right\n",
-    "# # indices in the log_hz we don't try to pick up things that are out of bounds.\n",
-    "# time[time<0] = 0\n",
-    "# time[time>9] = 9\n",
-    "# # print(time) \n",
-    "# # tensor([1.2792e+01, -7.7415e+00, 9.2325e+00, 1.0845e+01, 7.6460e+00, ...\n",
-    "\n",
-    "# # decide who has an event; here we consider those whose time is greater than one and smaller than 9\n",
-    "# events = (time > 1) & (time < 8)\n",
-    "# # print(events)\n",
-    "\n",
-    "# # remove the covariates for those who have observed an event\n",
-    "# for i in range(sample_size):\n",
-    "#     if events[i]==True:\n",
-    "#         time_cap = int(time[i])\n",
-    "#         covars[i, time_cap:] = torch.zeros(obs_time-time_cap)\n",
-    "\n",
-    "# # print(covars)"
-   ]
-  },

From 3b24c75cd4378610958cd62d0b7f712cb45292d8 Mon Sep 17 00:00:00 2001
From: corolth1
Date: Fri, 3 Jan 2025 11:41:09 -0500
Subject: [PATCH 10/19] minimal test

---
 docs/notebooks/loss_time_covariates.py | 152 +++++++++++++++++--------
 1 file changed, 102 insertions(+), 50 deletions(-)

diff --git a/docs/notebooks/loss_time_covariates.py b/docs/notebooks/loss_time_covariates.py
index d744431..60a72c8 100644
--- a/docs/notebooks/loss_time_covariates.py
+++ b/docs/notebooks/loss_time_covariates.py
@@ -3,19 +3,21 @@
 import torch
 
+import warnings  # assumed missing in the original hunk: needed by warnings.warn() below
+MAX_TIME = 1e6
+
 def neg_partial_time_log_likelihood(
-    log_hz: torch.Tensor, #Txnxp torch tensor, n is batch size, T number of time points, p is number of different covariates over time
-    time: torch.Tensor, #n length vector, time at which someone experiences event
-    events: torch.Tensor, #n length vector, boolean, true or false to determine if someone had an event
-    reduction: str = "mean"
+    log_hz: torch.Tensor,  # T x n x p tensor; T time points, n batch size, p covariates over time
+    time: torch.Tensor,  # length-n vector, time at which someone experiences an event
+    events: torch.Tensor,  # length-n boolean vector, True or False to determine if someone had an event
+    reduction: str = "mean",
 ) -> torch.Tensor:
-    '''
+    """
     needs further work
-    '''
+    """
     # only consider theta at time of event
     pll = _partial_likelihood_time_cox(log_hz, time, events)
 
     # Negative partial log likelihood
     pll = torch.neg(pll)
     if reduction.lower() == "mean":
         loss = pll.nanmean()
     elif reduction.lower() == "sum":
         loss = pll.sum()
     else:
         raise (
             ValueError(
             )
         )
     return loss
 
 
 def _partial_likelihood_time_cox(
-    log_hz: torch.Tensor, #Txnxp torch tensor, n is batch size, T number of time points, p is number of different covariates over time
-    time: torch.Tensor, #n length vector, time at which someone experiences event
-    events: torch.Tensor, #n length vector, boolean, true or false to determine if someone had an event
-
+    log_hz: torch.Tensor,  # T x n x p tensor; T time points, n batch size, p covariates over time
+    time: torch.Tensor,  # length-n vector, time at which someone experiences an event
+    events: torch.Tensor,  # length-n boolean vector, True or False to determine if someone had an event
 ) -> torch.Tensor:
     """
     Calculate the partial log likelihood for the Cox proportional hazards model.
 
     .. math::
 
         \lambda_i(t | H_Z) = \lambda_0(t) \theta(Z(t))
 
     A network maps the input covariates $Z(t)$ to the log relative hazards: :math:`\log \theta(Z(t))`.
     The partial likelihood with respect to :math:`\log \theta(Z(t))` is written as:
 
     and it only considers the values of the covariate :math:`Z` at event time :math:`\tau_i`.
 
     Remarks:
     - values inside the time vector must be strictly zero or positive, as they are used to identify values of
       covariates at event time
     - the maximum value inside the vector time cannot exceed T-1, for indexing reasons
     - this function was not tested for P>1, but an extension should be possible
     - the values of Z at event time should not be null; a reasonable imputation method should be used,
       unless the network fulfills that role
     - future formatting: the time vector must somehow correspond to the T dimension in the log_hz tensor, i.e. for those who experience an event,
       we want to identify the index of the covariate upon failure. We could either consider the last covariate before a series of zeros
       (requires special data formatting, but could reduce issues as it automatically contains event time information).
     """
+    # Last dimension must be equal to 1 if shape == 3
+    if len(log_hz.shape) == 3:
+        assert log_hz.shape[-1] == 1, "Last dimension of log_hz must be equal to 1."
+        log_hz = log_hz.squeeze(-1)
+
     # time cannot be smaller than zero, and maximum value cannot exceed the T dimension for this to work
-    # somewhere here it might be good to make sure maximum values in time do not exceed T and raise a warning
-    time_sorted, idx = torch.sort(time)
-
-    # sort the output of the RNN by the subjects who have earlier event time
-    # we want a tensor out
-    log_hz_sorted = log_hz[:,idx,:]
+    if time.min() < 0:
+        raise ValueError("Time values must be greater or equal to zero.")
+
+    # Maximum values in time should not exceed MAX_TIME; otherwise raise a warning
+    if time.max() > MAX_TIME:
+        warnings.warn(
+            f"Maximum value {MAX_TIME} in time vector exceeds the time dimension of the log_hz tensor."
+        )
+
+    # Sort the time vector and the output of the RNN by the subjects who have earlier event time
+    _, idx = torch.sort(time)
+
+    # Sort the output of the RNN by the subjects who have earlier event time
+    log_hz_sorted = log_hz[:, idx]
     events_sorted = events[idx]
 
-    # format the time so we can use it to index
-    # in the next step we want to pick out the covariate at event time for each subject, for each covariate p
-    time_sorted = time_sorted.type(torch.int64)
+    # as an outcome we want a 1 x n tensor, i.e. we only consider Z(tau_j), the covariate at time of event
+    log_hz_sorted_tj = torch.gather(log_hz_sorted, 1, idx.expand(log_hz_sorted.size()))
+
+    # same step as in the normal Cox loss; again, we consider Z(tau_j), where tau_j denotes the event time of subject j
+    log_denominator_tj = torch.logcumsumexp(log_hz_sorted.flip(0), dim=0).flip(0)
 
-    # as an outcome we want a 1 x n x p tensor, i.e.
-    # we only consider Z(tau_j), the covariate at time of event
-    log_hz_sorted_tj = log_hz_sorted.gather(0, time_sorted.unsqueeze(0).unsqueeze(-1))
+    # Keep only patients with events
+    include = events_sorted.expand(log_hz_sorted.size())
 
-    # same step as in the normal Cox loss; again, we consider Z(tau_j), where tau_j denotes the event time of subject j
-    log_denominator_tj = torch.logcumsumexp(log_hz_sorted_tj.flip(0), dim=0).flip(0)
-    # give the mask the same dimensions as the log_hz and log_denominator vectors
-    event_mask = events_sorted.unsqueeze(0).unsqueeze(-1)
-    return (log_hz_sorted_tj - log_denominator_tj)[event_mask]
+    # return the partial log likelihood
+    return (log_hz_sorted_tj - log_denominator_tj)[include]
 
 
 def _time_varying_covariance(
-    log_hz: torch.Tensor, #nx1 vector
-    event: torch.Tensor, #n vector (i think)
-    time: torch.Tensor, #n vector (i think)
-    covariates: torch.Tensor, #nxp vector, p number of params
+    log_hz: torch.Tensor,  # n x 1 vector
+    event: torch.Tensor,  # n vector (I think)
+    time: torch.Tensor,  # n vector (I think)
+    covariates: torch.Tensor,  # n x p matrix, p number of params
 ) -> torch.Tensor:
-    """ Calculate the covariance matrix for the outcome thetas from a network in
-    in the case of time-varying covariates. Returns a nxn matrix with n being the batch size."""
+    """Calculate the covariance matrix for the outcome thetas from a network
+    in the case of time-varying covariates. Returns an n x n matrix, with n being the batch size.
+    """
     # sort data by time-to-event or censoring
     time_sorted, idx = torch.sort(time)
     log_hz_sorted = log_hz[idx]
     event_sorted = event[idx]
 
     # keep log if we can
     exp_log_hz = torch.exp(log_hz_sorted)
 
     # remove mean over time from covariates
     # sort covariates so that the rows match the ordering
     covariates_sorted = covariates[idx, :] - covariates.mean(dim=0)
 
     # the left-hand side (LHS) of the equation
     # below is Z_k Z_k^T - I think it should be an n x n matrix
     covariate_inner_product = torch.matmul(covariates_sorted, covariates_sorted.T)
 
     # pointwise multiplication of vectors to get the nominator of the LHS
     # outcome is a vector of length n
     # Ends up being (1, n)
     log_nominator_left = torch.matmul(exp_log_hz.T, covariate_inner_product)
 
     # right-hand side of the equation
     # formulate the brackets \sum exp(theta) Z_k
     bracket = torch.mul(exp_log_hz, covariates_sorted)
     covariance_matrix = torch.matmul(bracket, bracket.T)  # n x n matrix
 
     # below is commented out as it does not apply, but I wanted to keep it for the functions
     # log_nominator_right = torch.sum(nominator_right, dim=0).unsqueeze(0)
     # log_nominator_right = nominator_right[0,].unsqueeze(0)
     # log_denominator = torch.logcumsumexp(log_hz_sorted.flip(0), dim=0).flip(0)  # dim=0 sums over the 0th dimension
     # partial_log_likelihood = torch.div(log_nominator_left - log_nominator_right, log_denominator)  # (n, n)
 
     return covariance_matrix
 
 
+if __name__ == "__main__":
+    import torch
+    from torchsurv.loss import cox
+    from torchsurv.metrics.cindex import ConcordanceIndex
+
+    # set seed
+    torch.manual_seed(123)
+
+    # Parameters
+    input_size = 16  # Irrelevant to the loss function
+    output_size = 1  # always 1 for Cox
+    seq_length = 2  # number of time steps
+    batch_size = 3  # number of samples
+
+    # make random boolean events
+    events = torch.rand(batch_size) > 0.5
+    print(events)
+
+    # make random positive time to event
+    time = torch.rand(batch_size) * 100
+    print(time)
+
+    # Create simple RNN model
+    rnn = torch.nn.RNN(input_size, output_size, seq_length)
+    rnn = torch.compile(rnn)
+    inputs = torch.randn(seq_length, batch_size, input_size)
+    h0 = torch.randn(seq_length, batch_size, output_size)
+
+    # Forward pass time series input
+    outputs, _ = rnn(inputs, h0)
+    print(f"outputs shape = {outputs.size()}")
+
+    # Loss
+    loss = neg_partial_time_log_likelihood(outputs, time, events)
+    print(f"loss = {loss}")
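As a sketch of how this minimal test could be extended into an actual fit, one could wrap the loss in a standard optimization loop, reusing `rnn`, `inputs`, `h0`, `time` and `events` from the block above; the optimizer settings here are arbitrary choices rather than anything prescribed by the patch:

```python
    # Hypothetical training loop (assumes at least one event in the batch;
    # with no events the partial likelihood is empty and the loss is NaN)
    optimizer = torch.optim.Adam(rnn.parameters(), lr=1e-3)
    for step in range(100):
        optimizer.zero_grad()
        outputs, _ = rnn(inputs, h0)  # (seq_length, batch_size, 1) log hazards
        loss = neg_partial_time_log_likelihood(outputs, time, events)
        loss.backward()
        optimizer.step()
```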
From f5c82713e546b11e9251dd20be82c6c2329a2df0 Mon Sep 17 00:00:00 2001
From: Dembowska
Date: Fri, 3 Jan 2025 20:33:53 +0100
Subject: [PATCH 11/19] notebook running

---
 docs/notebooks/time_varying.ipynb | 257 ++++++++++++++----------------
 1 file changed, 119 insertions(+), 138 deletions(-)

diff --git a/docs/notebooks/time_varying.ipynb b/docs/notebooks/time_varying.ipynb
index 5ad97cb..44cddcf 100644
--- a/docs/notebooks/time_varying.ipynb
+++ b/docs/notebooks/time_varying.ipynb
@@ -9,6 +9,28 @@
    "In this notebook, we analyse a simulated dataset with time-varying covariates and survival outcomes. `TorchSurv` is used to train a model that predicts relative risk of subjects based on covariates observed over time. We will attempt to thoroughly explain the necessary elements to understand our implementation, but for a detailed read on time-varying survival models refer to Chapter 6 of [Dynamic Regression Models for Survival Data](https://link.springer.com/book/10.1007/0-387-33960-4). For a briefer explanation, please refer to these [slides](https://ms.uky.edu/~mai/sta635/Cox%20model.pdf). Below is a summary of the necessary information."
   ]
  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Future project ideas:\n",
+    "Future projects can take on various themes: testing edge cases of this implementation, improving the code to become more robust to different data types, including the Weibull distribution, and comparing this approach to others.\n",
+    "Testing edge cases:\n",
+    "- use the simulated data and change different parameters to see how it affects performance; this can help guide appropriate use\n",
+    "- design slightly different simulations known for being difficult or easy in specific scenarios\n",
+    "- use a dataset with known properties\n",
+    "\n",
+    "Improving code to be more robust:\n",
+    "- generalising the loss functions and overall defining the formatting required for it to work, generalising for different time scales etc.\n",
+    "- extend the method to deal with multiple types of covariates in one loss function, or a combination of multiple losses. This can extend to multiple time-varying covariates and mixing time-invariant and varying ones.\n",
+    "\n",
+    "Weibull:\n",
+    "- Extend the Cox loss function to also include the Weibull distribution; this is described for both the log-likelihood and the simulation\n",
+    "\n",
+    "Comparison\n",
+    "- One could compare this approach to other loss functions or statistical models to get an idea of what it brings as a benefit and a challenge. Note this comparison can be done via simulation or some dataset.\n"
+   ]
+  },
@@ -84,7 +106,7 @@
  },
  {
   "cell_type": "code",
-   "execution_count": 3,
+   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
@@ -95,7 +117,7 @@
  },
  {
   "cell_type": "code",
-   "execution_count": 4,
+   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
@@ -148,17 +170,17 @@
  },
  {
   "cell_type": "code",
-   "execution_count": 9,
+   "execution_count": 55,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
-      "tensor([[34.2016, 34.2866, 34.3716, 34.4566, 34.5416, 34.6266],\n",
-      "        [33.4380, 33.4018, 33.3657, 33.3295, 33.2933, 33.2572],\n",
-      "        [31.5581, 31.5498, 31.5415, 31.5332, 31.5248, 31.5165],\n",
-      "        [35.7813, 35.8513, 35.9212, 35.9912, 36.0611, 36.1310]])\n"
+      "tensor([[34.2016, 34.2186, 34.2356, 34.2526, 34.2696, 34.2866],\n",
+      "        [33.4380, 33.4308, 33.4235, 33.4163, 33.4091, 33.4018],\n",
+      "        [31.5581, 31.5565, 31.5548, 31.5531, 31.5515, 31.5498],\n",
+      "        [35.7813, 35.7953, 35.8093, 35.8233, 35.8373, 35.8513]])\n"
     ]
    }
   ],
   "source": [
    "n = 100 # Number of subjects\n",
-    "T = 6 # Number of time points\n",
-    "time_vec = torch.tensor([0, 5, 10, 15, 20, 25])\n",
+    "T = torch.tensor(6) # Number of time points\n",
+    "time_vec = torch.tensor([0, 1, 2, 3, 4, 5])\n",
@@ -205,71 +227,28 @@
  },
  {
   "cell_type": "code",
-   "execution_count": 8,
+   "execution_count": 1,
   "metadata": {},
-   "outputs": [
-    [previous stdout tensor output of the duplicated cell omitted]
-   ],
+   "outputs": [],
   "source": [
+    "## ANOTHER WAY OF GENERATING DATA\n",
+    "\n",
+    "# # Simulate observed longitudinal measures\n",
+    "# R = torch.diag_embed(sigma.repeat(T))\n",
+    "# V = torch.matmul(torch.matmul(Z, G), Z.T) + R\n",
+    "\n",
+    "# #get a mean trajectory\n",
+    "# b1 = torch.tensor([4.250])\n",
+    "# b2 = torch.tensor([0.250])\n",
+    "# mean_trajectory = b1.item() + b2.item() * Z[:,1] + alpha * age_mean\n",
+    "\n",
+    "# #define the distribution to sample the trajectories from\n",
+    "# observed_data_dist = dist.MultivariateNormal(trajectories, V)\n",
+    "\n",
+    "# #sample from the distribution to get an n x T matrix of observations/covariates\n",
+    "# observed_data = observed_data_dist.sample((1,)).squeeze()\n",
+    "\n",
+    "# print(observed_data[1:5, :])"
+   ]
+  },
-    "import torch.distributions as dist\n",
     [further removed lines of the duplicated simulation cell omitted]
-    "trajectories = random_effects[:, 0].unsqueeze(1) + random_effects[:, 1].unsqueeze(1) * Z[:,1] + alpha * age.unsqueeze(1)\n",
age.unsqueeze(1)\n", - "\n", - "# Simulate observed longitudinal measures\n", - "R = torch.diag_embed(sigma.repeat(T))\n", - "V = torch.matmul(torch.matmul(Z, G), Z.T) + R\n", - "\n", - "#get a mean trajectory\n", - "b1 = torch.tensor([4.250])\n", - "b2 = torch.tensor([0.250])\n", - "mean_trajectory = b1.item() + b2.item() * Z[:,1] + alpha * age_mean\n", + "# #define the distribution to sample the trajectories from\n", + "# observed_data_dist = dist.MultivariateNormal(trajectories, V)\n", "\n", - "#define the distribution to sample the trajectories from\n", - "observed_data_dist = dist.MultivariateNormal(trajectories, V)\n", + "# #sample from the distribution to get an n x T matrix of observations/covariates\n", + "# observed_data = observed_data_dist.sample((1,)).squeeze()\n", "\n", - "#sample from the distribution to get an n x T matrix of observations/covariates\n", - "observed_data = observed_data_dist.sample((1,)).squeeze()\n", - "\n", - "print(observed_data[1:5, :])" + "# print(observed_data[1:5, :])" ] }, { @@ -301,7 +280,7 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": 5, "metadata": {}, "outputs": [], "source": [ @@ -323,28 +302,25 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 63, "metadata": {}, "outputs": [ { "data": { "text/plain": [ - "tensor([ 9.6428, 6.8019, 7.1837, 8.1690, 10.6510, 5.2226, 7.0858, 11.3846,\n", - " 5.4684, 14.3864, 5.1831, 9.3314, 6.3816, 3.8954, 10.0959, 6.0119,\n", - " 11.7046, 12.7777, 10.7462, 8.4370, 4.6285, 4.8617, 4.3450, 10.8670,\n", - " 10.1935, 16.2546, 5.4758, 8.5248, 9.2135, 9.4407, 11.7310, 21.0234,\n", - " 14.0767, 9.1752, 18.8326, 8.9085, 11.2594, 8.9873, 7.5456, 8.4984,\n", - " 9.0333, 4.8472, 9.4688, 7.7191, 6.2192, 6.4989, 9.8902, 7.8185,\n", - " 5.2405, 4.2516, 9.3067, 5.0147, 8.3767, 4.9315, 8.5749, 11.3669,\n", - " 6.0864, 7.5788, 11.8391, 8.8440, 12.2118, 13.6110, 6.2863, 5.8571,\n", - " 9.5126, 8.6607, 6.8886, 15.5586, 10.6941, 7.2345, 18.2753, 5.4170,\n", - " 5.2679, 9.0509, 12.9154, 11.2252, 7.4939, 6.5494, 10.3731, 14.2850,\n", - " 5.7533, 12.2423, 5.6055, 5.2892, 11.0855, 11.8667, 5.5114, 11.4350,\n", - " 10.3182, 12.8253, 14.6775, 19.0688, 17.0049, 6.3822, 14.5267, 8.7058,\n", - " 8.2680, 10.7909, 5.2648, 12.7710], dtype=torch.float64)" + "tensor([ True, False, False, False, True, True, True, True, True, True,\n", + " True, True, True, True, True, False, True, True, True, True,\n", + " True, True, True, True, False, True, True, True, True, True,\n", + " True, True, True, True, True, False, True, True, True, False,\n", + " False, True, True, True, True, False, True, True, True, True,\n", + " True, True, True, True, True, True, False, True, True, True,\n", + " False, True, False, True, False, True, False, False, True, True,\n", + " True, True, True, True, True, True, True, True, True, True,\n", + " True, True, False, True, False, True, True, True, True, True,\n", + " True, True, True, True, True, True, True, False, False, True])" ] }, - "execution_count": 23, + "execution_count": 63, "metadata": {}, "output_type": "execute_result" } @@ -353,12 +329,14 @@ "# Specify the values for parameters, generate the random variables and call on relevant variables defined previously\n", "\n", "alpha = torch.tensor([0.05, -0.5]) # regression coefficient for time-invariant covariates\n", - "gamma = torch.tensor(0.1) # association strength between longitudinal measures and time-to-event\n", - "lambda_0 = torch.tensor(0.05) # baseline hazard rate\n", + "gamma = torch.tensor(0.3) # association 
strength between longitudinal measures and time-to-event\n", + "lambda_0 = torch.tensor(0.1) # baseline hazard rate\n", + "\n", + "torch.manual_seed(456)\n", "\n", "# Generate the random variables for hazard of a subject and censoring\n", - "Q = dist.Uniform(0, 1).sample() # Random variable for hazard (Q)\n", - "C = dist.Uniform(20, 30).sample() # Random variable for censoring\n", + "Q = dist.Uniform(0, 1).sample((n,)) # Random variable for hazard (Q)\n", + "C = dist.Uniform(3,5.5).sample((n,)) # Random variable for censoring\n", "\n", "# age and sex are the names of variables corresponding to those covariates\n", "# create the X matrix of covariates\n", @@ -378,9 +356,12 @@ "\n", "#take the real part of the LBF, the complex part is =0\n", "outcome_LWF = time_to_event.real\n", + "outcome_LWF = torch.floor(outcome_LWF)\n", + "outcome_LWF\n", "\n", - "# implement censoring with some level of intensity\n", - "outcome_LWF\n" + "# implement censoring with some level of intensity below\n", + "events = C<5\n", + "events" ] }, { @@ -420,7 +401,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 66, "metadata": {}, "outputs": [], "source": [ @@ -434,17 +415,17 @@ }, { "cell_type": "code", - "execution_count": 37, + "execution_count": 69, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ - "torch.Size([10, 100, 1])\n", - "torch.Size([10, 100, 1])\n", + "torch.Size([6, 100, 1])\n", + "torch.Size([6, 100, 1])\n", "torch.Size([2, 100, 1])\n", - "torch.Size([10, 100, 1])\n" + "torch.Size([6, 100, 1])\n" ] } ], @@ -456,13 +437,13 @@ "input_size = 1\n", "output_size = 1\n", "num_layers = 2\n", - "seq_length = obs_time\n", - "batch_size = sample_size\n", + "seq_length = T\n", + "batch_size = n\n", "\n", "# Create simple RNN model\n", "rnn = torch.nn.RNN(input_size, output_size, num_layers)\n", "inputs = torch.randn(seq_length, batch_size, input_size)\n", - "test = covars.T.unsqueeze(2)\n", + "test = trajectories.T.unsqueeze(2)\n", "print(test.shape)\n", "print(inputs.shape)\n", "\n", @@ -495,7 +476,7 @@ }, { "cell_type": "code", - "execution_count": 26, + "execution_count": 64, "metadata": {}, "outputs": [], "source": [ @@ -505,11 +486,11 @@ "# the columns are padded so if a subject experiences an event, the remaining of the column is zero\n", "\n", "# Generating example torch matrix\n", - "torch_matrix = covars\n", + "torch_matrix = trajectories\n", "# Convert torch matrix to pandas dataframe\n", "\n", "#set time to integer\n", - "max_time = max(time.type(torch.int64))\n", + "max_time = max(time_vec.type(torch.int64))\n", "\n", "vars = []\n", "#times = []\n", @@ -517,7 +498,7 @@ "stop = []\n", "event = []\n", "subjs = []\n", - "for i in range(sample_size):\n", + "for i in range(n):\n", " subj_counter = 0\n", " for j in range(max_time):\n", " if torch_matrix[i,j] == 0:\n", @@ -551,17 +532,17 @@ }, { "cell_type": "code", - "execution_count": 27, + "execution_count": 65, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ - "Iteration 1: norm_delta = 7.81e-03, step_size = 0.9500, log_lik = -309.16572, newton_decrement = 1.92e-03, seconds_since_start = 0.1\n", - "Iteration 2: norm_delta = 3.93e-04, step_size = 0.9500, log_lik = -309.16381, newton_decrement = 4.85e-06, seconds_since_start = 0.1\n", - "Iteration 3: norm_delta = 1.96e-05, step_size = 0.9500, log_lik = -309.16380, newton_decrement = 1.21e-08, seconds_since_start = 0.1\n", - "Iteration 4: norm_delta = 1.03e-06, step_size = 1.0000, log_lik = -309.16380, 
newton_decrement = 3.03e-11, seconds_since_start = 0.1\n", + "Iteration 1: norm_delta = 5.88e-02, step_size = 0.9500, log_lik = -324.39949, newton_decrement = 2.39e-01, seconds_since_start = 0.0\n", + "Iteration 2: norm_delta = 2.77e-03, step_size = 0.9500, log_lik = -324.16177, newton_decrement = 5.30e-04, seconds_since_start = 0.0\n", + "Iteration 3: norm_delta = 1.38e-04, step_size = 0.9500, log_lik = -324.16124, newton_decrement = 1.32e-06, seconds_since_start = 0.0\n", + "Iteration 4: norm_delta = 7.26e-06, step_size = 1.0000, log_lik = -324.16124, newton_decrement = 3.30e-09, seconds_since_start = 0.0\n", "Convergence completed after 4 iterations.\n" ] }, @@ -598,23 +579,23 @@ " \n", " \n", " number of subjects\n", - " 95\n", + " 100\n", " \n", " \n", " number of periods\n", - " 476\n", + " 500\n", " \n", " \n", " number of events\n", - " 80\n", + " 81\n", " \n", " \n", " partial log-likelihood\n", - " -309.16\n", + " -324.16\n", " \n", " \n", " time fit was run\n", - " 2024-12-17 12:40:01 UTC\n", + " 2025-01-03 19:18:44 UTC\n", " \n", " \n", "\n", @@ -638,17 +619,17 @@ " \n", " \n", " var\n", - " -0.00\n", - " 1.00\n", - " 0.03\n", - " -0.06\n", - " 0.05\n", - " 0.94\n", - " 1.06\n", - " 0.00\n", - " -0.06\n", + " -0.01\n", + " 0.99\n", + " 0.02\n", + " -0.05\n", + " 0.02\n", " 0.95\n", - " 0.07\n", + " 1.02\n", + " 0.00\n", + " -0.69\n", + " 0.49\n", + " 1.03\n", " \n", " \n", "
\n", @@ -669,15 +650,15 @@ " \n", " \n", " Partial AIC\n", - " 620.33\n", + " 650.32\n", " \n", " \n", " log-likelihood ratio test\n", - " 0.00 on 1 df\n", + " 0.48 on 1 df\n", " \n", " \n", " -log2(p) of ll-ratio test\n", - " 0.07\n", + " 1.03\n", " \n", " \n", "\n", @@ -687,31 +668,31 @@ "\\begin{tabular}{lrrrrrrrrrrr}\n", " & coef & exp(coef) & se(coef) & coef lower 95% & coef upper 95% & exp(coef) lower 95% & exp(coef) upper 95% & cmp to & z & p & -log2(p) \\\\\n", "covariate & & & & & & & & & & & \\\\\n", - "var & -0.00 & 1.00 & 0.03 & -0.06 & 0.05 & 0.94 & 1.06 & 0.00 & -0.06 & 0.95 & 0.07 \\\\\n", + "var & -0.01 & 0.99 & 0.02 & -0.05 & 0.02 & 0.95 & 1.02 & 0.00 & -0.69 & 0.49 & 1.03 \\\\\n", "\\end{tabular}\n" ], "text/plain": [ - "\n", + "\n", " event col = 'events'\n", " penalizer = 0.1\n", - "number of subjects = 95\n", - " number of periods = 476\n", - " number of events = 80\n", - "partial log-likelihood = -309.16\n", - " time fit was run = 2024-12-17 12:40:01 UTC\n", + "number of subjects = 100\n", + " number of periods = 500\n", + " number of events = 81\n", + "partial log-likelihood = -324.16\n", + " time fit was run = 2025-01-03 19:18:44 UTC\n", "\n", "---\n", " coef exp(coef) se(coef) coef lower 95% coef upper 95% exp(coef) lower 95% exp(coef) upper 95%\n", "covariate \n", - "var -0.00 1.00 0.03 -0.06 0.05 0.94 1.06\n", + "var -0.01 0.99 0.02 -0.05 0.02 0.95 1.02\n", "\n", " cmp to z p -log2(p)\n", "covariate \n", - "var 0.00 -0.06 0.95 0.07\n", + "var 0.00 -0.69 0.49 1.03\n", "---\n", - "Partial AIC = 620.33\n", - "log-likelihood ratio test = 0.00 on 1 df\n", - "-log2(p) of ll-ratio test = 0.07" + "Partial AIC = 650.32\n", + "log-likelihood ratio test = 0.48 on 1 df\n", + "-log2(p) of ll-ratio test = 1.03" ] }, "metadata": {}, @@ -723,13 +704,13 @@ "" ] }, - "execution_count": 27, + "execution_count": 65, "metadata": {}, "output_type": "execute_result" }, { "data": { - "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAjEAAAGwCAYAAABYazQUAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjkuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8hTgPZAAAACXBIWXMAAA9hAAAPYQGoP6dpAAArS0lEQVR4nO3de3QV5aH+8WcnIRdy2SEGSAIBhHCvULyAgK1aUYEiEbUKgghYEAWtHEVQUfSgLUWKINVarMhRKWiFhawjiqhgAQELHLnIZYEl5EISCEl2EkjIZb+/P/ojy0gggNnMO/D9rJXFyuzZs595V8I8effMHo8xxggAAMBlgpwOAAAAcD4oMQAAwJUoMQAAwJUoMQAAwJUoMQAAwJUoMQAAwJUoMQAAwJVCnA5wNvx+vw4dOqTo6Gh5PB6n4wAAgLNgjFFxcbGSkpIUFFT/8yauKDGHDh1ScnKy0zEAAMB5yMjIUPPmzet9u64oMdHR0ZL+MwgxMTEOpwFwoaSlpWnatGl69tln1apVK6fjADhHRUVFSk5Orj6O1zdXlJiTbyHFxMRQYoBLSHR0tBo0aKDo6Gh+9wEXC9SpIJzYCwAAXIkSA8BawcHBio6OVnBwsNNRAFjI44a7WBcVFcnr9crn8zGlDACASwT6+M1MDAAAcCVKDABrZWZmasKECcrMzHQ6CgALUWIAWKuiokK5ubmqqKhwOgoAC1FiAACAK1FiAACAK1FiAACAK1FiAFgrISFBkydPVkJCgtNRAFjIFbcdAHBpioiIUJcuXZyOAcBSzMQAsFZhYaGWLFmiwsJCp6MAsBAlBoC1CgoKtGTJEhUUFDgdBYCFKDEAAMCVKDEAAMCVKDEAAMCVKDEArBUVFaXevXsrKirK6SgALOQxxhinQ9Ql0LfyBgAA9S/Qx29mYgBYixtAAjgTSgwAa2VmZmrChAnKzMx0OgoAC1FiAACAK1FiAACAK1FiAACAK1FiAACAK3GJNQAACAgusQYAAKgFJQaAtbKzs/Xcc88pOzvb6SgALESJAWCtsrIy7d+/X2VlZU5HAWAhSgwAAHAlSgwAAHAlSgwAAHAlSgwAazVu3FgPP/ywGjdu7HQUABYKcToAAJxOVFSUrrvuOqdjALAUMzEArFVUVKTPPvtMRUVFTkcBYCFKDABrHT16VAsWLNDRo0edjgLAQpQYAADgSpQYAADgSpQYAADgSpQYANaKiIhQly5dFBER4XQUABbyGGOM0yHqEuhbeQMAgPoX6OM3MzEArOX3+1VaWiq/3+90FAAWosQAsNbBgwf1wAMP6ODBg05HAWAhSgwAAHAlSgwAAHAlSgwAAHAlSgwAAHAl7mINwFotWrTQG2+8ocjISKejALAQJQaAtYKDg/lsKACnxdtJAKyVm5urmTNnKjc31+koACxEiQFgrePHj2vr1q06fvy401EAWIgSAwAAXIkSAwAAXIkSAwAAXIkSA8BacXFxGjZsmOLi4pyOAsBCXGINwFper1f9+/d3OgYASzETA8Bax44d06ZNm3Ts2DGnowCwECUGgLUOHz6sOXPm6PDhw05HAWAhSgwAAHAlSgwAAHAlSgwAAHAlSgwAa4WGhqpVq1YKDQ11OgoAC3mMMcbpEHUpKiqS1+uVz+fjjrYAALhEoI/fzMQAAABXosQAsFZaWpqGDx+utLQ0p6MAsBAlBoC1jDGqrKyUC971BuAASgwAAHAlSgwAAHAlSgwAAHAl7mINwFrNmjXTjBkz1KRJE6ejALAQJQaAtUJDQ9W8eXOnYwCwFG8nAbBWXl6e5s2bp7y8PKejALAQJQaAtYqLi7VmzRoVFxc7HQWAhSgxAADAlSgxAADAlSgxAADAlSgxAKzl9Xo1cOBAeb1ep6MAsBCXWAOwVlxcnAYPHux0DACWYiYGgLXKysq0a9culZWVOR0FgIUoMQCslZ2drRdffFHZ2dlORwFgIUoMAABwJUoMAABwJUoMAABwJUoMAGuFhIQoLi5OISFcSAngVB5jjHE6RF2Kiork9Xrl8/kUExPjdBwAAHAWAn38ZiYGAAC4EiUGgLUyMjI0fvx4ZWRkOB0FgIUoMQCsVVlZqfz8fFVWVjodBYCFKDEAAMCVKDEAAMCVKDEAAMCVKDEArJWYmKgpU6YoMTHR6SgALMQnSAGwVnh4uDp16uR0DACWYiYGgLXy8/O1ePFi5efnOx0FgIUoMQCs5fP5tHz5cvl8PqejALAQJQYAALgSJQYAALgSJQYAALgSJQaAtaKjo3XDDTcoOjra6SgALOQxxhinQ9Ql0LfyBgAA9S/Qx29mYgBYq7y8XJmZmSovL3c6CgALUWIAWCsrK0tPPvmksrKynI4CwEKUGAAA4EqUGAAA4EqUGAAA4EqUGADW8ng8CgkJkcfjcToKAAtxiTUAAAgILrEGAACoBSUGgLWysrL09NNPc4k1gFpRYgBYq7y8XGlpaXzYHYBaUWIAAIArUWIAAIArUWIAAIArUWIAWKtJkyb63e9+pyZNmjgdBYCFQpwOAACnExkZqR49ejgdA4ClmIkBYC2fz6cVK1bI5/M5HQWAhSgxAKyVn5+v9957T/n5+U5HAWAhSgwAAHAlSgwAAHAlSgwAAHAlSgwAazVs2FBXXnmlGjZs6HQUABbyGGOM0yHqEuhbeQMAgPoX6OM3MzEArFVVVaWioiJVVVU5HQWAhSgxAKyVnp6usWPHKj093ekoACxEiQEAAK5EiQEAAK5EiQEAAK5EiQEAAK7EJdYArOX3+3XixAmFhYUpKIi/uQC3CfTxO6TetwgA9SQoKEgRERFOxwBgKf60AWCtnJwcTZ8+XTk5OU5HAWAhSgwAa5WWlmr79u0qLS11OgoAC1FiAACAK1FiAACAK1FiAACAK1FiAFjrsssu04gRI3TZZZc5HQWAhbjEGoC1YmJidMsttzgdA4ClmIkBYK2SkhKtW7dOJSUlTkcBYCFKDABrHTlyRK+//rqOHDnidBQAFqLEAAAAV6LEAAAAV6LEAAAAV6LEALBWeHi4UlJSFB4e7nQUABbyGGOM0yHqEuhbeQMAgPoX6OM3MzEAAMCVKDEArHXgwAHde++9OnDggNNRAFiIEgMAAFyJEgMAAFyJEgMAAFyJEgMAAFyJu1gDsFbz5s31yiuvKC4uzukoACxEiQFgrQYNGqhp06ZOxwBgKd5OAmCtI0eO6LXXXuMu1gBqRYkBYK2SkhKtX79eJSUlTkcBYCFKDAAAcCVKDAAAcCVKDAAAcCVKDABrNWrUSHfeeacaNWrkdBQAFuISawDWio2N1Z133ul0DACWYiYGgLVKS0u1fft2lZaWOh0FgIUoMQCslZOTo+nTpysnJ8fpKAAsRIkBAACuRIkBAACuRIkBAACuRIkBYK2TN4Bs0KCB01EAWMhjjDFOh6hLUVGRvF6vfD6fYmJinI4DAADOQqCP38zEAAAAV6LEALBWenq6HnzwQaWnpzsdBYCFKDEArFVVVaXi4mJVVVU5HQWAhSgxAADAlSgxAADAlSgxAADAlSgxAKyVmJioF154QYmJiU5HAW
ChEKcDAMDphIeHq23btk7HAGApZmIAWCs/P1/vvfee8vPznY4CwEKUGADW8vl8WrFihXw+n9NRAFiIEgMAAFyJEgMAAFyJE3sBXFDXX3+9MjIyzrhOcnKyvvrqqwuUCIBbXfIzMa1bt1br1q2djgFcMjIyMs54L6T09PTqkhMdHa2bb75Z0dHRFyoecEly67HQ0ZmY8vJyhYaGOhkBgANatGihf//737U+9sP/SOPj4zVy5MgLFQuAy5z1TMy8efOUlJQkv99fY3lqaqpGjRql77//XqmpqWratKmioqJ0zTXX6PPPP6+xbqtWrTRt2jQNHz5cMTExGjNmTP3sBYCL0okTJ3TgwAGdOHHC6SgALOQxxpizWbGgoEAJCQlasWKFbrrpJkn/+QyHxMRErVixQvHx8dq4caN69+6tsLAwvfPOO5o5c6b27t2rFi1aSPpPiSkoKNBzzz2n22+/XZLUpk2bU17rxIkTNf7TKioqUnJysnw+n2JiYn7qPtfQunVrZWRkKDk5uV63C6B2J3/fzjQTc3KdiooKFRQUqFGjRmrQoMEFTgpcOur6vTxfRUVF8nq9ATl+S+cwE9OoUSP169dPf//736uXffjhh4qPj9eNN96orl276sEHH9TPfvYztW3bVtOmTVObNm20fPnyGtv51a9+pccff1xt2rSptcBI0h/+8Ad5vd7qLwoGAAD4sXM6J2bo0KEaPXq0Xn/9dYWFhWnhwoUaPHiwgoKCVFJSoueff14ff/yxsrOzVVlZqdLS0lNO4Lv66qvrfJ2nnnpK//Vf/1X9/cmZmEAJRPsEULuzOXnw5O/kgQMH9Mwzz+ill17S5ZdffgHSAZcmN57UK51jibnttttkjNHHH3+sa665RmvXrtUrr7wiSXriiSe0atUqzZw5UykpKYqIiNBdd92l8vLyGtuIjIys83XCwsIUFhZ2LtEAAMAl5pxKTHh4uO644w4tXLhQ+/fvV/v27XXllVdKktavX68RI0Zo0KBBkqSSkhKlpaXVe2AA7peenn7av/zS09Orz6MLCgpSeHi4goIu+U+DAFCLc77EeujQoRowYIC+++47DRs2rHp527ZttXTpUt12223yeDx69tlnT7mSyUa8jQRcWHW9NdyiRYvqdVq2bKn58+dfiFjAJc2tx8JzLjG/+tWvFBcXp7179+ree++tXj5r1iyNGjVKvXr1Unx8vCZNmqSioqJ6DQvA/fgkXgD15awvsXZSoC/RAmCnrKwszZ49W4899piaNWvmdBwA58iaS6wB4EIrLy9XVlbWKRcIAIBEiQEAAC5FiQEAAK5EiQEAAK5EiQFgraZNm+rxxx9X06ZNnY4CwELnfIk1AFwoDRs21FVXXeV0DACWYiYGgLUKCwv10UcfqbCw0OkoACxEiQFgrYKCAr3//vsqKChwOgoAC1FiAACAK1FiAACAK1FiAACAK1FiAFgrMjJSPXr0UGRkpNNRAFiIG0ACAICA4AaQAC5ZlZWVys/PV2VlpdNRAFiIEgPAWhkZGRo/frwyMjKcjgLAQpQYAADgSpQYAADgSpQYAADgSpQYAADgSlxiDcBaxhhVVlYqJCREHo/H6TgAzlGgj98h9b5FAKgnHo9HDRo0cDoGAEvxdhIAa2VnZ2vatGnKzs52OgoAC1FiAFirrKxMu3fvVllZmdNRAFiIEgMAAFyJEgMAAFyJEgMAAFyJEgPAWvHx8Ro9erTi4+OdjgLAQlxiDcBa0dHRuvHGG52OAcBSzMQAsFZxcbFWr16t4uJip6MAsBAlBoC18vLy9OabbyovL8/pKAAsRIkBAACuRIkBAACuRIkBAACuRIkBYK3w8HB17NhR4eHhTkcBYCGPMcY4HaIugb6VNwAAqH+BPn4zEwPAWsYYVVRUyAV/awFwACUGgLXS0tJ0//33Ky0tzekoACxEiQEAAK5EiQEAAK5EiQEAAK5EiQEAAK7EXawBWCs5OVl//vOf+WgFALWixACwVkhIiOLi4pyOAcBSvJ0EwFqHDx/WnDlzdPjwYaejALAQJQaAtY4dO6ZNmzbp2LFjTkcBYCFKDAAAcCVKDAAAcCVKDAAAcCVKDABrNWrUSPfcc48aNWrkdBQAFuISawDWio2NVWpqqtMxAFiKmRgA1jp+/Li2bNmi48ePOx0FgIUoMQCslZubqz/96U/Kzc11OgoAC1FiAACAK1FiAACAK1FiAACAK1FiAFgrNDRUzZo1U2hoqNNRAFjIY4wxToeoS1FRkbxer3w+n2JiYpyOAwAAzkKgj9/MxAAAAFeixACw1sGDBzVq1CgdPHjQ6SgALESJAWAtv9+vsrIy+f1+p6MAsBAlBgAAuBIlBgAAuBIlBgAAuBIlBoC1kpKS9NJLLykpKcnpKAAsFOJ0AAA4nbCwMF1++eVOxwBgKWZiAFgrLy9Pb7/9tvLy8pyOAsBClBgA1iouLtaqVatUXFzsdBQAFqLEAAAAV6LEAAAAV6LEAAAAV6LEALCW1+tV//795fV6nY4CwEJcYg3AWnFxcRo2bJjTMQBYipkYANYqKyvTvn37VFZW5nQUABaixACwVnZ2tqZOnars7GynowCwECUGAAC4EiUGAAC4EiUGAAC4EiUGgLWCg4MVHR2t4OBgp6MAsJDHGGOcDlGXoqIieb1e+Xw+xcTEOB0HAACchUAfv5mJAQAArkSJAWCtzMxMTZgwQZmZmU5HAWAhSgwAa1VUVCg3N1cVFRVORwFgIUoMAABwJUoMAABwJUoMAABwJUoMAGslJCRo8uTJSkhIcDoKAAuFOB0AAE4nIiJCXbp0cToGAEsxEwPAWoWFhVqyZIkKCwudjgLAQpQYANYqKCjQkiVLVFBQ4HQUABaixAAAAFeixAAAAFeixAAAAFeixACwVlRUlHr37q2oqCinowCwkMcYY5wOUZdA38obAADUv0Afv5mJAWAtbgAJ4EwoMQCslZmZqQkTJigzM9PpKAAsRIkBAACuRIkBAACuRIkBAACuRIkBAACuxCXWAAAgILjEGgAAoBaUGADWys7O1nPPPafs7GynowCwECUGgLXKysq0f/9+lZWVOR0FgIUoMQAAwJUoMQAAwJUoMQAAwJUoMQCs1bhxYz388MNq3Lix01EAWCjE6QAAcDpRUVG67rrrnI4BwFLMxACwVlFRkT777DMVFRU5HQWAhSgxAKx19OhRLViwQEePHnU6CgALUWIAAIArUWIAAIArUWIAAIArUWIAWCsiIkJdunRRRESE01EAWMhjjDFOh6hLoG/lDQAA6l+gj9/MxACwlt/vV2lpqfx+v9NRAFiIEgPAWgcPHtQDDzyggwcPOh0FgIUoMQAAwJUoMQAAwJUoMQAAwJUoMQAAwJW4izUAa7Vo0UJvvPGGIiMjnY4CwEKUGADWCg4O5rOhAJwWbycBsFZubq5mzpyp3Nxcp6MAsBAlBoC1jh8/rq1bt+r48eNORwFgIUoMAABwJUoMAABwJUoMAABwJUoMAGvFxcVp2LBhiouLczoKAAtxiTUAa3m9XvXv39/pGAAsxUwMAGsdO3ZMmzZt0
rFjx5yOAsBClBgA1jp8+LDmzJmjw4cPOx0FgIUoMQAAwJUoMQAAwJUoMQAAwJUoMQCsFRoaqlatWik0NNTpKAAs5DHGGKdD1KWoqEher1c+n4872gIA4BKBPn4zEwMAAFyJEgPAWmlpaRo+fLjS0tKcjgLAQpQYANYyxqiyslIueNcbgAMoMQAAwJUoMQAAwJUoMQAAwJW4izUAazVr1kwzZsxQkyZNnI4CwEKUGADWCg0NVfPmzZ2OAcBSvJ0EwFp5eXmaN2+e8vLynI4CwEKUGADWKi4u1po1a1RcXOx0FAAWosQAAABXosQAAABXosQAAABXosQAsJbX69XAgQPl9XqdjgLAQlxiDcBacXFxGjx4sNMxAFiKmRgA1iorK9OuXbtUVlbmdBQAFqLEALBWdna2XnzxRWVnZzsdBYCFKDEAAMCVKDEAAMCVKDEAAMCVKDEArBUSEqK4uDiFhHAhJYBTeYwxxukQdSkqKpLX65XP51NMTIzTcQAAwFkI9PGbmRgAAOBKlBgA1srIyND48eOVkZHhdBQAFqLEALBWZWWl8vPzVVlZ6XQUABaixAAAAFeixAAAAFeixAAAAFeixACwVmJioqZMmaLExESnowCwEJ8gBcBa4eHh6tSpk9MxAFiKmRgA1srPz9fixYuVn5/vdBQAFqLEALCWz+fT8uXL5fP5nI4CwEKUGAAA4EqUGAAA4EqUGAAA4EqUGADWio6O1g033KDo6GinowCwkMcYY5wOUZdA38obAADUv0Afv5mJAWCt8vJyZWZmqry83OkoACxEiQFgraysLD355JPKyspyOgoAC7niE3tPvuNVVFTkcBIAF1JxcbEqKipUXFzM7z/gQid/bwN15oorzonJzMxUcnKy0zEAAMB5yMjIUPPmzet9u64oMX6/X4cOHVJ0dLQ8Hs95baOoqEjJycnKyMjg5OBzxNj9NIzf+WPsfhrG76dh/M7fybFLT0+Xx+NRUlKSgoLq/wwWV7ydFBQUVG8NLiYmhh/G88TY/TSM3/lj7H4axu+nYfzOn9frDejYcWIvAABwJUoMAABwpUumxISFhWnq1KkKCwtzOorrMHY/DeN3/hi7n4bx+2kYv/N3ocbOFSf2AgAA/NglMxMDAAAuLpQYAADgSpQYAADgSpQYAADgShdNicnPz9fQoUMVExOj2NhYPfDAAyopKTnjc8rKyjRu3DhddtllioqK0p133qnc3NxT1luwYIG6dOmi8PBwNWnSROPGjQvUbjgmkOMnSUePHlXz5s3l8XhUWFgYgD1wTiDGbtu2bRoyZIiSk5MVERGhjh07as6cOYHelQvitddeU6tWrRQeHq4ePXrom2++OeP6//jHP9ShQweFh4friiuu0IoVK2o8bozRc889p8TEREVERKhPnz7at29fIHfBUfU5fhUVFZo0aZKuuOIKRUZGKikpScOHD9ehQ4cCvRuOqO+fvR8aO3asPB6PZs+eXc+p7RGI8du9e7cGDhwor9eryMhIXXPNNUpPTz/7UOYi0bdvX9O1a1ezceNGs3btWpOSkmKGDBlyxueMHTvWJCcnmy+++MJs3rzZXHvttaZXr1411vnTn/5kkpKSzMKFC83+/fvNtm3bzEcffRTIXXFEoMbvpNTUVNOvXz8jyRQUFARgD5wTiLF76623zKOPPmrWrFljvv/+e/Puu++aiIgIM3fu3EDvTkAtXrzYhIaGmvnz55vvvvvOjB492sTGxprc3Nxa11+/fr0JDg42M2bMMLt27TJTpkwxDRo0MDt27KheZ/r06cbr9Zply5aZbdu2mYEDB5rLL7/clJaWXqjdumDqe/wKCwtNnz59zPvvv2/27NljNmzYYLp3726uuuqqC7lbF0QgfvZOWrp0qenatatJSkoyr7zySoD3xBmBGL/9+/ebuLg4M3HiRLN161azf/9+89FHH512m7W5KErMrl27jCTzr3/9q3rZJ598Yjwej8nKyqr1OYWFhaZBgwbmH//4R/Wy3bt3G0lmw4YNxhhj8vPzTUREhPn8888DuwMOC9T4nfT666+b66+/3nzxxRcXXYkJ9Nj90MMPP2xuvPHG+gvvgO7du5tx48ZVf19VVWWSkpLMH/7wh1rXv/vuu82vf/3rGst69OhhHnzwQWOMMX6/3yQkJJiXX365+vHCwkITFhZmFi1aFIA9cFZ9j19tvvnmGyPJHDx4sH5CWyJQY5eZmWmaNWtmdu7caVq2bHnRlphAjN8999xjhg0b9pNyXRRvJ23YsEGxsbG6+uqrq5f16dNHQUFB2rRpU63P2bJliyoqKtSnT5/qZR06dFCLFi20YcMGSdKqVavk9/uVlZWljh07qnnz5rr77ruVkZER2B26wAI1fpK0a9cu/fd//7feeeedgNz8y2mBHLsf8/l8iouLq7/wF1h5ebm2bNlSY7+DgoLUp0+f0+73hg0baqwvSbfeemv1+gcOHFBOTk6Ndbxer3r06HHGsXSjQIxfbXw+nzwej2JjY+sltw0CNXZ+v1/33XefJk6cqM6dOwcmvAUCMX5+v18ff/yx2rVrp1tvvVVNmjRRjx49tGzZsnPKdlEcVXJyctSkSZMay0JCQhQXF6ecnJzTPic0NPSUX9SmTZtWP+ff//63/H6/fv/732v27Nn68MMPlZ+fr5tvvlnl5eUB2RcnBGr8Tpw4oSFDhujll19WixYtApLdaYEaux/7+uuv9f7772vMmDH1ktsJeXl5qqqqUtOmTWssP9N+5+TknHH9k/+eyzbdKhDj92NlZWWaNGmShgwZclHd8DBQY/fHP/5RISEhevTRR+s/tEUCMX6HDx9WSUmJpk+frr59++qzzz7ToEGDdMcdd+irr74662xWl5jJkyfL4/Gc8WvPnj0Be32/36+Kigq9+uqruvXWW3Xttddq0aJF2rdvn1avXh2w160vTo/fU089pY4dO2rYsGEBe41AcXrsfmjnzp1KTU3V1KlTdcstt1yQ18Slp6KiQnfffbeMMfrLX/7idBzrbdmyRXPmzNGCBQvk8XicjuM6fr9fkpSamqoJEybo5z//uSZPnqwBAwbojTfeOOvthAQqYH14/PHHNWLEiDOu07p1ayUkJOjw4cM1lldWVio/P18JCQm1Pi8hIUHl5eUqLCys8Rdxbm5u9XMSExMlSZ06dap+vHHjxoqPjz+3s6cd4vT4ffnll9qxY4c+/PBDSf+5ikSS4uPj9cwzz+iFF144zz0LPKfH7qRdu3bppptu0pgxYzRlypTz2hdbxMfHKzg4+JQr2Grb75MSEhLOuP7Jf3Nzc6t/X09+//Of/7we0zsvEON30skCc/DgQX355ZcX1SyMFJixW7t2rQ4fPlxjlrmqqkqPP/64Zs+erbS0tPrdCQcFYvzi4+MVEhJS4/gqSR07dtS6devOPtxPOqPGEidPrty8eXP1spUrV57VyZUffvhh9bI9e/bUOLly7969RlKNE3uPHj1qgoKCzMqVKwO0NxdeoMZv//79ZseOHdVf8+fPN5LM119/fU5nn9ssUGNnjDE7d+40TZo0MRMn
TgzcDlxg3bt3N+PHj6/+vqqqyjRr1uyMJwcOGDCgxrKePXuecmLvzJkzqx/3+XwX9Ym99Tl+xhhTXl5ubr/9dtO5c2dz+PDhwAS3QH2PXV5eXo3/33bs2GGSkpLMpEmTzJ49ewK3Iw4JxM9ez549Tzmx9/bbb6/z6s4fuihKjDH/ucy1W7duZtOmTWbdunWmbdu2NQYiMzPTtG/f3mzatKl62dixY02LFi3Ml19+aTZv3mx69uxpevbsWWO7qamppnPnzmb9+vVmx44dZsCAAaZTp06mvLz8gu3bhRCo8fuh1atXX3RXJxkTmLHbsWOHady4sRk2bJjJzs6u/nL7QWbx4sUmLCzMLFiwwOzatcuMGTPGxMbGmpycHGOMMffdd5+ZPHly9frr1683ISEhZubMmWb37t1m6tSptV5iHRsbaz766COzfft2k5qaelFfYl2f41deXm4GDhxomjdvbr799tsaP2snTpxwZB8DJRA/ez92MV+dFIjxW7p0qWnQoIGZN2+e2bdvn5k7d64JDg42a9euPetcF02JOXr0qBkyZIiJiooyMTExZuTIkaa4uLj68QMHDhhJZvXq1dXLSktLzcMPP2waNWpkGjZsaAYNGmSys7NrbNfn85lRo0aZ2NhYExcXZwYNGmTS09Mv1G5dMIEavx+6WEtMIMZu6tSpRtIpXy1btryAexYYc+fONS1atDChoaGme/fuZuPGjdWPXX/99eb++++vsf4HH3xg2rVrZ0JDQ03nzp3Nxx9/XONxv99vnn32WdO0aVMTFhZmbrrpJrN3794LsSuOqM/xO/mzWdvXD39eLxb1/bP3YxdziTEmMOP31ltvmZSUFBMeHm66du1qli1bdk6ZPMb8/xMVAAAAXMTqq5MAAABOhxIDAABciRIDAABciRIDAABciRIDAABciRIDAABciRIDAABciRIDAABciRIDWO6GG27QY489FpBt//KXv9Tf//73gGy7vLxcrVq10ubNm89q/WeffVZjxowJSBanXHvttVqyZInTMYCLFiUGuEQtX75cubm5Gjx4cPWyVq1aafbs2aes+/zzz9e4K/Tzzz8vj8cjj8ej4OBgJScna8yYMcrPz69eJzQ0VE888YQmTZpUZ5acnBzNmTNHzzzzTPWy4uJiPfbYY2rZsqUiIiLUq1cv/etf/6rxvBEjRlTnOPnVt2/f6sdPnDih++67TzExMWrXrp0+//zzGs9/+eWX9cgjj9SZT5KKior0zDPPqEOHDgoPD1dCQoL69OmjpUuXVt+h/ceFc8qUKZo8ebL8fv9ZvQaAc0OJAS5Rr776qkaOHKmgoPP7b6Bz587Kzs5Wenq63n77bX366ad66KGHaqwzdOhQrVu3Tt99990Zt/W3v/1NvXr1UsuWLauX/fa3v9WqVav07rvvaseOHbrlllvUp08fZWVl1Xhu3759lZ2dXf21aNGi6sfmzZunLVu2aMOGDRozZozuvffe6sJx4MABvfnmm3rppZfq3NfCwkL16tVL77zzjp566ilt3bpV//znP3XPPffoySeflM/nq/V5/fr1U3FxsT755JM6XwPAuaPEAC5TUFCg4cOHq1GjRmrYsKH69eunffv21VjnzTffVHJysho2bKhBgwZp1qxZio2NrX78yJEj+vLLL3Xbbbedd46QkBAlJCSoWbNm6tOnj37zm99o1apVNdZp1KiRevfurcWLF59xW4sXL66RpbS0VEuWLNGMGTP0y1/+UikpKXr++eeVkpKiv/zlLzWeGxYWpoSEhOqvRo0aVT+2e/duDRw4UJ07d9a4ceN05MgR5eXlSZIeeugh/fGPf1RMTEyd+/r0008rLS1NmzZt0v33369OnTqpXbt2Gj16tL799ltFRUXV+rzg4GD179+/zv0HcH4oMYDLjBgxQps3b9by5cu1YcMGGWPUv39/VVRUSJLWr1+vsWPH6ne/+52+/fZb3XzzzafMNqxbt04NGzZUx44d6yVTWlqaVq5cqdDQ0FMe6969u9auXXva5+bn52vXrl26+uqrq5dVVlaqqqpK4eHhNdaNiIjQunXraixbs2aNmjRpovbt2+uhhx7S0aNHqx/r2rWr1q1bp9LSUq1cuVKJiYmKj4/XwoULFR4erkGDBtW5b36/X4sXL9bQoUOVlJR0yuNRUVEKCQk57fPr2n8A5+/0v3kArLNv3z4tX75c69evV69evSRJCxcuVHJyspYtW6bf/OY3mjt3rvr166cnnnhCktSuXTt9/fXX+t///d/q7Rw8eFBNmzat9a2kSZMmacqUKTWWlZeXq1OnTjWW7dixQ1FRUaqqqlJZWZkkadasWadsLykpSQcPHjztPqWnp8sYU6MgREdHq2fPnpo2bZo6duyopk2batGiRdqwYYNSUlKq1+vbt6/uuOMOXX755fr+++/19NNPq1+/ftqwYYOCg4M1atQobd++XZ06dVJ8fLw++OADFRQU6LnnntOaNWs0ZcoULV68WG3atNH8+fPVrFmzU/Ll5eWpoKBAHTp0OO0+nElSUpIyMjLk9/vP+607ALWjxAAusnv3boWEhKhHjx7Vyy677DK1b99eu3fvliTt3bv3lBmG7t271ygxpaWlp8xynDRx4kSNGDGixrJXX31V//znP2ssa9++vZYvX66ysjK99957+vbbb2s9STYiIkLHjx8/7T6VlpZK0il53n33XY0aNUrNmjVTcHCwrrzySg0ZMkRbtmypXueHJyVfccUV6tKli9q0aaM1a9bopptuUoMGDfTaa6/V2O7IkSP16KOP6v/+7/+0bNkybdu2TTNmzNCjjz5a65VEJ8+hOV8RERHy+/06ceKEIiIiftK2ANTEnwXAJSg+Pl4FBQWnfSwlJaXGV1xc3CnrhYaGKiUlRT/72c80ffp0BQcH64UXXjhlvfz8fDVu3PiMWSSdkqdNmzb66quvVFJSooyMDH3zzTeqqKhQ69atT7ut1q1bKz4+Xvv376/18dWrV+u7777T+PHjtWbNGvXv31+RkZG6++67tWbNmlqf07hxY8XGxmrPnj2nfd0zyc/PV2RkJAUGCABKDOAiHTt2VGVlpTZt2lS97OjRo9q7d2/12z3t27c/5VLkH3/frVs35eTknLbInI8pU6Zo5syZOnToUI3lO3fuVLdu3U77vDZt2igmJka7du2q9fHIyEglJiaqoKBAK1euVGpq6mm3lZmZqaNHjyoxMfGUx8rKyjRu3Dj99a9/VXBwsKqqqqrPI6qoqFBVVVWt2wwKCtLgwYO1cOHCU/ZNkkpKSlRZWXnaTHXtP4DzR4kBXKRt27ZKTU3V6NGjtW7dOm3btk3Dhg1Ts2bNqg/ujzzyiFasWKFZs2Zp3759+utf/6pPPvlEHo+nejvdunVTfHy81q9fX2/ZevbsqS5duuj3v/99jeVr167VLbfcctrnBQUFqU+fPqecsLty5Up9+umnOnDggFatWqUbb7xRHTp00MiRIyX9pzxMnDhRGzduVFpamr744gulpqYqJSVFt9566ymvM23aNPXv37+6UPTu3VtLly7V9u3
b9ec//1m9e/c+bcaXXnpJycnJ6tGjh9555x3t2rVL+/bt0/z589WtWzeVlJSc9rl17T+A80eJAVzm7bff1lVXXaUBAwaoZ8+eMsZoxYoVatCggaT/HJzfeOMNzZo1S127dtWnn36qCRMm1DjnJDg4WCNHjtTChQvrNduECRP0t7/9TRkZGZKkDRs2yOfz6a677jrj8377299q8eLFNT4Uzufzady4cerQoYOGDx+u6667TitXrqzez+DgYG3fvl0DBw5Uu3bt9MADD+iqq67S2rVrFRYWVmP7O3fu1AcffFDj7a677rpLv/71r/WLX/xC27dv15w5c06bLy4uThs3btSwYcP04osvqlu3bvrFL36hRYsW6eWXX5bX6631eVlZWfr666+rixeA+uUxP/WsNQDWGz16tPbs2VPjUt+cnBx17txZW7durfEhc/XpnnvuUdeuXfX000+fcT1jjHr06KEJEyZoyJAhAcnihEmTJqmgoEDz5s1zOgpwUWImBrgIzZw5U9u2bdP+/fs1d+5c/c///I/uv//+GuskJCTorbfeUnp6ekAylJeX64orrtCECRPqXNfj8WjevHlnPLfEjZo0aaJp06Y5HQO4aDETA1yETl5tU1xcrNatW+uRRx7R2LFjnY4FAPWKEgMAAFyJt5MAAIArUWIAAIArUWIAAIArUWIAAIArUWIAAIArUWIAAIArUWIAAIArUWIAAIAr/T8D3i+tuR89QgAAAABJRU5ErkJggg==", + "image/png": "iVBORw0KGgoAAAANSUhEUgAAAiQAAAGwCAYAAACZ7H64AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjkuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8hTgPZAAAACXBIWXMAAA9hAAAPYQGoP6dpAAAqpElEQVR4nO3deXRV9b3+8eckIQMZDgkBkkAQmYcKRSwU0KqVUqAKtVoFQQQsiIBW6gAqgxZtvcDFUpRasOBFqdEKV1kFpShiAQFlRgMs8BJIQhLIdBIgIdP394eL/EyBhLBz+GbL+7VWlot99jl5PpJkP3zP3tkeY4wRAACARQG2AwAAAFBIAACAdRQSAABgHYUEAABYRyEBAADWUUgAAIB1FBIAAGBdkO0Al6KiokLHjx9XZGSkPB6P7TgAAOASGGNUWFiohIQEBQRUvwbiikJy/PhxJSYm2o4BAAAuQ2pqqlq0aFHtPq4oJJGRkZK+HSgqKspyGgCoP1JSUjRr1ixNnz5drVq1sh0HqKKgoECJiYmVx/HquKKQnHubJioqikICAN8RGRmpBg0aKDIykp+PqLcu5XQLTmoFAADWUUgAwMUCAwMVGRmpwMBA21EARzxuuNtvQUGBvF6vfD4fS5IAALhEbY7frJAAAADrKCQA4GJpaWmaPHmy0tLSbEcBHKGQAICLlZaWKisrS6WlpbajAI5QSAAAgHUUEgAAYB2FBAAAWEchAQAXi4uL09SpUxUXF2c7CuCIK351PADgwsLCwtS1a1fbMQDHWCEBABfLz8/XihUrlJ+fbzsK4AiFBABcLC8vTytWrFBeXp7tKIAjFBIAAGAdhQQAAFhHIQEAANZRSADAxSIiItS3b19FRETYjgI44jHGGNshalKb2xcDAID6oTbHb1ZIAMDFuLkevi8oJADgYmlpaZo8ebLS0tJsRwEcoZAAAADrKCQAAMA6CgkAALCOQgIAAKzjsl8AAOAXXPYLAABchUICAC6WkZGhGTNmKCMjw3YUwBEKCQC4WHFxsQ4fPqzi4mLbUQBHKCQAAMA6CgkAALCOQgIAAKyjkACAizVp0kQTJkxQkyZNbEcBHAmyHQAAcPkiIiJ044032o4BOMYKCQC4WEFBgf71r3+poKDAdhTAEQoJALhYTk6O3njjDeXk5NiOAjhCIQEAANZRSAAAgHUUEgAAYB2FBABcLCwsTF27dlVYWJjtKIAjHmOMsR2iJrW5fTEAAKgfanP8ZoUEAFysoqJCRUVFqqiosB0FcIRCAgAudvToUT344IM6evSo7SiAIxQSAABgHYUEAABYRyEBAADWUUgAAIB13O0XAFysZcuWeu211xQeHm47CuAIhQQAXCwwMJDfz4TvBd6yAQAXy8rK0ty5c5WVlWU7CuAIhQQAXOzMmTPauXOnzpw5YzsK4AiFBAAAWEchAQAA1lFIAACAdRQSAHCxmJgYjRgxQjExMbajAI5w2S8AuJjX69WgQYNsxwAcY4UEAFzs9OnT2rZtm06fPm07CuAIhQQAXOzEiROaP3++Tpw4YTsK4AiFBAAAWEchAQAA1lFIAACAdRQSAHCx4OBgtWrVSsHBwbajAI54jDHGdoiaFBQUyOv1yufzcVdLAABcojbHb1ZIAACAdRQSAHCxlJQUjRw5UikpKbajAI5QSADAxYwxKisrkwvefQeqRSEBAADWUUgAAIB1FBIAAGAdd/sFABdr3ry5Zs+eraZNm9qOAjhCIQEAFwsODlaLFi1sxwAc4y0bAHCx7OxsLVq0SNnZ2bajAI5QSADAxQoLC7VhwwYVFhbajgI4QiEBAADWUUgAAIB1FBIAAGAdhQQAXMzr9Wrw4MHyer22owCOcNkvALhYTEyMhg4dajsG4BgrJADgYsXFxUpOTlZxcbHtKIAjFBIAcLGMjAy98MILysjIsB0FcIRCAgAArKOQAAAA6ygkAADAOgoJALhYUFCQYmJiFBTERZNwN48xxtgOUZOCggJ5vV75fD5FRUXZjgMAAC5BbY7frJAAAADrKCQA4GKpqamaNGmSUlNTbUcBHKGQAICLlZWVKTc3V2VlZbajAI5QSAAAgHUUEgAAYB2FBAAAWEchAQAXi4+P17Rp0xQfH287CuAIv0kHAFwsNDRUnTt3th0DcIwVEgBwsdzcXCUlJSk3N9d2FMARCgkAuJjP59OqVavk8/lsRwEcoZAAAADrKCQAAMA6CgkAALCOQgIALhYZGalbbrlFkZGRtqMAjniMMcZ2iJrU5vbFAACgfqjN8ZsVEgBwsZKSEqWlpamkpMR2FMARCgkAuFh6erqeeuoppaen244COEIhAQAA1lFIAACAdRQSAABgHYUEAFzM4/EoKChIHo/HdhTAES77BQAAfsFlvwAAwFUoJADgYunp6XrmmWe47BeuRyEBABcrKSlRSkoKvxgNrkchAQAA1lFIAACAdRQSAABgHYUEAFysadOm+u1vf6umTZvajgI4EmQ7AADg8oWHh6tXr162YwCOsUICAC7m8/m0Zs0a+Xw+21EARygkAOBiubm5euutt5Sbm2s7CuAIhQQAAFhHIQEAANZRSAAAgHUUEgBwsYYNG+r6669Xw4YNbUcBHPEYY4ztEDWpze2LAQBA/VCb4zcrJADgYuXl5SooKFB5ebntKIAjFBIAcLFjx45p/PjxOnbsmO0ogCMUEgAAYB2FBAAAWEchAQAA1lFIAACAdVz2CwAuVlFRobNnzyokJEQBAfwbE/VLbY7fQVcoEwDADwICAhQWFmY7BuAYdRo
AXCwzM1MvvfSSMjMzbUcBHKGQAICLFRUVae/evSoqKrIdBXCEQgIAAKyjkAAAAOsoJAAAwDoKCQC4WOPGjTVq1Cg1btzYdhTAES77BQAXi4qKUv/+/W3HABxjhQQAXOzUqVPatGmTTp06ZTsK4AiFBABc7OTJk1q4cKFOnjxpOwrgCIUEAABYRyEBAADWUUgAAIB1FBIAcLHQ0FC1bdtWoaGhtqMAjniMMcZ2iJrU5vbFAACgfqjN8ZsVEgAAYB2FBABc7MiRI7rvvvt05MgR21EARygkAADAOgoJAACwjkICAACso5AAAADruNsvALhYixYt9PLLLysmJsZ2FMARCgkAuFiDBg3UrFkz2zEAx3jLBgBc7OTJk3r11Ve52y9cj0ICAC526tQpbd68WadOnbIdBXCEQgIAAKyjkAAAAOsoJAAAwDoKCQC4WHR0tO666y5FR0fbjgI4wmW/AOBijRo10l133WU7BuAYKyQA4GJFRUXau3evioqKbEcBHKGQAICLZWZm6qWXXlJmZqbtKIAjFBIAAGAdhQQAAFhHIQEAANZRSADAxc7dXK9Bgwa2owCOeIwxxnaImhQUFMjr9crn8ykqKsp2HAAAcAlqc/xmhQQAAFhHIQEAFzt27JgeeughHTt2zHYUwBEKCQC4WHl5uQoLC1VeXm47CuAIhQQAAFhHIQEAANZRSAAAgHUUEgBwsfj4eD3//POKj4+3HQVwJMh2AADA5QsNDVW7du1sxwAcY4UEAFwsNzdXb731lnJzc21HARyhkACAi/l8Pq1Zs0Y+n892FMARCgkAALCOQgIAAKzjpFYAV9TNN9+s1NTUavdJTEzUZ599doUSAagPrvoVktatW6t169a2YwBXjdTU1Grvu3Ls2LEaCwv+v8jISP3sZz9TZGSk7ShwsfpwLLS6QlJSUqLg4GCbEQBY0LJlS/3f//3fBR+z/UPRbWJjYzV69GjbMQDHLnmFZNGiRUpISFBFRUWV7UOGDNGYMWP0zTffaMiQIWrWrJkiIiL0ox/9SB9//HGVfVu1aqVZs2Zp5MiRioqK0rhx4+pmCgC4Sp09e1ZHjhzR2bNnbUcBHPEYY8yl7JiXl6e4uDitWbNGt912m6Rvr3+Pj4/XmjVrFBsbq61bt6pv374KCQnRsmXLNHfuXB08eFAtW7aU9G0hycvL04wZM/TLX/5SktSmTZvzPtfZs2erfHMVFBQoMTFRPp9PUVFRTmeuonXr1kpNTVViYmKdvi6ACzv3/VbdCgnfk5eutLRUeXl5io6OVoMGDWzHgUvV9H15uQoKCuT1ei/p+H3JKyTR0dEaOHCg/v73v1due++99xQbG6tbb71V3bp100MPPaQf/OAHateunWbNmqU2bdpo1apVVV7npz/9qR5//HG1adPmgmVEkv74xz/K6/VWfvCDCQCA77danUMyfPhwjR07VgsXLlRISIiWL1+uoUOHKiAgQKdOndJzzz2n1atXKyMjQ2VlZSoqKjrv5LUbbrihxs/z9NNP63e/+13ln8+tkPiLP1ohgAu7lHNE+J68dEeOHNGzzz6rF198Uddee63tOHCp+nDuVq0KyR133CFjjFavXq0f/ehH2rhxo15++WVJ0hNPPKF169Zp7ty5atu2rcLCwnT33XerpKSkymuEh4fX+HlCQkIUEhJSm2gAAMDFalVIQkND9atf/UrLly/X4cOH1aFDB11//fWSpM2bN2vUqFG68847JUmnTp1SSkpKnQcG4H7Hjh276L/Ijh07VnneGWoWEBCg0NBQBQRc9b/FAS5X68t+hw8frttvv11ff/21RowYUbm9Xbt2Wrlype644w55PB5Nnz79vCty6iOWhYErq6a3X1u2bMl5Y7VwzTXXaMmSJbZjwOXqw7Gw1oXkpz/9qWJiYnTw4EHdd999ldvnzZunMWPGqE+fPoqNjdWUKVNUUFBQp2EBuB+/gRXAhVzyZb821eayIQC4mqSnp+tPf/qTHnvsMTVv3tx2HKAKv1z2CwCof0pKSpSenn7eBQSA21BIAACAdRQSAABgHYUEAABYRyEBABdr1qyZHn/8cTVr1sx2FMCRWl/2CwCoPxo2bKgePXrYjgE4xgoJALhYfn6+PvjgA+Xn59uOAjhCIQEAF8vLy9M777yjvLw821EARygkAADAOgoJAACwjkICAACso5AAgIuFh4erV69eCg8Ptx0FcISb6wEAAL/g5noAcJUoKytTbm6uysrKbEcBHKGQAICLpaamatKkSUpNTbUdBXCEQgIAAKyjkAAAAOsoJAAAwDoKCQAAsI7LfgHAxYwxKisrU1BQkDwej+04QBW1OX4HXaFMAAA/8Hg8atCgge0YgGO8ZQMALpaRkaFZs2YpIyPDdhTAEQoJALhYcXGx9u/fr+LiYttRAEcoJAAAwDoKCQAAsI5CAgAArKOQAICLxcbGauzYsYqNjbUdBXCEy34BwMUiIyN166232o4BOMYKCQC4WGFhoT799FMVFhbajgI4QiEBABfLzs7W4sWLlZ2dbTsK4AiFBAAAWEchAQAA1lFIAACAdRQSAHCx0NBQderUSaGhobajAI54jDHGdoia1Ob2xQAAoH6ozfGbFRIAcDFjjEpLS+WCf1sC1aKQAICLpaSk6IEHHlBKSortKIAjFBIAAGAdhQQAAFhHIQEAANZRSAAAgHXc7RcAXCwxMVGvvPIKvxIBrkchAQAXCwoKUkxMjO0YgGO8ZQMALnbixAnNnz9fJ06csB0FcIRCAgAudvr0aW3btk2nT5+2HQVwhEICAACso5AAAADrKCQAAMA6CgkAuFh0dLTuvfdeRUdH244COMJlvwDgYo0aNdKQIUNsxwAcY4UEAFzszJkz2rFjh86cOWM7CuAIhQQAXCwrK0v//d//raysLNtRAEcoJAAAwDoKCQAAsI5CAgAArKOQAICLBQcHq3nz5goODrYdBXDEY4wxtkPUpKCgQF6vVz6fj1tsAwDgErU5frNCAgAArKOQAICLHT16VGPGjNHRo0dtRwEcoZAAgItVVFSouLhYFRUVtqMAjlBIAACAdRQSAABgHYUEAABYRyEBABdLSEjQiy++qISEBNtRAEeCbAcAAFy+kJAQXXvttbZjAI6xQgIALpadna2lS5cqOzvbdhTAEQoJALhYYWGh1q1bp8LCQttRAEcoJAAAwDoKCQAAsI5CAgAArKOQAICLeb1eDRo0SF6v13YUwBEu+wUAF4uJidGIESNsxwAcY4UEAFysuLhYhw4dUnFxse0ogCMUEgBwsYyMDM2cOVMZGRm2owCOUEgAAIB1FBIAAGAdhQQAAFhHIQEAFwsMDFRkZKQCAwNtRwEc8RhjjO0QNSkoKJDX65XP51NUVJTtOAAA4BLU5vjNCgkAALCOQgIALpaWlqbJkycrLS3NdhTAEQoJALhYaWmpsrKyVFpaajsK4AiFBAAAWEchAQAA1lFIAACAdRQSAHCxuLg4TZ06VXFxcbajAI4E2Q4AALh8YWFh6tq1q+0YgGOskACAi+Xn52vFihXKz8+3HQVwhEICAC6Wl5
enFStWKC8vz3YUwBEKCQAAsI5CAgAArKOQAAAA6ygkAOBiERER6tu3ryIiImxHARzxGGOM7RA1qc3tiwEAQP1Qm+M3KyQA4GLcXA/fFxQSAHCxtLQ0TZ48WWlpabajAI5QSAAAgHUUEgAAYB2FBAAAWEchAQAA1nHZLwAA8Asu+wUAAK5CIQEAF8vIyNCMGTOUkZFhOwrgCIUEAFysuLhYhw8fVnFxse0ogCMUEgAAYB2FBAAAWEchAQAA1lFIAMDFmjRpogkTJqhJkya2owCOBNkOAAC4fBEREbrxxhttxwAcY4UEAFysoKBA//rXv1RQUGA7CuAIhQQAXCwnJ0dvvPGGcnJybEcBHKGQAAAA6ygkAADAOgoJAACwjkICAC4WFhamrl27KiwszHYUwBGPMcbYDlGT2ty+GAAA1A+1OX6zQgIALlZRUaGioiJVVFTYjgI4QiEBABc7evSoHnzwQR09etR2FMARCgkAALCOQgIAAKyjkAAAAOsoJAAAwDru9gsALtayZUu99tprCg8Ptx0FcIRCAgAuFhgYyO9nwvcCb9kAgItlZWVp7ty5ysrKsh0FcIRCAgAudubMGe3cuVNnzpyxHQVwhEICAACso5AAAADrKCQAAMA6CgkAuFhMTIxGjBihmJgY21EAR7jsFwBczOv1atCgQbZjAI6xQgIALnb69Glt27ZNp0+fth0FcIRCAgAuduLECc2fP18nTpywHQVwhEICAACso5AAAADrKCQAAMA6CgkAuFhwcLBatWql4OBg21EARzzGGGM7RE0KCgrk9Xrl8/m4qyUAAC5Rm+M3KyQAAMA6CgkAuFhKSopGjhyplJQU21EARygkAOBixhiVlZXJBe++A9WikAAAAOsoJAAAwDoKCQAAsI67/QKAizVv3lyzZ89W06ZNbUcBHKGQAICLBQcHq0WLFrZjAI7xlg0AuFh2drYWLVqk7Oxs21EARygkAOBihYWF2rBhgwoLC21HARyhkAAAAOsoJAAAwDoKCQAAsI5CAgAu5vV6NXjwYHm9XttRAEe47BcAXCwmJkZDhw61HQNwjBUSAHCx4uJiJScnq7i42HYUwBEKCQC4WEZGhl544QVlZGTYjgI4QiEBAADWUUgAAIB1FBIAAGAdhQQAXCwoKEgxMTEKCuKiSbibxxhjbIeoSUFBgbxer3w+n6KiomzHAQAAl6A2x29WSAAAgHUUEgBwsdTUVE2aNEmpqam2owCOUEgAwMXKysqUm5ursrIy21EARygkAADAOgoJAACwjkICAACso5AAgIvFx8dr2rRpio+Ptx0FcITfpAMALhYaGqrOnTvbjgE4xgoJALhYbm6ukpKSlJubazsK4AiFBABczOfzadWqVfL5fLajAI5QSAAAgHUUEgAAYB2FBAAAWEchAQAXi4yM1C233KLIyEjbUQBHPMYYYztETWpz+2IAAFA/1Ob4zQoJALhYSUmJ0tLSVFJSYjsK4AiFBABcLD09XU899ZTS09NtRwEcccVvaj33rlJBQYHlJABQvxQWFqq0tFSFhYX8jES9c+5r8lLODnHFOSRpaWlKTEy0HQMAAFyG1NRUtWjRotp9XFFIKioqdPz4cUVGRsrj8VzScwoKCpSYmKjU1NSr7kRYZmd2Zr96MDuz1+fZjTEqLCxUQkKCAgKqP0vEFW/ZBAQE1NisLiYqKqpe/2X5E7Mz+9WG2Zn9auOG2b1e7yXtx0mtAADAOgoJAACw7ntbSEJCQjRz5kyFhITYjnLFMTuzX22YndmvNt/H2V1xUisAAPh++96ukAAAAPegkAAAAOsoJAAAwDoKCQAAsM61hSQ3N1fDhw9XVFSUGjVqpAcffFCnTp2q9jnFxcWaOHGiGjdurIiICN11113Kysqqso/H4znvIykpyZ+j1Jq/Zj8nJydHLVq0kMfjUX5+vh8muHz+mD0nJ0cDBgxQQkKCQkJClJiYqEmTJtW7+4L4Y/Y9e/Zo2LBhSkxMVFhYmDp16qT58+f7e5Ra89fX/KOPPqoePXooJCREP/zhD/04waV79dVX1apVK4WGhqpXr1764osvqt3/H//4hzp27KjQ0FBdd911WrNmTZXHjTGaMWOG4uPjFRYWpn79+unQoUP+HOGy1fXsK1euVP/+/dW4cWN5PB7t3r3bj+mdqcvZS0tLNWXKFF133XUKDw9XQkKCRo4cqePHj/t7DGeMSw0YMMB069bNbN261WzcuNG0bdvWDBs2rNrnjB8/3iQmJppPPvnEbN++3fz4xz82ffr0qbKPJLN06VKTkZFR+VFUVOTPUWrNX7OfM2TIEDNw4EAjyeTl5flhgsvnj9lzc3PNwoULzZdffmlSUlLMxx9/bDp06FDj615p/pj9b3/7m3n00UfNhg0bzDfffGPefPNNExYWZhYsWODvcWrFX1/zjzzyiHnllVfM/fffb7p16+bHCS5NUlKSCQ4ONkuWLDFff/21GTt2rGnUqJHJysq64P6bN282gYGBZvbs2SY5OdlMmzbNNGjQwOzbt69yn5deesl4vV7z/vvvmz179pjBgweba6+9tt79XPPH7MuWLTPPP/+8Wbx4sZFkdu3adYWmqZ26nj0/P9/069fPvPPOO+bAgQNmy5YtpmfPnqZHjx5Xcqxac2UhSU5ONpLMl19+Wbntww8/NB6Px6Snp1/wOfn5+aZBgwbmH//4R+W2/fv3G0lmy5Ytldskmf/93//1W3an/Dm7McYsXLjQ3HzzzeaTTz6pd4XE37N/1/z5802LFi3qLrxDV3L2CRMmmFtvvbXuwjt0JWafOXNmvSgkPXv2NBMnTqz8c3l5uUlISDB//OMfL7j/PffcY37xi19U2darVy/z0EMPGWOMqaioMHFxcWbOnDmVj+fn55uQkBDz9ttv+2GCy1fXs3/XkSNH6nUh8efs53zxxRdGkjl69GjdhPYDV75ls2XLFjVq1Eg33HBD5bZ+/fopICBA27Ztu+BzduzYodLSUvXr169yW8eOHdWyZUtt2bKlyr4TJ05UbGysevbsqSVLllzSbZOvFH/OnpycrN///vdatmxZjTdBssHff+/nHD9+XCtXrtTNN99ctwM4cKVmlySfz6eYmJi6C+/QlZzdppKSEu3YsaNK5oCAAPXr1++imbds2VJlf0n6+c9/Xrn/kSNHlJmZWWUfr9erXr161av/D/6Y3S2u1Ow+n08ej0eNGjWqk9z+UP+OOpcgMzNTTZs2rbItKChIMTExyszMvOhzgoODz/vLaNasWZXn/P73v9e7776rdevW6a677tKECRO0YMGCOp/hcvlr9rNnz2rYsGGaM2eOWrZs6ZfsTvnz712Shg0bpoYNG6p58+aKiorS66+/Xqf5nfD37Od8/vnneueddzRu3Lg6yV0XrtTstmVnZ6u8vFzNmjWrsr26zJmZmdXuf+6/tXlNG/wxu1tcidmLi4s1ZcoUDRs2rF7fiK9eFZKpU6de8KTS734cOHDArxmmT5+uvn37qnv37poyZYqeeuopzZkzx6+fU7I/+9NPP61OnTppxIgRfvscF2N79nNef
vll7dy5Ux988IG++eYb/e53v/P756wvs0vSV199pSFDhmjmzJnq37+/3z9ffZod+L4qLS3VPffcI2OM/vKXv9iOU60g2wG+6/HHH9eoUaOq3ad169aKi4vTiRMnqmwvKytTbm6u4uLiLvi8uLg4lZSUKD8/v8q/mrKysi76HEnq1auXZs2apbNnz/r1ngG2Z1+/fr327dun9957T5Iq36aKjY3Vs88+q+eff/4yJ6uZ7dm/u29cXJw6duyomJgY3XTTTZo+fbri4+Mva65LUV9mT05O1m233aZx48Zp2rRplzVLbdWX2euL2NhYBQYGnnclUHWZ4+Liqt3/3H+zsrKqfB1nZWXVm6uKJP/M7hb+nP1cGTl69KjWr19fr1dHJLnzKptzJ7lt3769ctvatWsv6SS39957r3LbgQMHajzB74UXXjDR0dF1F94hf81++PBhs2/fvsqPJUuWGEnm888/v+iZ3lfalfx7/+yzz4wkc+TIkTrL74Q/Z//qq69M06ZNzZNPPum/ARy4En/v9emk1kmTJlX+uby83DRv3rzakxtvv/32Ktt69+593kmtc+fOrXzc5/PV25Na63L273LDSa11PXtJSYn55S9/abp06WJOnDjhn+B1zJWFxJhvLwPs3r272bZtm9m0aZNp165dlcsA09LSTIcOHcy2bdsqt40fP960bNnSrF+/3mzfvt307t3b9O7du/LxVatWmcWLF5t9+/aZQ4cOmYULF5qGDRuaGTNmXNHZauKP2f/Tp59+Wu+usjHGP7OvXr3aLFmyxOzbt88cOXLE/POf/zSdOnUyffv2vaKz1cQfs+/bt880adLEjBgxosql7vXtB5i/vuYPHTpkdu3aZR566CHTvn17s2vXLrNr1y5z9uzZKzbbdyUlJZmQkBDzxhtvmOTkZDNu3DjTqFEjk5mZaYwx5v777zdTp06t3H/z5s0mKCjIzJ071+zfv9/MnDnzgpf9NmrUyHzwwQdm7969ZsiQIfX2st+6nj0nJ8fs2rXLrF692kgySUlJZteuXSYjI+OKz1edup69pKTEDB482LRo0cLs3r27yve2ra/tS+HaQpKTk2OGDRtmIiIiTFRUlBk9erQpLCysfPxcI/70008rtxUVFZkJEyaY6Oho07BhQ3PnnXdW+cL88MMPzQ9/+EMTERFhwsPDTbdu3cxrr71mysvLr+RoNfLH7P+pvhYSf8y+fv1607t3b+P1ek1oaKhp166dmTJlylUx+8yZM42k8z6uueaaKzhZzfz1NX/zzTdfcH6bK2MLFiwwLVu2NMHBwaZnz55m69atVfI+8MADVfZ/9913Tfv27U1wcLDp0qWLWb16dZXHKyoqzPTp002zZs1MSEiIue2228zBgwevxCi1VtezL1269IJ/vzNnzrwC09ROXc5+7vvhQh/f/R6pbzzG1KNrWgEAwFWpXl1lAwAArk4UEgAAYB2FBAAAWEchAQAA1lFIAACAdRQSAABgHYUEAABYRyEBAADWUUiAeu6WW27RY4895pfX/slPfqK///3vfnntkpIStWrVStu3b7+k/adPn65x48b5JYstP/7xj7VixQrbMQBXoJAAV6lVq1YpKytLQ4cOrdzWqlUr/elPfzpv3+eee67K3WGfe+45eTweeTweBQYGKjExUePGjVNubm7lPsHBwXriiSc0ZcqUGrNkZmZq/vz5evbZZyu3FRYW6rHHHtM111yjsLAw9enTR19++WWV540aNaoyx7mPAQMGVD5+9uxZ3X///YqKilL79u318ccfV3n+nDlz9Mgjj9SYT5IKCgr07LPPqmPHjgoNDVVcXJz69eunlStXVt4d+z/L47Rp0zR16lRVVFRc0ucArmYUEuAq9ec//1mjR49WQMDl/Rjo0qWLMjIydOzYMS1dulQfffSRHn744Sr7DB8+XJs2bdLXX39d7Wu9/vrr6tOnj6655prKbb/5zW+0bt06vfnmm9q3b5/69++vfv36KT09vcpzBwwYoIyMjMqPt99+u/KxRYsWaceOHdqyZYvGjRun++67r7I8HDlyRIsXL9aLL75Y46z5+fnq06ePli1bpqefflo7d+7Uv//9b91777166qmn5PP5Lvi8gQMHqrCwUB9++GGNnwO42lFIAJfJy8vTyJEjFR0drYYNG2rgwIE6dOhQlX0WL16sxMRENWzYUHfeeafmzZunRo0aVT5+8uRJrV+/Xnfcccdl5wgKClJcXJyaN2+ufv366de//rXWrVtXZZ/o6Gj17dtXSUlJ1b5WUlJSlSxFRUVasWKFZs+erZ/85Cdq27atnnvuObVt21Z/+ctfqjw3JCREcXFxlR/R0dGVj+3fv1+DBw9Wly5dNHHiRJ08eVLZ2dmSpIcfflj/9V//paioqBpnfeaZZ5SSkqJt27bpgQceUOfOndW+fXuNHTtWu3fvVkRExAWfFxgYqEGDBtU4PwAKCeA6o0aN0vbt27Vq1Spt2bJFxhgNGjRIpaWlkqTNmzdr/Pjx+u1vf6vdu3frZz/72XmrAJs2bVLDhg3VqVOnOsmUkpKitWvXKjg4+LzHevbsqY0bN170ubm5uUpOTtYNN9xQua2srEzl5eUKDQ2tsm9YWJg2bdpUZduGDRvUtGlTdejQQQ8//LBycnIqH+vWrZs2bdqkoqIirV27VvHx8YqNjdXy5csVGhqqO++8s8bZKioqlJSUpOHDhyshIeG8xyMiIhQUFHTR59c0P4BvXfy7CEC9c+jQIa1atUqbN29Wnz59JEnLly9XYmKi3n//ff3617/WggULNHDgQD3xxBOSpPbt2+vzzz/XP//5z8rXOXr0qJo1a3bBt2umTJmiadOmVdlWUlKizp07V9m2b98+RUREqLy8XMXFxZKkefPmnfd6CQkJOnr06EVnOnbsmIwxVQ72kZGR6t27t2bNmqVOnTqpWbNmevvtt7Vlyxa1bdu2cr8BAwboV7/6la699lp98803euaZZzRw4EBt2bJFgYGBGjNmjPbu3avOnTsrNjZW7777rvLy8jRjxgxt2LBB06ZNU1JSktq0aaMlS5aoefPm5+XLzs5WXl6eOnbseNEZqpOQkKDU1FRVVFRc9ttjwNWAQgK4yP79+xUUFKRevXpVbmvcuLE6dOig/fv3S5IOHjx43r/8e/bsWaWQFBUVnbf6cM6TTz6pUaNGVdn25z//Wf/+97+rbOvQoYNWrVql4uJivfXWW9q9e/cFTxANCwvTmTNnLjpTUVGRJJ2X580339SYMWPUvHlzBQYG6vrrr9ewYcO0Y8eOyn2+e0Luddddp65du6pNmzbasGGDbrvtNjVo0ECvvvpqldcdPXq0Hn30Ue3atUvvv/++9uzZo9mzZ+vRRx+94BUx5845uVxhYWGqqKjQ2bNnFRYW5ui1gO8z6jpwFYqNjVVeXt5FH2vbtm2Vj5iYmPP2Cw4OVtu2bfWDH/xAL730kgIDA/X888+ft19ubq6aNGlSbRZJ5+Vp06aNPvvsM506dUqpqan64osvVFpaqtatW1/0tVq3bq3Y2FgdPnz4go9/+umn
+vrrrzVp0iRt2LBBgwYNUnh4uO655x5t2LDhgs9p0qSJGjVqpAMHDlz081YnNzdX4eHhlBGgBhQSwEU6deqksrIybdu2rXJbTk6ODh48WPmWSocOHc67PPY//9y9e3dlZmZetJRcjmnTpmnu3Lk6fvx4le1fffWVunfvftHntWnTRlFRUUpOTr7g4+Hh4YqPj1deXp7Wrl2rIUOGXPS10tLSlJOTo/j4+PMeKy4u1sSJE/XXv/5VgYGBKi8vrzzvprS0VOXl5Rd8zYCAAA0dOlTLly8/bzZJOnXqlMrKyi6aqab5AXyLQgK4SLt27TRkyBCNHTtWmzZt0p49ezRixAg1b9688kD9yCOPaM2aNZo3b54OHTqkv/71r/rwww/l8XgqX6d79+6KjY3V5s2b6yxb79691bVrV/3hD3+osn3jxo3q37//RZ8XEBCgfv36nXey6tq1a/XRRx/pyJEjWrdunW699VZ17NhRo0ePlvRtEXjyySe1detWpaSk6JNPPtGQIUPUtm1b/fznPz/v88yaNUuDBg2qLAd9+/bVypUrtXfvXr3yyivq27fvRTO++OKLSkxMVK9evbRs2TIlJyfr0KFDWrJkibp3765Tp05d9Lk1zQ/gWxQSwGWWLl2qHj166Pbbb1fv3r1ljNGaNWvUoEEDSd8eaF977TXNmzdP3bp100cffaTJkydXOUcjMDBQo0eP1vLly+s02+TJk/X6668rNTVVkrRlyxb5fD7dfffd1T7vN7/5jZKSkqr8AjGfz6eJEyeqY8eOGjlypG688UatXbu2cs7AwEDt3btXgwcPVvv27fXggw+qR48e2rhxo0JCQqq8/ldffaV33323yltKd999t37xi1/opptu0t69ezV//vyL5ouJidHWrVs1YsQIvfDCC+revbtuuukmvf3225ozZ468Xu8Fn5eenq7PP/+8skQBuDiPcXrGFoB6b+zYsTpw4ECVy08zMzPVpUsX7dy5s8ovJKtL9957r7p166Znnnmm2v2MMerVq5cmT56sYcOG+SWLDVOmTFFeXp4WLVpkOwpQ77FCAnwPzZ07V3v27NHhw4e1YMEC/c///I8eeOCBKvvExcXpb3/7m44dO+aXDCUlJbruuus0efLkGvf1eDxatGhRtediuFHTpk01a9Ys2zEAV2CFBPgeOnfVSGFhoVq3bq1HHnlE48ePtx0LAC6KQgIAAKzjLRsAAGAdhQQAAFhHIQEAANZRSAAAgHUUEgAAYB2FBAAAWEchAQAA1lFIAACAdf8P2rDFSZA8WecAAAAASUVORK5CYII=", "text/plain": [ "
" ] From 48cdcc0545d3f45ae5727aa64e174995d80fdc82 Mon Sep 17 00:00:00 2001 From: corolth1 Date: Fri, 3 Jan 2025 15:27:01 -0500 Subject: [PATCH 12/19] minimal test --- docs/notebooks/loss_time_covariates.py | 15 +++--- docs/notebooks/time_varying.ipynb | 72 ++++++++++---------------- 2 files changed, 35 insertions(+), 52 deletions(-) diff --git a/docs/notebooks/loss_time_covariates.py b/docs/notebooks/loss_time_covariates.py index 60a72c8..dfd5a20 100644 --- a/docs/notebooks/loss_time_covariates.py +++ b/docs/notebooks/loss_time_covariates.py @@ -167,25 +167,21 @@ def _time_varying_covariance( if __name__ == "__main__": import torch - from torchsurv.loss import cox from torchsurv.metrics.cindex import ConcordanceIndex # set seed torch.manual_seed(123) # Parameters - input_size = 16 # Irrelevant to the loss function + input_size = 8 # Irrelevant to the loss function output_size = 1 # always 1 for Cox - seq_length = 2 # number of time steps - batch_size = 3 # number of samples + seq_length = 5 # number of time steps + batch_size = 32 # number of samples # make random boolean events events = torch.rand(batch_size) > 0.5 - print(events) - # make random positive time to event time = torch.rand(batch_size) * 100 - print(time) # Create simple RNN model rnn = torch.nn.RNN(input_size, output_size, seq_length) @@ -200,3 +196,8 @@ def _time_varying_covariance( # Loss loss = neg_partial_time_log_likelihood(outputs, time, events) print(f"loss = {loss}") + + # Cindex + cindex = ConcordanceIndex() + estimates = outputs[-1].squeeze() # Last outputs matter ?! @Melodie + print(f"C-index = {cindex(estimates, events, time)}") diff --git a/docs/notebooks/time_varying.ipynb b/docs/notebooks/time_varying.ipynb index 44cddcf..da52850 100644 --- a/docs/notebooks/time_varying.ipynb +++ b/docs/notebooks/time_varying.ipynb @@ -93,7 +93,7 @@ }, { "cell_type": "code", - "execution_count": 2, + "execution_count": 1, "metadata": {}, "outputs": [], "source": [ @@ -170,7 +170,7 @@ }, { "cell_type": "code", - "execution_count": 55, + "execution_count": 4, "metadata": {}, "outputs": [ { @@ -179,7 +179,7 @@ "text": [ "tensor([[34.2016, 34.2186, 34.2356, 34.2526, 34.2696, 34.2866],\n", " [33.4380, 33.4308, 33.4235, 33.4163, 33.4091, 33.4018],\n", - " [31.5581, 31.5565, 31.5548, 31.5531, 31.5515, 31.5498],\n", + " [31.5581, 31.5564, 31.5548, 31.5531, 31.5515, 31.5498],\n", " [35.7813, 35.7953, 35.8093, 35.8233, 35.8373, 35.8513]])\n" ] } @@ -227,7 +227,7 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": 5, "metadata": {}, "outputs": [], "source": [ @@ -280,7 +280,7 @@ }, { "cell_type": "code", - "execution_count": 5, + "execution_count": 6, "metadata": {}, "outputs": [], "source": [ @@ -302,7 +302,7 @@ }, { "cell_type": "code", - "execution_count": 63, + "execution_count": 7, "metadata": {}, "outputs": [ { @@ -320,7 +320,7 @@ " True, True, True, True, True, True, True, False, False, True])" ] }, - "execution_count": 63, + "execution_count": 7, "metadata": {}, "output_type": "execute_result" } @@ -401,7 +401,7 @@ }, { "cell_type": "code", - "execution_count": 66, + "execution_count": 8, "metadata": {}, "outputs": [], "source": [ @@ -415,7 +415,7 @@ }, { "cell_type": "code", - "execution_count": 69, + "execution_count": 9, "metadata": {}, "outputs": [ { @@ -476,7 +476,7 @@ }, { "cell_type": "code", - "execution_count": 64, + "execution_count": 10, "metadata": {}, "outputs": [], "source": [ @@ -532,7 +532,7 @@ }, { "cell_type": "code", - "execution_count": 65, + "execution_count": 11, "metadata": {}, 
"outputs": [ { @@ -595,7 +595,7 @@ " \n", " \n", " time fit was run\n", - " 2025-01-03 19:18:44 UTC\n", + " 2025-01-03 20:10:47 UTC\n", " \n", " \n", "\n", @@ -679,7 +679,7 @@ " number of periods = 500\n", " number of events = 81\n", "partial log-likelihood = -324.16\n", - " time fit was run = 2025-01-03 19:18:44 UTC\n", + " time fit was run = 2025-01-03 20:10:47 UTC\n", "\n", "---\n", " coef exp(coef) se(coef) coef lower 95% coef upper 95% exp(coef) lower 95% exp(coef) upper 95%\n", @@ -704,13 +704,13 @@ "" ] }, - "execution_count": 65, + "execution_count": 11, "metadata": {}, "output_type": "execute_result" }, { "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAiQAAAGwCAYAAACZ7H64AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjkuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8hTgPZAAAACXBIWXMAAA9hAAAPYQGoP6dpAAAqpElEQVR4nO3deXRV9b3+8eckIQMZDgkBkkAQmYcKRSwU0KqVUqAKtVoFQQQsiIBW6gAqgxZtvcDFUpRasOBFqdEKV1kFpShiAQFlRgMs8BJIQhLIdBIgIdP394eL/EyBhLBz+GbL+7VWlot99jl5PpJkP3zP3tkeY4wRAACARQG2AwAAAFBIAACAdRQSAABgHYUEAABYRyEBAADWUUgAAIB1FBIAAGBdkO0Al6KiokLHjx9XZGSkPB6P7TgAAOASGGNUWFiohIQEBQRUvwbiikJy/PhxJSYm2o4BAAAuQ2pqqlq0aFHtPq4oJJGRkZK+HSgqKspyGgCoP1JSUjRr1ixNnz5drVq1sh0HqKKgoECJiYmVx/HquKKQnHubJioqikICAN8RGRmpBg0aKDIykp+PqLcu5XQLTmoFAADWUUgAwMUCAwMVGRmpwMBA21EARzxuuNtvQUGBvF6vfD4fS5IAALhEbY7frJAAAADrKCQA4GJpaWmaPHmy0tLSbEcBHKGQAICLlZaWKisrS6WlpbajAI5QSAAAgHUUEgAAYB2FBAAAWEchAQAXi4uL09SpUxUXF2c7CuCIK351PADgwsLCwtS1a1fbMQDHWCEBABfLz8/XihUrlJ+fbzsK4AiFBABcLC8vTytWrFBeXp7tKIAjFBIAAGAdhQQAAFhHIQEAANZRSADAxSIiItS3b19FRETYjgI44jHGGNshalKb2xcDAID6oTbHb1ZIAMDFuLkevi8oJADgYmlpaZo8ebLS0tJsRwEcoZAAAADrKCQAAMA6CgkAALCOQgIAAKzjsl8AAOAXXPYLAABchUICAC6WkZGhGTNmKCMjw3YUwBEKCQC4WHFxsQ4fPqzi4mLbUQBHKCQAAMA6CgkAALCOQgIAAKyjkACAizVp0kQTJkxQkyZNbEcBHAmyHQAAcPkiIiJ044032o4BOMYKCQC4WEFBgf71r3+poKDAdhTAEQoJALhYTk6O3njjDeXk5NiOAjhCIQEAANZRSAAAgHUUEgAAYB2FBABcLCwsTF27dlVYWJjtKIAjHmOMsR2iJrW5fTEAAKgfanP8ZoUEAFysoqJCRUVFqqiosB0FcIRCAgAudvToUT344IM6evSo7SiAIxQSAABgHYUEAABYRyEBAADWUUgAAIB13O0XAFysZcuWeu211xQeHm47CuAIhQQAXCwwMJDfz4TvBd6yAQAXy8rK0ty5c5WVlWU7CuAIhQQAXOzMmTPauXOnzpw5YzsK4AiFBAAAWEchAQAA1lFIAACAdRQSAHCxmJgYjRgxQjExMbajAI5w2S8AuJjX69WgQYNsxwAcY4UEAFzs9OnT2rZtm06fPm07CuAIhQQAXOzEiROaP3++Tpw4YTsK4AiFBAAAWEchAQAA1lFIAACAdRQSAHCx4OBgtWrVSsHBwbajAI54jDHGdoiaFBQUyOv1yufzcVdLAABcojbHb1ZIAACAdRQSAHCxlJQUjRw5UikpKbajAI5QSADAxYwxKisrkwvefQeqRSEBAADWUUgAAIB1FBIAAGAdd/sFABdr3ry5Zs+eraZNm9qOAjhCIQEAFwsODlaLFi1sxwAc4y0bAHCx7OxsLVq0SNnZ2bajAI5QSADAxQoLC7VhwwYVFhbajgI4QiEBAADWUUgAAIB1FBIAAGAdhQQAXMzr9Wrw4MHyer22owCOcNkvALhYTEyMhg4dajsG4BgrJADgYsXFxUpOTlZxcbHtKIAjFBIAcLGMjAy98MILysjIsB0FcIRCAgAArKOQAAAA6ygkAADAOgoJALhYUFCQYmJiFBTERZNwN48xxtgOUZOCggJ5vV75fD5FRUXZjgMAAC5BbY7frJAAAADrKCQA4GKpqamaNGmSUlNTbUcBHKGQAICLlZWVKTc3V2VlZbajAI5QSAAAgHUUEgAAYB2FBAAAWEchAQAXi4+P17Rp0xQfH287CuAIv0kHAFwsNDRUnTt3th0DcIwVEgBwsdzcXCUlJSk3N9d2FMARCgkAuJjP59OqVavk8/lsRwEcoZAAAADrKCQAAMA6CgkAALCOQgIALhYZGalbbrlFkZGRtqMAjniMMcZ2iJrU5vbFAACgfqjN8ZsVEgBwsZKSEqWlpamkpMR2FMARCgkAuFh6erqeeuoppaen244COEIhAQAA1lFIAACAdRQSAABgHYUEAFzM4/EoKChIHo/HdhTAES77BQAAfsFlvwAAwFUoJADgYunp6XrmmWe47BeuRyEBABcrKSlRSkoKvxgNrkchAQAA1lFIAACAdRQSAABgHYUEAFysadOm+u1vf6umTZvajgI4EmQ7AADg8oWHh6tXr162YwCOsUICAC7m8/m0Zs0a+Xw+21EARygkAOBiubm5euutt5Sbm2s7CuAIhQQAAFhHIQEAANZRSAAAgHUUEgBwsYYNG+r6669Xw4YNbUcBHPEYY4ztEDWpze2LAQBA/VCb4zcrJADgYuXl5SooKFB5ebntKIAjFBIAcLFjx45p/PjxOnbsmO0ogCMUEgAAYB2FBAAAWEchAQAA1lFIAACAdVz2CwAuVlFRobNnzyokJEQBAfwbE/VLbY7fQVcoEwDADwICAhQWFmY7BuAYdRoAXCwzM1MvvfSSMjMzbUcBHKGQAICLFRUVae/evSoqKrIdBXCEQgIAAKyjkAAAAOsoJAAAwDoKCQC4WOPGjTVq1Cg1btzYdhTAES77BQAXi4qKUv/+/W3HABxjhQQAXOz
UqVPatGmTTp06ZTsK4AiFBABc7OTJk1q4cKFOnjxpOwrgCIUEAABYRyEBAADWUUgAAIB1FBIAcLHQ0FC1bdtWoaGhtqMAjniMMcZ2iJrU5vbFAACgfqjN8ZsVEgAAYB2FBABc7MiRI7rvvvt05MgR21EARygkAADAOgoJAACwjkICAACso5AAAADruNsvALhYixYt9PLLLysmJsZ2FMARCgkAuFiDBg3UrFkz2zEAx3jLBgBc7OTJk3r11Ve52y9cj0ICAC526tQpbd68WadOnbIdBXCEQgIAAKyjkAAAAOsoJAAAwDoKCQC4WHR0tO666y5FR0fbjgI4wmW/AOBijRo10l133WU7BuAYKyQA4GJFRUXau3evioqKbEcBHKGQAICLZWZm6qWXXlJmZqbtKIAjFBIAAGAdhQQAAFhHIQEAANZRSADAxc7dXK9Bgwa2owCOeIwxxnaImhQUFMjr9crn8ykqKsp2HAAAcAlqc/xmhQQAAFhHIQEAFzt27JgeeughHTt2zHYUwBEKCQC4WHl5uQoLC1VeXm47CuAIhQQAAFhHIQEAANZRSAAAgHUUEgBwsfj4eD3//POKj4+3HQVwJMh2AADA5QsNDVW7du1sxwAcY4UEAFwsNzdXb731lnJzc21HARyhkACAi/l8Pq1Zs0Y+n892FMARCgkAALCOQgIAAKzjpFYAV9TNN9+s1NTUavdJTEzUZ599doUSAagPrvoVktatW6t169a2YwBXjdTU1Grvu3Ls2LEaCwv+v8jISP3sZz9TZGSk7ShwsfpwLLS6QlJSUqLg4GCbEQBY0LJlS/3f//3fBR+z/UPRbWJjYzV69GjbMQDHLnmFZNGiRUpISFBFRUWV7UOGDNGYMWP0zTffaMiQIWrWrJkiIiL0ox/9SB9//HGVfVu1aqVZs2Zp5MiRioqK0rhx4+pmCgC4Sp09e1ZHjhzR2bNnbUcBHPEYY8yl7JiXl6e4uDitWbNGt912m6Rvr3+Pj4/XmjVrFBsbq61bt6pv374KCQnRsmXLNHfuXB08eFAtW7aU9G0hycvL04wZM/TLX/5SktSmTZvzPtfZs2erfHMVFBQoMTFRPp9PUVFRTmeuonXr1kpNTVViYmKdvi6ACzv3/VbdCgnfk5eutLRUeXl5io6OVoMGDWzHgUvV9H15uQoKCuT1ei/p+H3JKyTR0dEaOHCg/v73v1due++99xQbG6tbb71V3bp100MPPaQf/OAHateunWbNmqU2bdpo1apVVV7npz/9qR5//HG1adPmgmVEkv74xz/K6/VWfvCDCQCA77danUMyfPhwjR07VgsXLlRISIiWL1+uoUOHKiAgQKdOndJzzz2n1atXKyMjQ2VlZSoqKjrv5LUbbrihxs/z9NNP63e/+13ln8+tkPiLP1ohgAu7lHNE+J68dEeOHNGzzz6rF198Uddee63tOHCp+nDuVq0KyR133CFjjFavXq0f/ehH2rhxo15++WVJ0hNPPKF169Zp7ty5atu2rcLCwnT33XerpKSkymuEh4fX+HlCQkIUEhJSm2gAAMDFalVIQkND9atf/UrLly/X4cOH1aFDB11//fWSpM2bN2vUqFG68847JUmnTp1SSkpKnQcG4H7Hjh276L/Ijh07VnneGWoWEBCg0NBQBQRc9b/FAS5X68t+hw8frttvv11ff/21RowYUbm9Xbt2Wrlype644w55PB5Nnz79vCty6iOWhYErq6a3X1u2bMl5Y7VwzTXXaMmSJbZjwOXqw7Gw1oXkpz/9qWJiYnTw4EHdd999ldvnzZunMWPGqE+fPoqNjdWUKVNUUFBQp2EBuB+/gRXAhVzyZb821eayIQC4mqSnp+tPf/qTHnvsMTVv3tx2HKAKv1z2CwCof0pKSpSenn7eBQSA21BIAACAdRQSAABgHYUEAABYRyEBABdr1qyZHn/8cTVr1sx2FMCRWl/2CwCoPxo2bKgePXrYjgE4xgoJALhYfn6+PvjgA+Xn59uOAjhCIQEAF8vLy9M777yjvLw821EARygkAADAOgoJAACwjkICAACso5AAgIuFh4erV69eCg8Ptx0FcISb6wEAAL/g5noAcJUoKytTbm6uysrKbEcBHKGQAICLpaamatKkSUpNTbUdBXCEQgIAAKyjkAAAAOsoJAAAwDoKCQAAsI7LfgHAxYwxKisrU1BQkDwej+04QBW1OX4HXaFMAAA/8Hg8atCgge0YgGO8ZQMALpaRkaFZs2YpIyPDdhTAEQoJALhYcXGx9u/fr+LiYttRAEcoJAAAwDoKCQAAsI5CAgAArKOQAICLxcbGauzYsYqNjbUdBXCEy34BwMUiIyN166232o4BOMYKCQC4WGFhoT799FMVFhbajgI4QiEBABfLzs7W4sWLlZ2dbTsK4AiFBAAAWEchAQAA1lFIAACAdRQSAHCx0NBQderUSaGhobajAI54jDHGdoia1Ob2xQAAoH6ozfGbFRIAcDFjjEpLS+WCf1sC1aKQAICLpaSk6IEHHlBKSortKIAjFBIAAGAdhQQAAFhHIQEAANZRSAAAgHXc7RcAXCwxMVGvvPIKvxIBrkchAQAXCwoKUkxMjO0YgGO8ZQMALnbixAnNnz9fJ06csB0FcIRCAgAudvr0aW3btk2nT5+2HQVwhEICAACso5AAAADrKCQAAMA6CgkAuFh0dLTuvfdeRUdH244COMJlvwDgYo0aNdKQIUNsxwAcY4UEAFzszJkz2rFjh86cOWM7CuAIhQQAXCwrK0v//d//raysLNtRAEcoJAAAwDoKCQAAsI5CAgAArKOQAICLBQcHq3nz5goODrYdBXDEY4wxtkPUpKCgQF6vVz6fj1tsAwDgErU5frNCAgAArKOQAICLHT16VGPGjNHRo0dtRwEcoZAAgItVVFSouLhYFRUVtqMAjlBIAACAdRQSAABgHYUEAABYRyEBABdLSEjQiy++qISEBNtRAEeCbAcAAFy+kJAQXXvttbZjAI6xQgIALpadna2lS5cqOzvbdhTAEQoJALhYYWGh1q1bp8LCQttRAEcoJAAAwDoKCQAAsI5CAgAArKOQAICLeb1eDRo0SF6v13YUwBEu+wUAF4uJidGIESNsxwAcY4UEAFysuLhYhw4dUnFxse0ogCMUEgBwsYyMDM2cOVMZGRm2owCOUEgAAIB1FBIAAGAdhQQAAFhHIQEAFwsMDFRkZKQCAwNtRwEc8RhjjO0QNSkoKJDX65XP51NUVJTtOAAA4BLU5vjNCgkAALCOQgIALpaWlqbJkycrLS3NdhTAEQoJALhYaWmpsrKyVFpaajsK4AiFBAAAWEchAQAA1lFIAACAdRQSAHCxuLg4TZ06VXFxcbajAI4E2Q4AALh8YWFh6tq1q+0YgGOskACAi+Xn52vFihXKz8+3HQVwhEICAC6Wl5enFStWKC8vz3YUwBEKCQAAsI5CAgAArKOQAAAA6ygkAOBiERER6tu3ryIiImxHARzxGGOM7RA1qc3tiwEAQP1Qm+M3KyQA4GLcXA/fFxQSAHCxtLQ0TZ48WWlpabajAI
5QSAAAgHUUEgAAYB2FBAAAWEchAQAA1nHZLwAA8Asu+wUAAK5CIQEAF8vIyNCMGTOUkZFhOwrgCIUEAFysuLhYhw8fVnFxse0ogCMUEgAAYB2FBAAAWEchAQAA1lFIAMDFmjRpogkTJqhJkya2owCOBNkOAAC4fBEREbrxxhttxwAcY4UEAFysoKBA//rXv1RQUGA7CuAIhQQAXCwnJ0dvvPGGcnJybEcBHKGQAAAA6ygkAADAOgoJAACwjkICAC4WFhamrl27KiwszHYUwBGPMcbYDlGT2ty+GAAA1A+1OX6zQgIALlZRUaGioiJVVFTYjgI4QiEBABc7evSoHnzwQR09etR2FMARCgkAALCOQgIAAKyjkAAAAOsoJAAAwDru9gsALtayZUu99tprCg8Ptx0FcIRCAgAuFhgYyO9nwvcCb9kAgItlZWVp7ty5ysrKsh0FcIRCAgAudubMGe3cuVNnzpyxHQVwhEICAACso5AAAADrKCQAAMA6CgkAuFhMTIxGjBihmJgY21EAR7jsFwBczOv1atCgQbZjAI6xQgIALnb69Glt27ZNp0+fth0FcIRCAgAuduLECc2fP18nTpywHQVwhEICAACso5AAAADrKCQAAMA6CgkAuFhwcLBatWql4OBg21EARzzGGGM7RE0KCgrk9Xrl8/m4qyUAAC5Rm+M3KyQAAMA6CgkAuFhKSopGjhyplJQU21EARygkAOBixhiVlZXJBe++A9WikAAAAOsoJAAAwDoKCQAAsI67/QKAizVv3lyzZ89W06ZNbUcBHKGQAICLBQcHq0WLFrZjAI7xlg0AuFh2drYWLVqk7Oxs21EARygkAOBihYWF2rBhgwoLC21HARyhkAAAAOsoJAAAwDoKCQAAsI5CAgAu5vV6NXjwYHm9XttRAEe47BcAXCwmJkZDhw61HQNwjBUSAHCx4uJiJScnq7i42HYUwBEKCQC4WEZGhl544QVlZGTYjgI4QiEBAADWUUgAAIB1FBIAAGAdhQQAXCwoKEgxMTEKCuKiSbibxxhjbIeoSUFBgbxer3w+n6KiomzHAQAAl6A2x29WSAAAgHUUEgBwsdTUVE2aNEmpqam2owCOUEgAwMXKysqUm5ursrIy21EARygkAADAOgoJAACwjkICAACso5AAgIvFx8dr2rRpio+Ptx0FcITfpAMALhYaGqrOnTvbjgE4xgoJALhYbm6ukpKSlJubazsK4AiFBABczOfzadWqVfL5fLajAI5QSAAAgHUUEgAAYB2FBAAAWEchAQAXi4yM1C233KLIyEjbUQBHPMYYYztETWpz+2IAAFA/1Ob4zQoJALhYSUmJ0tLSVFJSYjsK4AiFBABcLD09XU899ZTS09NtRwEcccVvaj33rlJBQYHlJABQvxQWFqq0tFSFhYX8jES9c+5r8lLODnHFOSRpaWlKTEy0HQMAAFyG1NRUtWjRotp9XFFIKioqdPz4cUVGRsrj8VzScwoKCpSYmKjU1NSr7kRYZmd2Zr96MDuz1+fZjTEqLCxUQkKCAgKqP0vEFW/ZBAQE1NisLiYqKqpe/2X5E7Mz+9WG2Zn9auOG2b1e7yXtx0mtAADAOgoJAACw7ntbSEJCQjRz5kyFhITYjnLFMTuzX22YndmvNt/H2V1xUisAAPh++96ukAAAAPegkAAAAOsoJAAAwDoKCQAAsM61hSQ3N1fDhw9XVFSUGjVqpAcffFCnTp2q9jnFxcWaOHGiGjdurIiICN11113Kysqqso/H4znvIykpyZ+j1Jq/Zj8nJydHLVq0kMfjUX5+vh8muHz+mD0nJ0cDBgxQQkKCQkJClJiYqEmTJtW7+4L4Y/Y9e/Zo2LBhSkxMVFhYmDp16qT58+f7e5Ra89fX/KOPPqoePXooJCREP/zhD/04waV79dVX1apVK4WGhqpXr1764osvqt3/H//4hzp27KjQ0FBdd911WrNmTZXHjTGaMWOG4uPjFRYWpn79+unQoUP+HOGy1fXsK1euVP/+/dW4cWN5PB7t3r3bj+mdqcvZS0tLNWXKFF133XUKDw9XQkKCRo4cqePHj/t7DGeMSw0YMMB069bNbN261WzcuNG0bdvWDBs2rNrnjB8/3iQmJppPPvnEbN++3fz4xz82ffr0qbKPJLN06VKTkZFR+VFUVOTPUWrNX7OfM2TIEDNw4EAjyeTl5flhgsvnj9lzc3PNwoULzZdffmlSUlLMxx9/bDp06FDj615p/pj9b3/7m3n00UfNhg0bzDfffGPefPNNExYWZhYsWODvcWrFX1/zjzzyiHnllVfM/fffb7p16+bHCS5NUlKSCQ4ONkuWLDFff/21GTt2rGnUqJHJysq64P6bN282gYGBZvbs2SY5OdlMmzbNNGjQwOzbt69yn5deesl4vV7z/vvvmz179pjBgweba6+9tt79XPPH7MuWLTPPP/+8Wbx4sZFkdu3adYWmqZ26nj0/P9/069fPvPPOO+bAgQNmy5YtpmfPnqZHjx5Xcqxac2UhSU5ONpLMl19+Wbntww8/NB6Px6Snp1/wOfn5+aZBgwbmH//4R+W2/fv3G0lmy5Ytldskmf/93//1W3an/Dm7McYsXLjQ3HzzzeaTTz6pd4XE37N/1/z5802LFi3qLrxDV3L2CRMmmFtvvbXuwjt0JWafOXNmvSgkPXv2NBMnTqz8c3l5uUlISDB//OMfL7j/PffcY37xi19U2darVy/z0EMPGWOMqaioMHFxcWbOnDmVj+fn55uQkBDz9ttv+2GCy1fXs3/XkSNH6nUh8efs53zxxRdGkjl69GjdhPYDV75ls2XLFjVq1Eg33HBD5bZ+/fopICBA27Ztu+BzduzYodLSUvXr169yW8eOHdWyZUtt2bKlyr4TJ05UbGysevbsqSVLllzSbZOvFH/OnpycrN///vdatmxZjTdBssHff+/nHD9+XCtXrtTNN99ctwM4cKVmlySfz6eYmJi6C+/QlZzdppKSEu3YsaNK5oCAAPXr1++imbds2VJlf0n6+c9/Xrn/kSNHlJmZWWUfr9erXr161av/D/6Y3S2u1Ow+n08ej0eNGjWqk9z+UP+OOpcgMzNTTZs2rbItKChIMTExyszMvOhzgoODz/vLaNasWZXn/P73v9e7776rdevW6a677tKECRO0YMGCOp/hcvlr9rNnz2rYsGGaM2eOWrZs6ZfsTvnz712Shg0bpoYNG6p58+aKiorS66+/Xqf5nfD37Od8/vnneueddzRu3Lg6yV0XrtTstmVnZ6u8vFzNmjWrsr26zJmZmdXuf+6/tXlNG/wxu1tcidmLi4s1ZcoUDRs2rF7fiK9eFZKpU6de8KTS734cOHDArxmmT5+uvn37qnv37poyZYqeeuopzZkzx6+fU7I/+9NPP61OnTppxIgRfvscF2N79nNefvll7dy5Ux988IG++eYb/e53v/P756wvs0vSV199pSFDhmjmzJnq37+/3z9ffZod+L4qLS3VPffcI2OM/vKXv9iOU60g2wG+6/HHH9eoUaOq3ad169aKi4vTiRMnqmwvK
ytTbm6u4uLiLvi8uLg4lZSUKD8/v8q/mrKysi76HEnq1auXZs2apbNnz/r1ngG2Z1+/fr327dun9957T5Iq36aKjY3Vs88+q+eff/4yJ6uZ7dm/u29cXJw6duyomJgY3XTTTZo+fbri4+Mva65LUV9mT05O1m233aZx48Zp2rRplzVLbdWX2euL2NhYBQYGnnclUHWZ4+Liqt3/3H+zsrKqfB1nZWXVm6uKJP/M7hb+nP1cGTl69KjWr19fr1dHJLnzKptzJ7lt3769ctvatWsv6SS39957r3LbgQMHajzB74UXXjDR0dF1F94hf81++PBhs2/fvsqPJUuWGEnm888/v+iZ3lfalfx7/+yzz4wkc+TIkTrL74Q/Z//qq69M06ZNzZNPPum/ARy4En/v9emk1kmTJlX+uby83DRv3rzakxtvv/32Ktt69+593kmtc+fOrXzc5/PV25Na63L273LDSa11PXtJSYn55S9/abp06WJOnDjhn+B1zJWFxJhvLwPs3r272bZtm9m0aZNp165dlcsA09LSTIcOHcy2bdsqt40fP960bNnSrF+/3mzfvt307t3b9O7du/LxVatWmcWLF5t9+/aZQ4cOmYULF5qGDRuaGTNmXNHZauKP2f/Tp59+Wu+usjHGP7OvXr3aLFmyxOzbt88cOXLE/POf/zSdOnUyffv2vaKz1cQfs+/bt880adLEjBgxosql7vXtB5i/vuYPHTpkdu3aZR566CHTvn17s2vXLrNr1y5z9uzZKzbbdyUlJZmQkBDzxhtvmOTkZDNu3DjTqFEjk5mZaYwx5v777zdTp06t3H/z5s0mKCjIzJ071+zfv9/MnDnzgpf9NmrUyHzwwQdm7969ZsiQIfX2st+6nj0nJ8fs2rXLrF692kgySUlJZteuXSYjI+OKz1edup69pKTEDB482LRo0cLs3r27yve2ra/tS+HaQpKTk2OGDRtmIiIiTFRUlBk9erQpLCysfPxcI/70008rtxUVFZkJEyaY6Oho07BhQ3PnnXdW+cL88MMPzQ9/+EMTERFhwsPDTbdu3cxrr71mysvLr+RoNfLH7P+pvhYSf8y+fv1607t3b+P1ek1oaKhp166dmTJlylUx+8yZM42k8z6uueaaKzhZzfz1NX/zzTdfcH6bK2MLFiwwLVu2NMHBwaZnz55m69atVfI+8MADVfZ/9913Tfv27U1wcLDp0qWLWb16dZXHKyoqzPTp002zZs1MSEiIue2228zBgwevxCi1VtezL1269IJ/vzNnzrwC09ROXc5+7vvhQh/f/R6pbzzG1KNrWgEAwFWpXl1lAwAArk4UEgAAYB2FBAAAWEchAQAA1lFIAACAdRQSAABgHYUEAABYRyEBAADWUUiAeu6WW27RY4895pfX/slPfqK///3vfnntkpIStWrVStu3b7+k/adPn65x48b5JYstP/7xj7VixQrbMQBXoJAAV6lVq1YpKytLQ4cOrdzWqlUr/elPfzpv3+eee67K3WGfe+45eTweeTweBQYGKjExUePGjVNubm7lPsHBwXriiSc0ZcqUGrNkZmZq/vz5evbZZyu3FRYW6rHHHtM111yjsLAw9enTR19++WWV540aNaoyx7mPAQMGVD5+9uxZ3X///YqKilL79u318ccfV3n+nDlz9Mgjj9SYT5IKCgr07LPPqmPHjgoNDVVcXJz69eunlStXVt4d+z/L47Rp0zR16lRVVFRc0ucArmYUEuAq9ec//1mjR49WQMDl/Rjo0qWLMjIydOzYMS1dulQfffSRHn744Sr7DB8+XJs2bdLXX39d7Wu9/vrr6tOnj6655prKbb/5zW+0bt06vfnmm9q3b5/69++vfv36KT09vcpzBwwYoIyMjMqPt99+u/KxRYsWaceOHdqyZYvGjRun++67r7I8HDlyRIsXL9aLL75Y46z5+fnq06ePli1bpqefflo7d+7Uv//9b91777166qmn5PP5Lvi8gQMHqrCwUB9++GGNnwO42lFIAJfJy8vTyJEjFR0drYYNG2rgwIE6dOhQlX0WL16sxMRENWzYUHfeeafmzZunRo0aVT5+8uRJrV+/Xnfcccdl5wgKClJcXJyaN2+ufv366de//rXWrVtXZZ/o6Gj17dtXSUlJ1b5WUlJSlSxFRUVasWKFZs+erZ/85Cdq27atnnvuObVt21Z/+ctfqjw3JCREcXFxlR/R0dGVj+3fv1+DBw9Wly5dNHHiRJ08eVLZ2dmSpIcfflj/9V//paioqBpnfeaZZ5SSkqJt27bpgQceUOfOndW+fXuNHTtWu3fvVkRExAWfFxgYqEGDBtU4PwAKCeA6o0aN0vbt27Vq1Spt2bJFxhgNGjRIpaWlkqTNmzdr/Pjx+u1vf6vdu3frZz/72XmrAJs2bVLDhg3VqVOnOsmUkpKitWvXKjg4+LzHevbsqY0bN170ubm5uUpOTtYNN9xQua2srEzl5eUKDQ2tsm9YWJg2bdpUZduGDRvUtGlTdejQQQ8//LBycnIqH+vWrZs2bdqkoqIirV27VvHx8YqNjdXy5csVGhqqO++8s8bZKioqlJSUpOHDhyshIeG8xyMiIhQUFHTR59c0P4BvXfy7CEC9c+jQIa1atUqbN29Wnz59JEnLly9XYmKi3n//ff3617/WggULNHDgQD3xxBOSpPbt2+vzzz/XP//5z8rXOXr0qJo1a3bBt2umTJmiadOmVdlWUlKizp07V9m2b98+RUREqLy8XMXFxZKkefPmnfd6CQkJOnr06EVnOnbsmIwxVQ72kZGR6t27t2bNmqVOnTqpWbNmevvtt7Vlyxa1bdu2cr8BAwboV7/6la699lp98803euaZZzRw4EBt2bJFgYGBGjNmjPbu3avOnTsrNjZW7777rvLy8jRjxgxt2LBB06ZNU1JSktq0aaMlS5aoefPm5+XLzs5WXl6eOnbseNEZqpOQkKDU1FRVVFRc9ttjwNWAQgK4yP79+xUUFKRevXpVbmvcuLE6dOig/fv3S5IOHjx43r/8e/bsWaWQFBUVnbf6cM6TTz6pUaNGVdn25z//Wf/+97+rbOvQoYNWrVql4uJivfXWW9q9e/cFTxANCwvTmTNnLjpTUVGRJJ2X580339SYMWPUvHlzBQYG6vrrr9ewYcO0Y8eOyn2+e0Luddddp65du6pNmzbasGGDbrvtNjVo0ECvvvpqldcdPXq0Hn30Ue3atUvvv/++9uzZo9mzZ+vRRx+94BUx5845uVxhYWGqqKjQ2bNnFRYW5ui1gO8z6jpwFYqNjVVeXt5FH2vbtm2Vj5iYmPP2Cw4OVtu2bfWDH/xAL730kgIDA/X888+ft19ubq6aNGlSbRZJ5+Vp06aNPvvsM506dUqpqan64osvVFpaqtatW1/0tVq3bq3Y2FgdPnz4go9/+umn+vrrrzVp0iRt2LBBgwYNUnh4uO655x5t2LDhgs9p0qSJGjVqpAMHDlz081YnNzdX4eHhlBGgBhQSwEU6deqksrIybdu2rXJbTk6ODh48WPmWSocOHc67PPY//9y9e3dl
ZmZetJRcjmnTpmnu3Lk6fvx4le1fffWVunfvftHntWnTRlFRUUpOTr7g4+Hh4YqPj1deXp7Wrl2rIUOGXPS10tLSlJOTo/j4+PMeKy4u1sSJE/XXv/5VgYGBKi8vrzzvprS0VOXl5Rd8zYCAAA0dOlTLly8/bzZJOnXqlMrKyi6aqab5AXyLQgK4SLt27TRkyBCNHTtWmzZt0p49ezRixAg1b9688kD9yCOPaM2aNZo3b54OHTqkv/71r/rwww/l8XgqX6d79+6KjY3V5s2b6yxb79691bVrV/3hD3+osn3jxo3q37//RZ8XEBCgfv36nXey6tq1a/XRRx/pyJEjWrdunW699VZ17NhRo0ePlvRtEXjyySe1detWpaSk6JNPPtGQIUPUtm1b/fznPz/v88yaNUuDBg2qLAd9+/bVypUrtXfvXr3yyivq27fvRTO++OKLSkxMVK9evbRs2TIlJyfr0KFDWrJkibp3765Tp05d9Lk1zQ/gWxQSwGWWLl2qHj166Pbbb1fv3r1ljNGaNWvUoEEDSd8eaF977TXNmzdP3bp100cffaTJkydXOUcjMDBQo0eP1vLly+s02+TJk/X6668rNTVVkrRlyxb5fD7dfffd1T7vN7/5jZKSkqr8AjGfz6eJEyeqY8eOGjlypG688UatXbu2cs7AwEDt3btXgwcPVvv27fXggw+qR48e2rhxo0JCQqq8/ldffaV33323yltKd999t37xi1/opptu0t69ezV//vyL5ouJidHWrVs1YsQIvfDCC+revbtuuukmvf3225ozZ468Xu8Fn5eenq7PP/+8skQBuDiPcXrGFoB6b+zYsTpw4ECVy08zMzPVpUsX7dy5s8ovJKtL9957r7p166Znnnmm2v2MMerVq5cmT56sYcOG+SWLDVOmTFFeXp4WLVpkOwpQ77FCAnwPzZ07V3v27NHhw4e1YMEC/c///I8eeOCBKvvExcXpb3/7m44dO+aXDCUlJbruuus0efLkGvf1eDxatGhRtediuFHTpk01a9Ys2zEAV2CFBPgeOnfVSGFhoVq3bq1HHnlE48ePtx0LAC6KQgIAAKzjLRsAAGAdhQQAAFhHIQEAANZRSAAAgHUUEgAAYB2FBAAAWEchAQAA1lFIAACAdf8P2rDFSZA8WecAAAAASUVORK5CYII=", + "image/png": "iVBORw0KGgoAAAANSUhEUgAAAiQAAAGwCAYAAACZ7H64AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjguMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/H5lhTAAAACXBIWXMAAA9hAAAPYQGoP6dpAAAqr0lEQVR4nO3deXxV9Z3/8fcle0hyISzZ2PfdAUSsLUJd2DqSKn0gsii2Dwa0KtBBSh0BrQiUokitFmEcdFrEVoPCoK3LKIhsFkVFQQHJikkwJLk3EBIS8v394SS/pglZOLl8c+T1fDzyR8499/L+BMJ553vPyfEYY4wAAAAsamE7AAAAAIUEAABYRyEBAADWUUgAAIB1FBIAAGAdhQQAAFhHIQEAANYF2w7QEBUVFfr6668VHR0tj8djOw4AAGgAY4yKioqUmJioFi3qXgNxRSH5+uuv1bFjR9sxAADARcjMzFSHDh3q3McVhSQ6OlrStwPFxMRYTgMAzUdaWpoeeeQRLVq0SF26dLEdB6jG7/erY8eOVcfxuriikFS+TRMTE0MhAYB/EB0drZCQEEVHR/P/I5qthpxuwUmtAADAOgoJALhYUFCQoqOjFRQUZDsK4IjHDXf79fv98nq98vl8LEkCAOASjTl+s0ICAACso5AAgItlZWVp3rx5ysrKsh0FcIRCAgAuVlZWptzcXJWVldmOAjhCIQEAANZRSAAAgHUUEgAAYB2FBABcLD4+XgsXLlR8fLztKIAjrvjV8QCA2kVERGjQoEG2YwCOsUICAC5WWFiolJQUFRYW2o4COEIhAQAXKygoUEpKigoKCmxHARyhkAAAAOsoJAAAwDoKCQAAsI5CAgAuFhUVpe9///uKioqyHQVwxGOMMbZD1Kcxty8GAADNQ2OO36yQAICLcXM9fFdQSADAxbKysjRv3jxlZWXZjgI4QiEBAADWUUgAAIB1FBIAAGAdhQQAAFjHZb8AACAguOwXAAC4CoUEAFwsOztbixcvVnZ2tu0ogCMUEgBwsZKSEh07dkwlJSW2owCOUEgAAIB1FBIAAGAdhQQAAFhHIQEAF2vXrp3uvvtutWvXznYUwJFg2wEAABcvKipKP/jBD2zHABxjhQQAXMzv9+vNN9+U3++3HQVwhEICAC526tQpPffcczp16pTtKIAjFBIAAGAdhQQAAFhHIQEAANZRSADAxSIiIjRo0CBFRETYjgI44jHGGNsh6tOY2xcDAIDmoTHHb1ZIAMDFKioqdPbsWVVUVNiOAjhCIQEAF0tPT9fPfvYzpaen244COEIhAQAA1lFIAACAdRQSAABgHYUEAABYx91+AcDFOnXqpLVr16ply5a2owCOUEgAwMWCgoL4/Uz4TuAtGwBwsdzcXK1atUq5ubm2owCOUEgAwMWKi4v10Ucfqbi42HYUwBEKCQAAsI5CAgAArKOQAAAA6ygkAOBisbGxmjZtmmJjY21HARzhsl8AcDGv16vx48fbjgE4xgoJALjYmTNntG/fPp05c8Z2FMARCgkAuNjJkye1Zs0anTx50nYUwBEKCQAAsI5CAgAArKOQAAAA6ygkAOBioaGh6tKli0JDQ21HARzxGGOM7RD18fv98nq98vl83NUSAACXaMzxmxUSAABgHYUEAFwsLS1Nt99+u9LS0mxHARyhkACAixljVF5eLhe8+w7UiUICAACso5AAAADrKCQAAMA67vYLAC6WlJSklStXqn379rajAI5QSADAxUJDQ9WhQwfbMQDHeMsGAFwsLy9P69atU15enu0ogCMUEgBwsaKiIm3fvl1FRUW2owCOUEgAAIB1FBIAAGAdhQQAAFhHIQEAF/N6vZowYYK8Xq/tKIAjXPYLAC4WGxuryZMn244BOMYKCQC4WElJiQ4dOqSSkhLbUQBHKCQA4GLZ2dlaunSpsrOzbUcBHKGQAAAA6ygkAADAOgoJAACwjkICAC4WHBys2NhYBQdz0STczWOMMbZD1Mfv98vr9crn8ykmJsZ2HAAA0ACNOX6zQgIAAKyjkACAi2VmZuqee+5RZmam7SiAIxQSAHCx8vJy5efnq7y83HYUwBEKCQAAsI5CAgAArKOQAAAA6ygkAOBiCQkJevDBB5WQkGA7CuAIv0kHAFwsPDxc/fr1sx0DcIwVEgBwsfz8fL344ovKz8+3HQVwhEICAC7m8/m0detW+Xw+21EARygkAADAOgoJAACwjkICAACso5AAgItFR0dr1KhRio6Oth0FcMRjjDG2Q9SnMbcvBgAAzUNjjt+skACAi507d05ZWVk6d+6c7SiAIxQSAHCxEydOaMGCBTpx4oTtKIAjFBIAAGAdhQQAAFhHIQEAANZRSADA
xTwej4KDg+XxeGxHARzhsl8AABAQXPYLAABchUICAC524sQJPfDAA1z2C9ejkACAi507d05paWn8YjS4HoUEAABYRyEBAADWUUgAAIB1FBIAcLH27dtrzpw5at++ve0ogCPBtgMAAC5ey5YtNXz4cNsxAMdYIQEAF/P5fHr99dfl8/lsRwEcoZAAgIvl5+frT3/6k/Lz821HARyhkAAAAOsoJAAAwDoKCQAAsI5CAgAuFhkZqSFDhigyMtJ2FMARjzHG2A5Rn8bcvhgAADQPjTl+s0ICAC52/vx5+f1+nT9/3nYUwBEKCQC4WEZGhmbPnq2MjAzbUQBHKCQAAMA6CgkAALCOQgIAAKyjkAAAAOu47BcAXKyiokKlpaUKCwtTixb8jInmpTHH7+BLlAkAEAAtWrRQRESE7RiAY9RpAHCxnJwcrVixQjk5ObajAI5QSADAxc6ePatPP/1UZ8+etR0FcIRCAgAArKOQAAAA6ygkAADAOgoJALhYmzZtNGPGDLVp08Z2FMARLvsFABeLiYnR6NGjbccAHGOFBABc7PTp03r//fd1+vRp21EARygkAOBi33zzjZ5++ml98803tqMAjlBIAACAdRQSAABgHYUEAABYRyEBABcLDw9Xjx49FB4ebjsK4IjHGGNsh6hPY25fDAAAmofGHL9ZIQEAANZRSADAxVJTUzVlyhSlpqbajgI4QiEBAADWUUgAAIB1FBIAAGAdhQQAAFjH3X4BwMU6dOig1atXKzY21nYUwBEKCQC4WEhIiOLi4mzHABzjLRsAcLFvvvlGTz31FHf7hetRSADAxU6fPq1du3bp9OnTtqMAjlBIAACAdRQSAABgHYUEAABYRyEBABdr3bq1Jk6cqNatW9uOAjjCZb8A4GKtWrXSxIkTbccAHGOFBABc7OzZs/r000919uxZ21EARygkAOBiOTk5WrFihXJycmxHARyhkAAAAOsoJAAAwDoKCQAAsI5CAgAuVnlzvZCQENtRAEc8xhhjO0R9/H6/vF6vfD6fYmJibMcBAAAN0JjjNyskAADAOgoJALhYRkaGZs2apYyMDNtRAEcoJADgYufPn1dRUZHOnz9vOwrgCIUEAABYRyEBAADWUUgAAIB1FBIAcLGEhAQ9/PDDSkhIsB0FcCTYdgAAwMULDw9Xz549bccAHGOFBABcLD8/X3/605+Un59vOwrgCIUEAFzM5/Pp9ddfl8/nsx0FcIRCAgAArKOQAAAA6zipFcAlNXLkSGVmZta5T8eOHbVjx45LlAhAc3DZr5B069ZN3bp1sx0DuGxkZmbWed+VjIyMegsL/r/o6GjdeOONio6Oth0FLtYcjoVWV0jOnTun0NBQmxEAWNCpUycdP3681sds/6foNm3bttWdd95pOwbgWINXSJ555hklJSWpoqKi2vYJEybojjvu0FdffaXk5GTFxcUpKipKw4YN09tvv11t3y5dumjp0qWaMWOGvF6vZs6c2TRTAMBlqrS0VKmpqSotLbUdBXDEY4wxDdkxPz9fCQkJev3113X99ddLkgoKChQfH6//+Z//UVxcnPbu3atrrrlG4eHhev755/XYY4/pyy+/VKdOnSR9W0gKCgq0aNEi/fjHP5Yk9ejRo8afVVpaWu2by+/3q2PHjvL5fIqJiXE6czXdunVTZmamOnbs2KSvC6B2ld9vda2Q8D3ZcGVlZSooKFDr1q0VEhJiOw5cqr7vy4vl9/vl9XobdPxu8ApJbGysxo4dqxdeeKFq20svvaTY2Fhdf/31uuKKKzRr1iwNHDhQPXv21NKlS9WtWzdt3bq12utcd911mj9/vnr06FFrGZGk5cuXy+v1Vn3wHxMAAN9tjTqHZOrUqfq3f/s3Pf300woLC9PGjRs1efJkBQUF6cyZM3r44Ye1bds2ff311yovL9fZs2drnLx25ZVX1vvn/OpXv9IvfvGLqs8rV0gCJRCtEEDtGnKOCN+TDZeamqr/+I//0KOPPqquXbvajgOXag7nbjWqkNx0002qqKjQa6+9pmHDhmnnzp16/PHHJUn333+/3njjDa1atUo9evRQRESEfvKTn+jcuXPVXqNly5b1/jlhYWEKCwtrTDQAAOBijSokERERuuWWW7Rx40YdO3ZMvXr10tChQyVJO3fu1IwZM3TzzTdLkk6fPq20tLQmDwzA/TIyMi74E1lGRkbVeWeoX4sWLRQeHq4WLS773+IAl2v0Zb9Tp07VTTfdpM8//1zTpk2r2t6jRw9t3rxZN910kzwejxYtWlTjipzmiGVh4NKq7+3XTp06cd5YI3Tu3Fn/9V//ZTsGXK45HAsbXUiuu+46xcbG6ssvv9SUKVOqtq9evVo//elPdc0116ht27b65S9/Kb/f36RhAbgfv4EVQG0afNmvTY25bAgALicnTpzQE088oblz5yopKcl2HKCagFz2CwBofs6dO6cTJ07UuIAAcBsKCQAAsI5CAgAArKOQAAAA6ygkAOBicXFx+vd//3fFxcXZjgI40ujLfgEAzUdkZGTVL6gE3IwVEgBwscLCQm3ZskWFhYW2owCOUEgAwMUKCgr05z//WQUFBbajAI5QSAAAgHUUEgAAYB2FBAAAWEchAQAXa9mypYYPH66WLVvajgI4ws31AABAQHBzPQC4TJSXlys/P1/l5eW2owCOUEgAwMUyMzN1zz33KDMz03YUwBEKCQAAsI5CAgAArKOQAAAA6ygkAADAOi77BQAXM8aovLxcwcHB8ng8tuMA1TTm+B18iTIBAALA4/EoJCTEdgzAMd6yAQAXy87O1iOPPKLs7GzbUQBHKCQA4GIlJSU6fPiwSkpKbEcBHKGQAAAA6ygkAADAOgoJAACwjkICAC7Wtm1bzZw5U23btrUdBXCEy34BwMWio6P1wx/+0HYMwDFWSADAxYqKivTuu++qqKjIdhTAEQoJALhYXl6e1q9fr7y8PNtRAEcoJAAAwDoKCQAAsI5CAgAArKOQAICLhYeHq2/fvgoPD7cdBXDEY4wxtkPUpzG3LwYAAM1DY47frJAAgIsZY1RWViYX/GwJ1IlCAgAulpaWpjvuuENpaWm2owCOUEgAAIB1FBIAAGAdhQQAAFhHIQEAANZxt18AcLGOHTvq97//Pb8SAa5HIQEAFwsODlZsbKztGIBjvGUDAC528uRJrVmzRidPnrQdBXCEQgIALnbmzBnt27dPZ86csR0FcIRCAgAArKOQAAAA6ygkAADAOgoJALhY69atdeutt6p169a2owCOcNkvALhYq1atlJycbDsG4BgrJADgYsXFxfrwww9VXFxsOwrgCIUEAFwsNzdXjz32mHJzc21HARyhkAAAAOsoJAAAwDoKCQAAsI5CAgAuFhoaqqSkJIWGhtqOAjjiMcYY2yHq4/f75fV65fP5uMU2AAAu0ZjjNyskAADAOgoJALhYenq6fvrTnyo9Pd12FMARCgkAuFhFRYVKSkpUUVFhOwrgCIUEAABYRyEBAADWUUgAAIB1FBIAcLHExEQ9+uijSkxMtB0FcCTYdgAAwMULCwtT165dbccAHGO
FBABcLC8vTxs2bFBeXp7tKIAjFBIAcLGioiK99dZbKioqsh0FcIRCAgAArKOQAAAA6ygkAADAOgoJALiY1+vV+PHj5fV6bUcBHOGyXwBwsdjYWE2bNs12DMAxVkgAwMVKSkp09OhRlZSU2I4COEIhAQAXy87O1pIlS5SdnW07CuAIhQQAAFhHIQEAANZRSAAAgHUUEgBwsaCgIEVHRysoKMh2FMARjzHG2A5RH7/fL6/XK5/Pp5iYGNtxAABAAzTm+M0KCQAAsI5CAgAulpWVpXnz5ikrK8t2FMARCgkAuFhZWZlyc3NVVlZmOwrgCIUEAABYRyEBAADWUUgAAIB1FBIAcLH4+HgtXLhQ8fHxtqMAjgTbDgAAuHgREREaNGiQ7RiAY6yQAICLFRYWKiUlRYWFhbajAI5QSADAxQoKCpSSkqKCggLbUQBHKCQAAMA6CgkAALCOQgIAAKyjkACAi0VFRen73/++oqKibEcBHPEYY4ztEPVpzO2LAQBA89CY4zcrJADgYtxcD98VFBIAcLGsrCzNmzdPWVlZtqMAjlBIAACAdRQSAABgHYUEAABYRyEBAADWcdkvAAAICC77BQAArkIhAQAXy87O1uLFi5WdnW07CuAIhQQAXKykpETHjh1TSUmJ7SiAIxQSAABgHYUEAABYRyEBAADWUUgAwMXatWunu+++W+3atbMdBXAk2HYAAMDFi4qK0g9+8APbMQDHWCEBABfz+/1688035ff7bUcBHKGQAICLnTp1Ss8995xOnTplOwrgCIUEAABYRyEBAADWUUgAAIB1FBIAcLGIiAgNGjRIERERtqMAjniMMcZ2iPo05vbFAACgeWjM8ZsVEgBwsYqKCp09e1YVFRW2owCOUEgAwMXS09P1s5/9TOnp6bajAI5QSAAAgHUUEgAAYB2FBAAAWEchAQAA1nG3XwBwsU6dOmnt2rVq2bKl7SiAIxQSAHCxoKAgfj8TvhN4ywYAXCw3N1erVq1Sbm6u7SiAIxQSAHCx4uJiffTRRyouLrYdBXCEQgIAAKyjkAAAAOsoJAAAwDoKCQC4WGxsrKZNm6bY2FjbUQBHuOwXAFzM6/Vq/PjxtmMAjrFCAgAudubMGe3bt09nzpyxHQVwhEICAC528uRJrVmzRidPnrQdBXCEQgIAAKyjkAAAAOsoJAAAwDoKCQC4WGhoqLp06aLQ0FDbUQBHPMYYYztEffx+v7xer3w+H3e1BADAJRpz/GaFBAAAWEchAQAXS0tL0+233660tDTbUQBHKCQA4GLGGJWXl8sF774DdaKQAAAA6ygkAADAOgoJAACwjrv9AoCLJSUlaeXKlWrfvr3tKIAjFBIAcLHQ0FB16NDBdgzAMd6yAQAXy8vL07p165SXl2c7CuAIhQQAXKyoqEjbt29XUVGR7SiAIxQSAABgHYUEAABYRyEBAADWUUgAwMW8Xq8mTJggr9drOwrgCJf9AoCLxcbGavLkybZjAI6xQgIALlZSUqJDhw6ppKTEdhTAEQoJALhYdna2li5dquzsbNtRAEcoJAAAwDoKCQAAsI5CAgAArKOQAICLBQcHKzY2VsHBXDQJd/MYY4ztEPXx+/3yer3y+XyKiYmxHQcAADRAY47frJAAAADrKCQA4GKZmZm65557lJmZaTsK4AiFBABcrLy8XPn5+SovL7cdBXCEQgIAAKyjkAAAAOsoJAAAwDoKCQC4WEJCgh588EElJCTYjgI4wm/SAQAXCw8PV79+/WzHABxjhQQAXCw/P18vvvii8vPzbUcBHKGQAICL+Xw+bd26VT6fz3YUwBEKCQAAsI5CAgAArKOQAAAA6ygkAOBi0dHRGjVqlKKjo21HARzxGGOM7RD1acztiwEAQPPQmOM3KyQA4GLnzp1TVlaWzp07ZzsK4AiFBABc7MSJE1qwYIFOnDhhOwrgiCt+U2vlu0p+v99yEgBoXoqKilRWVqaioiL+j0SzU/lvsiFnh7jiHJKsrCx17NjRdgwAAHARMjMz1aFDhzr3cUUhqaio0Ndff63o6Gh5PJ4GPcfv96tjx47KzMy87E6EZXZmZ/bLB7Mze3Oe3RijoqIiJSYmqkWLus8SccVbNi1atKi3WV1ITExMs/7LCiRmZ/bLDbMz++XGDbN7vd4G7cdJrQAAwDoKCQAAsO47W0jCwsK0ZMkShYWF2Y5yyTE7s19umJ3ZLzffxdldcVIrAAD4bvvOrpAAAAD3oJAAAADrKCQAAMA6CgkAALDOtYWkoKBA06dPl9frldfr1fTp01VYWFjnc4wxeuihh5SYmKiIiAiNGjVKn3/+ebV9Ro0aJY/HU+1j8uTJAZyk8QI1+z/uO27cOHk8Hr366qtNP4ADgZp91qxZ6t69uyIiItSuXTslJyfriy++COAkjReI2fPz83Xvvfeqd+/eioyMVKdOnXTffffJ5/MFeJrGCdTf+7p16zRq1CjFxMTI4/HU+5qXwtNPP62uXbsqPDxcQ4cO1c6dO+vcf8eOHRo6dKjCw8PVrVs3rV27tsY+KSkp6tevn8LCwtSvXz+98sorgYrvSFPP/vnnn2vixInq0qWLPB6PnnjiiQCmd6apZ1+/fr1GjBih1q1bq3Xr1rrhhhv0wQcfBHIE54xLjR071gwYMMDs3r3b7N692wwYMMD867/+a53PWbFihYmOjjYpKSnm4MGD5tZbbzUJCQnG7/dX7TNy5Egzc+ZMk52dXfVRWFgY6HEaJVCzV3r88cfNuHHjjCTzyiuvBGiKixOo2Z955hmzY8cOk5qaaj788ENz0003mY4dO5ry8vJAj9RggZj94MGD5pZbbjFbt241x44dM//7v/9revbsaSZOnHgpRmqwQP29r1692ixfvtwsX77cSDIFBQUBnqRuL774ogkJCTHr1683hw4dMnPmzDEtW7Y06enpte5//PhxExkZaebMmWMOHTpk1q9fb0JCQszLL79ctc/u3btNUFCQWbZsmTl8+LBZtmyZCQ4ONnv37r1UYzVIIGb/4IMPzPz5882mTZtMfHy8Wb169SWapnECMfuUKVPMU089ZQ4cOGAOHz5s7rzzTuP1ek1WVtalGqvRXFlIDh06ZCRV+4bas2ePkWS++OKLWp9TUVFh4uPjzYoVK6q2lZSUGK/Xa9auXVu1beTIkWbOnDkBy+5UIGc3xpiPP/7YdOjQwWRnZze7QhLo2f/RJ598YiSZY8eONd0ADlzK2f/yl7+Y0NBQU1ZW1nQDOHApZn/33XebRSG56qqrzOzZs6tt69Onj1m4cGGt+y9YsMD06dOn2rZZs2aZq6++uurzSZMmmbFjx1bbZ8yYMWby5MlNlLppBGL2f9S5c+dmW0gCPbsxxpSXl5vo6Gjz/PPPOw8cIK58y2bPnj3yer0aPnx41barr75aXq9Xu3fvrvU5qampysnJ0ejRo6u2hYWFaeTIkTWes3HjRrVt21b9+/fX/PnzVVRUFJhBLkIgZy8uLtZtt92m3//+94qPjw/cEBcp0H/vlc6cOaMNGzaoa9euzeYu05dqdkny+XyKiYlRcHDzuNXVpZ
zdpnPnzunDDz+sllmSRo8efcHMe/bsqbH/mDFjtH//fpWVldW5T3P6OgRqdje4VLMXFxerrKxMsbGxTRM8AFxZSHJyctS+ffsa29u3b6+cnJwLPkeS4uLiqm2Pi4ur9pypU6dq06ZN2r59uxYtWqSUlBTdcsstTZjemUDOPm/ePF1zzTVKTk5uwsRNJ5CzS9++hxsVFaWoqCj97W9/01tvvaXQ0NAmSu9MoGevdOrUKT3yyCOaNWuWw8RN51LNblteXp7Onz/fqMw5OTm17l9eXq68vLw692lOX4dAze4Gl2r2hQsXKikpSTfccEPTBA+AZlVIHnrooRonlP7zx/79+yVJHo+nxvONMbVu/0f//Pg/P2fmzJm64YYbNGDAAE2ePFkvv/yy3n77bX300UdNMOGF2Z5969ateuedd6yc9GV79kpTp07VgQMHtGPHDvXs2VOTJk1SSUmJw+nq1lxml769nfmPfvQj9evXT0uWLHEwVcM0p9mbk8Zmrm3/f97ulq9DIGZ3i0DOvnLlSm3atEmbN29WeHh4E6QNjOaxJvt/7rnnnnqvaOnSpYs+/fRT5ebm1njsm2++qdEaK1W+BZGTk6OEhISq7SdPnrzgcyRpyJAhCgkJ0dGjRzVkyJCGjHFRbM/+zjvv6KuvvlKrVq2qPXfixIkaMWKEtm/f3ohpGsf27JUqr+Do2bOnrr76arVu3VqvvPKKbrvttsaO1GDNZfaioiKNHTtWUVFReuWVVxQSEtLYURqtuczeXLRt21ZBQUE1fiquK3N8fHyt+wcHB6tNmzZ17tOcvg6Bmt0NAj37qlWrtGzZMr399tsaNGhQ04Zvapf8rJUmUHmS2759+6q27d27t0Enuf3mN7+p2lZaWlrvCX4HDx40ksyOHTuabgAHAjV7dna2OXjwYLUPSWbNmjXm+PHjgR2qgS7l33tpaamJiIgwGzZsaLL8TgRydp/PZ66++mozcuRIc+bMmcANcZEuxd97czqp9a677qq2rW/fvnWe3Ni3b99q22bPnl3jpNZx48ZV22fs2LHN8qTWpp79HzX3k1oDMfvKlStNTEyM2bNnT9MGDhBXFhJjvv2GGjRokNmzZ4/Zs2ePGThwYI3LAHv37m02b95c9fmKFSuM1+s1mzdvNgcPHjS33XZbtcsAjx07Zh5++GHz97//3aSmpprXXnvN9OnTxwwePLjZXf7Z1LPXRs3sKhtjAjP7V199ZZYtW2b2799v0tPTze7du01ycrKJjY01ubm5l3S+ugRidr/fb4YPH24GDhxojh07Vu1y98vh33x2drY5cOCAWb9+vZFk3nvvPXPgwAFz6tSpSzbbP6q8/PPZZ581hw4dMnPnzjUtW7Y0aWlpxhhjFi5caKZPn161f+Xln/PmzTOHDh0yzz77bI3LP3ft2mWCgoLMihUrzOHDh82KFSua9WW/TTl7aWmpOXDggDlw4IBJSEgw8+fPNwcOHDBHjx695PPVJRCz/+Y3vzGhoaHm5ZdfrvZ9XVRUdMnnayjXFpJTp06ZqVOnmujoaBMdHW2mTp1a46cbSdV+wq2oqDBLliwx8fHxJiwszFx77bXm4MGDVY9nZGSYa6+91sTGxprQ0FDTvXt3c99991n7z+lCAjF7bZpjIQnE7CdOnDDjxo0z7du3NyEhIaZDhw5mypQpF/zp25ZAzF65MlDbR2pq6qUZrAEC9W9+yZIltc5uc2XsqaeeMp07dzahoaFmyJAh1VZn77jjDjNy5Mhq+2/fvt0MHjzYhIaGmi5dupg//OEPNV7zpZdeMr179zYhISGmT58+JiUlJdBjXJSmnj01NbXWv99/fp3moKln79y5c62zL1my5BJMc3E8xvzfmTAAAACWNKurbAAAwOWJQgIAAKyjkAAAAOsoJAAAwDoKCQAAsI5CAgAArKOQAAAA6ygkAADAOgoJ0MyNGjVKc+fODchrX3vttXrhhRcC8tqSNGzYMG3evLlB+z777LMaPXp0wLLYMH/+fN133322YwCuQCEBLlPbtm1TTk5OtTvudunSRU888USNfR966CH9y7/8S7XPPR6PPB6PWrRoocTERE2dOlWZmZnVnrdo0SItXLhQFRUVdWYpLS3V4sWLtWjRoqptZWVl+vWvf63u3bsrPDxcV1xxhf72t7/VyFWZo/Kj8k6/lVatWqW4uDjFxcVp9erV1R7bt2+fhg4dqvPnz9eZT/r29u7r1q3T8OHDFRUVpVatWunKK6/UE088oeLi4lq/TgsWLNCGDRuUmppa7+sDlzsKCXCZ+t3vfqc777xTLVpc3H8D/fv3V3Z2trKysvTnP/9ZBw8e1KRJk6rt86Mf/Ug+n09vvPFGna+VkpKiqKgojRgxomrbgw8+qGeeeUZPPvmkDh06pNmzZ+vmm2/WgQMHas1R+XHw4MGqxw4ePKjFixdr06ZNeuGFF/TAAw/os88+k/Rt4Zk9e7bWrl2roKCgeuedPn265s6dq+TkZL377rv6+OOPtWjRIm3ZskVvvvlmrc9p3769Ro8erbVr19b7+sDljkICuExBQYFuv/12tW7dWpGRkRo3bpyOHj1abZ/169erY8eOioyM1M0336zHH39crVq1qno8Ly9Pb7/9tiZMmHDROYKDgxUfH6/ExESNGDFCM2fO1N69e+X3+6v2CQoK0vjx47Vp06Y6X+vFF1+skeWPf/yjHnjgAY0fP17dunXTXXfdpTFjxuixxx6rNUflR7t27aoeO3z4sAYNGqTrrrtO119/vQYNGqTDhw9Lkn7729/q2muv1bBhw+qd9S9/+Ys2btyoTZs26YEHHtCwYcPUpUsXJScn65133tEPf/jDCz53woQJ9c4PgEICuM6MGTO0f/9+bd26VXv27JExRuPHj1dZWZkkadeuXZo9e7bmzJmjjz/+WDfeeKMeffTRaq/x/vvvKzIyUn379m2STDk5Odq8ebOCgoJqrDZcddVV2rlzZ53P37lzp6688spq20pLSxUeHl5tW0REhN5///1q244eParExER17dpVkydP1vHjx6seGzhwoI4cOaKMjAylp6fryJEjGjBggI4dO6bnnntOS5cubdB8GzduVO/evZWcnFzjMY/HI6/Xe8HnXnXVVcrMzFR6enqD/izgckUhAVzk6NGj2rp1q/7zP/9TI0aM0BVXXKGNGzfqxIkTevXVVyVJTz75pMaNG6f58+erV69euvvuuzVu3Lhqr5OWlqa4uLha36755S9/qaioqGofy5Ytq7HfwYMHFRUVpcjISCUkJGj79u36+c9/rpYtW1bbLykpSRkZGRc8j6SwsFCFhYVKTEystn3MmDF6/PHHdfToUVVUVOitt97Sli1blJ2dXbXP8OHD9d///d964403tH79euXk5Oiaa67RqVOnJEl9+/bVsmXLdOONN2r06NFavny5+vbtq9mzZ2vlypV64403NGDAAA0ePFjvvfdenV/33r17X/DxuiQlJUn69msO4MKCbQcA0HCHDx9WcHCwhg8fXrWtTZs26t27d9VbEV9++aVuvvnmas+76qqrtG3btqrPz
549W2P1odL999+vGTNmVNv2u9/9rsYBu3fv3tq6datKS0u1ZcsWvfTSSzVWYqRvVzUqKipUWlqqiIiIGo+fPXtWkmrkWbNmjWbOnKk+ffrI4/Goe/fuuvPOO7Vhw4aqff6xaA0cOFDf+9731L17dz3//PP6xS9+IUmaPXu2Zs+eXbXfc889p+joaH3ve99T79699fe//11ZWVmaPHmyUlNTFRYWViOjMUYej6fWr1d9KmeuPPEVQO0oJICLGGMuuL3ygFnbwfOfn9e2bVsVFBTU+lpt27ZVjx49qm2LjY2tsV9oaGjVfv3799fRo0d111136Y9//GO1/fLz8xUZGVlrGZG+LVQej6dGnnbt2unVV19VSUmJTp06pcTERC1cuFBdu3at9XUkqWXLlho4cGCNc2oq5eXl6de//rXee+897du3T7169VLPnj3Vs2dPlZWV6ciRIxo4cGCN5/Xq1auq8DVWfn5+1TwALoy3bAAX6devn8rLy7Vv376qbadOndKRI0eqzgfp06ePPvjgg2rP279/f7XPBw8erJycnAuWkouxaNEibdq0SR999FG17Z999pmGDBlyweeFhoaqX79+OnToUK2Ph4eHKykpSeXl5UpJSan1PI5KpaWlOnz4sBISEmp9fO7cuZo3b546dOig8+fPV513I0nl5eUXvPx3ypQpOnLkiLZs2VLjMWOMfD7fBTN99tlnCgkJUf/+/S+4DwAKCeAqPXv2VHJysmbOnKn3339fn3zyiaZNm6akpKSqA/W9996r119/ver8i2eeeUZ//etfq62aDB48WO3atdOuXbuaLFu3bt2UnJysxYsXV9u+c+fOen/h2ZgxY2qcrLpv3z5t3rxZx48f186dOzV27FhVVFRowYIFVfvMnz9fO3bsUGpqqvbt26ef/OQn8vv9uuOOO2r8GW+99ZaOHj2qn//855K+fRvriy++0F//+letW7dOQUFBFzxPZNKkSbr11lt12223afny5dq/f7/S09O1bds23XDDDXr33XcvONvOnTs1YsSIC64QAfg/BkCzNnLkSDNnzpyqz/Pz88306dON1+s1ERERZsyYMebIkSPVnrNu3TqTlJRkIiIizI9//GOzdOlSEx8fX22fhQsXmsmTJ1fb1rlzZ7N69eoaGZYsWWKuuOKKC35eadeuXUaS2bt3rzHGmKysLBMSEmIyMzPrnPHw4cMmIiLCFBYWVm3bvn276du3rwkLCzNt2rQx06dPNydOnKj2vFtvvdUkJCSYkJAQk5iYaG655Rbz+eef13j94uJi06tXL3PgwIFq29evX2/i4uJMp06dzLZt2+rMeP78efOHP/zBDBs2zERGRpqYmBgzdOhQs2bNGlNcXHzBr0uvXr3Mpk2b6nxtAMZ4jLnAm9IAvjNmzpypL774otrlt7m5uerfv78+/PBDde7cOSB/7v333y+fz6d169bVu++kSZM0ePBg/epXvwpIFhtee+013X///fr0008VHMwpe0BdeMsG+A5atWqVPvnkEx07dkxPPvmknn/++RpvY8TFxenZZ59VRkZGwHK0b99ejzzySIP2/e1vf6uoqKiAZbHhzJkz2rBhA2UEaABWSIDvoEmTJmn79u0qKipSt27ddO+991a79BUAmhsKCQAAsI63bAAAgHUUEgAAYB2FBAAAWEchAQAA1lFIAACAdRQSAABgHYUEAABYRyEBAADW/T8je6jsLnWr0wAAAABJRU5ErkJggg==", "text/plain": [ "
" ] @@ -874,7 +874,7 @@ }, { "cell_type": "code", - "execution_count": 6, + "execution_count": 13, "metadata": {}, "outputs": [], "source": [ @@ -890,9 +890,17 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 14, "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "CUDA-enabled GPU/TPU is available.\n" + ] + } + ], "source": [ "# Constant parameters accross models\n", "# Detect available accelerator; Downgrade batch size if only CPU available\n", @@ -907,21 +915,6 @@ "LEARNING_RATE = 1e-2" ] }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "df_onehot = pd.get_dummies(df, columns=[\"horTh\", \"menostat\", \"tgrade\"]).astype(\"float\")\n", - "df_onehot.drop(\n", - " [\"horTh_no\", \"menostat_Post\", \"tgrade_I\"],\n", - " axis=1,\n", - " inplace=True,\n", - ")\n", - "df_onehot.head(5)" - ] - }, { "cell_type": "code", "execution_count": null, @@ -987,18 +980,7 @@ "cell_type": "code", "execution_count": null, "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "tensor([1.3927, 1.5773, 0.0192, 0.1983])" - ] - }, - "execution_count": 34, - "metadata": {}, - "output_type": "execute_result" - } - ], + "outputs": [], "source": [ "print('x_test', x_test.shape)\n", "print('events', test_event.shape)\n", From c38c6b38d5c14ba4761037fe5f58e8eb2c1289c6 Mon Sep 17 00:00:00 2001 From: corolth1 Date: Mon, 6 Jan 2025 14:14:30 -0500 Subject: [PATCH 13/19] added doctest --- docs/notebooks/loss_time_covariates.py | 52 +++++++- docs/notebooks/time_varying.ipynb | 173 ++++++++++++++----------- 2 files changed, 144 insertions(+), 81 deletions(-) diff --git a/docs/notebooks/loss_time_covariates.py b/docs/notebooks/loss_time_covariates.py index dfd5a20..0279de4 100644 --- a/docs/notebooks/loss_time_covariates.py +++ b/docs/notebooks/loss_time_covariates.py @@ -7,14 +7,40 @@ def neg_partial_time_log_likelihood( - log_hz: torch.Tensor, # Txnxp torch tensor, n is batch size, T number of time points, p is number of different covariates over time - time: torch.Tensor, # n length vector, time at which someone experiences event - events: torch.Tensor, # n length vector, boolean, true or false to determine if someone had an event + log_hz: torch.Tensor, + time: torch.Tensor, + events: torch.Tensor, reduction: str = "mean", ) -> torch.Tensor: """ - needs further work + Compute the negative partial log-likelihood for time-dependent covariates in a Cox proportional hazards model. + Args: + log_hz (torch.Tensor): A tensor of shape (T, n, p) where T is the number of time points, n is the batch size, + and p is the number of different covariates over time. + time (torch.Tensor): A tensor of length n representing the time at which an event occurs for each individual. + events (torch.Tensor): A boolean tensor of length n indicating whether an event occurred (True) or not (False) for each individual. + reduction (str, optional): Specifies the reduction to apply to the output: 'mean' | 'sum'. Default is 'mean'. + Returns: + torch.Tensor: The computed negative partial log-likelihood. If reduction is 'mean', returns the mean value. + If reduction is 'sum', returns the sum of the values. + Raises: + ValueError: If the specified reduction method is not 'mean' or 'sum'. 
+ + Examples: + >>> _ = torch.manual_seed(52) + >>> n = 10 # number of samples + >>> t = 5 # time steps + >>> time = torch.randint(low=5, high=250, size=(n,)).float() + >>> event = torch.randint(low=0, high=2, size=(n,)).bool() + >>> log_hz = torch.rand((t, n, 1)) + >>> neg_partial_time_log_likelihood(log_hz, time, event) + tensor(0.9456) + >>> neg_partial_time_log_likelihood(log_hz.squeeze(), time, event) # Also works with 2D tensor + tensor(0.9456) + >>> neg_partial_time_log_likelihood(log_hz, time, event, reduction='sum') + tensor(37.8241) """ + # only consider theta at tiem of pll = _partial_likelihood_time_cox(log_hz, time, events) @@ -86,7 +112,15 @@ def _partial_likelihood_time_cox( we want to identify the index of the covariate upon failure. We could either consider the last covariate before a series of zeros (requires special data formatting but could reduce issues as it automatically contains event time information). - + Examples: + >>> _ = torch.manual_seed(52) + >>> n = 3 # number of samples + >>> t = 3 # time steps + >>> time = torch.randint(low=5, high=250, size=(n,)).float() + >>> event = torch.randint(low=0, high=2, size=(n,)).bool() + >>> log_hz = torch.rand((t, n, 1)) + >>> _partial_likelihood_time_cox(log_hz, time, event) + tensor([-1.3772, -1.0683, -0.7879, -0.8220, 0.0000, 0.0000]) """ # Last dimension must be equal to 1 if shape == 3 if len(log_hz.shape) == 3: @@ -114,13 +148,13 @@ def _partial_likelihood_time_cox( log_hz_sorted_tj = torch.gather(log_hz_sorted, 1, idx.expand(log_hz_sorted.size())) # same step as in normal cox loss, just again, we consider Z(tau_j) where tau_j denotes event time to subject j - log_denominator_tj = torch.logcumsumexp(log_hz_sorted.flip(0), dim=0).flip(0) + log_cumulative_hazard = torch.logcumsumexp(log_hz_sorted.flip(0), dim=0).flip(0) # Keep only patients with events include = events_sorted.expand(log_hz_sorted.size()) # return the partial log likelihood - return (log_hz_sorted_tj - log_denominator_tj)[include] + return (log_hz_sorted_tj - log_cumulative_hazard)[include] def _time_varying_covariance( @@ -168,6 +202,10 @@ def _time_varying_covariance( if __name__ == "__main__": import torch from torchsurv.metrics.cindex import ConcordanceIndex + import doctest + + # Run doctest + results = doctest.testmod() # set seed torch.manual_seed(123) diff --git a/docs/notebooks/time_varying.ipynb b/docs/notebooks/time_varying.ipynb index da52850..8f87285 100644 --- a/docs/notebooks/time_varying.ipynb +++ b/docs/notebooks/time_varying.ipynb @@ -117,7 +117,7 @@ }, { "cell_type": "code", - "execution_count": 3, + "execution_count": null, "metadata": {}, "outputs": [], "source": [ @@ -130,7 +130,7 @@ "from sklearn.model_selection import train_test_split\n", "\n", "# Our package\n", - "#from torchsurv.loss.time_varying import neg_partial_log_likelihood2\n", + "# from torchsurv.loss.time_varying import neg_partial_log_likelihood2\n", "\n", "# PyTorch boilerplate - see https://github.com/Novartis/torchsurv/blob/main/docs/notebooks/helpers_introduction.py\n", "from helpers_introduction import Custom_dataset, plot_losses" @@ -170,7 +170,7 @@ }, { "cell_type": "code", - "execution_count": 4, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -191,14 +191,14 @@ "torch.manual_seed(123)\n", "\n", "n = 100 # Number of subjects\n", - "T = torch.tensor(6) # Number of time points\n", + "T = torch.tensor(6) # Number of time points\n", "time_vec = torch.tensor([0, 1, 2, 3, 4, 5])\n", "\n", "# Simulation parameters\n", "age_mean = 35\n", "age_std = 5\n", 
"sex_prob = 0.54\n", - "G = torch.tensor([[0.29, -0.00465],[-0.00465, 0.000320]])\n", + "G = torch.tensor([[0.29, -0.00465], [-0.00465, 0.000320]])\n", "Z = torch.tensor([[1, 1, 1, 1, 1, 1], time_vec], dtype=torch.float32).T\n", "sigma = torch.tensor([0.1161])\n", "alpha = 1\n", @@ -220,7 +220,12 @@ "\n", "# Generate expected longitudinal trajectories\n", "# quite frakly this is useless now - it was based on my bad understanding of the algorithm\n", - "trajectories = random_effects[:, 0].unsqueeze(1) + random_effects[:, 1].unsqueeze(1) * Z[:,1] + alpha * age.unsqueeze(1) + error_sample\n", + "trajectories = (\n", + " random_effects[:, 0].unsqueeze(1)\n", + " + random_effects[:, 1].unsqueeze(1) * Z[:, 1]\n", + " + alpha * age.unsqueeze(1)\n", + " + error_sample\n", + ")\n", "\n", "print(trajectories[1:5, :])" ] @@ -280,11 +285,11 @@ }, { "cell_type": "code", - "execution_count": 6, + "execution_count": null, "metadata": {}, "outputs": [], "source": [ - "#import lmbert W function\n", + "# import lmbert W function\n", "\n", "from scipy.special import lambertw" ] @@ -302,7 +307,7 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -328,15 +333,19 @@ "source": [ "# Specify the values for parameters, generate the random variables and call on relevant variables defined previously\n", "\n", - "alpha = torch.tensor([0.05, -0.5]) # regression coefficient for time-invariant covariates\n", - "gamma = torch.tensor(0.3) # association strength between longitudinal measures and time-to-event\n", + "alpha = torch.tensor(\n", + " [0.05, -0.5]\n", + ") # regression coefficient for time-invariant covariates\n", + "gamma = torch.tensor(\n", + " 0.3\n", + ") # association strength between longitudinal measures and time-to-event\n", "lambda_0 = torch.tensor(0.1) # baseline hazard rate\n", "\n", "torch.manual_seed(456)\n", "\n", "# Generate the random variables for hazard of a subject and censoring\n", "Q = dist.Uniform(0, 1).sample((n,)) # Random variable for hazard (Q)\n", - "C = dist.Uniform(3,5.5).sample((n,)) # Random variable for censoring\n", + "C = dist.Uniform(3, 5.5).sample((n,)) # Random variable for censoring\n", "\n", "# age and sex are the names of variables corresponding to those covariates\n", "# create the X matrix of covariates\n", @@ -348,19 +357,19 @@ "\n", "# Generate time to event T using the equation above\n", "log_Q = torch.log(Q)\n", - "lambert_W_nominator = gamma*b2*log_Q\n", - "lambert_W_denominator = torch.exp(alpha@XX.T + gamma*b1)\n", - "# below should give a vector of length sample_size \n", - "lambert_W = lambertw(-lambert_W_nominator/(lambda_0*lambert_W_denominator))\n", - "time_to_event = lambert_W/(gamma*b2)\n", + "lambert_W_nominator = gamma * b2 * log_Q\n", + "lambert_W_denominator = torch.exp(alpha @ XX.T + gamma * b1)\n", + "# below should give a vector of length sample_size\n", + "lambert_W = lambertw(-lambert_W_nominator / (lambda_0 * lambert_W_denominator))\n", + "time_to_event = lambert_W / (gamma * b2)\n", "\n", - "#take the real part of the LBF, the complex part is =0\n", + "# take the real part of the LBF, the complex part is =0\n", "outcome_LWF = time_to_event.real\n", "outcome_LWF = torch.floor(outcome_LWF)\n", "outcome_LWF\n", "\n", "# implement censoring with some level of intensity below\n", - "events = C<5\n", + "events = C < 5\n", "events" ] }, @@ -415,7 +424,7 @@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -430,8 +439,8 @@ } ], 
"source": [ - "#from torchsurv.loss import time_covariates\n", - "#from torchsurv.metrics.cindex import ConcordanceIndex\n", + "# from torchsurv.loss import time_covariates\n", + "# from torchsurv.metrics.cindex import ConcordanceIndex\n", "\n", "# Parameters\n", "input_size = 1\n", @@ -447,7 +456,7 @@ "print(test.shape)\n", "print(inputs.shape)\n", "\n", - "#initializa hidden state\n", + "# initializa hidden state\n", "h0 = torch.randn(num_layers, batch_size, output_size)\n", "print(h0.shape)\n", "# Forward pass time series input\n", @@ -458,7 +467,7 @@ "# print(f\"Estimate shape for {batch_size} samples = {estimates.size()}\") # Estimate shape for 8 samples = torch.Size([8, 1])\n", "\n", "\n", - "#loss = neg_loss_function(outputs, events, time)\n", + "# loss = neg_loss_function(outputs, events, time)\n", "# print(f\"loss = {loss}, has gradient = {loss.requires_grad}\") # loss = 1.0389232635498047, has gradient = True\n", "\n", "# cindex = ConcordanceIndex()\n", @@ -476,51 +485,52 @@ }, { "cell_type": "code", - "execution_count": 10, + "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", - "# as a reminder covars is the matrix of covariates where a row corresponds to a subject and a column corresponds to their observation at some time \n", + "# as a reminder covars is the matrix of covariates where a row corresponds to a subject and a column corresponds to their observation at some time\n", "# the columns are padded so if a subject experiences an event, the remaining of the column is zero\n", "\n", "# Generating example torch matrix\n", "torch_matrix = trajectories\n", "# Convert torch matrix to pandas dataframe\n", "\n", - "#set time to integer\n", + "# set time to integer\n", "max_time = max(time_vec.type(torch.int64))\n", "\n", - "vars = []\n", - "#times = []\n", + "variables = []\n", "start = []\n", "stop = []\n", "event = []\n", "subjs = []\n", + "\n", "for i in range(n):\n", " subj_counter = 0\n", - " for j in range(max_time):\n", - " if torch_matrix[i,j] == 0:\n", + " for j in range(1, max_time + 1):\n", + " if torch_matrix[i, j - 1] == 0:\n", " break\n", - " else:\n", - " vars.append(torch_matrix[i,j].item())\n", - " #times.append(j)\n", - " start.append(j-1)\n", - " stop.append(j)\n", - " event.append(False)\n", - " subj_counter += 1\n", + " variables.append(torch_matrix[i, j - 1].item())\n", + " start.append(j - 1)\n", + " stop.append(j)\n", + " event.append(False)\n", + " subj_counter += 1\n", " subjs.extend([i] * subj_counter)\n", - " if events[i]==True: event[-1]=True\n", - "\n", - "df = pd.DataFrame({\n", - " \"subj\": subjs,\n", - " #\"times\": times,\n", - " \"start\":start,\n", - " \"stop\": stop,\n", - " \"events\": event,\n", - " \"var\": vars, \n", - "})\n" + " if events[i]:\n", + " event[-1] = True\n", + "\n", + "df = pd.DataFrame(\n", + " {\n", + " \"subj\": subjs,\n", + " # \"times\": times,\n", + " \"start\": start,\n", + " \"stop\": stop,\n", + " \"events\": event,\n", + " \"var\": variables,\n", + " }\n", + ")" ] }, { @@ -532,7 +542,7 @@ }, { "cell_type": "code", - "execution_count": 11, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -723,7 +733,14 @@ "from lifelines import CoxTimeVaryingFitter\n", "\n", "ctv = CoxTimeVaryingFitter(penalizer=0.1)\n", - "ctv.fit(df, id_col=\"subj\", event_col=\"events\", start_col=\"start\", stop_col=\"stop\", show_progress=True)\n", + "ctv.fit(\n", + " df,\n", + " id_col=\"subj\",\n", + " event_col=\"events\",\n", + " start_col=\"start\",\n", + " stop_col=\"stop\",\n", + 
" show_progress=True,\n", + ")\n", "ctv.print_summary()\n", "ctv.plot()" ] @@ -874,16 +891,18 @@ }, { "cell_type": "code", - "execution_count": 13, + "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from lifelines.utils import to_long_format, add_covariate_to_timeline\n", "\n", - "base_df = pd.DataFrame([\n", - " {'id': 1, 'duration': 10, 'event': True, 'var1': 0.1},\n", - " {'id': 2, 'duration': 12, 'event': True, 'var1': 0.5}\n", - "])\n", + "base_df = pd.DataFrame(\n", + " [\n", + " {\"id\": 1, \"duration\": 10, \"event\": True, \"var1\": 0.1},\n", + " {\"id\": 2, \"duration\": 12, \"event\": True, \"var1\": 0.5},\n", + " ]\n", + ")\n", "\n", "base_df = to_long_format(base_df, duration_col=\"duration\")" ] @@ -982,39 +1001,39 @@ "metadata": {}, "outputs": [], "source": [ - "print('x_test', x_test.shape)\n", - "print('events', test_event.shape)\n", - "print('times', test_time.shape)\n", + "print(\"x_test\", x_test.shape)\n", + "print(\"events\", test_event.shape)\n", + "print(\"times\", test_time.shape)\n", "\n", "time_sorted, idx = torch.sort(time)\n", "log_hz_sorted = log_hz[idx]\n", "event_sorted = event[idx]\n", "time_unique = torch.unique(time_sorted)\n", - "print('')\n", + "print(\"\")\n", "print(\"time_sorted\", time_sorted.shape)\n", - "print('log_hz_sorted', log_hz_sorted.shape)\n", - "print('event_sorted', event_sorted.shape)\n", + "print(\"log_hz_sorted\", log_hz_sorted.shape)\n", + "print(\"event_sorted\", event_sorted.shape)\n", "print(\"time_unique\", time_unique.shape)\n", "\n", - "print('-'*30)\n", + "print(\"-\" * 30)\n", "cov_fake = torch.clone(x_test)\n", - "print('covariates', cov_fake.shape)\n", + "print(\"covariates\", cov_fake.shape)\n", "covariates_sorted = cov_fake[idx, :]\n", "covariate_inner_product = torch.matmul(covariates_sorted, covariates_sorted.T)\n", - "print('cov_inner', covariate_inner_product.shape)\n", + "print(\"cov_inner\", covariate_inner_product.shape)\n", "log_nominator_left = torch.matmul(log_hz_sorted.T, covariate_inner_product)\n", - "print('log_nom_left', log_nominator_left.shape)\n", + "print(\"log_nom_left\", log_nominator_left.shape)\n", "bracket = torch.mul(log_hz_sorted, covariates_sorted)\n", - "print('bracket', bracket.shape)\n", + "print(\"bracket\", bracket.shape)\n", "log_nominator_right = torch.matmul(bracket, bracket.T)\n", - "print('log_nom_right', log_nominator_right.shape)\n", + "print(\"log_nom_right\", log_nominator_right.shape)\n", "sum_nominator_right = log_nominator_right[0,].unsqueeze(0)\n", - "print('sum_nom', sum_nominator_right.shape)\n", + "print(\"sum_nom\", sum_nominator_right.shape)\n", "log_denominator = torch.logcumsumexp(log_hz_sorted.flip(0), dim=0).flip(0).T\n", - "print('log_denom', log_denominator.shape)\n", + "print(\"log_denom\", log_denominator.shape)\n", "last_bit = torch.div(log_nominator_left - sum_nominator_right, log_denominator)\n", - "print('last_bit', last_bit.shape)\n", - "last_bit\n" + "print(\"last_bit\", last_bit.shape)\n", + "last_bit" ] }, { @@ -1047,7 +1066,9 @@ "\n", "# make random positive time to event\n", "time = torch.rand(batch_size) * 100\n", - "print(time) # tensor([32.8563, 38.3207, 24.6015, 72.2986, 19.9004, 65.2180, 73.2083, 21.2663])\n", + "print(\n", + " time\n", + ") # tensor([32.8563, 38.3207, 24.6015, 72.2986, 19.9004, 65.2180, 73.2083, 21.2663])\n", "\n", "# Create simple RNN model\n", "rnn = torch.nn.RNN(input_size, output_size, num_layers)\n", @@ -1058,11 +1079,15 @@ "outputs, _ = rnn(inputs, h0)\n", "estimates = outputs[-1] # Keep only last 
predictions, many to one approach\n", "print(estimates.size()) # torch.Size([8, 1])\n", - "print(f\"Estimate shape for {batch_size} samples = {estimates.size()}\") # Estimate shape for 8 samples = torch.Size([8, 1])\n", + "print(\n", + " f\"Estimate shape for {batch_size} samples = {estimates.size()}\"\n", + ") # Estimate shape for 8 samples = torch.Size([8, 1])\n", "\n", "\n", "loss = cox.neg_partial_log_likelihood(estimates, events, time)\n", - "print(f\"loss = {loss}, has gradient = {loss.requires_grad}\") # loss = 1.0389232635498047, has gradient = True\n", + "print(\n", + " f\"loss = {loss}, has gradient = {loss.requires_grad}\"\n", + ") # loss = 1.0389232635498047, has gradient = True\n", "\n", "cindex = ConcordanceIndex()\n", "print(f\"c-index = {cindex(estimates, events, time)}\") # c-index = 0.20000000298023224" From e18ef07e77b565e549a5d8914366ed6de39aa8bf Mon Sep 17 00:00:00 2001 From: corolth1 Date: Mon, 6 Jan 2025 14:24:52 -0500 Subject: [PATCH 14/19] added doctest --- docs/notebooks/loss_time_covariates.py | 64 +++++++++----------------- src/torchsurv/metrics/brier_score.py | 2 +- 2 files changed, 22 insertions(+), 44 deletions(-) diff --git a/docs/notebooks/loss_time_covariates.py b/docs/notebooks/loss_time_covariates.py index 0279de4..650e7cb 100644 --- a/docs/notebooks/loss_time_covariates.py +++ b/docs/notebooks/loss_time_covariates.py @@ -1,4 +1,3 @@ -import sys import warnings import torch @@ -30,15 +29,23 @@ def neg_partial_time_log_likelihood( >>> _ = torch.manual_seed(52) >>> n = 10 # number of samples >>> t = 5 # time steps + >>> k = 16 # # covariates >>> time = torch.randint(low=5, high=250, size=(n,)).float() >>> event = torch.randint(low=0, high=2, size=(n,)).bool() - >>> log_hz = torch.rand((t, n, 1)) - >>> neg_partial_time_log_likelihood(log_hz, time, event) - tensor(0.9456) - >>> neg_partial_time_log_likelihood(log_hz.squeeze(), time, event) # Also works with 2D tensor - tensor(0.9456) - >>> neg_partial_time_log_likelihood(log_hz, time, event, reduction='sum') - tensor(37.8241) + >>> x = torch.rand((t, n, k)) + >>> h0 = torch.randn(t, n, 1) + >>> rnn = torch.nn.RNN(k, 1, t) + >>> estimates, _ = rnn(x, h0) + >>> neg_partial_time_log_likelihood(estimates, time, event) + tensor(0.9452, grad_fn=) + >>> neg_partial_time_log_likelihood(estimates.squeeze(), time, event) # Also works with 2D tensor + tensor(0.9452, grad_fn=) + >>> neg_partial_time_log_likelihood(estimates, time, event, reduction='sum') + tensor(37.8082, grad_fn=) + >>> from torchsurv.metrics.cindex import ConcordanceIndex + >>> cindex = ConcordanceIndex() + >>> cindex(estimates[-1].squeeze(), event, time) + tensor(0.5152) """ # only consider theta at tiem of @@ -200,42 +207,13 @@ def _time_varying_covariance( if __name__ == "__main__": - import torch - from torchsurv.metrics.cindex import ConcordanceIndex import doctest + import sys # Run doctest results = doctest.testmod() - - # set seed - torch.manual_seed(123) - - # Parameters - input_size = 8 # Irrelevant to the loss function - output_size = 1 # always 1 for Cox - seq_length = 5 # number of time steps - batch_size = 32 # number of samples - - # make random boolean events - events = torch.rand(batch_size) > 0.5 - # make random positive time to event - time = torch.rand(batch_size) * 100 - - # Create simple RNN model - rnn = torch.nn.RNN(input_size, output_size, seq_length) - rnn = torch.compile(rnn) - inputs = torch.randn(seq_length, batch_size, input_size) - h0 = torch.randn(seq_length, batch_size, output_size) - - # Forward pass time series input 
- outputs, _ = rnn(inputs, h0) - print(f"outputs shape = {outputs.size()}") - - # Loss - loss = neg_partial_time_log_likelihood(outputs, time, events) - print(f"loss = {loss}") - - # Cindex - cindex = ConcordanceIndex() - estimates = outputs[-1].squeeze() # Last outputs matter ?! @Melodie - print(f"C-index = {cindex(estimates, events, time)}") + if results.failed == 0: + print("All tests passed.") + else: + print("Some doctests failed.") + sys.exit(1) diff --git a/src/torchsurv/metrics/brier_score.py b/src/torchsurv/metrics/brier_score.py index e50ebe1..8900a54 100644 --- a/src/torchsurv/metrics/brier_score.py +++ b/src/torchsurv/metrics/brier_score.py @@ -1,5 +1,4 @@ import copy -import sys from typing import Optional import torch @@ -906,6 +905,7 @@ def _update_brier_score_weight( if __name__ == "__main__": import doctest + import sys # Run doctest results = doctest.testmod() From a476fea184f662e6145ddd43a003dad4609d8b97 Mon Sep 17 00:00:00 2001 From: corolth1 Date: Tue, 7 Jan 2025 10:29:20 -0500 Subject: [PATCH 15/19] working notebook --- docs/notebooks/time_varying.ipynb | 338 +++++------------------------- 1 file changed, 56 insertions(+), 282 deletions(-) diff --git a/docs/notebooks/time_varying.ipynb b/docs/notebooks/time_varying.ipynb index 8f87285..76cf27b 100644 --- a/docs/notebooks/time_varying.ipynb +++ b/docs/notebooks/time_varying.ipynb @@ -117,7 +117,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 3, "metadata": {}, "outputs": [], "source": [ @@ -130,7 +130,7 @@ "from sklearn.model_selection import train_test_split\n", "\n", "# Our package\n", - "# from torchsurv.loss.time_varying import neg_partial_log_likelihood2\n", + "#from torchsurv.loss.time_varying import neg_partial_log_likelihood2\n", "\n", "# PyTorch boilerplate - see https://github.com/Novartis/torchsurv/blob/main/docs/notebooks/helpers_introduction.py\n", "from helpers_introduction import Custom_dataset, plot_losses" @@ -170,7 +170,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 4, "metadata": {}, "outputs": [ { @@ -191,14 +191,14 @@ "torch.manual_seed(123)\n", "\n", "n = 100 # Number of subjects\n", - "T = torch.tensor(6) # Number of time points\n", + "T = torch.tensor(6) # Number of time points\n", "time_vec = torch.tensor([0, 1, 2, 3, 4, 5])\n", "\n", "# Simulation parameters\n", "age_mean = 35\n", "age_std = 5\n", "sex_prob = 0.54\n", - "G = torch.tensor([[0.29, -0.00465], [-0.00465, 0.000320]])\n", + "G = torch.tensor([[0.29, -0.00465],[-0.00465, 0.000320]])\n", "Z = torch.tensor([[1, 1, 1, 1, 1, 1], time_vec], dtype=torch.float32).T\n", "sigma = torch.tensor([0.1161])\n", "alpha = 1\n", @@ -220,12 +220,7 @@ "\n", "# Generate expected longitudinal trajectories\n", "# quite frakly this is useless now - it was based on my bad understanding of the algorithm\n", - "trajectories = (\n", - " random_effects[:, 0].unsqueeze(1)\n", - " + random_effects[:, 1].unsqueeze(1) * Z[:, 1]\n", - " + alpha * age.unsqueeze(1)\n", - " + error_sample\n", - ")\n", + "trajectories = random_effects[:, 0].unsqueeze(1) + random_effects[:, 1].unsqueeze(1) * Z[:,1] + alpha * age.unsqueeze(1) + error_sample\n", "\n", "print(trajectories[1:5, :])" ] @@ -285,11 +280,11 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 6, "metadata": {}, "outputs": [], "source": [ - "# import lmbert W function\n", + "#import lmbert W function\n", "\n", "from scipy.special import lambertw" ] @@ -307,7 +302,7 @@ }, { "cell_type": "code", - "execution_count": null, + 
"execution_count": 7, "metadata": {}, "outputs": [ { @@ -333,19 +328,15 @@ "source": [ "# Specify the values for parameters, generate the random variables and call on relevant variables defined previously\n", "\n", - "alpha = torch.tensor(\n", - " [0.05, -0.5]\n", - ") # regression coefficient for time-invariant covariates\n", - "gamma = torch.tensor(\n", - " 0.3\n", - ") # association strength between longitudinal measures and time-to-event\n", + "alpha = torch.tensor([0.05, -0.5]) # regression coefficient for time-invariant covariates\n", + "gamma = torch.tensor(0.3) # association strength between longitudinal measures and time-to-event\n", "lambda_0 = torch.tensor(0.1) # baseline hazard rate\n", "\n", "torch.manual_seed(456)\n", "\n", "# Generate the random variables for hazard of a subject and censoring\n", "Q = dist.Uniform(0, 1).sample((n,)) # Random variable for hazard (Q)\n", - "C = dist.Uniform(3, 5.5).sample((n,)) # Random variable for censoring\n", + "C = dist.Uniform(3,5.5).sample((n,)) # Random variable for censoring\n", "\n", "# age and sex are the names of variables corresponding to those covariates\n", "# create the X matrix of covariates\n", @@ -357,19 +348,19 @@ "\n", "# Generate time to event T using the equation above\n", "log_Q = torch.log(Q)\n", - "lambert_W_nominator = gamma * b2 * log_Q\n", - "lambert_W_denominator = torch.exp(alpha @ XX.T + gamma * b1)\n", - "# below should give a vector of length sample_size\n", - "lambert_W = lambertw(-lambert_W_nominator / (lambda_0 * lambert_W_denominator))\n", - "time_to_event = lambert_W / (gamma * b2)\n", + "lambert_W_nominator = gamma*b2*log_Q\n", + "lambert_W_denominator = torch.exp(alpha@XX.T + gamma*b1)\n", + "# below should give a vector of length sample_size \n", + "lambert_W = lambertw(-lambert_W_nominator/(lambda_0*lambert_W_denominator))\n", + "time_to_event = lambert_W/(gamma*b2)\n", "\n", - "# take the real part of the LBF, the complex part is =0\n", + "#take the real part of the LBF, the complex part is =0\n", "outcome_LWF = time_to_event.real\n", "outcome_LWF = torch.floor(outcome_LWF)\n", "outcome_LWF\n", "\n", "# implement censoring with some level of intensity below\n", - "events = C < 5\n", + "events = C<5\n", "events" ] }, @@ -434,13 +425,14 @@ "torch.Size([6, 100, 1])\n", "torch.Size([6, 100, 1])\n", "torch.Size([2, 100, 1])\n", - "torch.Size([6, 100, 1])\n" + "torch.Size([6, 100, 1])\n", + "loss = 1.091948390007019, has gradient = True\n" ] } ], "source": [ - "# from torchsurv.loss import time_covariates\n", - "# from torchsurv.metrics.cindex import ConcordanceIndex\n", + "#from torchsurv.loss import time_covariates\n", + "#from torchsurv.metrics.cindex import ConcordanceIndex\n", "\n", "# Parameters\n", "input_size = 1\n", @@ -456,22 +448,16 @@ "print(test.shape)\n", "print(inputs.shape)\n", "\n", - "# initializa hidden state\n", + "#initializa hidden state\n", "h0 = torch.randn(num_layers, batch_size, output_size)\n", "print(h0.shape)\n", "# Forward pass time series input\n", "outputs, _ = rnn(test, h0)\n", "print(outputs.shape)\n", - "# estimates = outputs[-1] # Keep only last predictions, many to one approach\n", - "# print(estimates.size()) # torch.Size([8, 1])\n", - "# print(f\"Estimate shape for {batch_size} samples = {estimates.size()}\") # Estimate shape for 8 samples = torch.Size([8, 1])\n", - "\n", - "\n", - "# loss = neg_loss_function(outputs, events, time)\n", - "# print(f\"loss = {loss}, has gradient = {loss.requires_grad}\") # loss = 1.0389232635498047, has gradient = True\n", "\n", - "# 
cindex = ConcordanceIndex()\n", - "# print(f\"c-index = {cindex(estimates, events, time)}\") # c-index = 0.20000000298023224" + "#outcome_LWF is the time someone experiences an event\n", + "loss = neg_loss_function(outputs, outcome_LWF, events)\n", + "print(f\"loss = {loss}, has gradient = {loss.requires_grad}\") # loss = 1.0389232635498047, has gradient = True\n" ] }, { @@ -485,52 +471,51 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", - "# as a reminder covars is the matrix of covariates where a row corresponds to a subject and a column corresponds to their observation at some time\n", + "# as a reminder covars is the matrix of covariates where a row corresponds to a subject and a column corresponds to their observation at some time \n", "# the columns are padded so if a subject experiences an event, the remaining of the column is zero\n", "\n", "# Generating example torch matrix\n", "torch_matrix = trajectories\n", "# Convert torch matrix to pandas dataframe\n", "\n", - "# set time to integer\n", + "#set time to integer\n", "max_time = max(time_vec.type(torch.int64))\n", "\n", - "variables = []\n", + "vars = []\n", + "#times = []\n", "start = []\n", "stop = []\n", "event = []\n", "subjs = []\n", - "\n", "for i in range(n):\n", " subj_counter = 0\n", - " for j in range(1, max_time + 1):\n", - " if torch_matrix[i, j - 1] == 0:\n", + " for j in range(max_time):\n", + " if torch_matrix[i,j] == 0:\n", " break\n", - " variables.append(torch_matrix[i, j - 1].item())\n", - " start.append(j - 1)\n", - " stop.append(j)\n", - " event.append(False)\n", - " subj_counter += 1\n", + " else:\n", + " vars.append(torch_matrix[i,j].item())\n", + " #times.append(j)\n", + " start.append(j-1)\n", + " stop.append(j)\n", + " event.append(False)\n", + " subj_counter += 1\n", " subjs.extend([i] * subj_counter)\n", - " if events[i]:\n", - " event[-1] = True\n", - "\n", - "df = pd.DataFrame(\n", - " {\n", - " \"subj\": subjs,\n", - " # \"times\": times,\n", - " \"start\": start,\n", - " \"stop\": stop,\n", - " \"events\": event,\n", - " \"var\": variables,\n", - " }\n", - ")" + " if events[i]==True: event[-1]=True\n", + "\n", + "df = pd.DataFrame({\n", + " \"subj\": subjs,\n", + " #\"times\": times,\n", + " \"start\":start,\n", + " \"stop\": stop,\n", + " \"events\": event,\n", + " \"var\": vars, \n", + "})\n" ] }, { @@ -542,7 +527,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 11, "metadata": {}, "outputs": [ { @@ -605,7 +590,7 @@ " \n", " \n", " time fit was run\n", - " 2025-01-03 20:10:47 UTC\n", + " 2025-01-07 15:10:43 UTC\n", " \n", " \n", "\n", @@ -689,7 +674,7 @@ " number of periods = 500\n", " number of events = 81\n", "partial log-likelihood = -324.16\n", - " time fit was run = 2025-01-03 20:10:47 UTC\n", + " time fit was run = 2025-01-07 15:10:43 UTC\n", "\n", "---\n", " coef exp(coef) se(coef) coef lower 95% coef upper 95% exp(coef) lower 95% exp(coef) upper 95%\n", @@ -733,14 +718,7 @@ "from lifelines import CoxTimeVaryingFitter\n", "\n", "ctv = CoxTimeVaryingFitter(penalizer=0.1)\n", - "ctv.fit(\n", - " df,\n", - " id_col=\"subj\",\n", - " event_col=\"events\",\n", - " start_col=\"start\",\n", - " stop_col=\"stop\",\n", - " show_progress=True,\n", - ")\n", + "ctv.fit(df, id_col=\"subj\", event_col=\"events\", start_col=\"start\", stop_col=\"stop\", show_progress=True)\n", "ctv.print_summary()\n", "ctv.plot()" ] @@ -749,7 +727,7 @@ "cell_type": "markdown", 
"metadata": {}, "source": [ - "## Testing it on the lifelines dataset\n", + "## Real life data: heart transplant survival\n", "\n", "This is to demonstrate the method with a neural network, example inspired by the [lifelines example](https://lifelines.readthedocs.io/en/latest/Time%20varying%20survival%20regression.html#).\n", "\n", @@ -888,210 +866,6 @@ "- `transplant`: received transplant 1=yes\n", "- `id`: patient id" ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from lifelines.utils import to_long_format, add_covariate_to_timeline\n", - "\n", - "base_df = pd.DataFrame(\n", - " [\n", - " {\"id\": 1, \"duration\": 10, \"event\": True, \"var1\": 0.1},\n", - " {\"id\": 2, \"duration\": 12, \"event\": True, \"var1\": 0.5},\n", - " ]\n", - ")\n", - "\n", - "base_df = to_long_format(base_df, duration_col=\"duration\")" - ] - }, - { - "cell_type": "code", - "execution_count": 14, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "CUDA-enabled GPU/TPU is available.\n" - ] - } - ], - "source": [ - "# Constant parameters accross models\n", - "# Detect available accelerator; Downgrade batch size if only CPU available\n", - "if any([torch.cuda.is_available(), torch.backends.mps.is_available()]):\n", - " print(\"CUDA-enabled GPU/TPU is available.\")\n", - " BATCH_SIZE = 128 # batch size for training\n", - "else:\n", - " print(\"No CUDA-enabled GPU found, using CPU.\")\n", - " BATCH_SIZE = 32 # batch size for training\n", - "\n", - "EPOCHS = 100\n", - "LEARNING_RATE = 1e-2" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "df_train, df_test = train_test_split(df_onehot, test_size=0.3)\n", - "df_train, df_val = train_test_split(df_train, test_size=0.3)\n", - "print(\n", - " f\"(Sample size) Training:{len(df_train)} | Validation:{len(df_val)} |Testing:{len(df_test)}\"\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Dataloader\n", - "dataloader_train = DataLoader(\n", - " Custom_dataset(df_train), batch_size=BATCH_SIZE, shuffle=True\n", - ")\n", - "dataloader_val = DataLoader(\n", - " Custom_dataset(df_val), batch_size=len(df_val), shuffle=False\n", - ")\n", - "dataloader_test = DataLoader(\n", - " Custom_dataset(df_test), batch_size=len(df_test), shuffle=False\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "cox_model = torch.nn.Sequential(\n", - " torch.nn.BatchNorm1d(num_features), # Batch normalization\n", - " torch.nn.Linear(num_features, 32),\n", - " torch.nn.ReLU(),\n", - " torch.nn.Dropout(),\n", - " torch.nn.Linear(32, 64),\n", - " torch.nn.ReLU(),\n", - " torch.nn.Dropout(),\n", - " torch.nn.Linear(64, 1), # Estimating log hazards for Cox models\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# This is for testing the loss function\n", - "x_test, (test_event, test_time) = next(iter(dataloader_train))\n", - "\n", - "log_hz = cox_model(x_test)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "print(\"x_test\", x_test.shape)\n", - "print(\"events\", test_event.shape)\n", - "print(\"times\", test_time.shape)\n", - "\n", - "time_sorted, idx = torch.sort(time)\n", - "log_hz_sorted = log_hz[idx]\n", - "event_sorted = event[idx]\n", 
- "time_unique = torch.unique(time_sorted)\n", - "print(\"\")\n", - "print(\"time_sorted\", time_sorted.shape)\n", - "print(\"log_hz_sorted\", log_hz_sorted.shape)\n", - "print(\"event_sorted\", event_sorted.shape)\n", - "print(\"time_unique\", time_unique.shape)\n", - "\n", - "print(\"-\" * 30)\n", - "cov_fake = torch.clone(x_test)\n", - "print(\"covariates\", cov_fake.shape)\n", - "covariates_sorted = cov_fake[idx, :]\n", - "covariate_inner_product = torch.matmul(covariates_sorted, covariates_sorted.T)\n", - "print(\"cov_inner\", covariate_inner_product.shape)\n", - "log_nominator_left = torch.matmul(log_hz_sorted.T, covariate_inner_product)\n", - "print(\"log_nom_left\", log_nominator_left.shape)\n", - "bracket = torch.mul(log_hz_sorted, covariates_sorted)\n", - "print(\"bracket\", bracket.shape)\n", - "log_nominator_right = torch.matmul(bracket, bracket.T)\n", - "print(\"log_nom_right\", log_nominator_right.shape)\n", - "sum_nominator_right = log_nominator_right[0,].unsqueeze(0)\n", - "print(\"sum_nom\", sum_nominator_right.shape)\n", - "log_denominator = torch.logcumsumexp(log_hz_sorted.flip(0), dim=0).flip(0).T\n", - "print(\"log_denom\", log_denominator.shape)\n", - "last_bit = torch.div(log_nominator_left - sum_nominator_right, log_denominator)\n", - "print(\"last_bit\", last_bit.shape)\n", - "last_bit" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## RNN Example from Github" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import torch\n", - "from torchsurv.loss import cox\n", - "from torchsurv.metrics.cindex import ConcordanceIndex\n", - "\n", - "# Parameters\n", - "input_size = 10\n", - "output_size = 1\n", - "num_layers = 2\n", - "seq_length = 5\n", - "batch_size = 8\n", - "\n", - "# make random boolean events\n", - "events = torch.rand(batch_size) > 0.5\n", - "print(events) # tensor([ True, False, True, True, False, False, True, False])\n", - "\n", - "# make random positive time to event\n", - "time = torch.rand(batch_size) * 100\n", - "print(\n", - " time\n", - ") # tensor([32.8563, 38.3207, 24.6015, 72.2986, 19.9004, 65.2180, 73.2083, 21.2663])\n", - "\n", - "# Create simple RNN model\n", - "rnn = torch.nn.RNN(input_size, output_size, num_layers)\n", - "inputs = torch.randn(seq_length, batch_size, input_size)\n", - "h0 = torch.randn(num_layers, batch_size, output_size)\n", - "\n", - "# Forward pass time series input\n", - "outputs, _ = rnn(inputs, h0)\n", - "estimates = outputs[-1] # Keep only last predictions, many to one approach\n", - "print(estimates.size()) # torch.Size([8, 1])\n", - "print(\n", - " f\"Estimate shape for {batch_size} samples = {estimates.size()}\"\n", - ") # Estimate shape for 8 samples = torch.Size([8, 1])\n", - "\n", - "\n", - "loss = cox.neg_partial_log_likelihood(estimates, events, time)\n", - "print(\n", - " f\"loss = {loss}, has gradient = {loss.requires_grad}\"\n", - ") # loss = 1.0389232635498047, has gradient = True\n", - "\n", - "cindex = ConcordanceIndex()\n", - "print(f\"c-index = {cindex(estimates, events, time)}\") # c-index = 0.20000000298023224" - ] } ], "metadata": { From bb594ce8188363c0459ddb776de3eb0593fc5ea2 Mon Sep 17 00:00:00 2001 From: corolth1 Date: Tue, 7 Jan 2025 12:01:39 -0500 Subject: [PATCH 16/19] added test with cindex --- docs/notebooks/loss_time_covariates.py | 24 +++++++++--------------- docs/notebooks/time_varying.ipynb | 8 ++++---- 2 files changed, 13 insertions(+), 19 deletions(-) diff --git 
a/docs/notebooks/loss_time_covariates.py b/docs/notebooks/loss_time_covariates.py index 650e7cb..f575533 100644 --- a/docs/notebooks/loss_time_covariates.py +++ b/docs/notebooks/loss_time_covariates.py @@ -1,9 +1,5 @@ -import warnings - import torch -MAX_TIME = 1e6 - def neg_partial_time_log_likelihood( log_hz: torch.Tensor, @@ -44,8 +40,11 @@ def neg_partial_time_log_likelihood( tensor(37.8082, grad_fn=) >>> from torchsurv.metrics.cindex import ConcordanceIndex >>> cindex = ConcordanceIndex() - >>> cindex(estimates[-1].squeeze(), event, time) - tensor(0.5152) + >>> cindex_t = torch.stack([cindex(log_hz_t, event, time) for log_hz_t in estimates.unbind(0)]) # Compute for each time step t + >>> cindex_t + tensor([0.6061, 0.2424, 0.5758, 0.3333, 0.5152]) + >>> cindex_t.mean() # Average over all time steps t + tensor(0.4545) """ # only consider theta at tiem of @@ -66,10 +65,11 @@ def neg_partial_time_log_likelihood( return loss +@torch.jit.script def _partial_likelihood_time_cox( - log_hz: torch.Tensor, # Txnxp torch tensor, n is batch size, T number of time points, p is number of different covariates over time - time: torch.Tensor, # n length vector, time at which someone experiences event - events: torch.Tensor, # n length vector, boolean, true or false to determine if someone had an event + log_hz: torch.Tensor, + time: torch.Tensor, + events: torch.Tensor, ) -> torch.Tensor: """ Calculate the partial log likelihood for the Cox proportional hazards model @@ -138,12 +138,6 @@ def _partial_likelihood_time_cox( if time.min() < 0: raise ValueError("Time values must be greater or equal to zero.") - # Maximum values in time do not exceed MAX_TIME and raise a warning - if time.max() > MAX_TIME: - warnings.warn( - f"Maximum value {MAX_TIME} in time vector exceeds the time dimension of the log_hz tensor." 
- ) - # Sort the time vector and the output of the RNN by the subjects who have earlier event time _, idx = torch.sort(time) diff --git a/docs/notebooks/time_varying.ipynb b/docs/notebooks/time_varying.ipynb index 76cf27b..4f011d4 100644 --- a/docs/notebooks/time_varying.ipynb +++ b/docs/notebooks/time_varying.ipynb @@ -415,7 +415,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 9, "metadata": {}, "outputs": [ { @@ -426,7 +426,7 @@ "torch.Size([6, 100, 1])\n", "torch.Size([2, 100, 1])\n", "torch.Size([6, 100, 1])\n", - "loss = 1.091948390007019, has gradient = True\n" + "loss = 1.095799446105957, has gradient = True\n" ] } ], @@ -590,7 +590,7 @@ " \n", " \n", " time fit was run\n", - " 2025-01-07 15:10:43 UTC\n", + " 2025-01-07 15:26:30 UTC\n", " \n", " \n", "\n", @@ -674,7 +674,7 @@ " number of periods = 500\n", " number of events = 81\n", "partial log-likelihood = -324.16\n", - " time fit was run = 2025-01-07 15:10:43 UTC\n", + " time fit was run = 2025-01-07 15:26:30 UTC\n", "\n", "---\n", " coef exp(coef) se(coef) coef lower 95% coef upper 95% exp(coef) lower 95% exp(coef) upper 95%\n", From 21fa787129103b79c55b1ea99a69a162e6d249b1 Mon Sep 17 00:00:00 2001 From: corolth1 Date: Tue, 7 Jan 2025 12:02:23 -0500 Subject: [PATCH 17/19] documentation --- docs/notebooks/loss_time_covariates.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/notebooks/loss_time_covariates.py b/docs/notebooks/loss_time_covariates.py index f575533..d346d54 100644 --- a/docs/notebooks/loss_time_covariates.py +++ b/docs/notebooks/loss_time_covariates.py @@ -40,8 +40,8 @@ def neg_partial_time_log_likelihood( tensor(37.8082, grad_fn=) >>> from torchsurv.metrics.cindex import ConcordanceIndex >>> cindex = ConcordanceIndex() - >>> cindex_t = torch.stack([cindex(log_hz_t, event, time) for log_hz_t in estimates.unbind(0)]) # Compute for each time step t - >>> cindex_t + >>> cindex_t = torch.stack([cindex(log_hz_t, event, time) for log_hz_t in estimates.unbind(0)]) + >>> cindex_t # Compute c-index for each time step t tensor([0.6061, 0.2424, 0.5758, 0.3333, 0.5152]) >>> cindex_t.mean() # Average over all time steps t tensor(0.4545) From 0c1a6506f448f3aded39cf3fe2efa4d924d3de96 Mon Sep 17 00:00:00 2001 From: corolth1 Date: Tue, 7 Jan 2025 12:03:53 -0500 Subject: [PATCH 18/19] documentation --- docs/notebooks/loss_time_covariates.py | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/notebooks/loss_time_covariates.py b/docs/notebooks/loss_time_covariates.py index d346d54..09d4c58 100644 --- a/docs/notebooks/loss_time_covariates.py +++ b/docs/notebooks/loss_time_covariates.py @@ -158,6 +158,7 @@ def _partial_likelihood_time_cox( return (log_hz_sorted_tj - log_cumulative_hazard)[include] +# Code below will be either deleted or moved to another file (e.g. 
From 7757e81256055a9886d3620bae47a922b0048a43 Mon Sep 17 00:00:00 2001
From: Dembowska
Date: Tue, 14 Jan 2025 13:40:30 +0100
Subject: [PATCH 19/19] started a test function to compute log-likelihood
 using lifelines source code

---
 docs/notebooks/time_varying.ipynb | 111 +++++++++++++++++++++++++++---
 1 file changed, 103 insertions(+), 8 deletions(-)

diff --git a/docs/notebooks/time_varying.ipynb b/docs/notebooks/time_varying.ipynb
index 4f011d4..09f4aad 100644
--- a/docs/notebooks/time_varying.ipynb
+++ b/docs/notebooks/time_varying.ipynb
@@ -179,7 +179,7 @@
     "text": [
      "tensor([[34.2016, 34.2186, 34.2356, 34.2526, 34.2696, 34.2866],\n",
      "        [33.4380, 33.4308, 33.4235, 33.4163, 33.4091, 33.4018],\n",
-     "        [31.5581, 31.5564, 31.5548, 31.5531, 31.5515, 31.5498],\n",
+     "        [31.5581, 31.5565, 31.5548, 31.5531, 31.5515, 31.5498],\n",
      "        [35.7813, 35.7953, 35.8093, 35.8233, 35.8373, 35.8513]])\n"
     ]
    }
@@ -280,7 +280,7 @@
    },
    {
     "cell_type": "code",
-    "execution_count": 6,
+    "execution_count": 5,
     "metadata": {},
     "outputs": [],
     "source": [
@@ -302,7 +302,7 @@
    },
    {
     "cell_type": "code",
-    "execution_count": 7,
+    "execution_count": 6,
     "metadata": {},
     "outputs": [
      {
      "        True, True, True, True, True, True, True, False, False, True])"
      ]
     },
-    "execution_count": 7,
+    "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
     }
    ]
@@ -401,7 +401,7 @@
    },
    {
     "cell_type": "code",
-    "execution_count": 8,
+    "execution_count": 7,
     "metadata": {},
     "outputs": [],
     "source": [
@@ -415,7 +415,7 @@
    },
    {
     "cell_type": "code",
-    "execution_count": 9,
+    "execution_count": 8,
     "metadata": {},
     "outputs": [
      {
@@ -426,7 +426,7 @@
      "torch.Size([6, 100, 1])\n",
      "torch.Size([2, 100, 1])\n",
      "torch.Size([6, 100, 1])\n",
-     "loss = 1.095799446105957, has gradient = True\n"
+     "loss = 1.098880648612976, has gradient = True\n"
      ]
     }
    ],
@@ -471,11 +471,12 @@
    },
    {
     "cell_type": "code",
-    "execution_count": 10,
+    "execution_count": null,
     "metadata": {},
     "outputs": [],
     "source": [
      "import pandas as pd\n",
+     "from numpy import sum as array_sum_to_scalar\n",
      "\n",
      "# as a reminder, covars is the matrix of covariates where a row corresponds to a subject and a column corresponds to their observation at some time\n",
      "# the columns are padded, so if a subject experiences an event, the remainder of the column is zero\n",
@@ -518,6 +519,100 @@
      "})\n"
     ]
    },
+   {
+    "cell_type": "markdown",
+    "metadata": {},
+    "source": [
+     "We will compute the log likelihood using the code from lifelines so we can compare our method to theirs. This snippet of code is taken from [cox_time_varying_fitter.py](https://github.com/CamDavidsonPilon/lifelines/blob/master/lifelines/fitters/cox_time_varying_fitter.py) on lines 499-550."
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 46,
+    "metadata": {},
+    "outputs": [
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "(100,)\n"
+      ]
+     },
+     {
+      "data": {
+       "text/plain": [
+        "array([2755.2053555])"
+       ]
+      },
+      "execution_count": 46,
+      "metadata": {},
+      "output_type": "execute_result"
+     }
+    ],
+    "source": [
+     "stop_times = df['stop']\n",
+     "start_times = df['start']\n",
+     "event_bool = df['events']\n",
+     "unique_death_times = np.unique(stop_times[event_bool])\n",
+     "covariates = df['var']\n",
+     "# the following is an internal column in lifelines; since we do not define it in this simulation, it is set to 1.0\n",
+     "# this is also done in the lifelines code at lines 182-185\n",
+     "weights = np.ones(len(df))\n",
+     "# below is defined at line 50 of the lifelines source; it sums a matrix along axis 0 into a 1d array\n",
+     "matrix_axis_0_sum_to_1d_array = lambda m: np.sum(m, 0)\n",
+     "# we will be replacing x*beta from the lifelines code with our outputs from the network as defined at the beginning of this notebook\n",
+     "# network_out = outputs\n",
+     "# print(network_out.shape)\n",
+     "beta = np.array([1.0])\n",
+     "# print(beta)\n",
+     "# print(beta.shape)\n",
+     "# np.dot(X_at_t, beta)\n",
+     "for t in unique_death_times:\n",
+     "\n",
+     "    # boolean vector over the n*T long-format rows whose (start, stop] interval covers t\n",
+     "    ix = (start_times < t) & (t <= stop_times)\n",
+     "    # covariates of the subjects at risk at time t\n",
+     "    X_at_t = covariates[ix]\n",
+     "    weights_at_t = weights[ix]\n",
+     "    stops_events_at_t = stop_times[ix]\n",
+     "    events_at_t = event_bool[ix]\n",
+     "\n",
+     "    # changed dot product to element-wise multiply, since np.dot no longer supports this usage\n",
+     "    phi_i = weights_at_t * np.exp(np.multiply(X_at_t, beta))\n",
+     "    print(phi_i.shape)\n",
+     "    # removed indexing from the original code because we only have one dimension\n",
+     "    phi_x_i = phi_i * X_at_t\n",
+     "    phi_x_x_i = np.dot(X_at_t.T, phi_x_i)\n",
+     "\n",
+     "    # Calculate sums of Risk set\n",
+     "    risk_phi = array_sum_to_scalar(phi_i)\n",
+     "    risk_phi_x = matrix_axis_0_sum_to_1d_array(phi_x_i)\n",
+     "    risk_phi_x_x = phi_x_x_i\n",
+     "\n",
+     "    # Calculate the sums of Tie set\n",
+     "    deaths = events_at_t & (stops_events_at_t == t)\n",
+     "\n",
+     "    tied_death_counts = array_sum_to_scalar(deaths.astype(int))  # should always be at least 1. Why? TODO\n",
+     "\n",
+     "    xi_deaths = X_at_t[deaths]\n",
+     "\n",
+     "    x_death_sum = matrix_axis_0_sum_to_1d_array(weights_at_t[deaths] * xi_deaths)\n",
+     "\n",
+     "    weight_count = array_sum_to_scalar(weights_at_t[deaths])\n",
+     "    weighted_average = weight_count / tied_death_counts\n",
+     "\n",
+     "\n",
+     "    # no tensors here, but do some casting to make it easier in the converging step next.\n",
+     "    denom = 1.0 / np.array([risk_phi])\n",
+     "    numer = risk_phi_x\n",
+     "    a1 = risk_phi_x_x * denom\n",
+     "\n",
+     "summand = numer * denom[:, None]\n",
+     "a2 = summand.T.dot(summand)\n",
+     "log_lik = np.dot(x_death_sum, beta) + weighted_average * np.log(denom).sum()\n",
+     "\n",
+     "log_lik"
+    ]
+   },
   {
    "cell_type": "markdown",
    "metadata": {},
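
A complementary check on the hand-ported snippet above would be to let lifelines fit the same long-format data frame and report its own partial log-likelihood. A sketch, assuming `df` carries `id`, `start`, `stop`, `var`, and `events` columns as constructed earlier in the notebook (the column names are an assumption):

    from lifelines import CoxTimeVaryingFitter

    ctv = CoxTimeVaryingFitter()
    # Column names here follow the simulated frame assembled above (assumption).
    ctv.fit(df, id_col="id", event_col="events", start_col="start", stop_col="stop")
    ctv.print_summary()         # coefficient table like the one shown earlier
    print(ctv.log_likelihood_)  # partial log-likelihood at the fitted coefficient

Note that the hand-computed value is only comparable to the fitted one when the snippet's fixed beta matches the coefficient lifelines estimates, so any agreement check should evaluate both at the same beta.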