
Commit

… into fix/cgds_unlinkdataset
juanNH committed Oct 15, 2024
2 parents 250bed4 + 2d487e6 commit 98696b3
Showing 13 changed files with 63 additions and 23 deletions.
4 changes: 2 additions & 2 deletions DEPLOYING.md
@@ -179,7 +179,7 @@ To integrate with [Modulector][modulector] and/or [BioAPI][bioapi] using `docker
name: 'multiomix-network'
```
3. The new versions of BioAPI and Modulector already come with service names suitable for integration with Multiomix. But **if you have any old version of those platforms**, change the Modulector and BioAPI configuration so that it does not conflict with the Multiomix configuration:
1. Rename all the services in the Modulector and BioAPI `docker-compose.yml` files with the suffixes `_modulector` and `_bioapi`. And rename the `web` service to `modulector` or `bioapi` respectively. **NOTE:** do not forget to rename the `depends_on` parameters and the database connection parameters to point to the new service names.
1. Rename all the services in the Modulector and BioAPI `docker-compose.yml` files with the suffixes `_modulector` and `_bioapi`. For example, `mongo_bioapi`, `web_bioapi` and `nginx_bioapi` in the case of BioAPI. **NOTE:** do not forget to rename the `depends_on` parameters and the database connection parameters to point to the new service names (see the sketch after this list).
2. Change the following block in the NGINX configuration files. In Modulector it's `config/nginx/conf.d/modulector.conf`, in BioAPI it's `/nginx/conf.d/default.conf`:
```
# Old
@@ -191,7 +191,7 @@ To integrate with [Modulector][modulector] and/or [BioAPI][bioapi] using `docker
# New
upstream web {
ip_hash;
server modulector:8000; # Or bioapi, depending on which config file you're
server web_modulector:8000; # Or web_bioapi, depending on which config file you're editing
}
```
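As a rough illustration of the first sub-step above, a renamed BioAPI `docker-compose.yml` might look like the following sketch. Only the service names follow the suffix convention described here; the images, environment variable names and values are placeholders, so adapt them to whatever your BioAPI version actually defines:
```
services:
  mongo_bioapi:
    image: mongo:4.4                  # placeholder image tag

  web_bioapi:
    build: .                          # or the image your BioAPI version uses
    depends_on:
      - mongo_bioapi                  # renamed dependency
    environment:
      # Point the database connection at the renamed Mongo service; the actual
      # variable name depends on your BioAPI version.
      MONGO_HOST: 'mongo_bioapi'

  nginx_bioapi:
    image: nginx:1.23                 # placeholder image tag
    depends_on:
      - web_bioapi                    # renamed dependency
```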
4. Set Multiomix parameters:
12 changes: 6 additions & 6 deletions docker-compose_dist.yml
@@ -60,7 +60,7 @@ services:

# Celery worker for correlation analysis
correlation-analysis-worker:
image: omicsdatascience/multiomix:5.3.0-celery
image: omicsdatascience/multiomix:5.4.0-celery
restart: 'always'
depends_on:
- db
@@ -77,7 +77,7 @@ services:

# Celery worker for feature selection experiments
fs-experiments-worker:
image: omicsdatascience/multiomix:5.3.0-celery
image: omicsdatascience/multiomix:5.4.0-celery
restart: 'always'
depends_on:
- db
@@ -94,7 +94,7 @@ services:

# Celery worker for statistical validations and trained models
stats-worker:
image: omicsdatascience/multiomix:5.3.0-celery
image: omicsdatascience/multiomix:5.4.0-celery
restart: 'always'
depends_on:
- db
@@ -111,7 +111,7 @@ services:

# Celery worker for inference experiments
inference-worker:
image: omicsdatascience/multiomix:5.3.0-celery
image: omicsdatascience/multiomix:5.4.0-celery
restart: 'always'
depends_on:
- db
@@ -128,7 +128,7 @@ services:

# Celery worker for sync CGDSStudies
sync-datasets-worker:
image: omicsdatascience/multiomix:5.3.0-celery
image: omicsdatascience/multiomix:5.4.0-celery
restart: 'always'
depends_on:
- db
@@ -145,7 +145,7 @@ services:

# Django Backend Server
multiomix:
image: omicsdatascience/multiomix:5.3.0
image: omicsdatascience/multiomix:5.4.0
restart: 'always'
# environment:
# DJANGO_SETTINGS_MODULE: 'multiomics_intermediate.settings_prod'
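If you run a deployment based on this compose file, the bumped `5.4.0` tags only take effect once the images are re-pulled and the services recreated. A minimal sequence, assuming Docker Compose v2 and the default compose file name, would be:
```
docker compose pull
docker compose up -d
```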
1 change: 1 addition & 0 deletions src/biomarkers/models.py
@@ -60,6 +60,7 @@ class Biomarker(models.Model):
cnas: QuerySet['CNAIdentifier']
mirnas: QuerySet['MiRNAIdentifier']
mrnas: QuerySet['MRNAIdentifier']

name: str = models.CharField(max_length=300)
description: Optional[str] = models.TextField(null=True, blank=True)
tag: Optional[Tag] = models.ForeignKey(Tag, on_delete=models.SET_NULL, default=None, blank=True, null=True)
@@ -0,0 +1,38 @@
# Generated by Django 4.2.15 on 2024-10-03 21:02

from django.db import migrations, models


class Migration(migrations.Migration):

dependencies = [
('feature_selection', '0054_alter_trainedmodel_state'),
]

operations = [
migrations.AlterField(
model_name='fsexperiment',
name='app_name',
field=models.CharField(blank=True, help_text='Spark app name to get the results', max_length=100, null=True),
),
migrations.AlterField(
model_name='fsexperiment',
name='attempt',
field=models.PositiveSmallIntegerField(default=0, help_text='Number of attempts to prevent a buggy experiment running forever'),
),
migrations.AlterField(
model_name='fsexperiment',
name='emr_job_id',
field=models.CharField(blank=True, help_text='Job ID in the Spark cluster', max_length=100, null=True),
),
migrations.AlterField(
model_name='fsexperiment',
name='execution_time',
field=models.PositiveIntegerField(default=0, help_text='Execution time in seconds'),
),
migrations.AlterField(
model_name='fsexperiment',
name='task_id',
field=models.CharField(blank=True, help_text='Celery Task ID', max_length=100, null=True),
),
]
15 changes: 8 additions & 7 deletions src/feature_selection/models.py
@@ -48,7 +48,7 @@ class ClusteringScoringMethod(models.IntegerChoices):


class SVMKernel(models.IntegerChoices):
"""SVM's kernel """
"""SVM kernel """
LINEAR = 1
POLYNOMIAL = 2
RBF = 3
@@ -102,10 +102,11 @@ class FSExperiment(models.Model):
rf_times_records: QuerySet['RFTimesRecord']
svm_times_records: QuerySet['SVMTimesRecord']
best_model: 'TrainedModel'

origin_biomarker = models.ForeignKey('biomarkers.Biomarker', on_delete=models.CASCADE,
related_name='fs_experiments_as_origin')
algorithm = models.IntegerField(choices=FeatureSelectionAlgorithm.choices)
execution_time = models.PositiveIntegerField(default=0) # Execution time in seconds
execution_time = models.PositiveIntegerField(default=0, help_text='Execution time in seconds')
created_biomarker = models.OneToOneField('biomarkers.Biomarker', on_delete=models.SET_NULL, null=True, blank=True,
related_name='fs_experiment')
user = models.ForeignKey(get_user_model(), on_delete=models.CASCADE)
@@ -122,14 +123,14 @@ class FSExperiment(models.Model):
methylation_source = models.ForeignKey('api_service.ExperimentSource', on_delete=models.CASCADE, null=True,
blank=True, related_name='fs_experiments_as_methylation')

task_id = models.CharField(max_length=100, blank=True, null=True) # Celery Task ID
task_id = models.CharField(max_length=100, blank=True, null=True, help_text='Celery Task ID')

# Number of attempts to prevent a buggy experiment running forever
attempt = models.PositiveSmallIntegerField(default=0)
attempt = models.PositiveSmallIntegerField(default=0, help_text='Number of attempts to prevent a buggy experiment '
'running forever')

# AWS-EMR fields
app_name = models.CharField(max_length=100, null=True, blank=True) # Spark app name to get the results
emr_job_id = models.CharField(max_length=100, null=True, blank=True) # Job ID in the Spark cluster
app_name = models.CharField(max_length=100, null=True, blank=True, help_text='Spark app name to get the results')
emr_job_id = models.CharField(max_length=100, null=True, blank=True, help_text='Job ID in the Spark cluster')

def get_all_sources(self):
"""Returns a list with all the sources."""
2 changes: 1 addition & 1 deletion src/multiomics_intermediate/settings.py
@@ -186,7 +186,7 @@
# +++++ Custom settings +++++

# Current Multiomix version
VERSION: str = '5.3.0'
VERSION: str = '5.4.0'

# Default primary key field type
# https://docs.djangoproject.com/en/4.0/ref/settings/#default-auto-field
2 changes: 1 addition & 1 deletion src/statistical_properties/survival_functions.py
@@ -104,7 +104,7 @@ def generate_survival_groups_by_median_expression(
def compute_c_index_and_log_likelihood(df: pd.DataFrame) -> Tuple[float, float]:
"""
Computes the C-Index and the partial Log-Likelihood from a DataFrame.
@param df: Pandas DataFrame. IMPORTANT: has to have 3 colunms: 'E' (event), 'T' (time), and 'group' (group in which
@param df: Pandas DataFrame. IMPORTANT: has to have 3 columns: 'E' (event), 'T' (time), and 'group' (group in which
the sample is).
@return: A tuple with the C-Index and the partial Log-Likelihood.
"""
2 changes: 1 addition & 1 deletion tools/k8s/multiomix-correlation-analysis-worker.yaml
@@ -36,7 +36,7 @@ spec:
- name: logs-data
emptyDir: {}
containers:
- image: omicsdatascience/multiomix:5.1.2-celery
- image: omicsdatascience/multiomix:5.4.0-celery
name: correlation-analysis-worker
env:
- name: QUEUE_NAME
2 changes: 1 addition & 1 deletion tools/k8s/multiomix-fs-experiments-worker.yaml
@@ -36,7 +36,7 @@ spec:
- name: logs-data
emptyDir: {}
containers:
- image: omicsdatascience/multiomix:5.1.2-celery
- image: omicsdatascience/multiomix:5.4.0-celery
name: fs-experiments-worker
env:
- name: QUEUE_NAME
2 changes: 1 addition & 1 deletion tools/k8s/multiomix-inference-worker.yaml
@@ -36,7 +36,7 @@ spec:
- name: logs-data
emptyDir: {}
containers:
- image: omicsdatascience/multiomix:5.1.2-celery
- image: omicsdatascience/multiomix:5.4.0-celery
name: inference-worker
env:
- name: QUEUE_NAME
2 changes: 1 addition & 1 deletion tools/k8s/multiomix-stats-worker.yaml
@@ -36,7 +36,7 @@ spec:
- name: logs-data
emptyDir: {}
containers:
- image: omicsdatascience/multiomix:5.1.2-celery
- image: omicsdatascience/multiomix:5.4.0-celery
name: stats-worker
env:
- name: QUEUE_NAME
2 changes: 1 addition & 1 deletion tools/k8s/multiomix-sync-datasets-worker.yaml
@@ -36,7 +36,7 @@ spec:
- name: logs-data
emptyDir: {}
containers:
- image: omicsdatascience/multiomix:5.1.2-celery
- image: omicsdatascience/multiomix:5.4.0-celery
name: sync-datasets-worker
env:
- name: QUEUE_NAME
2 changes: 1 addition & 1 deletion tools/k8s/multiomix.yaml
@@ -126,7 +126,7 @@ spec:
- key: "multiomics_intermediate.conf"
path: "multiomics_intermediate.conf"
containers:
- image: omicsdatascience/multiomix:4.7.1
- image: omicsdatascience/multiomix:5.4.0
name: multiomix
env:
- name: POSTGRES_USERNAME