
Pathology Language and Image Pre-Training (PLIP) is the first vision and language foundation model for pathology AI. PLIP is a large-scale pre-trained model that can be used to extract visual and language features from pathology images and text descriptions. The model is a fine-tuned version of the original CLIP model.


Leveraging medical Twitter to build a visual–language foundation model for pathology AI

This GitHub repository contains the evaluation code.

PLIP
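
As a quick illustration of how PLIP can be used to extract visual and language features, here is a minimal sketch (not part of this repository's evaluation code). It assumes the pre-trained weights are published on the Hugging Face Hub under a model id such as "vinid/plip" and are loaded through the transformers CLIP classes; the file path and prompts are placeholders.

```python
# Minimal sketch: extracting PLIP image/text embeddings via Hugging Face transformers.
# The model id "vinid/plip" and the file/prompt names below are assumptions for illustration.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("vinid/plip")          # assumed Hub model id
processor = CLIPProcessor.from_pretrained("vinid/plip")

image = Image.open("patch.png")                          # placeholder pathology image patch
texts = ["an H&E image of breast tissue", "an H&E image of colon tissue"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

image_features = out.image_embeds   # visual features, shape (1, 512) for a ViT-B/32 backbone
text_features = out.text_embeds     # language features, shape (2, 512)
```

The resulting embeddings can then be compared with cosine similarity, which is what the zero-shot classification and image retrieval evaluations below rely on.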

Links

Config Env File

PC_CACHE_FOLDER: the folder in which cached embeddings are saved
PC_RESULTS_FOLDER: the folder in which results are saved
PC_EVALUATION_DATA_ROOT_FOLDER: the root folder that contains the evaluation datasets
PC_DEFAULT_BACKBONE: the backbone to use by default for PLIP
PC_CLIP_ARCH: the architecture used for PLIP and CLIP (e.g., "ViT-B/32")
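
As a rough sketch of how these variables might be consumed (the repository's actual loading code may differ, e.g. it may read them from a .env file):

```python
# Illustrative only: reading the configuration variables described above.
# The default values here are placeholders, not the repository's defaults.
import os

PC_CACHE_FOLDER = os.environ.get("PC_CACHE_FOLDER", "./cache")
PC_RESULTS_FOLDER = os.environ.get("PC_RESULTS_FOLDER", "./results")
PC_EVALUATION_DATA_ROOT_FOLDER = os.environ.get("PC_EVALUATION_DATA_ROOT_FOLDER", "./evaluation_datasets")
PC_DEFAULT_BACKBONE = os.environ.get("PC_DEFAULT_BACKBONE", "path/to/plip/backbone")
PC_CLIP_ARCH = os.environ.get("PC_CLIP_ARCH", "ViT-B/32")

os.makedirs(PC_CACHE_FOLDER, exist_ok=True)    # cached embeddings are written here
os.makedirs(PC_RESULTS_FOLDER, exist_ok=True)  # evaluation results are written here
```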

Evaluation Scripts

All evaluation scripts are found in path_eval/scripts. They use the abstractions found in path_eval/evaluation.
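
For intuition, the zero-shot classification evaluation that these scripts perform boils down to comparing normalized image and text embeddings. Here is a hypothetical sketch of such a metric; the function name and structure are illustrative, not the repository's API.

```python
# Hypothetical zero-shot classification metric, illustrating the kind of
# evaluation the scripts perform; not the repository's actual implementation.
import torch
import torch.nn.functional as F

def zero_shot_accuracy(image_embeds: torch.Tensor,
                       text_embeds: torch.Tensor,
                       labels: torch.Tensor) -> float:
    """image_embeds: (N, D) image features; text_embeds: (C, D) one prompt
    embedding per class; labels: (N,) ground-truth class indices."""
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    logits = image_embeds @ text_embeds.T   # cosine similarities, shape (N, C)
    preds = logits.argmax(dim=-1)           # nearest class prompt per image
    return (preds == labels).float().mean().item()
```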

Dataset Details

Data Location (For internal use only)

Classification Task: /v2/evaluation_datasets/classification

Retrieval Task: /v2/evaluation_datasets/image_retrieval

Setup Images

Data has been generated using the following split settings: seed=1_trainratio=0.70_size=224
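
A split tag like this can be read as: random seed 1, a 70/30 train/test split, and images resized to 224×224 pixels. Purely as an illustration of what those settings imply (this is not the code that generated the data, and the stratification choice and function names are assumptions):

```python
# Illustrative reconstruction of a seed=1_trainratio=0.70_size=224 style split;
# not the repository's data generation code.
from PIL import Image
from sklearn.model_selection import train_test_split

def make_split(image_paths, labels, seed=1, train_ratio=0.70):
    """Split file paths 70/30 with a fixed random seed."""
    train_paths, test_paths, train_labels, test_labels = train_test_split(
        image_paths, labels, train_size=train_ratio, random_state=seed
    )
    return (train_paths, train_labels), (test_paths, test_labels)

def load_image(path, size=224):
    """Load an image and resize it to size x size, as implied by size=224."""
    return Image.open(path).convert("RGB").resize((size, size))
```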

