Downstream processing and segmentation visualization #9

Open
TatianaSezin opened this issue Nov 4, 2024 · 2 comments
Labels
help wanted (Extra attention is needed)

Comments

@TatianaSezin

Hi guys,

I just finished generating our first Visium HD data and was very excited to read about ENACT.
I tried running ENACT on the output of our Visium HD experiment in human skin. These were the parameters we used:

segmentation: True
bin_to_geodataframes: True
bin_to_cell_assignment: True
cell_type_annotation: True
seg_method: stardist
patch_size: 4000
bin_representation: polygon
bin_to_cell_method: weighted_by_cluster
cell_annotation_method: celltypist
cell_typist_model: Adult_Human_Skin.pkl
use_hvg: True
n_hvg: 1000
n_clusters: 4

We obtained 6 patches. Is there a way to control the number of patches that are generated? Is there a way to visualize the segmentation results? Could you please suggest how one should use the output from ENACT to continue visualizing the annotated clusters in SquidPy?

Thank you so much in advance for any input and your great support so far!

@AlbertPlaPlanas added the help wanted label Nov 5, 2024
@AlbertPlaPlanas

Hi Tatiana!

For the first question, the patch_size: 4000 parameter determines the number of patches you generate. The bigger the patch size, the fewer the output patches. We need to break the image into patches due to memory issues (if you have enough memory and processing power, you could increase the patch size until you get a single patch).
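As a rough back-of-the-envelope illustration (not ENACT's exact tiling logic, and the image dimensions below are made up), you can estimate how many patches a given patch_size yields by counting how many square tiles cover the full-resolution image:

```python
import math

def estimate_patch_count(image_width_px: int, image_height_px: int, patch_size: int) -> int:
    """Estimate how many patch_size x patch_size tiles cover the image
    (assumption: the full-resolution image is tiled into square patches)."""
    return math.ceil(image_width_px / patch_size) * math.ceil(image_height_px / patch_size)

# Hypothetical 12000 x 8000 px image with patch_size=4000 -> 3 * 2 = 6 patches
print(estimate_patch_count(12000, 8000, 4000))

# Doubling patch_size to 8000 on the same image -> 2 * 1 = 2 patches
print(estimate_patch_count(12000, 8000, 8000))
```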

For the second question, the easiest way to visualize your outcomes would be through TissUUmaps ( https://tissuumaps.github.io/installation/ ), where you can (1) load the WSI image and then (2) load the resulting csv or h5ad as markers. This way you'll see the centroids of your segmented cells (and if you group them by cell_type, you'll also see their type).
It may be a good idea to write a short tutorial on how to load the data there; we'll eventually add it to the docs.
(screenshot attached)
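A minimal sketch of that marker export, assuming the ENACT AnnData stores cell centroids and predicted cell types in obs columns named `cell_x`, `cell_y`, and `cell_type` (the file path and column names are assumptions, so check them against your own output):

```python
import anndata as ad

# Hypothetical path to the ENACT output; adjust to your run's output folder
adata = ad.read_h5ad("enact_output/cells_adata.h5ad")

# Keep only the columns TissUUmaps needs: centroid coordinates and cell type
markers = adata.obs[["cell_x", "cell_y", "cell_type"]].copy()
markers.to_csv("tissuumaps_markers.csv", index=False)

# In TissUUmaps: load the WSI first, then add tissuumaps_markers.csv as markers,
# set the X/Y columns to cell_x/cell_y, and group/color by cell_type.
```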

For the squidpy question, I'll let my colleague @Mena-SA-Kamel answer it once he is back from a deserved autumn break.
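In the meantime, a minimal sketch of what plotting the annotations in squidpy could look like, assuming the ENACT AnnData carries spatial coordinates in `obsm["spatial"]` and the celltypist labels in `obs["cell_type"]` (both names are assumptions; this is not the official ENACT-to-squidpy workflow):

```python
import anndata as ad
import squidpy as sq

adata = ad.read_h5ad("enact_output/cells_adata.h5ad")  # hypothetical path

# Scatter the segmented cells at their spatial coordinates, colored by annotation.
# shape=None plots plain points rather than Visium spot shapes, which suits cell centroids.
sq.pl.spatial_scatter(adata, color="cell_type", shape=None)
```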

@TatianaSezin

Hi Albert,

thank you so much for this quick reply and your great help. I will give it a try. My end goal is to integrate several datasets, so I was wondering whether @Mena-SA-Kamel has experience with integrating multiple AnnData objects in squidpy with Visium HD?
Thanks again for your support!
Tatiana
