
analyzing clustered data #147

Open · captcoma opened this issue Jan 25, 2019 · 35 comments

Comments

@captcoma

captcoma commented Jan 25, 2019

I thank Dr. Gallas and colleagues who provide iMRMC for MRMC analysis.

I have a study design with clustered data (4 observations per patient), and learned that handling of clustered data is not yet implemented in the iMRMC R package.
The setting is as follows: 4 readers (readerID) rate the probability of tumor extension on a subjective, ordinal scale (score 1-5) over the prostate capsule in 4 locations (locationID) in study participants (caseID) in modality A and B (modalityID).
The study design is fully crossed (the same readers and cases are used in both modalities and all readers read all cases).
The reference standard is histopathology (truth, either extension or no extension).
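
For illustration, a minimal long-format layout of one case's cluster under this design (a sketch; the column names are assumptions for illustration, not a required iMRMC input format):

# One case (patient) contributes 4 location-level observations (the cluster).
obs <- data.frame(
  readerID   = "reader1",
  caseID     = "case01",
  locationID = c("loc1", "loc2", "loc3", "loc4"),
  modalityID = "modalityA",
  score      = c(3, 1, 4, 2),   # ordinal 1-5 reads
  truth      = c(1, 0, 1, 0)    # histopathology: extension (1) or no extension (0)
)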

Primary hypotheses:
null hypothesis (H0): there is no difference in area under curve (AUC) between modality A and B
alternate hypothesis (H1): there is a difference in AUC between modality A and B

Secondary hypotheses (after dichotomisation: scores 1-3 = 0, scores 4-5 = 1; a recoding sketch follows the hypotheses):
(using: https://github.com/DIDSR/iMRMC/wiki/Estimate-sensitivity-and-specificity)
H01: there is no difference in Sensitivity between modality A and B
H11: there is a difference in Sensitivity between modality A and B
H02: there is no difference in Specificity between modality A and B
H12: there is a difference in Specificity between modality A and B
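
For concreteness, the dichotomisation above is a one-line recoding; a minimal sketch in R, assuming a long-format data frame df with the ordinal rating in a column named score (both names are illustrative, not iMRMC requirements):

df$scoreBinary <- ifelse(df$score >= 4, 1, 0)  # scores 4-5 -> 1, scores 1-3 -> 0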

I intend to investigate differences (AUC, Sens, Spec) at the reader-averaged performance level and at the single-reader performance level.
In a previous email Dr. Gallas recommended using a hierarchical bootstrap approach as outlined in https://www.tandfonline.com/doi/abs/10.1080/03610920802610084.
Unfortunately, I could not successfully implement this in R.

Could you please add a function to analyze clustered data with the iMRMC R package? Thank you very much.

Urs Mühlematter

@brandon-gallas
Member

brandon-gallas commented Jan 28, 2019

Hi Urs,

Thanks for clarifying your plans and sharing your data. As we discussed by email, I recommended the hierarchical bootstrap: bootstrap cases such that if a case is "in", then the case brings all the observations in that case's cluster. I had heard of this solution before and I believe it lives in the literature somewhere, but I can't put my finger on it right now. I thought this bootstrap was investigated in Obuchowski1997_Biometrics_v53p567, but when I reviewed that paper, I found that it adapts the method from DeLong1988_Biometrics_v44p837 ... not the bootstrap.
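
To make the resampling scheme concrete, here is a minimal sketch of the case-level (cluster) bootstrap in R. It only illustrates the idea described above; the function, the column name caseID, and the long-format data frame are assumptions, not part of the iMRMC package:

# Sketch only: resample cases with replacement; a drawn case brings along all
# observations (locations) in its cluster.
clusterBootstrap <- function(df, statFun, nBoot = 2000) {
  cases <- unique(df$caseID)
  stats <- numeric(nBoot)
  for (b in seq_len(nBoot)) {
    drawn <- sample(cases, length(cases), replace = TRUE)
    # Rebuild the data set cluster by cluster; relabel so that a case drawn
    # twice is treated as two distinct bootstrap cases.
    dfB <- do.call(rbind, lapply(seq_along(drawn), function(i) {
      cluster <- df[df$caseID == drawn[i], ]
      cluster$caseID <- paste0(drawn[i], "_boot", i)
      cluster
    }))
    stats[b] <- statFun(dfB)  # e.g., a reader-averaged AUC computed on dfB
  }
  stats
}

The spread of the returned statistics then estimates the case (cluster) component of variability of whatever statFun computes.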

Anyhow, I have put together an R package developing the model I recommended, and I shared with you a link to that package. I am not finished, but I thought I would share this version for you to see what I have been up to. Specifically, you can review/run the scripts in the "data-raw" directory and check out the functions that I created in the "R" directory.

Warnings:

  • The number of readers (4) in the data is pretty small for any MRMC analysis.
  • The number of signal-absent observations (19) is pretty small for any AUC estimate and for any MRMC analysis.
  • The degrees of freedom estimates (df, not p-values) when assuming the cluster data are independent are 13.7 (testA), 17.8 (testB), and 9.9 (testA-testB). I expect the dfs without the independence assumption are smaller. As you said, this is the easiest path forward, but you will need to state it as a limitation.
  • Because of the above, a Gaussian assumption is probably too optimistic, and I'm not sure right now what to use for the degrees of freedom in a test using the t distribution. The conservative approach would use 3 (the number of readers minus one = 4 - 1 = 3). Of course these two approaches bracket the truth and might be the best we are going to get in the short term (besides simply assuming the cluster data were all independent). (Addition: one could adapt the degrees of freedom estimate in Hillis2007_Stat-Med_v26p596.)
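
For reference, the two bracketing choices above differ only in the critical value used for a 95% interval; a quick check in R:

qnorm(0.975)       # Gaussian assumption: about 1.96
qt(0.975, df = 3)  # conservative t with (4 readers - 1) = 3 df: about 3.18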

I would like to try and get a simulation set up and code the method given by Obuchowski that uses the DeLong method (instead of the bootstrap).

Let me know that you got the package and what you think about what I've done so far. We'll talk later this week or Monday.

Brandon

@brandon-gallas
Member

If/when we come up with a solution for the MRMC analysis of the reader-averaged AUC, we will also have the solution for sensitivity, specificity, and single readers. We will publish the solution.

Also, if/when we come up with a solution, we should post it to the Stack Exchange question you posted. I'm recording that link here so that it is easier to find.
https://stats.stackexchange.com/questions/381182/hierarchical-bootstrapping-and-calculation-of-variance-in-a-random-effects-roc

Thanks for the pointer to another data set that we could consider for this work: Becker2017_Invest-Radiol_v52p434. I see that the authors say that, "Diagnostic accuracy was expressed as the area under the ROC curve (AUC) and compared with the nonparametric test by DeLong et al [22], and the method proposed by Obuchowski [23] where applicable, which accounts for multiple views per patient."

If you can send me the code that does the above, that would help.

@captcoma
Author

captcoma commented Jan 29, 2019

Hi Brandon,

Thank you for sharing the R package implementing the model you recommended. I got the package and it looks very promising; I will review it in detail.

I think Becker et al.'s analysis included this R code: https://www.lerner.ccf.org/qhs/software/lib/funcs_clusteredROC.R which is available here: https://www.lerner.ccf.org/qhs/software/roc_analysis.php
However, I will ask my colleague for further details.

I will keep you posted.

@ASBecker

@captcoma correct, that is the code I used

@captcoma
Author

captcoma commented Feb 3, 2019

Hi Brandon

Thank you again for sharing a solution using your model. I am aware of the limitations of our rather small data set; however, this is very often the reality in medicine, especially when investigating very new imaging techniques.

I reviewed your data and the included R functions. These functions work well with my data and, as far as my limited knowledge of statistics allows me to judge, they appear valid.

I still have some questions:

  • How did you obtain aucMTG?

  • Can the doAUCmrmcHierarchicalBootstrapCluster() function also be used to get p-values for each reader comparing testA and testB?

I uploaded the data and R code of Becker2017_Invest-Radiol_v52 (by courtesy of Anton).

I made a note referring to your package and this discussion on Stack Exchange, and I will also link the final implementation as a solution to my question there.

Thank you for your help,
Urs

@brandon-gallas
Member

I am not yet comfortable with the bootstrapping result.

I am more comfortable with the path described below than with the bootstrap solution.

I have now implemented the Obuchowski and Rockette (OR) MRMC analysis method (Obuchowski1995_Commun-Stat-Simulat_v24p285) with the Hillis degrees of freedom (Hillis2007_Stat-Med_v26p596), where the covariance matrix needed by the OR method uses the cluster method by Obuchowski (Obuchowski1997_Biometrics_v53p567). I used Obuchowski's code that you shared from the Cleveland Clinic website to generate the covariance matrix (funcs_clusteredROC.R, with minor adaptations). I have uploaded my new package (iMRMCcluster_0.2.0.tar.gz) to the FDA box folder that I shared with you before. Check out the script 05_doMRMCaucORcluster.R in the data-raw directory. It has a lot of what you need. It's not as polished as I would like right now, but I've got family calling me for lunch and Sunday time together.

I still want to run some simulations to validate. Bit by bit, we're getting there.

Brandon

@brandon-gallas
Member

Regarding your questions:

  • How did you obtain aucMTG?

Look at 01_readDataMTG. That's where I calculate aucMTG and save it (see commented code at the bottom of the script). I use iMRMC and assume the cluster data is independent.

  • Can the doAUCmrmcHierarchicalBootstrapCluster() function also be used to get p-values for each reader comparing testA and testB?

I'm not yet comfortable with the bootstrap methods that I have shared. The latest work uses the cluster method by Obuchowski (Obuchowski1997_Biometrics_v53p567). The script 05_doMRMCaucORcluster.R provides all the AUCs and covariances for all the readers and modalities. If you assume normality, you can generate all your confidence intervals and p-values. However, I don't think you should present p-values in a paper or a presentation for all your readers. That would lead to too many p-values, none of which account for multiple hypotheses. I recommend saving the power of your data for the primary (reader-averaged) endpoints and reporting standard errors and confidence intervals.
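
As an illustration of the normal-theory calculation for the reader-averaged difference (the variances and covariance below are placeholders, not output of the package):

aucA  <- 0.67; aucB <- 0.75       # reader-averaged AUCs
varA  <- 0.0016; varB <- 0.0014   # placeholder variances of the two estimates
covAB <- 0.0009                   # placeholder covariance between modalities
seDiff <- sqrt(varA + varB - 2 * covAB)
zStat  <- (aucB - aucA) / seDiff
pValue <- 2 * pnorm(-abs(zStat))                            # two-sided p-value
ci95   <- (aucB - aucA) + c(-1, 1) * qnorm(0.975) * seDiff  # 95% CI for the difference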

Brandon

@captcoma
Author

captcoma commented Feb 3, 2019

Thank you for the update.

I could not find the doMRMCaucORcluster.R / data-raw folder in the new package (iMRMCcluster_0.2.0.tar.gz) at the FDA box folder.

Can doMRMCaucORcluster() also be used to analyze sensitivity and specificity?

@brandon-gallas
Member

I uploaded the full folder structure.
I also moved the scripts into inst/scripts. When you install the package, the scripts directory should appear one level up in the installed package directory (without the inst directory).

Right now there is no doMRMCaucORcluster.R function, just the 05_doMRMCaucORcluster.R script. I assume that this can be modified for sensitivity and specificity as we have done before, but I have not tried to do it. I'll get back to you soon.

Brandon

@brandon-gallas
Member

How did you find the data-raw folder before?

@captcoma
Author

captcoma commented Feb 4, 2019

I actually did not get a data-raw folder when installing iMRMCcluster_0.1.0.tar.gz or iMRMCcluster_0.2.0.tar.gz. However, with iMRMCcluster_0.3.0.tar.gz I get a scripts folder (containing data and scripts) after installation. I can now review not only your functions but also your scripts.

Thank you for your advice regarding p-values, I will focus on the primary endpoints and try to generate the confidence intervals from the AUCs and covariances.

If I succeed, and if there is a chance to use your method for sensitivity and specificity too, I will get my paper presentation ready for the European Congress of Radiology. We can, of course, use the datasets for validation.

Thank you for your generous support
Urs

@brandon-gallas
Member

iMRMCcluster_0.4.0.tar.gz includes two scripts that analyze sensitivity and specificity.

  • 05_doMRMCfpfORcluster.R
  • 05_doMRMCfpfORcluster.R

That should get you everything for a conference. The next step is simulations.

For my benefit: the untar command in R will extract a source package.
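
For example (using the file name mentioned above):

untar("iMRMCcluster_0.4.0.tar.gz")  # extracts the source package into the working directory
# or install directly from the source tarball:
install.packages("iMRMCcluster_0.4.0.tar.gz", repos = NULL, type = "source")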

@captcoma
Author

captcoma commented Feb 5, 2019

Thank you very much, iMRMCcluster_0.4.0.tar.gz is marvellous.

Just to make sure that I understood your scripts correctly:
reader-averaged AUC testA=0.67 and testB=0.75 (p=0.073)
reader-averaged sensitivity testA=0.28 and testB=0.47 (p=0.094)
reader-averaged specificity testA=0.94 and testB=0.90 (p=0.007)

  • How can I explain (in a simple phrase) that the rather small difference in specificity is significant, while the bigger difference in sensitivity is not? E.g., is it caused by sampling variance due to a small sample size?

  • If I want to calculate the 95%-CI of the reader-averaged AUC (assuming normality), is the following correct?

error <- qnorm(0.975)*sqrt(aucMTG.OR.new$se.i[1])/sqrt(40) #n=40 correct?
botCI <- aucMTG.OR.new$theta.i[1]-error
topCI <- aucMTG.OR.new$theta.i[1]+error
  • If I want to calculate the 95%-CI of reader3's sensitivity in testB (assuming normality), is the following correct?
error <- qnorm(0.975)*sqrt(tpfMTG.OR.new$cov.hat[7,3])/sqrt(40) #n=40 correct?, #correct variance picked?
botCI <- tpfMTG.OR.new$theta.hat[2,3]-error 
topCI <- tpfMTG.OR.new$theta.hat[2,3]+error #correct variance picked?

@brandon-gallas
Member

I regret that you have an error above when you determine "error". This is largely due to the fact that the code and scripts are not fully fleshed-out and polished. I have revised what you wrote, replacing your questions with statements that I think are true, as much as statistics ever are.

reader-averaged AUC testA=0.67 and testB=0.75 (p=0.073)
reader-averaged sensitivity testA=0.28 and testB=0.47 (p=0.094)
reader-averaged specificity testA=0.94 and testB=0.90 (p=0.007)
-BDG: Note that the script labels the result as fpf, the false positive fraction, but this label is wrong. You have managed to see through this typo and interpret the numbers as being specificity. I have updated the script to label the result as tnf, the true negative fraction = specificity.

The rather small difference in specificity is significant while the larger difference in sensitivity is not significant for two reasons. The main reason is that specificity is based on 141 negative locations while sensitivity is based on only 19 positive locations. Less important here, the variability of a binomial random variable is largest when the mean = 0.5 and gets smaller as the mean approaches the extremes of zero or one.
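
A back-of-the-envelope check of the first point, treating the locations as independent and ignoring reader and cluster correlation (illustrative only):

# Rough binomial standard errors, sqrt(p * (1 - p) / n), per reader:
sqrt(0.92 * (1 - 0.92) / 141)  # specificity, 141 negatives: about 0.023
sqrt(0.37 * (1 - 0.37) / 19)   # sensitivity,  19 positives: about 0.11
# A 0.04 specificity difference is large relative to ~0.023, while a 0.19
# sensitivity difference is not large relative to ~0.11.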

  • To calculate the 95%-CI of the reader-averaged AUC (assuming normality), do the following.
botCI <- aucMTG.OR.new$theta.i[1] - qnorm(0.975) * aucMTG.OR.new$se.i[1]
topCI <- aucMTG.OR.new$theta.i[1] + qnorm(0.975) * aucMTG.OR.new$se.i[1]

-BDG: There is no reason to scale the SE produced by the method; it already is the standard error, i.e., the standard deviation of the mean.

  • To calculate the 95%-CI of reader3's sensitivity of testB (assuming normality), do the following
botCI <- tpfMTG.OR.new$theta.hat[2,3] - qnorm(0.975) * sqrt(tpfMTG.OR.new$cov.hat[7,7])
topCI <- tpfMTG.OR.new$theta.hat[2,3] + qnorm(0.975) * sqrt(tpfMTG.OR.new$cov.hat[7,7])

-cov.hat[7,7] is the variance of reader3's testB sensitivity (its square root is the standard error used above).
-cov.hat[7,3] = cov.hat[3,7] is the covariance between reader3's testB sensitivity and reader3's testA sensitivity.
-cov.hat[8,3] = cov.hat[3,8] is the covariance between reader4's testB sensitivity and reader3's testA sensitivity.
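
To show where the off-diagonal covariance enters, here is a sketch of a 95% CI for reader3's within-reader difference (testB minus testA sensitivity), assuming the same indexing as above and assuming theta.hat[1,3] is reader3's testA sensitivity:

diffR3 <- tpfMTG.OR.new$theta.hat[2,3] - tpfMTG.OR.new$theta.hat[1,3]
seDiff <- sqrt(tpfMTG.OR.new$cov.hat[7,7] + tpfMTG.OR.new$cov.hat[3,3]
               - 2 * tpfMTG.OR.new$cov.hat[7,3])
botCI <- diffR3 - qnorm(0.975) * seDiff
topCI <- diffR3 + qnorm(0.975) * seDiff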

Let me know if you are unclear about anything here. If you give me a chance, I'll double check what you put in your presentation. I'll want to do simulations before doing something in a peer-reviewed journal.

@ZhuYunkai123

I also have a study design with clustered data about prostate imaging.
The setting is as follows: 2 readers (readerID) rate the probability of prostate cancer on a subjective, ordinal scale (score 1-5) in modality A, and another 2 readers rate the probability of prostate cancer on the same scale in modality B. Each prostate gland is divided into 36 regions. Can you share iMRMCcluster_0.4.0.tar.gz?
Thank you very much.

@brandon-gallas
Member

I have published work that uses Obuchowski's method for analyzing clustered data (Obuchowski1997_Biometrics_v53p567). Here is a repository with the reference, data, functions, and markdown files executing the analysis.

I'll add that your study should not be analyzed with MRMC methods. The study only has two readers in each modality, and I regret that it is not reasonable to estimate a variance from two observations. I recommend, instead, analyzing each reader individually and presenting those results.

@ZhuYunkai123

Thank you very much.
Your suggestion is very helpful.

@brandon-gallas
Member

The data and code are provided in the repository. Please try to follow the example. Please refer to the README at https://github.com/DIDSR/mitoticFigureCounts/tree/master

You have not mentioned what makes your data clustered. Are you splitting each prostate into distinct units and collecting scores from your readers on each of these units? If not, this comment doesn't belong in this issue and you should just use the iMRMC software in this repository. Please start a new issue.

What have you tried? What are your errors?

@brandon-gallas
Member

Your issue is not about clustered data. Please start a new issue.

If you want to analyze binary data, there is help in the wiki:
https://github.com/DIDSR/iMRMC/wiki/iMRMC-FAQ

There are two manuscripts for which I use R markdown for all analyses. Both do MRMC analyses of sensitivity and specificity. The code is included in the R markdown files. Find a figure or table in the paper and trace that to the code in the R markdown file.

@captcoma
Author

Dear Rajesh. Maybe I can help you using the iMRMC package in R. Just send me an email.

@ZhuYunkai123

I also need help in using the iMRMC package in R. I have never used R before. Thank you.

@brandon-gallas
Member

Urs, Thank you for offering to help.

All: @captcoma (Urs Muhlematter), @rajb29 (Rajesh ???), @ZhuYunkai123 (??? ???)

I want to help, but I'm not a fan of helping anonymous strangers. Please update your GitHub accounts with your affiliation and profession/status ... or even a link to an up-to-date LinkedIn page. I also recommend that you add your picture in case we meet one day. Please also email this info to me, and include your programming skill level, languages, and the urgency of your work (upcoming deadlines). My email is on my GitHub profile page. You can click my picture to navigate to my profile page.

I propose a webinar where we hack it out. I think this will work best if you share the data with me and tell me your primary goals (less than 3). I will prepare to instruct, and you will code. I would like to record our webinar as my division director has asked that I create a tutorial presentation. The webinar will help me with the tutorial. I will not share your identity or data without your permission. I will invite some FDA colleagues that will benefit from the tutorial.

I would like to ask you to share your data after it is published, to leave more examples behind for others to follow. I will propose some times by email, likely some Monday at 10, Tuesday at 3, or Thursday at 11 (EDT).

What do you think?
Brandon

@ZhuYunkai123

Thank you very much for your help.

I think I have updated my profile and uploaded my picture. Honestly, my programming skill level is poor. I only started to learn Stata last year and I have not used R before. I bought some books about R yesterday to start learning. My work is not urgent, since I am still working on expanding my sample size and adding more readers.

The primary goal of my study is to compare ultrasound and MRI in the diagnosis of prostate cancer in correlation with radical prostatectomy pathology. Since each prostate gland is divided into several regions, I think the data are clustered. So far I have two readers, but since ultrasound is an operator-dependent imaging modality, I think adding more readers would be better.

@ZhuYunkai123

For MRMC analysis of clustered data, how many readers would be appropriate? I think I can share the datasheet with you after adding more readers.

@ZhuYunkai123

@captcoma I read your work published in RADIOLOGY comparing the diagnostic accuracy of MP-MRI vs PET/MRI for Extracapsular Extension and Seminal Vesicle Invasion. It is great work. I think my data are similar to yours; the difference is that we compared the diagnostic accuracy of MP-MRI vs new ultrasound modalities for prostate cancer detection. We also used radical prostatectomy histopathology as the reference standard.

@brandon-gallas
Member

Thanks for the info Yunkai. There are no quick answers about the minimum number of readers for any analysis. It depends a bit on the phase of research, what you are trying to prove, and who you are trying to convince. If you only have a few readers (3-5), give the results of each reader. Think about any variance estimate ... it's really hard to get a good variance estimate with only 5 observations. It also depends on reader variability. If there is no reader variability (the rare situation), you only need one reader!

I haven't yet heard from Rajesh.

@brandon-gallas
Member

It sounds to me like you don't have a statistician involved. That's your main problem.

Email me and we can set up a time. We are all busy.

Brandon

@brandon-gallas
Member

I never got any email in response to my offer to help.

@technOslerphile

All comments in this thread have been very useful. Thanks to all. However, I am not able to find this iMRMCcluster_0.4.0.tar.gz file so that I can use it for the analysis of an MRMC study with binary ratings on clustered data, where the endpoint is the improvement in sensitivity obtained when using a new modality compared to an existing modality. Can someone guide me on how to get this iMRMCcluster_0.4.0.tar.gz file?

@brandon-gallas
Member

I have published work that uses Obuchowski's method for analyzing clustered data (Obuchowski1997_Biometrics_v53p567).

  • Citation: Tabata, K., N. Uraoka, J. Benhamida, M. G. Hanna, S. J. Sirintrapun, B. D. Gallas, Q. Gong, R. G. Aly, K. Emoto, K. M. Matsuda, M. R. Hameed, D. S. Klimstra, and Y. Yagi (2019). "Validation of mitotic cell quantification via microscopy and multiple whole-slide scanners." Diagn Pathol, 14(1): 65.

Here is a repository with the reference, data, functions, and markdown files executing the analysis.

Here's the script you should look at and revise for your application and data.

Let me know if this is what you were looking for.

Brandon

@technOslerphile

Thanks, Dr. Gallas. I could find the script 05_doMRMCaucORcluster.R in the mitoticFigureCounts R package. While this is apt for applying the Obuchowski-Rockette analysis to binary data where AUC is the primary endpoint, what I am looking for is a method to estimate the improvement in sensitivity of readers in clustered binary data, not AUC. In one of the comments above, I saw that there are two R scripts related to this:
05_doMRMCfpfORcluster.R 05_doMRMCfpfORcluster.R
However, I could not find these in mitoticFigureCounts. It would be very helpful if you could suggest alternatives or where I can find these scripts.

@brandon-gallas
Member

The iMRMC software can be used to do an MRMC analysis of sensitivity and specificity. Please check out this wiki page: https://github.com/DIDSR/iMRMC/wiki/Estimate-sensitivity-and-specificity.

I have also uploaded the requested files to the mitoticFigureCounts repository:

  • 05_doMRMCfpfORcluster.R
  • 05_doMRMCfpfORcluster.R

ATB

@technOslerphile

I really apologize for this late response. I should have responded MUCH earlier. Thank you very much for making these R scripts available. I am sure these will be very helpful. I will get back to you with any questions later when I get a chance to dig deep into these scripts. I hope you don't mind. Thanks again!

@honeybun7

Hi, I was wondering if there are any resources or links on an ROI FROC approach to an MRMC trial, and how clustering should be considered. Any help would be appreciated!

@brandon-gallas
Member

To all who have commented in this issue: a user has identified a bug/typo in the script that does an MRMC analysis of clustered/nested data (Link to issue). It was easily fixable. I hope your analyses did not suffer from this typo.

@honeybun7, I'm sorry I did not respond to your question. It was pretty vague and combined two things that are contradictory: an ROI analysis is when you partition the image into ROIs and do a regular ROC study on the ROIs. A clustered data analysis is appropriate for that data. An FROC analysis does not include any ROIs. The analysis methods that I have studied do not address FROC studies.
