Code for effective connectivity (EC) estimation with transformers and dynamic causal modeling (DCM), along with additional features, support, and dependencies. The code is based on P-DCM for task-fMRI [1], but it can be extended to other DCM variants such as S-DCM [2].
First install the packages and dependencies listed in the requirements.txt file (e.g., pip install -r requirements.txt), then check out the code in the src folder.
Along with the vanilla (softmax) attention used in standard transformers [3], an efficient attention scheme is also implemented: linear attention with a cosine kernel [4] (see the src/mdl_v2 folder).
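For orientation, here is a minimal non-causal, single-head sketch of cosine-kernel linear attention following the cosFormer decomposition [4]. The function name and defaults are illustrative; the actual implementation in src/mdl_v2 may differ in details such as causal masking and multi-head handling.

    import math
    import torch

    def cosformer_attention(q, k, v, eps=1e-6):
        """Linear attention with cosFormer's cos-based re-weighting [4].
        q, k, v: (batch, seq_len, dim). Illustrative sketch only."""
        n = q.shape[1]
        idx = torch.arange(n, device=q.device, dtype=q.dtype)
        w = math.pi * idx / (2 * n)              # pi * i / (2M), with M = seq_len
        q, k = torch.relu(q), torch.relu(k)      # non-negative feature map
        # cos(pi(i-j)/2M) = cos(w_i)cos(w_j) + sin(w_i)sin(w_j), so the locality
        # bias splits into two branches that each admit linear attention.
        cos_w, sin_w = torch.cos(w)[None, :, None], torch.sin(w)[None, :, None]
        q_cos, q_sin = q * cos_w, q * sin_w
        k_cos, k_sin = k * cos_w, k * sin_w
        # Associate (K^T V) first instead of (Q K^T): cost is O(n d^2),
        # i.e., linear in sequence length.
        num = torch.einsum('bnd,bde->bne', q_cos, torch.einsum('bnd,bne->bde', k_cos, v)) \
            + torch.einsum('bnd,bde->bne', q_sin, torch.einsum('bnd,bne->bde', k_sin, v))
        den = torch.einsum('bnd,bd->bn', q_cos, k_cos.sum(dim=1)) \
            + torch.einsum('bnd,bd->bn', q_sin, k_sin.sum(dim=1))
        return num / (den.unsqueeze(-1) + eps)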
To obtain the connectivity estimates, see the scripts in the ft folder. Full fine-tuning of the transformer encoder may not be needed if pretraining is employed; the pretraining scripts can be found in the pt folder and are based on the Barlow Twins [5] and SimCLR [6] losses adapted to time-series data. Minimal sketches of both losses are given below.
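The following sketches of the two pretraining objectives are written against generic (batch, dim) embeddings of two augmented views of the same time series. Function names and the lam and temperature defaults are illustrative, not taken from the pt scripts.

    import torch
    import torch.nn.functional as F

    def barlow_twins_loss(z1, z2, lam=5e-3, eps=1e-6):
        # Barlow Twins [5]: decorrelate embedding dimensions across two views.
        n = z1.shape[0]
        z1 = (z1 - z1.mean(0)) / (z1.std(0) + eps)   # standardize each dimension
        z2 = (z2 - z2.mean(0)) / (z2.std(0) + eps)
        c = (z1.T @ z2) / n                          # (dim, dim) cross-correlation
        on_diag = (torch.diagonal(c) - 1).pow(2).sum()               # invariance term
        off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # redundancy term
        return on_diag + lam * off_diag

    def nt_xent_loss(z1, z2, temperature=0.5):
        # SimCLR's NT-Xent [6]: each view's positive is its counterpart view.
        n = z1.shape[0]
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2n, dim)
        sim = z @ z.T / temperature                          # cosine similarities
        mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
        sim.masked_fill_(mask, float('-inf'))                # drop self-similarity
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
        return F.cross_entropy(sim, targets)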
References
[1] Havlicek, M., Roebroeck, A., Friston, K., Gardumi, A., Ivanov, D., & Uludag, K. (2015). Physiologically informed dynamic causal modeling of fMRI data. Neuroimage, 122, 355-372.
[2] Friston, K. J., Harrison, L., & Penny, W. (2003). Dynamic causal modelling. Neuroimage, 19(4), 1273-1302.
[3] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
[4] Qin, Z., Sun, W., Deng, H., Li, D., Wei, Y., Lv, B., ... & Zhong, Y. (2022). cosFormer: Rethinking softmax in attention. In International Conference on Learning Representations.
[5] Zbontar, J., Jing, L., Misra, I., LeCun, Y., & Deny, S. (2021, July). Barlow twins: Self-supervised learning via redundancy reduction. In International conference on machine learning (pp. 12310-12320). PMLR.
[6] Chen, T., Kornblith, S., Norouzi, M., & Hinton, G. (2020, November). A simple framework for contrastive learning of visual representations. In International conference on machine learning (pp. 1597-1607). PMLR.