-
Future Features

@itsdfish and @DominiqueMakowski: Finally getting around to drafting the "works in progress/future direction" post. I believe we have three exciting developments to discuss; I'd love to hear your thoughts.

Features to Consider
I have some ideas and citations for those models, which I will post in the thread. Excited to keep working with you guys!
-
ArviZ now has a Julia port (or at least a wrapper that calls the Python package) that could be useful for some worked examples. Have you played around with ArviZ, @DominiqueMakowski?
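As a minimal sketch of what that could look like, assuming the ArviZ.jl wrapper and its from_mcmcchains converter (the model here is just a toy for illustration):

```julia
# Minimal sketch: convert a Turing chain to ArviZ's InferenceData container.
# Assumes ArviZ.jl and its `from_mcmcchains` converter; the model is a toy.
using Turing, ArviZ

@model function demo(y)
    μ ~ Normal(0, 1)
    y .~ Normal(μ, 1)
end

chn = sample(demo(randn(100)), NUTS(), 1_000)

idata = from_mcmcchains(chn)  # ArviZ InferenceData
summarystats(idata)           # posterior summaries (mean, ESS, R-hat, ...)
```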
-
While I consider the approach proposed by the BayesFlow group to be extremely viable, I believe there are additional avenues worth exploring. One of these involves using Likelihood Approximation Networks (LANs; Fengler 2021). LANs offer the advantage of being more straightforward to implement. The concept revolves around learning the mapping from parameters and data to a synthetic or empirical likelihood. The trained network can then facilitate relatively more efficient posterior sampling (if you were considering traditional approximate Bayesian computation, refer to Turner 2018). This strategy would enable us to leverage the ONNX format for translating differentiable likelihood approximators across various backends. The integration could leverage existing models in HSSM (https://github.com/lnccbrown/HSSM), originally built in PyTorch. We could then just use Flux to interface with Turing; a rough sketch of the idea is below. Relevant Links:
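To make the idea concrete, here is a minimal, hypothetical sketch of plugging a trained LAN into a Turing model via @addlogprob!. The network `lan`, its input/output layout, and the priors are all illustrative assumptions, not HSSM's actual interface; in practice the network would be trained offline or imported from HSSM via ONNX rather than built from scratch.

```julia
# Hypothetical sketch: a trained LAN used as a drop-in log-likelihood inside
# Turing. `lan` here is an untrained stand-in with an assumed input layout
# [drift, threshold, ndt, rt, choice] -> approximate log-likelihood.
using Turing, Flux

lan = Chain(Dense(5 => 64, tanh), Dense(64 => 64, tanh), Dense(64 => 1))

@model function lan_ddm(rts, choices)
    drift     ~ Normal(0, 2)
    threshold ~ truncated(Normal(1.0, 0.5); lower = 0)
    ndt       ~ truncated(Normal(0.3, 0.1); lower = 0)

    # Add the network's approximate log-likelihood for each trial to the
    # joint log density; because the network is differentiable, gradient-based
    # samplers such as NUTS remain applicable in principle.
    for i in eachindex(rts)
        x = [drift, threshold, ndt, rts[i], choices[i]]
        Turing.@addlogprob! only(lan(x))
    end
end

# chain = sample(lan_ddm(rts, choices), NUTS(), 1_000)
```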
-
Rather than adding that as a dependency here / creating a separate package, it would probably be easier to add that directly into TuringGLM, so that it becomes the main interface for various Turing models. The first step would probably be to extend the @formula syntax to accept one formula per model parameter, e.g.

@formula(drift ~ 1 + Condition,
         threshold ~ 1 + Condition,
         ndt ~ 1 + Condition)

which would then write the model with default priors. A sketch of what the generated model might look like follows below.
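As a rough, hypothetical illustration, the hand-expanded model could look something like this, assuming the DDM distribution from SequentialSamplingModels.jl, a single binary Condition predictor, and made-up default priors and link functions:

```julia
# Hypothetical hand-expansion of
#   @formula(drift ~ 1 + Condition, threshold ~ 1 + Condition, ndt ~ 1 + Condition)
# using the DDM distribution from SequentialSamplingModels.jl. Priors, links,
# and the observation format are illustrative assumptions.
using Turing, SequentialSamplingModels

@model function ddm_regression(data, condition)
    # intercept and condition effect for each DDM parameter
    β_drift     ~ filldist(Normal(0, 1), 2)
    β_threshold ~ filldist(Normal(0, 1), 2)
    β_ndt       ~ filldist(Normal(0, 1), 2)

    for i in eachindex(data)
        ν = β_drift[1] + β_drift[2] * condition[i]               # drift rate
        α = exp(β_threshold[1] + β_threshold[2] * condition[i])  # threshold > 0
        τ = exp(β_ndt[1] + β_ndt[2] * condition[i])              # non-decision time > 0
        data[i] ~ DDM(; ν, α, τ, z = 0.5)  # data[i] assumed to be a (choice, rt) pair
    end
end
```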
-
Not really a future-direction point, but an interesting paper: https://osf.io/preprints/psyarxiv/h4fde
-
A discussion about sequential sampling models (SSMs).