Optimize interaction with the SMT solver #482
Found a couple of solver-agnostic APIs: […]
There's also a chance that, after studying the example, we might find a way to verify it with fewer calls to the SMT solver. I tried some batching that removed some […]
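For concreteness, here is a minimal sketch of what the batching amounts to; `sendBatch` is a hypothetical helper, not the actual liquid-fixpoint API:

```haskell
import System.IO (Handle, hPutStr, hFlush)
import Data.List (intercalate)

-- Hypothetical helper: serialize a batch of SMT-LIB commands and push them to
-- the solver's stdin with a single flush. Replies (e.g. to check-sat) still
-- have to be read back one by one, so this only trims the per-command
-- write/flush overhead, not the solving time itself.
sendBatch :: Handle -> [String] -> IO ()
sendBatch h cmds = do
  hPutStr h (intercalate "\n" cmds ++ "\n")
  hFlush h
```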
Hi @facundominguez -- sorry for not replying, have been thinking this over. The data about […]

The batching sounds very promising! The catch in general is that there is a dependency […], but still, there could be significant gains, as your numbers show -- does […] include serialization?

I have been wary of in-memory APIs because the […]
Yes, it does include serialization.
AmericanFlag overall goes from 18 seconds to 14 seconds. The time spent in the […]

Perhaps LF could log just the same with a modest amount of work. The main drawbacks of batching are: […]

The prospect just doesn't excite me :)
Here's a flamegraph. While some overhead can be attributed to the synchronization between processes, it also shows that there are quite a few other areas to improve. It was created with SCCs for all top-level functions in […]
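As a side note on how the flamegraph's entries arise: GHC's `-fprof-auto` attaches a cost centre to every top-level binding, equivalent to writing an explicit `SCC` pragma by hand. A generic illustration, not liquid-fixpoint code:

```haskell
module Main where

-- Explicit cost centre; -fprof-auto inserts these automatically for every
-- top-level binding that is not inlined.
expensive :: Int -> Int
expensive n = {-# SCC "expensive" #-} sum [1 .. n]

main :: IO ()
main = print (expensive 1000000)
```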
With #493, the time for AmericanFlag is near 9 seconds, of which liquid-fixpoint spends 7 seconds. From here, I'm finding it difficult to bring it down.
I have added batching in #493. If we switched to using the FFI to talk to the SMT solver we might gain a second, two seconds, or nothing. It depends on how much overhead deserialization adds on the SMT side, and how costly it is to build the expressions in the FFI AST. I can't figure out what else to trim down, though. It looks like we would need to reduce the number of lookups or manipulations that we do in the common paths -- e.g., who needs sort checking or elaboration? :) But I don't think removing those is possible.
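For comparison, a sketch of what the FFI route could look like using the Haskell `z3` bindings (`Z3.Monad`); the API names are quoted from memory, so treat this as an assumption rather than a drop-in plan. Terms are built directly in memory, which avoids printing and re-parsing SMT-LIB, but every term construction is itself an FFI call, hence the uncertain payoff:

```haskell
import Z3.Monad

-- Assert 0 < x < 10 and ask for satisfiability through the C API.
checkExample :: IO Result
checkExample = evalZ3 $ do
  x    <- mkFreshIntVar "x"
  zero <- mkInteger 0
  ten  <- mkInteger 10
  assert =<< mkGt x zero   -- x > 0
  assert =<< mkLt x ten    -- x < 10
  check                    -- Sat, Unsat, or Undef
```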
I dived some more into the FFI path, and it's looking laborious. smt-switch doesn't support parametric datatypes, apparently, and there isn't good documentation on the z3 bindings for defining them either. Add to that the modest improvement and the hassle of maintaining the FFI interface, and it doesn't look so attractive anymore.

There is a new hope though. I found that removing logical qualifiers from AmericanFlag.hs.fq speeds up LF from the current 7 seconds to 2 seconds. Maybe there is some room to either identify irrelevant qualifiers, or serve the qualifiers in an order that optimizes the time in the successful case. One odd fact in this space is that out of 250 qualifiers, removing only two accounts for most of the time reduction. These are:

qualif Auto(VV##0 : int, x : e##xo): ((VV##0 = (lexSize x))) // SourcePos {sourceName = "benchmarks/vector-algorithms-0.5.4.2/Data/Vector/Algorithms/AmericanFlag.hs", sourceLine = Pos 62, sourceColumn = Pos 3}
qualif Auto(VV##0 : int, x : e##xo): ((VV##0 < (lexSize x))) // SourcePos {sourceName = "benchmarks/vector-algorithms-0.5.4.2/Data/Vector/Algorithms/AmericanFlag.hs", sourceLine = Pos 65, sourceColumn = Pos 3}

Any ideas about what is special about these?
Wow, that is remarkable! Am very surprised that just those two make such a difference! They don’t look particularly special though. Do you know how the total number of iterations or z3 queries changes when those are removed?
Those last two bits of information are in the “solverstats” - I wonder how that changes. Also I presume the result (safe/unsafe) is unchanged by the qualifier removal?
The result is unchanged when eliminating those qualifiers. These are the stats with no modifications: […]

These are the stats with the two qualifiers removed: […]

And these are the stats with as many qualifiers as I could remove without getting unsafe constraints (near 100 qualifiers survived): […]

The runtime decreases linearly with the number of queries, at about 1 second every 10,000 queries.
Wow, so those two irrelevant qualifiers are responsible for nearly 25K SMT queries!
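As a rough consistency check against the rate reported just above (about 1 second per 10,000 queries), those extra queries alone would account for

$$25{,}000 \times \frac{1\ \text{s}}{10{,}000} \approx 2.5\ \text{s}$$

of solver interaction.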
Will the second parameter of the qualifier […]?
They originate from the specification of typeclass methods:

class Lexicographic e where
  terminate :: e -> Int -> Bool
  size :: e -> Int
  index :: Int -> e -> Int

{-@ measure lexSize :: a -> Int @-}
{-@ assume size :: (Lexicographic e) => x:e -> {v:Nat | v = (lexSize x)} @-}
{-@ assume index :: (Lexicographic e) => Int -> x:e -> {v:Nat | v < (lexSize x)} @-}
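Those refinements have exactly the shape of the two Auto qualifiers above (`VV = lexSize x` and `VV < lexSize x`), which matches their SourcePos in AmericanFlag.hs. A minimal, hypothetical client illustrating the `assume` on `index` (assuming the class and specs above are in scope; `boundedIndex` is made up for illustration):

```haskell
-- LiquidHaskell should accept this signature purely from `assume index`,
-- since {v:Nat | v < (lexSize x)} is precisely what that spec promises.
{-@ boundedIndex :: (Lexicographic e) => Int -> x:e -> {v:Nat | v < (lexSize x)} @-}
boundedIndex :: Lexicographic e => Int -> e -> Int
boundedIndex i x = index i x
```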
Here's an idea that may address this problem without requiring any support for typeclasses/constraints in fixpoint: restrict measures (e.g. […]). Right now the sort for […]. Instead, walk over the constraints -- after elaboration -- to find all existing terms […]. This could greatly shrink the sorts at which valid instantiations are allowed, and shouldn't affect the constraint solving...
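A rough sketch of that idea over simplified stand-in types (the real liquid-fixpoint AST is richer than this): after elaboration, walk the constraint expressions, record the sorts at which a measure such as `lexSize` is actually applied, and only allow qualifier instantiation at those sorts.

```haskell
import qualified Data.Set as Set

-- Stand-in sort and expression types, only for illustration.
data Sort = FInt | FVar Int | FApp String [Sort]
  deriving (Eq, Ord, Show)

data Expr
  = EVar String
  | EApp String [Expr]   -- application of a measure or function symbol
  | EAnd [Expr]
  deriving Show

-- Collect the sorts of the arguments at every application of `sym`, given a
-- sorting oracle for sub-expressions (assumed here, since argument sorts are
-- only known after elaboration).
appliedSorts :: String -> (Expr -> Sort) -> Expr -> Set.Set Sort
appliedSorts sym sortOf = go
  where
    go (EApp f args)
      | f == sym  = Set.fromList (map sortOf args) <> foldMap go args
      | otherwise = foldMap go args
    go (EAnd es)  = foldMap go es
    go (EVar _)   = Set.empty
```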
Aren't the comparisons made with […]?

Worth considering. I'll try removing only the qualifiers that have unconstrained parameters and see how that goes.
Oops my apologies, you are correct!
Removing only the qualifiers with unrestricted parameters gives […]

Which isn't too bad. My great discovery today was […]

With these, verification takes only 1 second in LF! (That is, if I don't count the parsing overhead, which I've been ignoring all along.) Which makes me wonder whether there could be a practical way to cache these 10 qualifiers and use them as a starting point whenever the user modifies the code.
AmericanFlag.hs takes near 20 seconds to verify, while
z3 AmericanFlag.hs.smt2
takes only 2 seconds. It turns out most of the time is spent in Language.Fixpoint.Smt.Interface.command, which is called 280,000 times! Each invocation takes near 50 microseconds, which I conjecture is going to be difficult to speed up. Maybe SMT solvers could be linked as libraries to cut down on that, or maybe it would be possible to batch a few commands and see if just reducing the number of exchanges with the SMT solver reduces the overall time.
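For scale, the per-call overhead by itself already accounts for most of that wall-clock time:

$$280{,}000 \times 50\ \mu\text{s} = 14\ \text{s}$$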
Any thoughts?