
UnaryAsyncWrapper resumes its continuation more than once #333

Open
remstos opened this issue Jan 8, 2025 · 4 comments

Comments

@remstos

remstos commented Jan 8, 2025

Hi,

We’re seeing quite a few seemingly random cases of UnaryAsyncWrapper resuming its continuation twice.
From our investigation, it often happens when the request is cancelled and then also fails with an error.
So the flow is something like this:

  • request is triggered
  • request is canceled
  • request receives an error (either a timeout or one of our own gRPC errors)

UnaryAsyncWrapper.swift, line 52:

```
Fatal error: SWIFT TASK CONTINUATION MISUSE: send() tried to resume its continuation more than once, returning ResponseMessage<Services_Call_Proto_V1_ListVoicesResponse>(code: Connect.Code.deadlineExceeded, headers: [:], result: Swift.Result<TheMessagingApp.Services_Call_Proto_V1_ListVoicesResponse, Connect.ConnectError>.failure(Connect.ConnectError(code: Connect.Code.deadlineExceeded, message: Optional("timed out"), exception: nil, details: [], metadata: [:])), trailers: [:])!
```
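
For context, the general shape of the pattern involved is roughly the sketch below. This is a simplified stand-in to illustrate the race, not the actual UnaryAsyncWrapper source, and every name in it is made up. If the underlying completion closure fires once when the request is cancelled and then again when the deadline-exceeded error comes in, the continuation is resumed twice and Swift traps exactly as above.

```swift
// Simplified illustration only (hypothetical names, not the library's code):
// a callback-based unary send bridged to async/await via a continuation.
final class UnaryBridgeSketch: @unchecked Sendable {
    private var cancelUnderlyingRequest: (() -> Void)?

    /// `perform` stands in for the callback-based unary request. It returns a
    /// cancellation closure and, in the racy scenario, ends up calling its
    /// completion handler twice: once for the cancelled response and once for
    /// the deadline-exceeded error.
    func send(
        perform: @escaping (@escaping (Result<String, Error>) -> Void) -> () -> Void
    ) async -> Result<String, Error> {
        await withTaskCancellationHandler {
            await withCheckedContinuation { (continuation: CheckedContinuation<Result<String, Error>, Never>) in
                self.cancelUnderlyingRequest = perform { result in
                    // The first call resumes normally; a second call traps with
                    // "tried to resume its continuation more than once".
                    continuation.resume(returning: result)
                }
            }
        } onCancel: {
            self.cancelUnderlyingRequest?()
        }
    }
}
```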

Reproducibility

I was able to reproduce it inconsistently by:

  • reducing the timeout to 5s
  • sending a network request and cancelling it quickly
  • using the Network Link Conditioner to slow down requests enough that they fail

It’s not consistent, so I’m guessing there’s a race that depends on how quickly we cancel the request.
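
For concreteness, the shape of that repro is roughly the sketch below; `VoicesClient` and `listVoices` are hypothetical stand-ins for whatever generated unary async call is at hand, not an API from this repo.

```swift
// Hypothetical stand-ins: any generated unary async call will do here.
protocol VoicesClient: Sendable {
    func listVoices() async -> Result<[String], Error>
}

func reproduce(using client: some VoicesClient) async {
    let task = Task {
        // Suspends on the unary wrapper's continuation until a result arrives.
        _ = await client.listVoices()
    }
    // Cancel shortly after starting, so the cancellation races against the
    // deadline-exceeded error produced by the short (~5s) timeout.
    try? await Task.sleep(nanoseconds: 100_000_000) // 0.1s
    task.cancel()
}
```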

[Screenshot: 2025-01-08 at 12:51:29]

I was trying to implement a fix, but I’m not sure of the best way to check whether the cancelable closure has already been cancelled.
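
For reference, one possible shape for a guard (just a sketch with made-up names, not a patch against the actual file) is to make the resume single-shot: take the continuation under a lock, nil it out, and let only the first completion resume, so a later one is dropped instead of trapping.

```swift
import Foundation

// Sketch only (hypothetical type): a wrapper that resumes a continuation at
// most once, dropping any later completion instead of trapping.
final class SingleUseContinuation<Value: Sendable>: @unchecked Sendable {
    private let lock = NSLock()
    private var continuation: CheckedContinuation<Value, Never>?

    init(_ continuation: CheckedContinuation<Value, Never>) {
        self.continuation = continuation
    }

    /// Resumes on the first call; subsequent calls are no-ops.
    func resume(returning value: Value) {
        lock.lock()
        let continuation = self.continuation
        self.continuation = nil
        lock.unlock()
        continuation?.resume(returning: value)
    }
}
```

Both the response callback and the cancellation path would then go through resume(returning:), so whichever of the two fires second is simply ignored; whether silently dropping it is the right behaviour is part of what I’m unsure about.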

Thanks for any help here 🙏

@rebello95
Collaborator

👋🏽 hi @remstos - thank you for flagging this issue! Can you share a bit more about how you're cancelling these requests? If you're able to put together an example within the Eliza example app in this repo (even if it only reproduces inconsistently), that would be super helpful as well.

@dan-zy-y

hi @rebello95, we are also encountering this issue, and every time it happens it's because of a timeout error. As the author mentioned, you can reproduce it by setting the request timeout to around 5–6s. So I assume that the author, like us, is not manually cancelling the requests; they are cancelled automatically because of the timeout.

@eseay
Contributor

eseay commented Jan 21, 2025

Hey @dan-zy-y @remstos - I am going to look at this as well. Stay tuned for updates.

@eseay
Contributor

eseay commented Jan 21, 2025

@dan-zy-y @remstos Can you clarify two things for me?

  • Are you using the gRPC protocol or the Connect protocol?
  • Am I understanding correctly that the circumstances under which you're experiencing this are:
    • You set the request timeout low, such that it will time out before the server responds
    • The server responds after the client-side timeout has occurred
    • The server response is an error response
