Obs window size #5
base: stateful-actors-implementation
Conversation
Take the first element from the list as the obs_w_30sec arg; convert obs_w_30sec from str to int.
@@ -111,7 +111,12 @@ def __init__(self, address, actor_handles,
             raise ValueError("Multiple Patients specified."
                              "Specify only one.")
         patient_name = patient_name[0]
-        prediction_tensor = torch.zeros((1, 1, PREDITICATE_INTERVAL))
+
+        obs_w_30sec = query_kwargs.pop("obs_w_30sec", None)
Why does obs_window have to be on a per-query basis? Can it be part of the http server instantiation?
I assume that when the profile client makes a request it can change the obs_window.
If we put it as part of the http server instantiation, then the client will always have to send the same obs_window.
For example, when the patients are in stable condition the client will only look at an obs_window interval of 30 s; however, when the patients are in critical condition, the client can send obs_window 30 s, 2 min, 5 min, etc.
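As a minimal sketch of that per-query flexibility, assuming an HTTP GET endpoint; the route, host, and port here are hypothetical, not taken from this PR:

import requests

# Hypothetical route and address; the real ones come from the repo's
# http server, not from this sketch.
SERVER = "http://localhost:8000/profile_ensemble"

# Stable patient: a single 30 s observation window per query.
requests.get(SERVER, params={"patient_name": "p0", "obs_w_30sec": 1})

# Critical patient: widen to 5 min (ten 30 s windows) on the next
# query, without restarting or reconfiguring the server.
requests.get(SERVER, params={"patient_name": "p0", "obs_w_30sec": 10})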
Yes, the client has to be consistent in sending the information. Any reason for the assumption you are making?
+        if obs_w_30sec is None:
+            raise ValueError("Specify obs_w_30sec in query")
+        obs_w_30sec = int(obs_w_30sec[0])
+        prediction_tensor = torch.zeros((obs_w_30sec, 1, PREDITICATE_INTERVAL))
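For context on the int(obs_w_30sec[0]) above: HTTP query parameters typically arrive as lists of strings (e.g. {"obs_w_30sec": ["10"]}), so the handler has to unwrap and convert them. A minimal standalone sketch of that parsing, assuming query_kwargs has that list-of-strings shape (the helper name is ours, not from the PR):

import torch

PREDITICATE_INTERVAL = 7500  # 30 sec * 250 Hz, per this PR

def parse_obs_window(query_kwargs):
    # Query params come in as lists of strings, hence [0] and int().
    obs_w_30sec = query_kwargs.pop("obs_w_30sec", None)
    if obs_w_30sec is None:
        raise ValueError("Specify obs_w_30sec in query")
    obs_w_30sec = int(obs_w_30sec[0])
    return torch.zeros((obs_w_30sec, 1, PREDITICATE_INTERVAL))

assert parse_obs_window({"obs_w_30sec": ["10"]}).shape == (10, 1, 7500)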
I think we need one more profiling:

@ray.remote
def echo_tensor(a: torch.Tensor):
    return a

>>> %timeit ray.get(echo_tensor.remote(torch.zeros((obs_w_30sec, 1, PREDITICATE_INTERVAL))))
This is the result from experimenting with ray profiling:

@ray.remote
def echo_tensor(a: torch.Tensor):
    return a

The simple function means we input a tensor with ray remote and return a constant flag once serialization of the input has finished, instead of echoing the tensor back:

@ray.remote
def echo_tensor_simple(a: torch.Tensor):
    return 1

The results are produced by running the following:

>>> PREDITICATE_INTERVAL = 7500
>>> %timeit ray.get(echo_tensor.remote(torch.zeros((obs_w_30sec, 1, PREDITICATE_INTERVAL))))
>>> %timeit ray.get(echo_tensor_simple.remote(torch.zeros((obs_w_30sec, 1, PREDITICATE_INTERVAL))))
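Since %timeit is IPython-only, here is a self-contained sketch of the same comparison with plain Python timing; obs_w_30sec = 10 and repeats = 100 are arbitrary choices for illustration, not values from the experiment above:

import time

import ray
import torch

ray.init(ignore_reinit_error=True)

PREDITICATE_INTERVAL = 7500
obs_w_30sec = 10  # arbitrary batch of ten 30 s windows, for illustration

@ray.remote
def echo_tensor(a: torch.Tensor):
    return a  # tensor is serialized on the way in and on the way out

@ray.remote
def echo_tensor_simple(a: torch.Tensor):
    return 1  # only the input tensor is serialized

def bench(fn, repeats=100):
    t = torch.zeros((obs_w_30sec, 1, PREDITICATE_INTERVAL))
    start = time.perf_counter()
    for _ in range(repeats):
        ray.get(fn.remote(t))
    return (time.perf_counter() - start) / repeats

print("echo_tensor        avg sec/call:", bench(echo_tensor))
print("echo_tensor_simple avg sec/call:", bench(echo_tensor_simple))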
[Image: latency profiles for different obs_w_30sec values; the experiment uses tensors of size (obs_w_30sec, 1, 7500).]
Change the frequency from 125 Hz to 250 Hz.
We change the firing frequency according to the dataset.
This means a prediction will happen after 30 sec * 250 Hz = 7,500 requests.
Change the server to accommodate obs_w_30sec.
obs_w_30sec is equal to the batch size of waveforms with an observation window of 30 sec.
For example, if we want to run profile_ensemble on 5 minutes of waveform, that is 300 s / 30 s = 10 windows, so we will send a tensor of size (10, 1, 7500), i.e. (obs_w_30sec, 1, 7500). A sketch of this arithmetic follows.
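A small sketch of that shape arithmetic; the constant and function names here are ours, chosen to mirror the PR:

SAMPLING_HZ = 250   # firing frequency after this PR (was 125 Hz)
WINDOW_SEC = 30     # one observation window
PREDITICATE_INTERVAL = SAMPLING_HZ * WINDOW_SEC  # 7500 samples per window

def waveform_batch_shape(total_seconds):
    # obs_w_30sec = how many 30 s windows fit in the requested span
    obs_w_30sec = total_seconds // WINDOW_SEC
    return (obs_w_30sec, 1, PREDITICATE_INTERVAL)

# 5 minutes of waveform -> a batch of ten 30 s windows
assert waveform_batch_shape(5 * 60) == (10, 1, 7500)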