Request for Comment on setting a batch queue maximum wallclock and a new "long" queue #407
Comments
I wonder if there is a simpler way. Is there a way to restrict jobs that run on the nodes from this mysterious "specific group" to only those with a walltime limit set to, say, <24h? That way, only jobs that are guaranteed to complete in a reasonable amount of time are ever run on the "specific group" hardware, and no special queues are needed; it is transparent to the submitter. The "specific group" can have a special queue with no such limits.
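For illustration, one way this might be expressed in Moab is a standing reservation over the group's nodes whose `MAXTIME` attribute caps how long a job may overlap the reservation. This is a minimal sketch only: the node names are hypothetical, and the `MAXTIME` semantics should be verified against the installed Moab version's documentation.

```
# moab.cfg sketch (node names are hypothetical)
# A permanent reservation covering the group's purchased nodes.
SRCFG[grpnodes] HOSTLIST=gn01,gn02,gn03,gn04
SRCFG[grpnodes] PERIOD=INFINITY
# Assumption: MAXTIME limits job/reservation overlap, so only jobs
# whose requested walltime fits within 24h can use these nodes;
# longer jobs are scheduled elsewhere.
SRCFG[grpnodes] MAXTIME=24:00:00
```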
I think having a separate queue for long running jobs is a good idea. One thing I have seen implemented in other systems I use is automatic queue assignment based on walltime, so to the end user everything would more or less stay the same: the default queue stays the default, and jobs requesting a longer walltime are routed to the long queue automatically.
The group isn't mysterious; I just don't expose names. It's the two groups that purchased nodes for their queues that were added over the shutdown. If you pop into Slack I can elaborate. Your suggestion to run only jobs with a lower walltime on those nodes will be added to the possible options. To be honest, I do not know if I can do that, but I will look at the Moab config options.
@akahles's idea of "automatic queue assignment based on walltime" may be another way to implement my suggestion. |
@akahles I will look into whether that is possible, so people do not have to select the longer queue. That would, I believe, be handled by Torque, as it does the initial scheduling.
It may also be possible to do this with a submit filter.
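For what it's worth, a Torque submit filter (the `SUBMITFILTER` path set in `torque.cfg`) receives the job script on stdin, must echo the (possibly modified) script to stdout, and can reject a submission by exiting nonzero. A rough sketch that bounces script-specified walltimes over 96 hours; the parsing is illustrative only and does not cover walltimes given in raw seconds or on the qsub command line (those arrive as filter arguments, not on stdin):

```sh
#!/bin/sh
# Sketch of a Torque submit filter: qsub pipes the job script to
# this program on stdin; whatever is printed to stdout is what gets
# submitted, and a nonzero exit aborts the submission.
MAX_HOURS=96
TMP=$(mktemp) || exit 1
trap 'rm -f "$TMP"' EXIT
cat > "$TMP"

# Pull the hours field out of a "#PBS -l walltime=HH:MM:SS" line.
# Illustrative pattern only; a real filter would convert the whole
# request to seconds (96:30:00 slips through this hours-only check).
HOURS=$(sed -n 's/^#PBS .*walltime=\([0-9][0-9]*\):.*/\1/p' "$TMP" | head -n 1)

if [ -n "$HOURS" ] && [ "$HOURS" -gt "$MAX_HOURS" ]; then
    echo "walltime exceeds ${MAX_HOURS}h; please submit to the long queue" >&2
    exit 1
fi

# Pass the script through unchanged.
cat "$TMP"
exit 0
```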
+1 for limiting the walltime of jobs run on group-specific nodes. Segmentation into more queues ultimately penalizes certain types of jobs and also creates inefficiencies, as @akahles has pointed out. That would be OK if walltime were always very predictable, but in many instances it is not.
Note for myself: the concept of a routing queue is supported by Torque. I am trying to determine whether walltime is a supported routable attribute, however.
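If walltime does turn out to be routable, the setup might look something like the following qmgr sketch. Queue names and limits are placeholders; my understanding is that a Route queue tries its destinations in order and places each job in the first queue whose resources_min/resources_max bounds it satisfies.

```
# qmgr sketch: walltime-based routing (names and limits illustrative)
create queue submit queue_type=route
set queue submit route_destinations = batch
set queue submit route_destinations += long
set queue submit enabled = True
set queue submit started = True

# Jobs requesting <= 96h fit batch and stop here.
set queue batch resources_max.walltime = 96:00:00
# Anything longer falls through to long.
set queue long resources_min.walltime = 96:00:01
```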
+1 to @akahles's suggestion of different queues based on the wallclock time requested in the submission script. I am also in favour of a maximum wallclock time on the batch queue (say 48 or 96 hours), in order to decrease latency in the queue, even if this means that we may all have to increase the use of checkpoints for our longer-running jobs.
We are attempting to allow the `batch` queue to use, when idle, a set of nodes purchased by a specific group. But some concerns have been expressed about the walltime limit on the `batch` queue currently being unlimited, and the possibility that long-duration jobs could be scheduled there, interfering with that group's needs. I've seen a walltime limit on `batch` discussed off and on over the years, but per discussions with @juanperin I wanted to ask for comments here.

We have a few options, which I'll try to explain; if folks have opinions, please comment so we can make sure we understand all possible impacts. Our goal is maximizing node usage. We will make no changes without discussion.

Modification to the batch queue maximum walltime

A maximum walltime could be set on `batch` as a way of preventing these additional nodes from being tied up for long durations. For example, four days (96 hours) could become the new `batch` maximum walltime.

Creation of a queue for longer running jobs

A new "long" queue could be created for jobs needing more walltime than `batch` allows, letting us adjust its pool of nodes based on monitored demand.

Thank you for any opinions you might have.