[Clusterusers] A Little More Cluster Time
Thomas Helmuth
thelmuth at cs.umass.edu
Fri Jan 24 22:48:02 EST 2014
Hi Chris,
That is perfect, thanks! I'm not sure how to change the priority of my
current runs, but that doesn't seem to be an issue: they're off and running
now that you lowered your priority. I don't expect to need to start any new
jobs before the deadline, but if I do, I'll make sure to spool them at
priority 121. Thanks for being flexible!
-Tom
On Fri, Jan 24, 2014 at 10:00 PM, Chris Perry <perry at hampshire.edu> wrote:
>
> It seems there's an easy solution, which is simply to lower the priority of
> our jobs (or raise the priority of yours). Whenever one of our tasks
> finishes, Tractor should hand the freed proc to your higher-priority tasks
> first, and as long as your higher-priority job is running, it will get the
> procs before the lower-priority jobs. Will this work okay for the time being?
>
> I just lowered the priority of the tube and lilyd3 jobs that are running.
> I also did 'retry running tasks' on lilyd3, which killed the running renders
> and respooled them; given the lower priority, they will automatically go only
> to the nodes your jobs are not running on. As Bassam's frames finish (they
> seem pretty fast, and the already-running ones should be done within
> minutes), new ones will only spool behind your jobs.
>
> Tom - moving forward during this crunch period, just go ahead and spool at
> priority 121. That will put your jobs at an even higher priority than our
> single-frame renders, which means your jobs will always get first claim on
> the procs you can run on, and ours will take whatever's left.
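
For concreteness, spooling at that priority from Python might look roughly like
the sketch below. This assumes Tractor's Python job-authoring module
(tractor.api.author) is available on the spool host; the job title, the run
commands, and the loop over ten runs are placeholders, not anyone's actual
setup here.

    # Minimal sketch: author and spool a job at priority 121 so its tasks are
    # dispatched ahead of the default-priority render jobs. The "tom" service
    # key limits dispatch to blades that advertise that tag.
    import tractor.api.author as author

    job = author.Job(title="tom-ga-runs", priority=121)

    # One task per run; titles and argv are placeholders.
    for i in range(10):
        job.newTask(title="run-%02d" % i,
                    argv=["./run.sh", "configs/run-%02d.txt" % i],
                    service="tom")

    job.spool()  # submit to the site's Tractor engine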
>
> Anyone worried about this approach? Seems to be exactly what the priority
> system is built for.
>
> - chris
>
>
> On Jan 24, 2014, at 9:29 PM, Thomas Helmuth wrote:
>
> > Hi fly cluster users,
> >
> > Lee and I have a paper deadline coming up on January 29th, and I am hoping
> > to get in a few sets of runs before then. If it isn't too much of an
> > inconvenience, I was wondering if it would be possible to pause other task
> > launches until all of my runs have started.
> >
> > If it helps, I never use certain nodes because I have had problems with
> > crashed runs on them, so you'd still be able to use those. I'm not sure
> > exactly which ones I don't use -- Josiah has set up the service tag "tom"
> > for them. I'm sure I'm not using any asheclass nodes, and I believe a few
> > of the nodes on racks 1 and 4 as well. Maybe Josiah could even set up a
> > service tag that includes the nodes I don't use? I'm not sure if this would
> > help with already-spooled jobs, but it could with future jobs.
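
If a tag like that does get set up, jobs could target it explicitly with a
service key. A rough sketch along the same lines, where "nottom" is a made-up
name for that hypothetical tag and the render command is just a placeholder:

    # Sketch only: restrict a job's tasks to blades that advertise a
    # hypothetical "nottom" service tag (i.e. the nodes Tom's runs avoid).
    import tractor.api.author as author

    job = author.Job(title="lily-frames", priority=100)
    job.newTask(title="frame-0001",
                argv=["prman", "shots/lily/frame0001.rib"],
                service="nottom")
    job.spool()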
> >
> > Thanks, and I'll let you know when all my runs have nodes!
> > Tom