[Clusterusers] Freeing up some cluster procs?

Lee Spector lspector at hampshire.edu
Sun Apr 29 15:00:28 EDT 2018


I just killed some old GP runs of mine too.

 -Lee


> On Apr 29, 2018, at 2:25 PM, Thomas Helmuth <thelmuth at hamilton.edu> wrote:
> 
> Hi Chris,
> 
> I went ahead and killed a less-important job with 17 active processes, and it looks like the render jobs have started to use those nodes. Let me know if it's still a problem!
> 
> Thanks,
> Tom
> 
> On Sun, Apr 29, 2018 at 1:32 PM, Chris Perry <perry at hampshire.edu> wrote:
>  
> Hi Tom - These particular jobs will be quite fast. They're queued and set at priority 2. They're the jobs whose titles start with "tmp RENDER gfp". If none have gone through in a few hours would you mind killing a job to give them space? It's okay to wait for most of the afternoon but if by tonight they're not running that could be an issue.
> 
> Thanks!
> 
> - chris
> 
>  
> On 2018-04-29 13:22, Thomas Helmuth wrote:
> 
>> Hi Chris,
>> 
>> Sorry about that! I have an honors thesis student who needed some last-minute GP runs for his thesis, which is due tomorrow, so I started a bunch of runs. That said, I don't think many runs started from this point on will finish by tomorrow, so I reduced my priority to less than 1. Any new renders that start should take priority once a GP run finishes.
>> 
>> If you're also on a tight schedule and need these sooner, let me know and I can kill off some of the runs.
>> 
>> Best,
>> Tom
>> 
>> On Sun, Apr 29, 2018 at 1:12 PM, Chris Perry <perry at hampshire.edu> wrote:
>> 
>> Hi all,
>> 
>> Some animation students are trying to get renders through the cluster but it appears that every single cluster proc is currently being taken up by GP runs.
>> 
>> Can we free up any machines for some relatively short animation renders? Or should we just raise the priority of our renders and let Tractor take care of the job management?
>> 
>> I want to play well with what you all have going, but I don't know what the current etiquette is.
>> 
>> Thanks,
>> 
>> - chris
>> 
>> 
>> _______________________________________________
>> Clusterusers mailing list
>> Clusterusers at lists.hampshire.edu
>> https://lists.hampshire.edu/mailman/listinfo/clusterusers
>> 
>> 
> 
> 
> 

--
Lee Spector, Professor of Computer Science
Director, Institute for Computational Intelligence
Hampshire College, Amherst, Massachusetts, 01002, USA
lspector at hampshire.edu, http://hampshire.edu/lspector/, 413-559-5352

