[Clusterusers] Switching to new keg

Thomas Helmuth thelmuth at cs.umass.edu
Tue Jun 23 15:46:04 EDT 2015


Yeah, sorry I'm hogging the linux nodes right now. I just realized last
night that there are a bunch more GP runs I need to finish for my
dissertation, which I'm trying to finish by the end of June.

Is there something about the minimum RAM your renders require, or
something like that, that might be keeping them off the Macs? I think
Josiah was tweaking a setting like that when some of Jordan's runs were
overflowing memory and causing other things to crash.

Tom
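
P.S. If tractor-engine runs into its file descriptor limit again (see
Josiah's note below), a quick sanity check on the engine host might look
like this - just a sketch, assuming a Linux host with a bash shell:

```shell
# Soft and hard limits on open files for the current shell/user
ulimit -Sn
ulimit -Hn
# If tractor-engine is running, count the descriptors it currently holds
# (uncomment to try; assumes the process name is "tractor-engine"):
# ls /proc/$(pgrep -o tractor-engine)/fd | wc -l
```

If the soft limit looks low, raising it before restarting the engine
(e.g. `ulimit -n 65536` in the startup script) might keep this from
recurring - though I'm guessing at what the engine actually needs.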

On Tue, Jun 23, 2015 at 3:42 PM, Bassam Kurdali <bassam at urchn.org> wrote:

> Hi, just spooled another shot, it's waiting to continue.. I know at
> least the mac nodes should be open.
> On Mon, 2015-06-22 at 23:41 -0400, Wm. Josiah Erikson wrote:
> > There was something about a file descriptor limit in tractor-engine,
> > so I restarted tractor-engine, and it appears to have cleared up. It
> > was probably something having to do with held open file handles with
> > new/old NFS file handles. You're also spooling all your jobs with the
> > default tag of "slow", which isn't necessarily bad, but I think you
> > could also do "linux,OSX" or something and get all the nodes at once.
> > I think that's the syntax, but I could be remembering wrong.
> > I also have to email Wendy about the fact that our license expires in
> > 10 days, and when I go to get a new one, it still does....
> >      -Josiah
> >
> >
> > On 6/22/15 9:30 PM, Bassam Kurdali wrote:
> > > Hmm, all of a sudden all my tasks are blocked even though there are
> > > nodes available - down to the last 4 frames (which are retried
> > > errors) and a collate.
> > > Tim spooled some tasks but the macs are open, any ideas?
> > >
> > > On Mon, 2015-06-22 at 20:16 -0400, Wm. Josiah Erikson wrote:
> > > > Nice. Saw some pretty impressive bandwidth usage there for a
> > > > second:
> > > >
> > > > http://artemis.hampshire.edu/mrtg/172.20.160.204_10110.html (this
> > > > is actually keg1's network connection - gotta correct that on the
> > > > map, or just swap over to the old connection)
> > > >
> > > >       -Josiah
> > > >
> > > >
> > > > On 6/22/15 7:29 PM, Bassam Kurdali wrote:
> > > > > I figured and I spooled and it seemed to be working... hopefully
> > > > > jordan doesn't restart his eternal render for a few hours ;)
> > > > >
> > > > > On Mon, 2015-06-22 at 18:26 -0400, Wm. Josiah Erikson wrote:
> > > > > > It's done!
> > > > > >        -Josiah
> > > > > >
> > > > > >
> > > > > > On 6/22/15 5:49 PM, Bassam Kurdali wrote:
> > > > > > > dats punk! is it done?
> > > > > > > On Mon, 2015-06-22 at 20:49 +0200, Chris Perry wrote:
> > > > > > > > I can't wait. Thanks, Josiah!
> > > > > > > >
> > > > > > > > - chris
> > > > > > > >
> > > > > > > > > On Jun 22, 2015, at 7:50 PM, Wm. Josiah Erikson <wjerikson at hampshire.edu> wrote:
> > > > > > > > >
> > > > > > > > > Keg will be going down shortly, and coming back up as a
> > > > > > > > > harder, faster, better, stronger version shortly as
> > > > > > > > > well, I hope :)
> > > > > > > > >       -Josiah
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > On 6/11/15 9:53 AM, Wm. Josiah Erikson wrote:
> > > > > > > > > > Hi all,
> > > > > > > > > >       I'm pretty sure there is 100% overlap between
> > > > > > > > > > people who care about fly and people who care about
> > > > > > > > > > keg at this point (though there are some people who
> > > > > > > > > > care about fly and not so much keg, like Lee and Tom -
> > > > > > > > > > sorry), so I'm sending this to this list.
> > > > > > > > > >       I have a new 32TB 14+2 RAID6 with 24GB of RAM
> > > > > > > > > > superfast (way faster than gigabit) keg all ready to
> > > > > > > > > > go! I would like to bring it up on Monday, June 22nd.
> > > > > > > > > > It would be ideal if rendering was NOT happening at
> > > > > > > > > > that time, to make my rsyncing life easier :) Any
> > > > > > > > > > objections?
> > > > > > > > > >       -Josiah
> > > > > > > _______________________________________________
> > > > > > > Clusterusers mailing list
> > > > > > > Clusterusers at lists.hampshire.edu
> > > > > > > https://lists.hampshire.edu/mailman/listinfo/clusterusers