[Clusterusers] Switching to new keg

Bassam Kurdali bassam at urchn.org
Tue Jun 23 18:18:53 EDT 2015


Ah, OK.
It's possible that with no tags, the renders are trying to spool only
on the cluster and not on the Macs. I wonder why they don't have tags,
though? If that's also happening to you, it can't be a helga thing as I
had suspected.
Bassam
PS: As long as we can spool on the Macs we should be OK. I also think
something else is going on, since last night we did manage to spool on
60 nodes while some of your jobs were running - indicating that we
normally can coexist. Perhaps it's the no-tag thing...
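
If it helps, here's a quick way to sanity-check a tag string before
spooling. It's only a rough sketch: it assumes the keys the blades
actually advertise are "linux" and "MacOSX" (per Josiah's note further
down), that the service string is a comma-separated list of keys, and
that matching is case-sensitive, as Josiah suspects.

    # Rough sketch in Python: flag service keys that no blade advertises.
    # The key set below is an assumption taken from the "linux,MacOSX"
    # note in this thread; adjust it to whatever the blades really report.
    KNOWN_SERVICE_KEYS = {"linux", "MacOSX"}

    def unknown_service_keys(service):
        """Return keys in a comma-separated service string that don't
        match a known key exactly (assuming case-sensitive matching)."""
        requested = [k.strip() for k in service.split(",") if k.strip()]
        return [k for k in requested if k not in KNOWN_SERVICE_KEYS]

    for candidate in ("linux,MacOSX", "Linux,MacOSX", "linux,OSX"):
        bad = unknown_service_keys(candidate)
        print(candidate, "->", "ok" if not bad else "unknown: %s" % bad)

If something like "Linux" or "OSX" comes back as unknown, that would
explain jobs sitting and waiting even with blades free.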
On Tue, 2015-06-23 at 17:21 -0400, Thomas Helmuth wrote:
> I'm not seeing tags on my runs either. Interestingly, they seem to be
> working though, since none of my runs got launched on nodes on which
> they're not supposed to run with the "tom" tag.
> 
> On Tue, Jun 23, 2015 at 5:17 PM, Bassam Kurdali <bassam at urchn.org> 
> wrote:
> > Tried it again, still waiting.
> > Interestingly enough, I think the tags are not getting picked up; I
> > checked "use custom tags" and kept the case as indicated, and there
> > are no tags showing up in Tractor - maybe it's the keg update, or
> > maybe I'm doing it wrong (tm).
> > On Tue, 2015-06-23 at 16:37 -0400, Wm. Josiah Erikson wrote:
> > > Make sure not to capitalize "Linux", too - I think it is case
> > > sensitive.
> > >      -Josiah
> > >
> > >
> > > On 6/23/15 4:36 PM, Wm. Josiah Erikson wrote:
> > > > It's because I lied to you. It should be "linux,MacOSX" for the
> > > > service tags.
> > > >     -Josiah
> > > >
> > > >
> > > > On 6/23/15 3:42 PM, Bassam Kurdali wrote:
> > > > > Hi, just spooled another shot, it's waiting to continue.. I know
> > > > > at least the mac nodes should be open.
> > > > > On Mon, 2015-06-22 at 23:41 -0400, Wm. Josiah Erikson wrote:
> > > > > > There was something about a file descriptor limit in
> > > > > > tractor-engine, so I restarted tractor-engine, and it appears
> > > > > > to have cleared up. It was probably something having to do
> > > > > > with held open file handles with new/old NFS file handles.
> > > > > > You're also spooling all your jobs with the default tag of
> > > > > > "slow", which isn't necessarily bad, but I think you could
> > > > > > also do "linux,OSX" or something and get all the nodes at
> > > > > > once. I think that's the syntax, but I could be remembering
> > > > > > wrong.
> > > > > > I also have to email Wendy about the fact that our license
> > > > > > expires in 10 days, and when I go to get a new one, it still
> > > > > > does....
> > > > > >       -Josiah
> > > > > >
> > > > > >
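
For the file descriptor issue above, a quick check (a rough sketch; it
assumes the engine host runs Linux and that /proc for the
tractor-engine process is readable) is to compare the process's open
descriptors against its soft limit:

    # Rough sketch: compare a process's open file descriptors to its
    # soft limit. A count creeping up toward the limit (e.g. stale NFS
    # handles held open) matches a symptom that a restart clears up.
    import os
    import sys

    def fd_usage(pid):
        open_fds = len(os.listdir("/proc/%d/fd" % pid))
        soft_limit = None
        with open("/proc/%d/limits" % pid) as f:
            for line in f:
                if line.startswith("Max open files"):
                    soft_limit = int(line.split()[3])
                    break
        return open_fds, soft_limit

    pid = int(sys.argv[1]) if len(sys.argv) > 1 else os.getpid()
    used, limit = fd_usage(pid)
    print("pid %d: %d open fds, soft limit %s" % (pid, used, limit))

Run it with the tractor-engine PID; if the count sits near the limit,
raising the limit wherever the engine gets launched (or restarting it
after big NFS changes, as above) would be the workaround.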
> > > > > > On 6/22/15 9:30 PM, Bassam Kurdali wrote:
> > > > > > > Hmm, all of a sudden all my tasks are blocked even though
> > > > > > > there are nodes available - down to the last 4 frames (which
> > > > > > > are retried errors) and a collate.
> > > > > > > Tim spooled some tasks but the macs are open, any ideas?
> > > > > > >
> > > > > > > > On Mon, 2015-06-22 at 20:16 -0400, Wm. Josiah Erikson wrote:
> > > > > > > > Nice. Saw some pretty impressive bandwidth usage there for
> > > > > > > > a second:
> > > > > > > >
> > > > > > > > http://artemis.hampshire.edu/mrtg/172.20.160.204_10110.html
> > > > > > > > (this is actually keg1's network connection - gotta correct
> > > > > > > > that on the map, or just swap over to the old connection)
> > > > > > > >
> > > > > > > >        -Josiah
> > > > > > > >
> > > > > > > >
> > > > > > > > On 6/22/15 7:29 PM, Bassam Kurdali wrote:
> > > > > > > > > I figured and I spooled and it seemed to be working...
> > > > > > > > > hopefully jordan doesn't restart his eternal render for a
> > > > > > > > > few hours ;)
> > > > > > > > >
> > > > > > > > > On Mon, 2015-06-22 at 18:26 -0400, Wm. Josiah Erikson
> > > > > > > > > wrote:
> > > > > > > > > > It's done!
> > > > > > > > > >         -Josiah
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > On 6/22/15 5:49 PM, Bassam Kurdali wrote:
> > > > > > > > > > > dats punk! is it done?
> > > > > > > > > > > On Mon, 2015-06-22 at 20:49 +0200, Chris Perry wrote:
> > > > > > > > > > > > I can't wait. Thanks, Josiah!
> > > > > > > > > > > >
> > > > > > > > > > > > - chris
> > > > > > > > > > > >
> > > > > > > > > > > > > On Jun 22, 2015, at 7:50 PM, Wm. Josiah Erikson
> > > > > > > > > > > > > <wjerikson at hampshire.edu> wrote:
> > > > > > > > > > > > >
> > > > > > > > > > > > > Keg will be going down shortly, and coming back
> > > > > > > > > > > > > up as a harder, faster, better, stronger version
> > > > > > > > > > > > > shortly as well, I hope :)
> > > > > > > > > > > > >        -Josiah
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > > On 6/11/15 9:53 AM, Wm. Josiah Erikson wrote:
> > > > > > > > > > > > > > Hi all,
> > > > > > > > > > > > > >        I'm pretty sure there is 100% overlap
> > > > > > > > > > > > > > between people who care about fly and people
> > > > > > > > > > > > > > who care about keg at this point (though there
> > > > > > > > > > > > > > are some people who care about fly and not so
> > > > > > > > > > > > > > much keg, like Lee and Tom - sorry), so I'm
> > > > > > > > > > > > > > sending this to this list.
> > > > > > > > > > > > > >        I have a new 32TB 14+2 RAID6, 24GB of
> > > > > > > > > > > > > > RAM, superfast (way faster than gigabit) keg
> > > > > > > > > > > > > > all ready to go! I would like to bring it up on
> > > > > > > > > > > > > > Monday, June 22nd. It would be ideal if
> > > > > > > > > > > > > > rendering was NOT happening at that time, to
> > > > > > > > > > > > > > make my rsyncing life easier :) Any objections?
> > > > > > > > > > > > > >        -Josiah


More information about the Clusterusers mailing list