[Clusterusers] Switching to new keg

Wm. Josiah Erikson wjerikson at hampshire.edu
Wed Jun 24 10:03:11 EDT 2015


The tags are on the individual tasks, not the whole job. And they're
called service keys. And I'm being dumb, because we have a service key
called "blender", which is designed to do exactly what you are trying to
do, Bassam. I should drink more coffee or do fewer things at once or
something. Sorry!
I am not sure why the comma between the service keys didn't work,
however. The tractor-spool manual says this:
--svckey=SVCKEY    specifies an additional job-wide service key
                   restriction for Cmds in the spooled job; the key(s)
                   are ANDed with any keys found on the Cmds themselves.
                   When used with the -c or --rib option, it overrides
                   "PixarRender" as the sole service key used to select
                   matching blades for those Cmds.


It just says "key(s)". Huh. But if you just use "blender" you'll be fine.
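For example, a spool invocation using just that key might look like
this (the .alf job file name is only a placeholder):

    tractor-spool --svckey=blender myshot.alf

Per the manual text above, that should restrict every Cmd in the job
to blades that advertise the "blender" service key.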

    -Josiah


On 6/23/15 5:21 PM, Thomas Helmuth wrote:
> I'm not seeing tags on my runs either. Interestingly, they seem to be
> working though, since none of my runs got launched on nodes on which
> they're not supposed to run with the "tom" tag.
>
> On Tue, Jun 23, 2015 at 5:17 PM, Bassam Kurdali <bassam at urchn.org> wrote:
>
>     Tried it again, still waiting.
>     Interestingly enough, I think the Tags are not getting picked up; I
>     checked "use custom tags" and kept the case as indicated, and there
>     are no Tags showing up in tractor - maybe it's the keg update or
>     maybe I'm doing it wrong (tm)
>     On Tue, 2015-06-23 at 16:37 -0400, Wm. Josiah Erikson wrote:
>     > Make sure not to capitalize "Linux", too - I think it is case
>     > sensitive.
>     >      -Josiah
>     >
>     >
>     > On 6/23/15 4:36 PM, Wm. Josiah Erikson wrote:
>     > > It's because I lied to you. It should be "linux,MacOSX" for the
>     > > service tags.
>     > >     -Josiah
>     > >
>     > >
>     > > On 6/23/15 3:42 PM, Bassam Kurdali wrote:
>     > > > Hi, just spooled another shot, it's waiting to continue.. I know
>     > > > at
>     > > > least the mac nodes should be open.
>     > > > On Mon, 2015-06-22 at 23:41 -0400, Wm. Josiah Erikson wrote:
>     > > > > There was something about a file descriptor limit in
>     > > > > tractor-engine, so I restarted tractor-engine, and it appears
>     > > > > to have cleared up (a quick way to check that limit is below).
>     > > > > It was probably something to do with file handles held open
>     > > > > across the old and new NFS mounts. You're also spooling all
>     > > > > your jobs with the default tag of "slow", which isn't
>     > > > > necessarily bad, but I think you could also do "linux,OSX" or
>     > > > > something and get all the nodes at once. I think that's the
>     > > > > syntax, but I could be remembering wrong.
>     > > > > I also have to email Wendy about the fact that our license
>     > > > > expires in 10 days, and when I go to get a new one, it still
>     > > > > does....
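>     > > > >
>     > > > > On Linux, something like this should show the engine's
>     > > > > current open-file limit (a sketch; it assumes the process is
>     > > > > actually named tractor-engine and only one instance is up):
>     > > > >
>     > > > >     # print the soft/hard "Max open files" limits for the engine
>     > > > >     grep "Max open files" /proc/$(pidof tractor-engine)/limits
>     > > > >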
>     > > > >       -Josiah
>     > > > >
>     > > > >
>     > > > > On 6/22/15 9:30 PM, Bassam Kurdali wrote:
>     > > > > > Hmm, all of a sudden all my tasks are blocked even though
>     > > > > > there are nodes available - down to the last 4 frames (which
>     > > > > > are retried errors) and a collate.
>     > > > > > Tim spooled some tasks but the macs are open, any ideas?
>     > > > > >
>     > > > > > On Mon, 2015-06-22 at 20:16 -0400, Wm. Josiah Erikson wrote:
>     > > > > > > Nice. Saw some pretty impressive bandwidth usage there for
>     > > > > > > a second:
>     > > > > > >
>     > > > > > > http://artemis.hampshire.edu/mrtg/172.20.160.204_10110.html
>     > > > > > >
>     > > > > > > (this is actually keg1's network connection - gotta correct
>     > > > > > > that on the map, or just swap over to the old connection)
>     > > > > > >
>     > > > > > >        -Josiah
>     > > > > > >
>     > > > > > >
>     > > > > > > On 6/22/15 7:29 PM, Bassam Kurdali wrote:
>     > > > > > > > I figured and I spooled and it seemed to be working...
>     > > > > > > > hopefully Jordan doesn't restart his eternal render for a
>     > > > > > > > few hours ;)
>     > > > > > > >
>     > > > > > > > On Mon, 2015-06-22 at 18:26 -0400, Wm. Josiah Erikson wrote:
>     > > > > > > > > It's done!
>     > > > > > > > >         -Josiah
>     > > > > > > > >
>     > > > > > > > >
>     > > > > > > > > On 6/22/15 5:49 PM, Bassam Kurdali wrote:
>     > > > > > > > > > dats punk! is it done?
>     > > > > > > > > > On Mon, 2015-06-22 at 20:49 +0200, Chris Perry wrote:
>     > > > > > > > > > > I can't wait. Thanks, Josiah!
>     > > > > > > > > > >
>     > > > > > > > > > > - chris
>     > > > > > > > > > >
>     > > > > > > > > > > > On Jun 22, 2015, at 7:50 PM, Wm. Josiah Erikson
>     > > > > > > > > > > > <wjerikson at hampshire.edu> wrote:
>     > > > > > > > > > > >
>     > > > > > > > > > > > Keg will be going down shortly, and coming back
>     > > > > > > > > > > > up as a harder, faster, better, stronger version
>     > > > > > > > > > > > shortly as well, I hope :)
>     > > > > > > > > > > >        -Josiah
>     > > > > > > > > > > >
>     > > > > > > > > > > >
>     > > > > > > > > > > > > On 6/11/15 9:53 AM, Wm. Josiah Erikson wrote:
>     > > > > > > > > > > > > Hi all,
>     > > > > > > > > > > > >        I'm pretty sure there is 100% overlap
>     > > > > > > > > > > > > between people who care about fly and people
>     > > > > > > > > > > > > who care about keg at this point (though there
>     > > > > > > > > > > > > are some people who care about fly and not so
>     > > > > > > > > > > > > much keg, like Lee and Tom - sorry), so I'm
>     > > > > > > > > > > > > sending this to this list.
>     > > > > > > > > > > > >        I have a new 32TB 14+2 RAID6 keg with
>     > > > > > > > > > > > > 24GB of RAM, superfast (way faster than
>     > > > > > > > > > > > > gigabit), all ready to go! I would like to
>     > > > > > > > > > > > > bring it up on Monday, June 22nd. It would be
>     > > > > > > > > > > > > ideal if rendering was NOT happening at that
>     > > > > > > > > > > > > time, to make my rsyncing life easier :) Any
>     > > > > > > > > > > > > objections?
>     > > > > > > > > > > > >        -Josiah

-- 
Wm. Josiah Erikson
Assistant Director of IT, Infrastructure Group
System Administrator, School of CS
Hampshire College
Amherst, MA 01002
(413) 559-6091
