<div dir="ltr"><div><div>I looked up the service key syntax: <a href="https://renderman.pixar.com/resources/18/tractor/scripting.html#srvkeys">https://renderman.pixar.com/resources/18/tractor/scripting.html#srvkeys</a><br><br></div>It turns out, comma means to AND the service keys together. So, "linux,MacOSX" will only run on a node that is both linux and MaxOSX, which is obviously bad. You have to use "linux||MacOSX" to get linux OR MaxOSX.<br><br></div>Tom<br><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jun 24, 2015 at 10:07 AM, Wm. Josiah Erikson <span dir="ltr"><<a href="mailto:wjerikson@hampshire.edu" target="_blank">wjerikson@hampshire.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000">
    It looks like any service keys specified when spooling replace
    "slow". The job spooled right now shows up as having "linux,MacOSX"
    on each task.<span class=""><font color="#888888"><br>
        -Josiah</font></span><div><div class="h5"><br>
On 6/24/15 3:38 AM, Chris Perry wrote:
How are you spooling, Bassam?

If the "old way" (via the website), then I think "slow" might be hardcoded into each RemoteCmd declaration. Check not just the job but also the various subtasks for the tags; they may show up there. I kind of remember noticing that when Owen and I were rewriting things this spring, so check and see what you find. We can update this all when I get back (at the latest).

- chris
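(To illustrate what that hardcoding would look like if the old spooler builds its jobs through Tractor's Python author API: this is a guess at the shape of the code, not the actual spooler, and the helper name and blender command line are invented.)

    import tractor.api.author as author

    def frame_task(job, frame, service="slow"):
        # If "slow" is baked into this default (or pasted into every
        # RemoteCmd the spooler writes out), whatever tags you type into
        # the web form never reach the per-frame commands.
        task = job.newTask(title="frame %d" % frame)
        task.newCommand(argv=["blender", "-b", "shot.blend", "-f", str(frame)],
                        service=service)
        return task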
On 2015-06-23 23:17, Bassam Kurdali wrote:
Tried it again, still waiting. Interestingly enough, I think the Tags are not getting picked up; I checked "use custom tags" and kept the case as indicated, and there are no Tags showing up in Tractor - maybe it's the keg update, or maybe I'm doing it wrong (tm).

On Tue, 2015-06-23 at 16:37 -0400, Wm. Josiah Erikson wrote:
        <blockquote type="cite" style="padding-left:5px;border-left:2px solid rgb(16,16,255);margin-left:5px">Make sure not
          to capitalize "Linux", too - I think it is case sensitive.
          -Josiah On 6/23/15 4:36 PM, Wm. Josiah Erikson wrote:
          <blockquote type="cite" style="padding-left:5px;border-left:2px solid rgb(16,16,255);margin-left:5px">It's because
            I lied to you. It should be "linux,MacOSX" for the service
            tags. -Josiah On 6/23/15 3:42 PM, Bassam Kurdali wrote:
            <blockquote type="cite" style="padding-left:5px;border-left:2px solid rgb(16,16,255);margin-left:5px">Hi, just
              spooled another shot, it's waiting to continue.. I know at
              least the mac nodes should be open. On Mon, 2015-06-22 at
              23:41 -0400, Wm. Josiah Erikson wrote:
              <blockquote type="cite" style="padding-left:5px;border-left:2px solid rgb(16,16,255);margin-left:5px">There
                was something about a file descriptor limit in tractor
                -engine, so I restarted tractor-engine, and it appears
                to have cleared up. It was probably something having to
                do with held open file handles with new/old NFS file
                handles. You're also spooling all your jobs with the
                default tag of "slow", which isn't necessarily bad, but
                I think you could also do "linux,OSX" or something and
                get all the nodes at once. I think that's the syntax,
                but I could be remembering wrong. I also have to email
                Wendy about the fact that our license expires in 10
                days, and when I go to get a new one, it still does....
                -Josiah On 6/22/15 9:30 PM, Bassam Kurdali wrote:
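(For next time, a quick way to check whether the engine really is bumping into its descriptor cap before restarting it. A rough sketch: it's Linux /proc only, and matching the process by the name "tractor-engine" is a guess.)

    import os
    import subprocess

    # You need to be root (or the engine's owner) to read another
    # process's fd directory under /proc.
    pid = subprocess.check_output(
        ["pgrep", "-f", "tractor-engine"]).split()[0].decode()
    n_open = len(os.listdir("/proc/%s/fd" % pid))
    with open("/proc/%s/limits" % pid) as f:
        max_files = [line for line in f if line.startswith("Max open files")][0]
    print("open fds: %d" % n_open)
    print(max_files.strip())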
                <blockquote type="cite" style="padding-left:5px;border-left:2px solid rgb(16,16,255);margin-left:5px">Hmm,
                  all of sudden all my tasks are blocked even though
                  there are nodes available - down to the last 4 frames
                  (which are retried errors) and a collate. Tim spooled
                  some tasks but the macs are open, any ideas? On Mon,
                  2015-06-22 at 20:16 -0400, Wm. Josiah Erikson wrote:
                  <blockquote type="cite" style="padding-left:5px;border-left:2px solid rgb(16,16,255);margin-left:5px">Nice.
                    Saw some pretty impressive bandwidth usage there for
                    a second: <a href="http://artemis.hampshire.edu/mrtg/172.20.160.204_10110.html" target="_blank">http://artemis.hampshire.edu/mrtg/172.20.160.204_10110.html</a>(this
                    is actually keg1's network connection - gotta
                    correct that on the map, or just swap over to the
                    old connection) -Josiah On 6/22/15 7:29 PM, Bassam
                    Kurdali wrote:
                    <blockquote type="cite" style="padding-left:5px;border-left:2px solid rgb(16,16,255);margin-left:5px">I
                      figured and I spooled and it seemed to be
                      working... hopefully jordan doesn't restart his
                      eternal render for a few hours ;) On Mon,
                      2015-06-22 at 18:26 -0400, Wm. Josiah Erikson
                      wrote:
                      <blockquote type="cite" style="padding-left:5px;border-left:2px solid rgb(16,16,255);margin-left:5px">It's
                        done! -Josiah On 6/22/15 5:49 PM, Bassam Kurdali
                        wrote:
                        <blockquote type="cite" style="padding-left:5px;border-left:2px solid rgb(16,16,255);margin-left:5px">dats punk! is it done? On
                          Mon, 2015-06-22 at 20:49 +0200, Chris Perry
                          wrote:
                          <blockquote type="cite" style="padding-left:5px;border-left:2px solid rgb(16,16,255);margin-left:5px">I can't wait.
                            Thanks, Josiah! - chris
                            <blockquote type="cite" style="padding-left:5px;border-left:2px solid rgb(16,16,255);margin-left:5px">On Jun 22, 2015, at 7:50
                              PM, Wm. Josiah Erikson < <a href="mailto:wjerikson@hampshire.edu" target="_blank">wjerikson@hampshire.edu</a>>
                              wrote: Keg will be going down shortly, and
                              coming back up as a harder, faster,
                              better, stronger version shortly as well,
                              I hope :) -Josiah
                              <blockquote type="cite" style="padding-left:5px;border-left:2px solid rgb(16,16,255);margin-left:5px">On 6/11/15 9:53 AM, Wm.
                                Josiah Erikson wrote: Hi all, I'm pretty
                                sure there is 100% overlap between
                                people who care about fly and people who
                                care about keg at this point (though
                                there are some people who care about fly
                                and not so much keg, like Lee and Tom -
                                sorry), so I'm sending this to this
                                list. I have a new 32TB 14+2 RAID6 with
                                24GB of RAM superfast (way faster than
                                gigabit) keg all ready to go! I would
                                like to bring it up on Monday, June
                                22nd. It would be ideal if rendering was
                                NOT happening at that time, to make my
                                rsyncing life easier :) Any objections?
                                -Josiah</blockquote>
-- 
Wm. Josiah Erikson
Assistant Director of IT, Infrastructure Group
System Administrator, School of CS
Hampshire College
Amherst, MA 01002
(413) 559-6091

_______________________________________________
Clusterusers mailing list
Clusterusers@lists.hampshire.edu
https://lists.hampshire.edu/mailman/listinfo/clusterusers