[Clusterusers] Switching to new keg

Chris Perry perry at hampshire.edu
Wed Jun 24 03:38:18 EDT 2015


 

How are you spooling, Bassam? 

If the "old way" (via website), then I think "slow" might be hardcoded
into each remoteCmd declaration. Check not just the job but the various
subtasks for the tags, they may show up there. I kind of remember
noticing that when Owen and I were rewriting things this spring; check
and see what you find. We can update this all when I get back (at the
latest). 
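
For concreteness, here's roughly the shape of spooled job file I have in
mind - this is a hand-typed sketch, not actual output from the web
spooler, so the title, path, and blender command below are just
placeholders. The point is that a service key can sit both on the Job
line and on each RemoteCmd, and the per-command one is where I suspect
"slow" is getting baked in:

    # sketch of an Alfred/Tractor-style job script; all values illustrative
    Job -title {shot_010} -service {slow} -subtasks {
        Task {Frame 0001} -cmds {
            RemoteCmd {blender -b /render/shot_010.blend -f 1} -service {slow}
        }
    }

If that's what the website generates, grepping the spooled file for
"service" (or just "slow") should tell you whether every command carries
the tag and not just the job header.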

- chris 

On 2015-06-23 23:17, Bassam Kurdali wrote: 

> Tried it again, still waiting.
> Interestingly enough, I think the Tags are not getting picked up; I
> checked "use custom tags" and kept the case as indicated, and there are
> no Tags showing up in Tractor - maybe it's the keg update or maybe I'm
> doing it wrong (tm).
> On Tue, 2015-06-23 at 16:37 -0400, Wm. Josiah Erikson wrote:
> Make sure not to capitalize "Linux", too - I think it is case
> sensitive.
> -Josiah
>
> On 6/23/15 4:36 PM, Wm. Josiah Erikson wrote:
> It's because I lied to you. It should be "linux,MacOSX" for the
> service tags.
> -Josiah
>
> On 6/23/15 3:42 PM, Bassam Kurdali wrote:
> Hi, just spooled another shot, it's waiting to continue.. I know at
> least the mac nodes should be open.
>
> On Mon, 2015-06-22 at 23:41 -0400, Wm. Josiah Erikson wrote:
> There was something about a file descriptor limit in tractor-engine,
> so I restarted tractor-engine, and it appears to have cleared up. It
> was probably something having to do with held open file handles with
> new/old NFS file handles. You're also spooling all your jobs with the
> default tag of "slow", which isn't necessarily bad, but I think you
> could also do "linux,OSX" or something and get all the nodes at once.
> I think that's the syntax, but I could be remembering wrong. I also
> have to email Wendy about the fact that our license expires in 10
> days, and when I go to get a new one, it still does....
> -Josiah
>
> On 6/22/15 9:30 PM, Bassam Kurdali wrote:
> Hmm, all of a sudden all my tasks are blocked even though there are
> nodes available - down to the last 4 frames (which are retried errors)
> and a collate. Tim spooled some tasks but the macs are open, any
> ideas?
>
> On Mon, 2015-06-22 at 20:16 -0400, Wm. Josiah Erikson wrote:
> Nice. Saw some pretty impressive bandwidth usage there for a second:
> http://artemis.hampshire.edu/mrtg/172.20.160.204_10110.html
> (this is actually keg1's network connection - gotta correct that on
> the map, or just swap over to the old connection)
> -Josiah
>
> On 6/22/15 7:29 PM, Bassam Kurdali wrote:
> I figured and I spooled and it seemed to be working... hopefully
> jordan doesn't restart his eternal render for a few hours ;)
>
> On Mon, 2015-06-22 at 18:26 -0400, Wm. Josiah Erikson wrote:
> It's done!
> -Josiah
>
> On 6/22/15 5:49 PM, Bassam Kurdali wrote:
> dats punk! is it done?
>
> On Mon, 2015-06-22 at 20:49 +0200, Chris Perry wrote:
> I can't wait. Thanks, Josiah!
> - chris
>
> On Jun 22, 2015, at 7:50 PM, Wm. Josiah Erikson
> <wjerikson at hampshire.edu> wrote:
> Keg will be going down shortly, and coming back up as a harder,
> faster, better, stronger version shortly as well, I hope :)
> -Josiah
>
> On 6/11/15 9:53 AM, Wm. Josiah Erikson wrote:
> Hi all, I'm pretty sure there is 100% overlap between people who care
> about fly and people who care about keg at this point (though there
> are some people who care about fly and not so much keg, like Lee and
> Tom - sorry), so I'm sending this to this list. I have a new 32TB 14+2
> RAID6 with 24GB of RAM superfast (way faster than gigabit) keg all
> ready to go! I would like to bring it up on Monday, June 22nd. It
> would be ideal if rendering was NOT happening at that time, to make my
> rsyncing life easier :) Any objections?
> -Josiah
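
Going by Josiah's correction in the quoted thread above (I haven't
re-tested this myself, so treat it as a guess until one of us confirms):
the service keys look to be case sensitive, so the fix on a spooled job
would just be swapping the service key expression on each command,
something like:

    # before - the default the old spooler seems to write
    RemoteCmd {blender -b /render/shot_010.blend -f 1} -service {slow}

    # after - case-sensitive keys as Josiah wrote them
    RemoteCmd {blender -b /render/shot_010.blend -f 1} -service {linux,MacOSX}

(The blender command and path are the same placeholders as in the sketch
earlier in this message.)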

_______________________________________________
Clusterusers mailing list
Clusterusers at lists.hampshire.edu
https://lists.hampshire.edu/mailman/listinfo/clusterusers

 


