[Clusterusers] Opinions....
Bassam Kurdali
bassam at urchn.org
Fri Jan 10 16:43:59 EST 2014
Hi Josiah, I'm fine with whatever your decision is; I'm sure you have
the right compute-power-per-dollar calculation figured out. We can run
happily on faster computers or on more, slower cores, so it isn't an
issue for us.
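
For reference, here is a minimal back-of-envelope sketch of the cost-per-core
and cost-per-GB arithmetic, using only the figures Josiah quotes below
(4 used dual-processor six-core nodes: $3100 total with 48GB of RAM each, or
$4600 total with 96GB each). The rack-2 option's price isn't stated in the
thread, so it is left out, and the helper name is just for illustration.

# Back-of-envelope cost comparison for option (2), using the figures
# quoted in Josiah's message below. The rack-2 nodes' price and core
# count are not given in the thread, so they are not included here.

def summarize(label, nodes, cores_per_node, ram_gb_per_node, total_price_usd):
    # Totals across all nodes in the proposed purchase.
    cores = nodes * cores_per_node
    ram_gb = nodes * ram_gb_per_node
    print(f"{label}: {cores} physical cores, {ram_gb} GB RAM, "
          f"${total_price_usd / cores:.0f}/core, ${total_price_usd / ram_gb:.2f}/GB")

# 4 nodes, 12 physical cores each (24 hardware threads), 48 GB each, $3100 total
summarize("48 GB option", nodes=4, cores_per_node=12, ram_gb_per_node=48,
          total_price_usd=3100)
# Same nodes with 96 GB each, $4600 total
summarize("96 GB option", nodes=4, cores_per_node=12, ram_gb_per_node=96,
          total_price_usd=4600)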
On Fri, Jan 10, 2014 at 3:22 PM, Lee Spector <lspector at hampshire.edu>
wrote:
>
> Thank YOU!
>
> -Lee
>
>
> On Jan 10, 2014, at 1:33 PM, Wm. Josiah Erikson wrote:
>
>> What I meant by "larger" in this case was faster cores and more RAM,
>> so everybody should be able to take full advantage of them. They'll
>> be the closest thing to (or maybe faster than) the machine Lee's got
>> in his office. Same processors, or very similar, I think. So I'll do
>> both - a $3100 node and another node like the one we already have in
>> Rack 2. Cool. Thanks guys.
>> -Josiah
>>
>>
>> On 1/10/14 10:48 AM, Thomas Helmuth wrote:
>>> For my needs, I think more cheap nodes is better than fewer
>>> awesome nodes. Rack 2 has been great for most (or all?) of the
>>> things I've wanted to do since it was put in. I rarely use the
>>> larger nodes at all, partly because others can make better use of
>>> them and partly because I've had weird errors when I try to use them.
>>>
>>> Of course, I'm mostly happy with the computing power we have now,
>>> so if others want larger nodes, their voices should probably take
>>> priority.
>>>
>>> -Tom
>>>
>>>
>>> On Fri, Jan 10, 2014 at 10:41 AM, Wm. Josiah Erikson
>>> <wjerikson at hampshire.edu> wrote:
>>> In buying more nodes for the cluster this year, should I
>>> prioritize:
>>>
>>> (1) More cheap nodes like we currently have in rack 2 (I could buy
>>> somewhere around 12 - 16 more of those nodes, or maybe 4 if I go
>>> with option 2 as well)
>>> (2) Faster, more modern nodes with more RAM, but fewer of them (4
>>> of them, to be precise)
>>>
>>> I'm leaning towards (2), because:
>>> 1. They will last longer
>>> 2. We're almost out of space
>>> 3. We don't have very many really fast, semi-modern Intel
>>> nodes.
>>>
>>> I can buy 4 nodes that are kind of like dual-processor, six-core (so
>>> 12 physical cores total, which show up as 24 because each core has
>>> two hardware threads) versions of compute-1-17, with 48GB of RAM each,
>>> for a total of $3100 used on eBay... or with 96GB of RAM each for
>>> $4600. Seems like a good plan (we don't ever need quite that much
>>> RAM though, do we?). Or I could buy a whole bunch more of what
>>> we've got in rack 2, though I'd have to get rid of a couple of the
>>> rack 1 nodes, and I couldn't put them on UPS, which is maybe OK,
>>> since the power problems seem to have been resolved.
>>>
>>> Thoughts?
>>>
>>> --
>>> Wm. Josiah Erikson
>>> Assistant Director of IT, Infrastructure Group
>>> System Administrator, School of CS
>>> Hampshire College
>>> Amherst, MA 01002
>>> (413) 559-6091
>>>
>>> _______________________________________________
>>> Clusterusers mailing list
>>> Clusterusers at lists.hampshire.edu
>>> https://lists.hampshire.edu/mailman/listinfo/clusterusers
>>>
>>
>> --
>> Wm. Josiah Erikson
>> Assistant Director of IT, Infrastructure Group
>> System Administrator, School of CS
>> Hampshire College
>> Amherst, MA 01002
>> (413) 559-6091
>>
>> _______________________________________________
>> Clusterusers mailing list
>> Clusterusers at lists.hampshire.edu
>> https://lists.hampshire.edu/mailman/listinfo/clusterusers
>>
> --
> Lee Spector, Professor of Computer Science
> Cognitive Science, Hampshire College
> 893 West Street, Amherst, MA 01002-3359
> lspector at hampshire.edu, http://hampshire.edu/lspector/
> Phone: 413-559-5352, Fax: 413-559-5438
>
> _______________________________________________
> Clusterusers mailing list
> Clusterusers at lists.hampshire.edu
> https://lists.hampshire.edu/mailman/listinfo/clusterusers
>