[Push] Speed
PerPlex Ed
edperplex at yahoo.com
Wed Jun 30 18:54:39 EDT 2010
Just to conclude this benchmarking fest, here are two screenshots of the running averages of the timings from the usual symbolic regression run:
http://img404.imageshack.us/img404/6797/stats01.jpg
http://img85.imageshack.us/img85/2065/stats02.jpg
"Overall time" is the time spent in the main parallel loop (the program was running on an ordinary 4-core CPU). That's the real wall-clock time needed to evaluate all 1000 individuals, everything included. Here it's 168-169 msec.
"Total time" is the sum of the times spent evaluating the fitness functions of all individuals. Since the four cores work in parallel, it's roughly four times the "Overall time": 662-668 msec.
"Total interpreter time" is the sum of the times spent by the interpreters executing all code, excluding the time spent initializing the interpreters. Pure execution time: 415-429 msec.
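The relationship between the three counters can be sketched as follows. This is a hypothetical harness (not the author's actual code, and the sleeps merely stand in for real work): "Overall time" is the wall clock around the parallel loop, while the two "Total" figures are per-individual durations summed across all workers.

```python
# Hypothetical sketch of the three timing counters from the post.
# Sleeps stand in for interpreter setup and program execution.
import time
from concurrent.futures import ThreadPoolExecutor
from threading import Lock

N_INDIVIDUALS = 8
WORKERS = 4

totals = {"fitness": 0.0, "interpreter": 0.0}
lock = Lock()

def evaluate(individual):
    t0 = time.perf_counter()
    time.sleep(0.01)               # interpreter initialization (simulated)
    t1 = time.perf_counter()
    time.sleep(0.03)               # pure program execution (simulated)
    t2 = time.perf_counter()
    with lock:
        totals["fitness"] += t2 - t0      # contributes to "Total time"
        totals["interpreter"] += t2 - t1  # contributes to "Total interpreter time"

t_start = time.perf_counter()
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    list(pool.map(evaluate, range(N_INDIVIDUALS)))
overall = time.perf_counter() - t_start    # "Overall time": wall clock of the loop

print(overall, totals["fitness"], totals["interpreter"])
```

With the sleeps spread over 4 workers, the summed "Total time" comes out roughly 4x the wall-clock "Overall time", and the interpreter total is strictly smaller than the fitness total, matching the ordering in the figures above.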
A note on terms: I use "execution" when referring to the number of points actually executed by an interpreter, and "evaluation" when referring to the "static" size of the program being evaluated. "Execution speed" is maybe the more interesting figure. The "Overall speeds" are obviously about 4 times higher than the "Fitness speeds" (calculated over the whole time needed to evaluate the fitness, including creating the interpreter, i.e. the "Total time" above), which in turn are lower than the pure "Interpreter speeds" (based on the "Total interpreter time" defined above).
Anyway, "Fitness execution speed" comes out between 1.2 and 1.3 million points per second.
Point execution time, based on the "Fitness execution speed", is between 0.85 and 0.89 microseconds.
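The per-point time is just the reciprocal of the execution speed; as a sanity check (the exact speed values here are assumed for illustration, since the post rounds its figures):

```python
# Per-point execution time as the reciprocal of execution speed.
def point_time_us(points_per_second):
    """Convert a speed in points/second to microseconds per point."""
    return 1e6 / points_per_second

# A fitness execution speed of ~1.18 M points/s corresponds to ~0.85 us/point,
# and ~1.12 M points/s to ~0.89 us/point (speeds assumed, not from the post).
print(round(point_time_us(1.18e6), 2))
print(round(point_time_us(1.12e6), 2))
```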
To take these measurements I compiled everything in so-called "Release" mode and ran the program without the debugger attached.
I'm not a benchmarking maniac (I rarely measure anything, in fact) but I think these measurements are correct.
Since these are much better than your timings, I'll let you know if I ever find a miscalculation.