stephenbrooks.org Forum > Muon1 Q&A > Workload - Can this be set?
Ripper
2003-11-19 13:54:18
I have been examining the stats and things just don't add up.  I now have 7 computers crunching on this project, and I see that "ranking" is based on number of results and number of particles.  I have no problem with this.  However, I also see that some people have accumulated almost twice the number of Total Particles while having fewer units completed.  I have no problem with that either.

My question/concern is: can someone optimize their systems for this, or is it actually "designed" to work this way?  I understand that the units are random for the most part, but can I do something to maximize the output for the amount of time that my computer is running?  My computers are all running 24/7, along with RSATTACK576; however, DP has priority on the CPU.  On 3 of them, I am using HT to run both projects at 50% CPU utilization each.

It just doesn't seem "fair" that some computers are working less and getting better output.  If this is a matter of tweaking some settings, then I would like the inside skinny.

I have added the TOP1000 file to the results.dat on all of my computers.  Will this help or hinder my output?

THANKS

Iley A. Pullen
[DPC]Stephan202
2003-11-19 14:43:09
Though the stats are sorted by results, you should look at the Mpts.  Mpts stands for million particle-timesteps, so the Mpts value indicates the amount of CPU time donated.  When you sort by Mpts, everyone is ranked in a fair way.

As for the TOP1000: I don't know where you got it from, but using a topNNN file does not necessarily mean that you are providing better results to the project.  To understand this, you should search the forums for some old topics in which we have discussed the pros and cons of using a topNNN file.
Because of the cons, Stephen only provides a sampleNNN file on the Muon1 homepage at the moment.  This sampleNNN file contains all kinds of good and bad results.

It's too late in the evening over here for me to explain this issue thoroughly, but my opinion is that someone like you, with 7 computers, should not use any top or sample file: just start from scratch, merge the results.dat files of your 7 computers every now and then, and put the combined file in the directory of every client.  That way you may never get a 16% yield, but on the other hand you may also find a yield that's higher than any found up till now.  It's all about randomness, mutations and fresh 'genes'.

I have one computer over here that uses an old topNNN file.  It just won't get beyond 16.10%.  The other client I have is on a university computer that I run when I'm there.  I started that one from scratch and it's around 0.70% now, after less than 24 hours on a 2.4 GHz machine.  So, for the sake of science, and for me (because I took the time to write this large post :)), don't use a topNNN file.

Ugh.  Stephan has spoken 8)

---
Dutch Power Cow.
MOOH!
Ripper
2003-11-19 14:53:53
Thanks.  That almost makes sense... which is scary when you're talking to someone like me.

Would it be beneficial for me to erase the results.dat file and start over, so to speak?

My mistake, I used the sample file off the main page... somehow it stuck in my head that it was a TOP1000 file.  Sorry.

I suffer from short-term memory loss, so it is a wonder that I remember to come back and check my messages.

Iley A. Pullen
[DPC]Stephan202
2003-11-20 00:17:33
:D

Erasing the .dat file would not directly be beneficial for you, but it would be for the project, in my opinion.  Stephen hasn't responded in this thread yet, but I think he'll agree with me.

If you're unsure, or do not want to 'take action' after hearing one person's opinion, it's best to wait till someone else replies to this thread.

Just remember: it's the Mpts that count, not the number of results.

---
Dutch Power Cow.
MOOH!
Stephen Brooks
2003-11-20 02:39:22
Yeah, actually if someone appears to have a large number of 'results' compared to their Mpts score, it probably indicates that they're rubbish ones, since normally fast-running results have low yields!

It's up to you whether you want to run from the project's sample file or come up with an independent breed.  Here I have a fast computer and it's got halfway up to the max yield currently found in v4.3/SolenoidsTo15cm so far on its own.  So with 7 computers it might be worth a try, if you're willing to put in the effort to merge the results.dat files with each other manually (or to write a script to do that).
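
For anyone who does want to script that merge, here is a minimal sketch in Python.  It simply concatenates the results.dat files from several machines, drops exact duplicate lines, and writes the combined file back out to copy into each client's directory.  It assumes one result record per line, which is an assumption about the file layout rather than anything documented in this thread.

# Minimal results.dat merge sketch.
# Assumption (not confirmed in this thread): each line of results.dat
# is one self-contained result record.
import sys

def merge_results(paths, out_path="results_merged.dat"):
    seen = set()
    merged = []
    for path in paths:
        with open(path, "r", errors="replace") as f:
            for line in f:
                line = line.rstrip("\n")
                # Keep only the first occurrence of each distinct line.
                if line and line not in seen:
                    seen.add(line)
                    merged.append(line)
    with open(out_path, "w") as f:
        f.write("\n".join(merged) + "\n")
    print(f"Merged {len(paths)} files into {out_path} ({len(merged)} unique results)")

if __name__ == "__main__":
    # Usage: python merge_results.py pc1/results.dat pc2/results.dat ...
    merge_results(sys.argv[1:])

You would then copy results_merged.dat back over the results.dat in each client's directory, as Stephan suggested above.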

HB Pencils, also sold as "Moron's Choice" Graphite Cigars.
Herb[Romulus2]
2003-11-20 12:20:18
The risk with using the sample file or a "best of xxx" is just inbreeding, and then you get stuck.

On the other hand, in the last version we made extensive use of the best-of database provided courtesy of the DPC guys, in which a new breed from badger[ocau] suddenly showed up and finally made the top results.

Hey Stephen, I believe this is what actually made that gap in your distribution chart ;) all the people switched over to the newer design line.  You can actually see the same thing in the current design baseline: there is quite a gap between the 9.3x and the 9.5+ results.  As soon as better values are published, the mass of the results will work within those parameters.

I'm working on visualizations via Excel sheets to detect new trends in parameters, which is sometimes easier than just the maths ;) See http://webwi.de/data/queue.xls (don't ask for explanations, they would take a week ;)) and the corresponding http://webwi.de/data/results.dat.
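
If you'd rather script this kind of trend-spotting than use Excel, here is a rough sketch of the same idea in Python.  It is hypothetical in that it assumes each results.dat line is a set of semicolon-separated name=value pairs with the yield stored under a key called "yield"; the actual file layout and parameter names aren't spelled out in this thread, so adjust the keys to whatever your files contain.

# Rough sketch: scatter one design parameter against yield to spot trends.
# Assumptions (not confirmed in this thread): each results.dat line holds
# semicolon-separated name=value pairs, with the yield under the key "yield".
import matplotlib.pyplot as plt

def load_results(path):
    records = []
    with open(path, "r", errors="replace") as f:
        for line in f:
            pairs = {}
            for field in line.strip().split(";"):
                if "=" in field:
                    key, _, value = field.partition("=")
                    pairs[key.strip()] = value.strip()
            if pairs:
                records.append(pairs)
    return records

def plot_parameter(records, param, yield_key="yield"):
    xs, ys = [], []
    for rec in records:
        try:
            xs.append(float(rec[param]))
            ys.append(float(rec[yield_key]))
        except (KeyError, ValueError):
            continue  # skip records missing either field
    plt.scatter(xs, ys, s=5)
    plt.xlabel(param)
    plt.ylabel("yield (%)")
    plt.title(param + " vs yield")
    plt.show()

if __name__ == "__main__":
    recs = load_results("results.dat")
    plot_parameter(recs, "s1f")  # "s1f" is a placeholder parameter name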

-------------------------------
I'd say more, but I can't reach the keyboard from the floor.