kitsura 2004-10-12 09:49:47 | Ok, it's a theory; I'm wondering if the evolution really is limited by CPU speed. My main PC has been optimising the chicane for months but is still stuck at negative values and not going any higher. If the evolution is indeed CPU-speed limited, would that mean that if you had a supercomputer crunching it you would get super-good optimisations? |
Deleenheir 2004-10-12 12:15:00 | looks like my computer has more luck with the chicane |
kitsura 2004-10-12 15:42:55 | Qn: did you set your config to download sample files? If yes, then you won't see what I'm talking about. I've always run my clients isolated. |
Deleenheir 2004-10-12 15:46:43 | No, I just installed the client and let it run for a few days... |
AySz88 2004-10-12 21:19:49 | I'm fairly sure that if your population is able to generate more results, you'll get more combinations and better results... So, yes, speed is definitely a plus. |
kitsura 2004-10-13 03:37:03 | quote: Go into your config.txt and locate this line: "Download sample results file after a number of days (0=don't):" If it's anything but 0, that means you're downloading other clients' optimised results to your PC and further optimising them. quote: I'm talking more about a concrete limit, e.g. if your PC = 1GHz, the optimal ChicaneLinacB = -1.715 even if the number of days = infinity. |
Stephen Brooks 2004-10-13 05:38:12 | No, that's not right. A 1GHz machine ought to do as well as a 3GHz one, just 3x slower. In fact, a 1GHz machine should theoretically do as well as the entire project, just 400x slower. |
kitsura 2004-10-13 10:12:03 | Well, I'm testing this theory to the fullest. I'm just wondering how some clients managed to come up with optimisations so fast at the beginning of the project, when there weren't sample files to optimise. Anyway, if what you say is true, then if I have 10 PCs at 1GHz would it be 10x faster? And what if I had one 10GHz PC, would that be better? |
Stephen Brooks 2004-10-13 12:51:54 | Well, it's really a bit more complicated than that, I guess. The 10GHz computer will probably be faster than 10 separate 1GHz ones since it would have more runs _in series_ to learn from. If you worked a system where you distributed your results.dat between the 10 computers, though, it would only be marginally slower, as I'm guessing evolution isn't so fast as to mind whether 10 simulations are done in parallel or series before being added to the database. The percentage increased "unusually fast" to begin with because (as well as the optimisation just being easier at that point) the evolution process goes in fits and starts. If you start several computers off from a blank start, you'll see that they run into different plateaus for different lengths of time. What the sample results process gave us is the best of all those plateaus, so the project-wide evolution broke out of them much faster than a single machine would (and I don't think any single machine actually performed as well as the project graph on its own). |
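To make the series-versus-parallel point concrete, here is a minimal sketch in Python. It uses a made-up one-dimensional fitness landscape and simple hill climbing as a stand-in for the real optimiser (none of the functions or numbers come from the Muon1 client): one fast machine runs all its simulations in series, while ten slower machines each run a batch and then re-seed from a shared pooled best, roughly like exchanging a results.dat file between them.

import random

def fitness(x):
    # Hypothetical 1-D "transfer percentage" landscape; not the real simulation.
    return 100.0 / (1.0 + (x - 3.0) ** 2) + random.gauss(0, 0.05)

def mutate(x):
    # One small random tweak to a design parameter.
    return x + random.gauss(0, 0.1)

def serial_machine(total_sims):
    # One fast machine: every simulation can learn from the one before it.
    best_x, best_f = 0.0, fitness(0.0)
    for _ in range(total_sims):
        cand = mutate(best_x)
        f = fitness(cand)
        if f > best_f:
            best_x, best_f = cand, f
    return best_f

def pooled_machines(machines, sims_per_batch, batches):
    # Several slower machines: each evolves on its own within a batch, then the
    # shared pool (think results.dat) keeps the best design found so far.
    pool_x, pool_f = 0.0, fitness(0.0)
    for _ in range(batches):
        batch_results = []
        for _ in range(machines):
            x, f = pool_x, pool_f   # seed each machine from the pooled best
            for _ in range(sims_per_batch):
                cand = mutate(x)
                cf = fitness(cand)
                if cf > f:
                    x, f = cand, cf
            batch_results.append((f, x))
        best_f, best_x = max(batch_results)
        if best_f > pool_f:
            pool_f, pool_x = best_f, best_x
    return pool_f

random.seed(1)
print("1 machine, 1000 sims in series :", serial_machine(1000))
print("10 machines x 100 sims, pooled :", pooled_machines(10, 10, 10))

With pooling between batches the parallel setup behaves much like the serial one on this toy landscape, which is the "only marginally slower" behaviour described above; without the pooling step, each machine would have to climb entirely on its own.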
David Bass 2004-10-13 16:31:44 | I believe that a single machine might well develop solutions faster than the parallel network, for a couple of reasons. One is the delay in submitting results due to the 100k sendfile default. This means that new candidate best scores will get queued for retests more often - machine one might not know about an already-proven better solution on machine two that has not been pooled yet. It would then waste effort proving a solution that was locally better but not globally so. The other reason is a bit more obscure, and I'm not sure about it, or whether I can articulate the argument, but here goes: the use of a commonly pooled "best of" results file might favour results that were submitted quickly. This could lead to local maxima of percentage that use the minimum simulation time. It is then possible that the widespread use of these solutions as seeds could trap the entire network in a local maximum that excludes high-effort simulations, by evolving a maximum percentage too high to be jumped out of by the normal mutation etc. methods. Hope that's clear. |
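A small thought-experiment in code may help with that second point. This is only a sketch with invented numbers, not how Muon1's pooling actually works: it assumes a pool that keeps just the single best score, and two hypothetical design families, a "quick" one that is cheap to simulate (so it gets many more attempts per unit of CPU time) and a "thorough" one that scores a bit higher on average but costs five times more per run. Because most new runs are seeded from whichever family currently holds the record, a record set by the quick family tends to keep the pool breeding quick designs.

import random

# Two hypothetical design families (illustrative numbers only, not Muon1 data).
FAMILIES = {
    "quick":    {"cost": 1.0, "mean_score": 2.0},
    "thorough": {"cost": 5.0, "mean_score": 2.5},
}

def simulate(family):
    params = FAMILIES[family]
    score = random.gauss(params["mean_score"], 0.3)
    return score, params["cost"]

def run_pool(cpu_budget):
    # The pool keeps only the single best score; most new runs are bred from
    # the family holding the record, with occasional cross-family jumps.
    best_score, best_family = float("-inf"), "quick"
    spent = 0.0
    while spent < cpu_budget:
        if random.random() < 0.9:
            family = best_family
        else:
            family = random.choice(list(FAMILIES))
        score, cost = simulate(family)
        spent += cost
        if score > best_score:
            best_score, best_family = score, family
    return best_family

random.seed(2)
trials = 200
quick_wins = sum(run_pool(cpu_budget=200.0) == "quick" for _ in range(trials))
print(f"'quick' family holds the record in {quick_wins}/{trials} trials")

Whether the trap actually closes depends on the numbers chosen, but the mechanism is the one described above: many cheap draws can inflate the record held by the low-effort family, so most subsequent seeds stay there.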
UnderTitan 2004-10-13 22:49:40 | David, are you suggesting that the larger pooled results could somehow dilute the potential? |
kitsura 2004-10-14 07:31:01 | Theoretically, yes. The best results often come from the same breed, so other breeds with better potential might get left out since they don't evolve as fast. Stephen has mentioned this several times, which is the main reason the sample files contain a wide spectrum of results and not just the highest-scoring breeds. I have always had the theory that keeping my clients isolated might someday yield a better, more unique breed. Just take a look at the flora and fauna of Madagascar, much of which can't be found outside the island. |
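Kitsura's isolated-clients idea corresponds to what the genetic-algorithm literature calls an island model. Below is a minimal sketch in Python with a toy two-peak fitness function and invented parameters (nothing here is taken from the Muon1 client), comparing one pooled run seeded from the best of all starting points with five isolated islands that never exchange results.

import random

def fitness(x):
    # Toy landscape with two peaks: a broad one near x=-2 and a slightly
    # higher but narrower one near x=+2.
    return max(1.0 / (1.0 + (x + 2.0) ** 2),
               1.1 / (1.0 + 10.0 * (x - 2.0) ** 2))

def evolve(start, generations):
    # Simple hill climbing from one starting design.
    best = start
    for _ in range(generations):
        child = best + random.gauss(0, 0.2)
        if fitness(child) > fitness(best):
            best = child
    return best

random.seed(3)
starts = [random.uniform(-4, 4) for _ in range(5)]

# One pooled population: everyone breeds from the single best starting design.
pooled_best = evolve(max(starts, key=fitness), 200)

# Five isolated islands: each start evolves on its own, with no exchange.
island_bests = [evolve(s, 200) for s in starts]

print("pooled  :", round(pooled_best, 2), "fitness", round(fitness(pooled_best), 3))
print("islands :", [(round(b, 2), round(fitness(b), 3)) for b in island_bests])

The islands typically end up spread across both peaks, preserving the kind of diversity kitsura is hoping for, while the pooled run converges on whichever peak its best starting point happened to sit near.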
David Bass 2004-10-14 17:01:38 | [to UnderTitan] It's not a question of the overall number of results; it's a question of the direction those results are pushed in by relatively good solutions that take less processing power to achieve. However, there are many ways in which a simulation can get stuck in a local maximum - I've just suggested one of them. The various cross-breeding and mutation methods used in muon1 are hopefully sufficient to allow the solution to jump out of a local maximum. David |
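For reference, cross-breeding and mutation in a genetic optimiser work roughly as in the generic sketch below, which uses hypothetical parameter vectors and is not Muon1's actual operators: crossover can combine parts of two parents sitting on different local peaks, giving a child some distance from both, which is one way a population can escape a local maximum that small mutations alone would not.

import random

def crossover(parent_a, parent_b):
    # Uniform crossover: each design parameter is taken from one parent at random.
    return [a if random.random() < 0.5 else b for a, b in zip(parent_a, parent_b)]

def mutate(design, rate=0.1, step=0.2):
    # Occasionally nudge individual parameters by a small random amount.
    return [x + random.gauss(0, step) if random.random() < rate else x for x in design]

random.seed(4)
parent_a = [0.1, 0.9, 0.2, 0.8]   # hypothetical normalised design parameters
parent_b = [0.7, 0.3, 0.6, 0.4]
child = mutate(crossover(parent_a, parent_b))
print([round(x, 3) for x in child])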
Stephan Hermens[RKN] 2004-10-15 08:45:07 | I believe the main reason the networked system went to positive results so fast is that some people took the best design from SolenoidsTo15cm (or at least the general idea of long solenoids with a high field and big radius, no reversal of field direction, and a short drift between the solenoids) and handcrafted parameters for ChicaneLinac and PhaseRot. After some trial-and-error testing, fairly moderate results were achieved. These designs and their "offspring" later found their way into the sample.dat files. So the behaviour of your computers cannot be compared to that of the overall network, since there was some heavy human interference at the beginning of the project. |