Version usage
waffleironhead
2006-10-17 17:15:41
I was just looking through the best-100 DecayRotB list and noticed that a lot of the best scores were obtained using the .c version, while the .d version has been out for quite some time (months, if I remember correctly).  I was wondering about the reasons for not updating the client.  Are users simply not aware that a new version is out there, is updating many clients over a network too much hassle, or is there some other reason I am not aware of?

Just curious!

waffleironhead
Herb[Romulus2]
2006-10-17 19:41:10
My message in http://www.stephenbrooks.org/forum/?thread=1148&bork=qibtlebsmr is still valid.
RGtx
2006-10-17 19:51:17
By coincidence, I was also considering this topic and was on the point of posting a similar question.  At the time of release of v4.43d, it was noticed that on transferring a queued result generated with version 4.43d to 4.43c and allowing both to run to completion, v4.43c always gave a higher average result for DecayRotB (for DecayRotD the picture was not so clear over a limited number of runs).

My question is: is it possible that any improved optimisations generated by v4.43d clients are being masked by those inflated results from 4.43c clients?
RGtx
2006-10-17 19:57:11
Herb[Romulus2], I must have deleted my v4.43c client.  Would you kindly post a few comparison results for the two versions in question?  At the time of your original query, Stephen seemed doubtful of their validity.
Herb[Romulus2]
2006-10-18 13:32:37
I have no current comparisons and I'm currently too busy to reconfigure some boxes.  I had an install of 4.43d running accidentally some weeks ago; it produced a series of 25 results, which I deleted afterwards and repeated with 4.43c on DecayRot.  They were around 0.01 lower at a time when we were at about 0.395%.

In any case, I'm considering giving up the manual tweaking, as my time has become more and more limited recently.
RGtx
2006-10-18 17:35:05
I hope that the following will clarify my previous points.  The following results were taken directly from Queue.txt, the fifth run being interpolated from the final value (I assumed that the final value is simply the mean of the five contributory runs):
1 0.399049 0.400196 0.391853 0.399049 0.400196
2 0.386235 0.387109 0.373898 0.386235 0.387109
3 0.397639 0.397859 0.392191 0.397639 0.397859
4 0.385878 0.385229 0.373490 0.385878 0.385229
5 0.380784 0.379937 0.367303 0.380784 0.379937
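
In other words, the interpolation just inverts the averaging: run5 = 5*final - (run1 + run2 + run3 + run4).  A minimal sketch in Python, reading the columns above as individual results and the rows as runs 1 to 5 (so the first column is one result, whose reported final value would then have been 0.389917; both that reading and that figure are my assumptions):

# Recover the interpolated fifth run from the reported final value,
# assuming the final value is the plain mean of the five runs.
runs_1_to_4 = [0.399049, 0.386235, 0.397639, 0.385878]  # first column, rows 1-4
final = 0.389917                                        # assumed reported mean
run5 = 5 * final - sum(runs_1_to_4)
print(round(run5, 6))                                   # 0.380784, matching row 5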

One doesn't need to be a statistician to see that the values of succeeding runs follow a consistent pattern.  When I still had a v4.43c client, I simply copied the Queue.txt and ran it under the v4.43d client, where again a pattern emerged, but the average value of runs 2 through 5 was higher.

I would like to add that, yes, results 1 and 4 and results 2 and 5 are identical; this is not a mistake on my part.  If you look at the DecayRotB results you will find many instances where a number of users have returned identical maximum values; at the current time, nine clients have returned the maximum result 0.391942.

I would guess that both these issues arise from the non-random seeding employed with the DecayRotB lattice, where successive runs are seeded with the values 0, 1, 2, ..., 5.  I first noticed this when running the command-line version (successive runs of PhaseRotD were always seeded with seemingly pseudo-random values), and looking at the lattice file for DecayRotB, we find "seed 0 seedpitch 1", directives which do not appear in the lattice file for PhaseRotD.
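
To illustrate the effect, a toy sketch in Python (not Muon1's actual code; that run n receives seed + n*seedpitch is my reading of those two directives):

import random

def simulate(design, seed):
    # Stand-in for one Muon1 run: a deterministic function of design and seed.
    return random.Random(f"{design}/{seed}").random()

def five_runs(design, seed=0, seedpitch=1):
    # With "seed 0 seedpitch 1", run n always gets the same seed, so any
    # client re-running an identical design reproduces identical scores,
    # which would explain many users returning the same maximum value.
    return [simulate(design, seed + n * seedpitch) for n in range(5)]

print(five_runs("some design") == five_runs("some design"))  # True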

One further issue: yesterday I returned results whose average after 5 runs was marginally greater than 0.39000, yet the highest value from a v4.43c client appearing in the latest sample file is 0.389534.  The question is, whilst we are not playing on a level field, should we who are now limited to v4.43d attempt to optimise only for PhaseRotD?
waffleironhead
2006-10-20 14:29:28
Hmm, the difference in scores between the two versions running DecayRotB is interesting.  I wonder if there is then some difficulty with the .c and .d best scores being used by each other.  I had been wondering this for a while, and just a few weeks ago I deleted all of my previous data and am running the client from scratch, with no benefit from any previous runs by .c or .d clients.  I am limiting the client to run only DecayRotB.  I have seen a steady climb in efficiency, having gone from -0.55 to around -0.18 in only about 3500 runs over 100,000 Mpts of CPU time.

So I was wondering if anyone thought this was a valid experiment.
waffleironhead
2006-10-22 18:58:40
I figured I would explain my reasoning for starting from scratch.  As my client keeps trying to improve on best designs from .c clients, it will not come near or exceed the scores of the .c clients, and its runs will be rejected; but if the same run were initiated from a .c client, maybe it would count as an improvement.  Thus my client keeps trying to better the results from the .c clients and dismisses its runs as non-improving even though they could be improvements.  Instead I am now running solely on .d results gained from .d clients, which in my mind are so far much more efficient at determination and improvement.
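
To put toy numbers on it (entirely hypothetical, just to show the masking effect):

# Suppose the current best design's true muon% is 0.385 and .c clients
# inflate scores by ~0.01, as the comparisons above suggest.
best_recorded = 0.385 + 0.010   # 0.395 on record, as scored by a .c client
new_true = 0.390                # a genuinely better design from a .d client
if new_true <= best_recorded:
    print("a real improvement of 0.005 gets dismissed as non-improving")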

Does this seem sound?  I know benefits are gained from trying to establish new branches of evolution in and of themselves, so I guess only time will tell.
Stephen Brooks
2006-10-27 16:09:48
Usually the practical way I fix this is to release a new lattice that can only be run by the newest version(!).  That way, people can seed it manually from the results of earlier lattices, but no runs with the earlier client will "pollute" the results of the new lattice.

I'm modifying Muon1 to do some "a bit different" things at the moment... cooling ring preparation, but it will take a month or two yet.
Stephen Brooks
2006-10-27 16:11:27
I'm not sure the "phase rotation" is optimising well enough with my current version, so I'll move on to doing other things with it (with a fixed phase rotation derived from other concerns) until I work out how to get around that.

Sooner than that, I guess I could put a "seed" in from some other work I have here, to give the current optimisations a small kick.
RGtx
2006-11-10 12:30:28
Stephen, I am glad to hear that you are still with us.  One little query.  Like waffleironhead, I too have started with a 'virgin' run, and after only 1600 runs have gone from ~-0.55 to 0.117574% (~90 Gpts).  It appears to be optimising fairly rapidly, with several increments in maximum muon% daily.  My point is that, at this stage, no run of 5 trials has broken the 200 Mpt barrier, yet similar-valued runs in the PhaseRotB sample file have much greater Mpt values.  Should I continue running this project, to see how far it optimises, or will I only be generating worthless stats?
RGtx
2006-11-10 13:21:55
Erratum: for PhaseRotB read DecayRotB.
RGtx
2006-12-17 18:49:22
After circa 3400 runs I abandoned the above, seemingly unable to exceed a result of 0.175%, and reverted to the mainstream pool.  Yesterday, out of curiosity, I resurrected the results.dat file and gave it another trial, and now have results around 0.189% and rising (49 Mpt/run after 3600 trials).

Looking at how such progress was achieved, I noticed that nearly all leaps in maximum value came through mutation, and whenever progress seemed to have stalled, I swapped to a rewritten config.txt file so as to generate only mutations, reverting to the normal config file after any increment.  I guess I was a little too hasty earlier in abandoning the trial!

If it is the case that progress is maintained by optimisation following mutation, would it not be an idea to separate the relevant line from the config file and have it dynamically updated (much as is done with the sample results file)?
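
Until something like that exists, the swap is easy to script; a minimal sketch in Python (config.txt is the real file, but the two pre-edited copies and their names are hypothetical, one being the mutation-only rewrite mentioned above):

import shutil

def use_config(mutation_only):
    # Copy the appropriate pre-edited config over the live config.txt.
    src = "config-mutate.txt" if mutation_only else "config-normal.txt"
    shutil.copyfile(src, "config.txt")

use_config(True)   # when the maximum stalls: mutation-only runs
use_config(False)  # after an increment: back to the normal ratios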

Herb[Romulus2]
2006-12-18 13:21:22
Edit the ratio to Mutate=9 and you're done; go further and edit all the others to a value of 0 and you will have only Mutate runs, plus an occasional Random run, which you cannot turn off.
RGtx
2006-12-18 13:59:50
That is exactly what I do (though with Mutate=7, and all others set to 0).  Unfortunately, looking at Stephen's plot of Mpts/run against muon%, it would appear that my results are converging towards the current mainstream optimisation.
Stephen Brooks
2006-12-18 15:30:43
The Mpts/run graph only shows two dimensions though!  Your design could be different in other directions.  How close is it to the current maximum?  (I assume you switched sample files etc. off)
RGtx
2006-12-18 18:26:01
After a total of 3900 trials, I am now up to 0.191244% (242.8 Mpts/5 runs).  Yes, this is a purely virgin run, without sample files and with all updating turned off.  Merging a copy of this design's results.dat with that from the design utilising the sample files, and viewing various parameters with your viewresults program, does indeed show that the two designs are widely divergent in a large number of ways.
RGtx
2006-12-23 22:06:00
Here is a typical example of my technique:

Last night my design reached a muon% of 0.196039; since then it has produced 15 results with this value, the reason being that multiple values of prf6p give the same result.  After switching to purely mutation, within two hours the maximum muon% jumped to 0.197887, and I can now switch back to normal optimisation.