stephenbrooks.org Forum » Muon1 » General » best 100 Samples
Herb[Romulus2]
2006-09-17 17:25:26
It would make sense to filter results without the "gen" attribute down to the best result per user.  The current best results are flooded by my tweaks; however, those build on each other, so they don't represent different evolution lines.  We would see much more variation this way.  Also, gen=6 should appear in there only once per user, for the sake of more variation.
Xanathorn
2006-09-18 15:07:24
That is why I started PhaserotD over completely from scratch (empty results and sample files) about 3 weeks ago.  There was just no more progress in the results built on those in the sample file for a very long time.  The only minor thing is that it's not very fast getting into the positive again; my client is just back at -0.269 and I'm afraid I will run into a barrier again soon.
Herb[Romulus2]
2006-09-18 18:21:31
I wouldn't go that far.  It's just that the given examples don't really represent different evolution lines.  I could tell you which are independent lines when I look at them; however, I don't see a methodology to filter them with a routine.
I've now seen some parameters which simply stayed stuck for a while, until some drastic change appeared on some other parameter which had been harmless so far.
I know next to nothing about the real technical side; I just look at it from a statistical viewpoint.
tomaz
2006-09-18 20:53:09
Can somebody here use machine-learning tools?  I think it would be interesting to run some classification, decision trees, or maybe train a neural network on the results computed so far for a given lattice... Just an idea, but I can't do it for now.  Hopefully I will learn some day.
Stephen Brooks
2006-09-19 11:21:43
I have at times considered putting things like that into Muon1 itself.  But there are a lot of different techniques to try, so if any of you find a technique that (via queued result generation) improves the optimiser's performance, I'd be interested to hear about it, so I can add it to the program!
Herb[Romulus2]
2006-09-19 12:37:23
Don't know how complicated that will be, but here it goes:

For any specific parameter I plant a series of values, either 999/750/500/250/0 or 999/666/333/0, to get a figure for the width it spans against a specific baseline result (maybe the current best, but not necessarily).  If the span is low I choose a large increment, while if the span is large I use smaller increments, e.g.:
baseline value pd34=434: 0.360964
pd34=999: 0.348436
pd34=750: 0.358209
pd34=250: 0.358265
pd34=000: 0.345452
pd34=500 was left out as the baseline result was close to it.
The deltas are -0.012528, -0.002755, -0.002699, -0.015512.
I chose interval level=4, which means checking 438 and 430; the next tests, according to those results, are 432 and 428, with a final hit at 431.
With 9 checks the optimum was cornered.  For further checks the first 4 basic checks can then be left out.
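The cornering step above (438/430, then 432/428, then 431) amounts to a halving-interval local search.  A minimal sketch, assuming a `score` function that stands in for running a Muon1 simulation and returning its yield (the function name and parameters are hypothetical, not part of the client):

```python
def corner_optimum(score, baseline, step=4):
    """Narrow in on a local optimum around `baseline`: test the values
    step above and below the current best, keep whichever scores higher,
    then halve the step.  With step=4 this reproduces the sequence
    438/430 -> 432/428 -> 431 described above."""
    best_val, best_score = baseline, score(baseline)
    while step >= 1:
        for candidate in (best_val + step, best_val - step):
            s = score(candidate)
            if s > best_score:
                best_val, best_score = candidate, s
        step //= 2
    return best_val, best_score
```

For a score peaking at 431, `corner_optimum(score, 434, step=4)` lands on 431 in a handful of evaluations, matching the 9-check count once the 4 initial span checks are included.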

I mark the result as stable; in a program, a stored local master reference could maybe keep it that way through other tests.

While watching other results from the best list, I noticed one result where all parameter differences against my best result are explainable (already tested, under work, etc.) except one: pd34=363 appears new!
I retested it against my current baseline:
result=0.398086, b/l=0.398048, so +0.000038
and it creates a new baseline.

Now put on the todo list:
pd34 363 431 crosscheck
pd34 363 362 corner with 1
pd34 363 364 corner with 1

Next new baseline maybe tomorrow.

This way I always have a few hundred test queues open, and every 10th or so improves the baseline a little bit.

I don't know if such a strategy can be programmed; it would only be of use to squeeze the current values to their best, and it doesn't generate new evolution lines.  But the principle of declaring certain parameters as sticky for further testing could easily be done with a reference result file.
tomaz
2006-09-19 13:35:51
Stephen, you might want to take a look at http://www.cs.waikato.ac.nz/ml/weka/
I intended to play with it, but I am flooded with my own work.  And I think the results from Muon would have to be rearranged into the right format for input to WEKA.
A combination of conceptual and empirical (statistical) modelling would be a great improvement.  The conceptual model is obviously resource-hungry, so it would be good to test only the statistically most interesting instances, which could be predicted by machine-learning (data-mining) tools.  Of course, some database of already computed results is needed at the beginning.  It is similar to Herb's system, but on a larger scale of course.
But on the other hand, isn't the genetic algorithm already doing this in a similar way?  I assume the difference would be in the prediction phase, which is more powerful with data-mining tools?
Stephen Brooks
2006-09-19 15:02:10
It looks like WEKA's input format is just CSV with a header.  So using Results2CSV on a large results.dat database, saving the output, and then writing in the first lines manually would work.
waffleironhead
2006-09-21 19:41:39
WEKA will convert the .csv files to .arff itself: using the Explorer GUI, open the .csv file and save it as ARFF.  This is a very interesting program, but I need to study it a bit more to harness any of its power.
tomaz
2006-09-22 07:37:25
Yes, WEKA is the most powerful thing since the A-bomb, but I need to study it too.
Results2CSV will do most of the job of preparing the data, but one must make the headers oneself.  If you use data without headers, the columns get labelled by numbers (1, 2, ...).  It is better to make a header (it took me only 5 minutes in Excel; I can send it if somebody is interested) because then you have the same labels as in Muon.  In the results.txt file the variables aren't in the same order every time, so if you don't have the right labels you may mix everything up.  When you have the headers in Excel, just paste the data below and save as CSV.  Then rename the .csv to .arff, and that's it.

