tomaz 2006-03-23 12:30:06 | Stephen, can you give more details on how the Muon1 algorithm works? Is it a "brute force" algorithm that searches every possible combination of parameters, or can it learn something from previous steps? I have a feeling that good results are somehow grouped in the results.txt file, i.e. two or three good ones, then nothing, then a few good ones, and so on.

I think it would be a great improvement if you could adopt a self-learning algorithm that recognises which results are good to take as seeds for the next computations, which are good for crossover, which for extrapolation, etc., which parameters are more important and which are not, and so on... By the way, can you explain what all those parameters mean on a real accelerator? (i.e. tantalumrodz, tantalumrodr, d1l, ...)

Another very interesting thing would be to try to build an empirical model from the computed data. Would it be possible to predict good results with such a model? If nothing else, it would make it very easy to investigate which parameters (or combinations of them) are most relevant...

Can you tell how different, say, the top 100 computed designs for a given lattice are? Are they just slight variations of one design, or are there significant differences, etc.?

Thank you for reading this. It seems I got writing inspiration this evening. It might improve my English at least (and last) Tomaz |
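[To make the "empirical model" idea above concrete, here is a minimal sketch, assuming the results have been exported to a hypothetical CSV file with one column per lattice parameter plus a "score" column (this is not the actual results.txt format). It fits a simple linear model and ranks parameters by how strongly they correlate with the score.]

    # Hedged sketch: rank parameter relevance with a simple linear fit.
    # Assumes a hypothetical CSV export of past results; the real
    # results.txt layout used by Muon1 is different.
    import csv
    import numpy as np

    def parameter_relevance(csv_path):
        with open(csv_path, newline="") as f:
            rows = list(csv.DictReader(f))
        names = [k for k in rows[0] if k != "score"]
        X = np.array([[float(r[k]) for k in names] for r in rows])
        y = np.array([float(r["score"]) for r in rows])
        # Standardise columns so coefficient size reflects relevance
        X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
        A = np.column_stack([X, np.ones(len(y))])  # add intercept column
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        # Largest absolute coefficient = most relevant parameter (first order only)
        return sorted(zip(names, np.abs(coef[:-1])), key=lambda t: -t[1])

[A linear fit like this only captures first-order effects; interactions between parameters would need a richer model.]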
Stephen Brooks 2006-03-24 07:10:05 | It is not a brute force algorithm: it learns by tending to use crossover etc. on the designs that have got the higher scores. I've been experimenting with some models that try to fit through the data and predict where better designs would be (TrialType=LocalGrad is one such). There's no reason why the higher scores should be grouped in results.txt, unless you're talking about results.dat, where the samplefiles are of course pasted in. results.dat is the file it uses to "learn" from; results.txt is just the buffer of unsent results. There is more detail here: http://stephenbrooks.org/ral/report/2004-2/index.html#sec2.2 |
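[As a rough illustration of the score-biased crossover described above, the sketch below draws two parents with probability proportional to their score and builds a child design by mixing their parameter values. This is an assumed simplification, not Muon1's actual operator code, and it assumes all scores are positive.]

    # Hedged sketch of score-biased crossover (not Muon1's real code).
    # designs: list of (params_dict, score) pairs, e.g. read from results.dat
    import random

    def pick_parent(designs):
        # Roulette-wheel selection: higher-scoring designs are chosen more often
        total = sum(score for _, score in designs)
        r = random.uniform(0, total)
        for params, score in designs:
            r -= score
            if r <= 0:
                return params
        return designs[-1][0]

    def crossover(designs):
        # Child takes each parameter value from one of the two parents at random
        a, b = pick_parent(designs), pick_parent(designs)
        return {k: random.choice((a[k], b[k])) for k in a}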