[WIP] added functionality to test current implementation of nsgaii algorithm and new version of nsgaii #246
Conversation
…ectivefunction* support still open.
… into New-Algorithm-DDS
…ectivefunction* support still open.
Hi @thouska, I have added the option to skip the simulation of the parent population after it is combined with the offspring population, and I am wondering if this should actually be the default. It brings a large performance gain when the simulation model is heavy (such as VIC). Regarding the burn-in phase, I am not sure I understand what you mean by that (it is not in the paper, by the way); do you think it should be included in nsga-ii?
Hi @iacopoff, ok great, the nsgaii.py looks good to me. I added a small example to tutorial_nsgaii.py. I think for now we can skip the burn-in phase; the closer we can get to the published version, the better. However, skipping the parameter duplicates sounds like a solid idea to me, so if you want, we can make that the default. If so, it needs a comment and the option to deactivate it.
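To make the proposal above concrete, here is a minimal sketch of what such a default could look like in the sampler's constructor. The flag name skip_duplicates and the class skeleton are illustrative assumptions, not spotpy's actual API:

```python
# Hypothetical sketch: skip the re-evaluation of the parent population
# by default, with an explicit switch to turn it off.
class NSGAII:
    def __init__(self, spot_setup, skip_duplicates=True):
        # When True, the objective values of the parent population are
        # cached and reused after parents and offspring are merged, so
        # only the offspring need to be simulated in each generation.
        # That saves one model run per parent per generation, which
        # matters for heavy simulation models such as VIC.
        self.spot_setup = spot_setup
        self.skip_duplicates = skip_duplicates
```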
@thouska, I may be wrong, but it seems that in tutorial_nsgaii.py you are still using the "old" version of nsgaii? I was developing the new one under nsgaii_dev (my bad if that was not clear!). The algorithm is minimising; there is a script, tutorial_nsgai_dev.py, where it is used with the mean absolute error to optimise the dtlz1 problem. Regarding the burn-in phase, if you like we can talk about it when I am back from holiday in September? I would release this as alpha, if you agree. I have a couple more days to work on it before I go on holiday, so I can make any changes or comments.
Hi @iacopoff, oh yes, that's true. I was getting an error with the development version. But maybe I should report it, instead of switching to the old version :)
File "spotpy\algorithms\nsgaii_dev.py", line 271, in sample
ValueError: cannot reshape array of size 43830 into shape (30,3)
Do you have any idea why this is happening?
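One way to read this traceback (an inference from the numbers, not something confirmed in the thread): 43830 = 30 × 1461, so each of the 30 individuals seems to carry a full simulated time series of 1461 values rather than the 3 objective values the reshape expects. A minimal reproduction:

```python
import numpy as np

# 43830 = 30 * 1461, i.e. 1461 values per individual, consistent with a
# simulated time series being stored instead of 3 objective values.
flat = np.zeros(30 * 1461)
try:
    flat.reshape(30, 3)
except ValueError as err:
    print(err)  # cannot reshape array of size 43830 into shape (30,3)
```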
This will ensure new_value stays within the predefined min and max bounds
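The bound handling referred to here presumably amounts to a clamp; a generic sketch, with the variable names assumed rather than taken from the patch:

```python
import numpy as np

def clamp(new_value, minbound, maxbound):
    """Keep a mutated parameter value inside its predefined bounds."""
    return np.clip(new_value, minbound, maxbound)
```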
Hi @thouska,
each element of the list should contain (index, parameters, obj-functions), and that's why I am using index [2] to select the objective functions:
The issue is that with dtlz1 it works fine, but with the hymod example it seems to return the time series at index [2] instead of the objective functions, and that breaks the code. What do you think about that?
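To make the expected layout concrete, a toy example of what one list entry should hold (the values are invented for illustration):

```python
# Illustrative entry of the result list described above:
#   (index, parameters, objective-function values)
entry = (0, [5.0, 0.5, 100.0], [0.2, 0.4, 0.1])
objfuncs = entry[2]  # index [2] should yield the objective-function values
# The reported bug: with the hymod setup, entry[2] instead contains the
# simulated time series, which breaks the code downstream.
```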
A quick update: for some reason the objectivefunction is not called by the algorithm in the hymod version. I will investigate later; however, you may already know what is possibly causing that?
Hi @iacopoff,
is returning (index, parameters and simulation_results) and not the objective function. In the optimization approach, the dtlz1 problem is basically the objective function too, which may have caused this confusion (also because in the rosenbrock/ackley/griewank optimization problems I am using an objective function, which basically tells how far the results are from the optimal point). What we need instead is a call to postprocessing. It takes the simulation_results and the observed_data and puts them into the "def objectivefunction". So, the solution is probably something like this:
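The snippet that originally followed here did not survive in this transcript. Below is a sketch of the pattern used in spotpy's other samplers (e.g. mc.py), where the sampler hands each simulation result to self.postprocessing rather than reading objective values out of the result tuple; the exact signature may differ between spotpy versions:

```python
# Sketch of spotpy's usual sampling loop (names follow e.g. mc.py):
def sample(self, repetitions):
    param_generator = ((rep, self.parameter()['random'])
                       for rep in range(repetitions))
    for rep, randompar, simulations in self.repeat(param_generator):
        # postprocessing feeds the simulations and the observed data from
        # spot_setup.evaluation() into spot_setup.objectivefunction
        like = self.postprocessing(rep, randompar, simulations)
```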
Update padds.py
@thouska, ok, I got that, thanks! I think I need to refactor the code a bit though. I have another question: this piece of code in _algorithm.py under the postprocessing function (in particular line 301)
is basically saying that if the like is a list, then take the first item. This causes self.postprocessing to return 1 objective function instead of 3. I was wondering how you handle that in the other multi-objective algorithms? Thanks for your help!
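A paraphrase of the check being discussed, plus the obvious multi-objective-safe variant (this is not the verbatim _algorithm.py code):

```python
# Paraphrase of the postprocessing logic discussed above: a
# single-objective sampler keeps only the first entry, which silently
# drops objectives 2 and 3 for a multi-objective sampler like NSGA-II.
def reduce_like(like, multi_objective=False):
    if isinstance(like, list) and not multi_objective:
        return like[0]   # single-objective samplers expect a scalar
    return like          # multi-objective samplers need the full list

assert reduce_like([0.2, 0.4, 0.1]) == 0.2
assert reduce_like([0.2, 0.4, 0.1], multi_objective=True) == [0.2, 0.4, 0.1]
```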
Oh, good point. I think I have to do some refactoring of the code :)
Great, thanks! Also, in the mutation function of the nsgaii algorithm I am using the min and max bounds of the parameters, and currently I am using self.setup.params to get those, whilst in the tutorial the parameters are defined at the start of the spot_setup class. Do you think I could add a def parameters to setup_hymod_python.py, or add a try/except in the nsgaii algorithm to handle this case?
Changes the access to parameter min and max bounds in nsgaii slightly
Ok, all objectivefunction values should now be returned if you call it accordingly. Regarding the min/max boundaries of the parameters: there are actually several supported ways in which the parameters can be defined in the spot_setup class. As long as you call the final …
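For reference, the access pattern used in other spotpy samplers (e.g. dds.py), which works however the spot_setup class defines its parameters; a sketch, worth confirming against the spotpy version in use:

```python
# Sketch following the pattern in e.g. dds.py: self.parameter() builds
# the parameter array from the spot_setup class, regardless of whether
# the parameters are class attributes or returned by "def parameters".
def _set_bounds(self):
    self.min_bound = self.parameter()['minbound']
    self.max_bound = self.parameter()['maxbound']
```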
Hi @thouska, thanks for the changes. It looks like it is working fine now; I think I need to check the skip_duplicate further in terms of performance. Please try to run the tutorial, and if you want you can check the resulting Pareto distribution by running the plot_nsgaii_tutorial.py script. Thanks!
Additional slight adjustments in tutorials
Indeed, looks very nice, it's working and, on top, seems to be quite powerful!
results in a …
Hi @thouska, it was an easy fix actually; now it is working fine. It seems that Travis CI has failed on the padds algorithm, though; however, I suppose that is not related to the nsgaii algorithm.