
AverageLearners in a BalancingLearner don't work well #198

Open
basnijholt opened this issue Jun 18, 2019 · 2 comments


@basnijholt
Member

basnijholt commented Jun 18, 2019

In a BalancingLearner where one of the learners returns constant ± eps (with eps < 1e-15), the points are not balanced correctly over the learners. The learner that returns (nearly) 0 or a constant can have fluctuations that are exponentially larger in relative terms, even though in absolute terms its values are almost identical.

For example:

import adaptive
adaptive.notebook_extension()

def f(x):
    import random
    return random.gauss(1, 1)


def constant(x):
    import random
    return random.gauss(1, 1) * 10**-random.randint(50, 200)

learners = [adaptive.AverageLearner(f, rtol=0.01), adaptive.AverageLearner(constant, rtol=0.01)]
learner = adaptive.BalancingLearner(learners)
runner = adaptive.runner.simple(learner, goal=lambda l: l.learners[0].npoints > 2000)

print(learners[0].npoints, learners[1].npoints)
# 2001 12769
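The scale problem can be seen without adaptive at all: the relative standard error of the mean is invariant under rescaling the samples, so a function at scale 1e-50 needs just as many points to satisfy a relative tolerance as one at scale 1, even though in absolute terms it converged long ago. A stdlib-only sketch (the helper `rel_standard_error` is my own name, and it only loosely mirrors how AverageLearner's rtol criterion works):

```python
import random
import statistics

random.seed(0)

def rel_standard_error(samples):
    """Standard error of the mean, relative to the mean."""
    mean = statistics.fmean(samples)
    sem = statistics.stdev(samples) / len(samples) ** 0.5
    return abs(sem / mean)

n = 2000
big = [random.gauss(1, 1) for _ in range(n)]
tiny = [x * 1e-50 for x in big]  # same samples, 50 orders of magnitude smaller

# The relative error is (up to rounding) identical at both scales, and
# at n = 2000 it is still above 0.01: rtol cannot tell "noisy at scale 1"
# apart from "noisy at scale 1e-50".
print(rel_standard_error(big), rel_standard_error(tiny))
```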

@akhmerov or @jbweston do you have an idea how to solve this?

@akhmerov
Contributor

Would only using atol work?

@basnijholt
Member Author

basnijholt commented Jun 20, 2019

With atol it does work, but I am not sure whether that's good enough.
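For context, a stdlib-only sketch of why an absolute tolerance behaves differently (the helper name `standard_error` is mine): the absolute standard error at scale 1e-50 is itself around 1e-51, so any reasonable atol is satisfied almost immediately, while the same atol applied to the scale-1 learner still requires many samples. The flip side is that a single atol bakes in an assumption about the functions' overall scale.

```python
import random
import statistics

random.seed(1)

def standard_error(samples):
    """Absolute standard error of the mean."""
    return statistics.stdev(samples) / len(samples) ** 0.5

n = 100
tiny = [random.gauss(1, 1) * 1e-50 for _ in range(n)]
big = [random.gauss(1, 1) for _ in range(n)]

# The tiny learner's absolute error is ~1e-51; the scale-1 learner's
# is ~0.1. An atol criterion therefore stops the tiny learner at once,
# but the right atol depends on knowing each function's scale up front.
print(standard_error(tiny))
print(standard_error(big))
```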
