
Clearing state after claw.run() completes #70

Open
dansteingart opened this issue Aug 11, 2015 · 14 comments

@dansteingart

Within a Python script using PyClaw/Clawpack, I often run two simulations (the same simulation) within about a second of each other and get completely different results. One is completely non-physical, so it's relatively easy to discard the poor result.

I think there's a memory-sharing issue, but I cannot figure out how to completely clear the claw object from memory within a Python script, e.g.:

```python
def test_run():
    claw = pyclaw.Controller()
    claw.keep_copy = True
    # state, domain, solver, tf, and nt come from the enclosing scope
    claw.solution = pyclaw.Solution(state, domain)
    claw.solver = solver
    claw.tfinal = tf
    claw.verbosity = False
    claw.num_output_times = nt
    claw.write_aux_init = True
    claw.run()
    frames = claw.frames
    del claw
    return frames
```

This function runs fine the first time I call it, but if I call it again I often get a different, unphysical result.

@ahmadia
Member

ahmadia commented Aug 11, 2015

Where is solver coming from in your code block? I'm not sure we have any tests that check that state has been cleared, and I know there were issues in the past with the Fortran shared modules not being properly flushed.

@ahmadia
Member

ahmadia commented Aug 11, 2015

> I think there's a memory-sharing issue, but I cannot figure out how to completely clear the claw object from memory within a Python script

The memory-sharing issue is happening in the Fortran kernels, which are not thread-safe and share context. There are a few ways to mitigate this, depending on what you're trying to do. There is definitely a potential bug here on the PyClaw side.

@mandli
Member

mandli commented Aug 11, 2015

@ahmadia What state do you think is being stored in the Fortran code that might be causing this? I was thinking that perhaps a state was not being copied correctly and was being carried over, although I think your suggestion that the solver is at fault is more probable.

@ahmadia
Member

ahmadia commented Aug 11, 2015

> What state do you think is being stored in the Fortran code that might be causing this?

Obviously it depends on the solver, but anything in the common blocks is shared memory and not thread-safe.

@mandli
Member

mandli commented Aug 11, 2015

Ah, that's true. @dansteingart what solver are you using with this?

@dansteingart
Author

I've tried to dig around and clear the Fortran memory with little luck. @ahmadia, do you have a workaround I could use? Effectively I just want to revert to a clean slate between runs; I don't need to keep any variables from the last run.

@dansteingart
Author

@mandli classic I think, will double check

@mandli
Member

mandli commented Aug 11, 2015

What about the Riemann solver?

@ahmadia
Member

ahmadia commented Aug 11, 2015

> I've tried to dig around and clear the Fortran memory with little luck. @ahmadia, do you have a workaround I could use? Effectively I just want to revert to a clean slate between runs; I don't need to keep any variables from the last run.

I would suggest writing a second script to spawn Python subprocesses per job, but this is a pretty poor workaround.
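A minimal sketch of that workaround, using only the standard library; the snippet string here is a placeholder, where a real driver would import pyclaw, run the controller, and write its frames to disk for the parent to read back. Launching each run in a fresh interpreter guarantees that module-level (and Fortran common-block) state cannot leak between jobs.

```python
import subprocess
import sys

# Placeholder driver script; in practice this would build and run the
# pyclaw.Controller and serialize the results.
SIM_SNIPPET = """
result = 2 + 2  # placeholder for claw.run()
print(result)
"""

def run_isolated(snippet):
    """Execute `snippet` in a brand-new Python interpreter and return its
    stdout. Each call gets its own process, so no state survives between
    runs."""
    proc = subprocess.run(
        [sys.executable, "-c", snippet],
        capture_output=True, text=True, check=True,
    )
    return proc.stdout.strip()

print(run_isolated(SIM_SNIPPET))  # → 4, from a clean interpreter every time
```

The cost is process-startup and serialization overhead per run, which is why it is a workaround rather than a fix.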

@ahmadia
Member

ahmadia commented Aug 11, 2015

Unfortunately, the Fortran kernels are all "stateful", so fixing this, at minimum, would require exposing a Fortran routine for your solver that clears any common data.

@dansteingart
Author

@ahmadia @mandli I'm using the riemann.vc_acoustics_2D solver. I've used the spawn approach in the past, and it works with one caveat: if I read from or write to a file (e.g. read parameters from a file) in the subprocess, I see the same effect. So having a handle on what's happening in Fortran would be really helpful.

@mandli
Member

mandli commented Aug 11, 2015

Are the advection speeds changing in time in your case? If so, how are you implementing the change in velocities?

@dansteingart
Author

@mandli I am not changing the velocities (modeling acoustics through solids).

@ketch
Member

ketch commented Aug 12, 2015

@dansteingart I suggest that you post the full script. Without knowing where your state and solver are coming from, there are a lot of things that could potentially be wrong.

If things are the way I think they are, the easiest fix is to create a new solver object each time. Is there some reason you need to reuse it?

Also, if you are reusing state you will run into problems simply because it has already been advanced to the final time.
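A toy illustration of why rebuilding the objects helps; `FakeSolver` is a hypothetical stand-in, not PyClaw's API. An object that accumulates internal state across `run()` calls gives answers that depend on its history, while constructing it fresh inside the function does not.

```python
class FakeSolver:
    """Stand-in for a stateful solver: run() leaves residue behind."""
    def __init__(self):
        self.calls = 0

    def run(self):
        self.calls += 1
        return self.calls  # result depends on how often this object ran

def run_reusing(solver):
    # Anti-pattern from this issue: the caller's solver carries state over.
    return solver.run()

def run_fresh():
    # Suggested fix: build the solver (and, in PyClaw, the state/domain)
    # inside the function so every call starts from a clean slate.
    return FakeSolver().run()

shared = FakeSolver()
print(run_reusing(shared), run_reusing(shared))  # 1 2 -- leaks across calls
print(run_fresh(), run_fresh())                  # 1 1 -- identical every time
```

The same reasoning applies to the `state` object: a reused state has already been advanced to `tfinal`, so the second run begins from the wrong initial condition.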

hudaquresh pushed a commit to hudaquresh/clawpack that referenced this issue Feb 3, 2020