
Adding JSSP environment #112

Closed

wants to merge 9 commits into from

Conversation

@DavideTr8 commented Jan 28, 2024

Description

Added the JSSP environment from the Learning to Dispatch paper.

Motivation and Context

We wanted to have a JSSP environment from a published paper.

  • I have raised an issue to propose this change (required for new features and bug fixes)

Types of changes

What types of changes does your code introduce? Remove all that do not apply:

  • New feature (non-breaking change which adds core functionality)

Checklist

Go over all the following points, and put an x in all the boxes that apply.
If you are unsure about any of these, don't hesitate to ask. We are here to help!

  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.

@fedebotu (Member) left a comment:

Great job! 💪🏼

init_quality_flag = False


configs = Configs()
@fedebotu (Member):

About config: is this meant to never be changed by the user?

Because if it can be changed, wouldn't it be better to expose these values as hyperparameters of the JSSPEnv class? See the sketch below.
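A hypothetical sketch of what that could look like; the argument names and defaults here are illustrative, lifted from the quoted snippet rather than from the actual PR code:

from torchrl.envs import EnvBase

class JSSPEnv(EnvBase):
    def __init__(
        self,
        num_jobs: int,
        num_machines: int,
        init_quality_flag: bool = False,  # previously read from Configs()
        **kwargs,
    ):
        super().__init__(**kwargs)
        self.num_jobs = num_jobs
        self.num_machines = num_machines
        self.init_quality_flag = init_quality_flag

This way the defaults stay in one place, but a user can override them per env instance instead of editing a module-level Configs object.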

def __init__(self, num_jobs, num_machines, **kwargs):
    super().__init__(**kwargs)
    self.batch_size = torch.Size([1])
    adjacency_spec = DiscreteTensorSpec(
@fedebotu (Member):

[Minor] For specs, I think it would be cleaner to move them to a _make_spec(...) method, as done in the tutorial. A rough sketch follows.
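A minimal sketch of that refactor, assuming the adjacency spec from the quoted snippet; the spec names and shapes are illustrative:

import torch
from torchrl.data import CompositeSpec, DiscreteTensorSpec

def _make_spec(self):
    # build all specs in one place instead of inline in __init__
    adjacency_spec = DiscreteTensorSpec(
        n=2,
        shape=torch.Size([self.num_tasks, self.num_tasks]),
        device=self.device,
    )
    self.observation_spec = CompositeSpec(adjacency=adjacency_spec)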

def done(self):
    if len(self.partial_sol_sequence) == self.num_tasks:
        return torch.tensor(True)
    return torch.tensor(False)
@fedebotu (Member):

Important note:
In general, whenever a tensor is instantiated with torch.tensor(...), torch.ones(...), etc., it is better to do so directly on the device. Since we use the GPU for training and can move the envs there to offload the CPU, it is best to instantiate tensors there directly.
For instance:

...
return torch.tensor(False, device=self.device)

The same applies to other tensors instantiated this way. At the moment, trying to set the env on GPU raises a device mismatch error, but it can be easily fixed :)
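Applied to the quoted done() above, the fix would look like this:

def done(self):
    # compute the termination condition, then create the tensor directly
    # on the env's device to avoid device-mismatch errors on GPU
    is_done = len(self.partial_sol_sequence) == self.num_tasks
    return torch.tensor(is_done, device=self.device)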

self.num_machines = num_machines
self.num_tasks = self.num_jobs * self.num_machines
# the task id for the first column
self.first_col = torch.arange(
@fedebotu (Member) commented Jan 28, 2024:

Here it seems that some tensors are declared inside __init__; this makes the environment stateful (as explained here).

I don't hold a grudge against stateful envs, but I wonder whether it would be better (i.e. more efficient) to have them as in the pendulum example, that is, simply passed upon generation, so that the TensorDict holds all of the parameters. A sketch follows.
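A hypothetical sketch of the stateless variant in the spirit of the pendulum example; the key names and the exact _reset signature are illustrative:

import torch
from tensordict import TensorDict

def _reset(self, td=None):
    # per-instance tensors travel inside the TensorDict rather than
    # being stored on self
    first_col = torch.arange(
        0, self.num_tasks, self.num_machines, device=self.device
    )
    return TensorDict(
        {"first_col": first_col},
        batch_size=self.batch_size,
        device=self.device,
    )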

self.positive_reward += reward
self.max_end_time = self.LBs.max()

tensordict = TensorDict(
@fedebotu (Member):

[Minor] In routing problems (e.g. here), we simply use td.update(...), which is slightly faster since we don't need to re-write variables that don't change over time.

NB: we make this work with a "trick" in the base env; this might be useful in case it seems not to work. See the toy example below.
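A self-contained toy example of the td.update(...) pattern; the key names are hypothetical:

import torch
from tensordict import TensorDict

# a TensorDict with one entry that changes per step and one that does not
td = TensorDict(
    {"adjacency": torch.zeros(3, 3), "first_col": torch.arange(3)},
    batch_size=[],
)
# overwrite only the key that changed this step; "first_col" is untouched
td.update({"adjacency": torch.eye(3)})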

"cell_type": "code",
"outputs": [],
"source": [
"from torchrl.envs.vec_envs import ParallelEnv\n",
@fedebotu (Member) commented Jan 28, 2024:

Important: check env instantiation and stepping directly on GPU. This is not working in the current version (see other comments), but it can be easily fixed. Something like the sketch below should run without errors.
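A sketch of the check being asked for; the constructor arguments and import are illustrative, assuming JSSPEnv is importable from this PR's module:

import torch

# hypothetical: JSSPEnv as added in this PR
env = JSSPEnv(num_jobs=6, num_machines=6, device="cuda")
td = env.reset()
td = env.rand_step(td)  # should step without device-mismatch errors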

@fedebotu removed the request for review from @Junyoungpark on March 7, 2024.
@fedebotu (Member) commented Jun 7, 2024:

Closing this PR for now, since we have added the batched JSSP. I'd keep an eye on this though, since ParallelEnv can be interesting for hard-to-batch envs. @DavideTr8, if you manage to make those envs work, feel free to push a PR; looking forward to it!

@fedebotu closed this on Jun 7, 2024.