
The difference between official pseudocode and this repository about "num_unroll_steps" #221

ZF4444 opened this issue Apr 18, 2023 · 0 comments
ZF4444 commented Apr 18, 2023

Search before asking

  • I have searched the MuZero issues and found no similar bug report.

🐛 Describe the bug

This is the official pseudocode for updating weights:

def update_weights(optimizer: tf.train.Optimizer, network: Network, batch,
                   weight_decay: float):
  loss = 0
  for image, actions, targets in batch:
    # Initial step, from the real observation.
    value, reward, policy_logits, hidden_state = network.initial_inference(
        image)
    predictions = [(1.0, value, reward, policy_logits)]

    # Recurrent steps, from action and previous hidden state.
    for action in actions:
      value, reward, policy_logits, hidden_state = network.recurrent_inference(
          hidden_state, action)
      predictions.append((1.0 / len(actions), value, reward, policy_logits))

      hidden_state = scale_gradient(hidden_state, 0.5)

    for prediction, target in zip(predictions, targets):
      gradient_scale, value, reward, policy_logits = prediction
      target_value, target_reward, target_policy = target

      l = (
          scalar_loss(value, target_value) +
          scalar_loss(reward, target_reward) +
          tf.nn.softmax_cross_entropy_with_logits(
              logits=policy_logits, labels=target_policy))

      loss += scale_gradient(l, gradient_scale)

  for weights in network.get_weights():
    loss += weight_decay * tf.nn.l2_loss(weights)

  optimizer.minimize(loss)
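Note how the `gradient_scale` in the pseudocode weights the initial inference step by 1.0 and each recurrent step by `1.0 / len(actions)`. A minimal sketch (hypothetical helper name, not from either codebase) of the resulting per-prediction weights:

```python
def prediction_weights(num_unroll_steps):
    # The initial inference gets full weight; each recurrent step is
    # down-weighted so the K unrolled losses together sum to 1.0.
    return [1.0] + [1.0 / num_unroll_steps] * num_unroll_steps

# For a 5-step unroll the weights are:
print(prediction_weights(5))  # [1.0, 0.2, 0.2, 0.2, 0.2, 0.2]
```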

It only trains on actions that actually happened in the game history and excludes anything past the end of the game, but muzero_general also trains on actions past the end of the game:

# States past the end of games are treated as absorbing states
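To illustrate the difference, here is a minimal sketch (function and label names are hypothetical, not taken from either codebase) of how the two approaches build unroll targets near the end of a game:

```python
def make_targets_official(history_len, start, num_unroll_steps):
    # Official-pseudocode style: only unroll over actions that actually
    # happened, so the unroll is truncated at the end of the game.
    steps = min(num_unroll_steps, history_len - start)
    return [("real", start + k) for k in range(steps)]

def make_targets_absorbing(history_len, start, num_unroll_steps):
    # muzero_general style: always unroll num_unroll_steps times; steps
    # past the end of the game are treated as absorbing states
    # (zero value target, zero reward, padded policy/action).
    targets = []
    for k in range(num_unroll_steps):
        idx = start + k
        if idx < history_len:
            targets.append(("real", idx))
        else:
            targets.append(("absorbing", idx))
    return targets

# Game of length 5, unrolling 5 steps from position 3:
print(make_targets_official(5, 3, 5))   # truncated: 2 real steps
print(make_targets_absorbing(5, 3, 5))  # full length: padded with absorbing states
```

The practical consequence is that with absorbing states the network is still given a loss signal for the padded steps, whereas the official pseudocode simply produces a shorter unroll there.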

Add an example

As mentioned above.

Environment

No response

Minimal Reproducible Example

No response

Additional

No response

ZF4444 added the bug label on Apr 18, 2023
ZF4444 closed this as completed on Apr 18, 2023
ZF4444 reopened this on Apr 18, 2023