
Using Fast Weights to Attend to the Recent Past

Reproducing the associative retrieval experiment from the paper [Using Fast Weights to Attend to the Recent Past](https://arxiv.org/abs/1610.06258) by Jimmy Ba et al. (Incomplete.)

Prerequisites

TensorFlow (version >= 0.8)

How to Run the Experiments

Generate a dataset

$ python generator.py

This script generates a file called associative-retrieval.pkl, which can be used for training.
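For context, in the associative retrieval task from the paper, each input presents a sequence of key-value pairs followed by a query key, and the target is the value bound to that key. The sketch below illustrates the idea only; the exact encoding and pickle layout produced by generator.py may differ, so `make_example` and the `??` query marker are illustrative assumptions.

```python
import pickle
import random
import string

def make_example(num_pairs=4):
    # e.g. input 'c9k8j3f1??c' -> target '9': recall the digit paired with 'c'.
    keys = random.sample(string.ascii_lowercase, num_pairs)
    values = [random.choice(string.digits) for _ in keys]
    q = random.randrange(num_pairs)
    sequence = "".join(k + v for k, v in zip(keys, values)) + "??" + keys[q]
    return sequence, values[q]

# Dump a toy dataset in the same spirit as associative-retrieval.pkl.
with open("associative-retrieval.pkl", "wb") as f:
    pickle.dump([make_example() for _ in range(100000)], f)
```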

Run the model

$ python fw.py

Findings

The following are the accuracy and loss curves for R=20. The experiments are only lightly tuned.

Layer Normalization is crucial for the success of training.

  • Without it, training does not converge when the number of inner-loop steps is larger than 1.
  • Even with a single inner-loop step, performance without Layer Normalization is much worse: for R=20, only 0.4 accuracy is reached, which is about the same level as the other baseline models.
  • Even with Layer Normalization, using only slow weights (i.e., a vanilla RNN) performs much worse than using fast weights; see the sketch after this list.
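For reference, below is a minimal NumPy sketch of the update these findings refer to (the repository's fw.py implements it in TensorFlow). The fast weight matrix decays and accumulates outer products of the hidden state, A(t) = λ·A(t-1) + η·h(t)h(t)^T, and the hidden state then settles through S inner-loop steps, h_{s+1} = f(LN([W_h·h + W_x·x] + A·h_s)). Names such as `W_h`/`W_x`, the ReLU nonlinearity, and the defaults λ=0.95, η=0.5 are taken from the paper or chosen for illustration, not from this repository's code.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Zero-mean, unit-variance normalization (Ba, Kiros & Hinton);
    # the learned gain and bias parameters are omitted for brevity.
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def fast_weights_step(h, x, A, W_h, W_x, lam=0.95, eta=0.5, S=1):
    # Decay the fast weights and store the outer product of the previous
    # hidden state: A(t) = lam * A(t-1) + eta * h(t) h(t)^T.
    A = lam * A + eta * np.outer(h, h)
    # "Slow" contribution from the ordinary recurrent and input weights.
    slow = W_h @ h + W_x @ x
    hs = np.maximum(slow, 0.0)  # preliminary state, ReLU nonlinearity
    # S inner-loop steps that attend to the recent past through A.
    for _ in range(S):
        hs = np.maximum(layer_norm(slow + A @ hs), 0.0)
    return hs, A
```

Dropping the `layer_norm` call inside the loop reproduces the divergence noted above for S > 1, and setting `eta = 0` (so A stays zero) reduces the step to a layer-normalized vanilla RNN, i.e. the slow-weights baseline.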

Further improvements:

  • Complete the tuning of the experiments
  • Work on other tasks

References

Using Fast Weights to Attend to the Recent Past. Jimmy Ba, Geoffrey Hinton, Volodymyr Mnih, Joel Z. Leibo, Catalin Ionescu. arXiv:1610.06258.

Layer Normalization. Jimmy Ba, Ryan Kiros, Geoffrey Hinton. arXiv:1607.06450.
