Understanding and introspecting torch.autograd.backward
To locate a bug, I am trying to introspect the backward calculation in PyTorch. Following the description in PyTorch's Autograd mechanics documentation, I added backward hooks to each parameter of my model as well as hooks on the `grad_fn` of each activation. The following code snippet illustrates how I add the hooks to the `grad_fn`:
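(A minimal sketch of this setup; the `nn.Sequential` model, tensor names, and print messages below are illustrative stand-ins for my actual code. Parameter hooks use `Tensor.register_hook`, and the activation hooks use `grad_fn.register_hook`, i.e. `torch.autograd.graph.Node.register_hook`.)

```python
import torch
import torch.nn as nn

# Illustrative stand-in for the actual model.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
x = torch.randn(2, 4)

# Backward hook on each parameter: called with the gradient once it
# has been computed for that parameter during backward().
for name, param in model.named_parameters():
    param.register_hook(
        lambda grad, name=name: print(f"param {name}: grad norm {grad.norm():.4f}")
    )

# Run the forward pass layer by layer so each intermediate activation
# (and hence its grad_fn) is in scope for hook registration.
activations = []
out = x
for layer in model:
    out = layer(out)
    activations.append(out)

# Hook on the grad_fn (autograd graph Node) of each activation: called
# with the node's input/output gradients when its backward runs.
for i, act in enumerate(activations):
    if act.grad_fn is not None:
        node_name = type(act.grad_fn).__name__
        act.grad_fn.register_hook(
            lambda grad_inputs, grad_outputs, i=i, node_name=node_name: print(
                f"node {i} ({node_name}): {len(grad_outputs)} grad output(s)"
            )
        )

out.sum().backward()
```

Running the forward pass layer by layer is only there so the intermediate tensors are accessible; grabbing them with forward hooks (`module.register_forward_hook`) would work just as well.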