How to avoid the sum in the output of ‘autograd.grad’ in a Physics-Informed Neural Network?

I’m working on a Physics-Informed Neural Network and I need to take the derivatives of the outputs w.r.t. the inputs and use them in the loss function.
The issue comes from the network having multiple outputs. I tried using ‘autograd.grad’ to compute the derivatives of the outputs, but it sums the contributions of all outputs.
For example, if my output ‘u’ has shape [batch_size, n_output], the derivative ‘dudx’ has shape [batch_size, 1] instead of [batch_size, n_output].
Because of that sum, I can’t use the derivatives in the loss function. I tried a for loop that computes each derivative separately, but then training takes forever. Do you have any idea how to solve this problem?
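A minimal sketch of the behavior described above, using a hypothetical toy network with `n_output = 3` (the names `net`, `x`, `u` are placeholders, not the asker's actual model). One way to avoid the Python loop is `is_grads_batched=True` in `torch.autograd.grad`, which vmaps over a batch of one-hot cotangent vectors, one per output column:

```python
import torch

# Hypothetical stand-in for the PINN: 1 input feature, 3 outputs.
torch.manual_seed(0)
net = torch.nn.Linear(1, 3)

x = torch.randn(8, 1, requires_grad=True)   # [batch_size, 1]
u = net(x)                                  # [batch_size, n_output]

# Plain autograd.grad sums the per-output contributions:
dudx_summed = torch.autograd.grad(
    u, x, grad_outputs=torch.ones_like(u), create_graph=True
)[0]
print(dudx_summed.shape)  # torch.Size([8, 1]) -- one column, not n_output

# is_grads_batched treats the leading dim of grad_outputs as a batch of
# cotangents; one one-hot cotangent per output column recovers each
# derivative separately, without a Python loop.
eye = torch.eye(3)                                 # one one-hot row per output
cotangents = eye[:, None, :].expand(3, 8, 3)       # [n_output, batch, n_output]
dudx = torch.autograd.grad(
    u, x, grad_outputs=cotangents, is_grads_batched=True, create_graph=True
)[0]                                               # [n_output, batch, 1]
dudx = dudx.squeeze(-1).T                          # -> [batch, n_output]
print(dudx.shape)  # torch.Size([8, 3])
```

`create_graph=True` is kept so the derivatives remain differentiable for use inside the PINN loss. On PyTorch 2.x, `torch.func.vmap(torch.func.jacrev(net))(x)` is an alternative route to the same per-sample Jacobian.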