Hello, I don’t understand how multiplying the gradients by a number and storing the result in a variable would affect future gradient calculations.
An example explaining what you mean here would be beneficial.
When working with PyTorch tensors, they track how they were calculated; this is part of the autograd module.
After all the calculations are done, you may need the gradients to modify the parameters and lower the error.
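A minimal sketch of what that tracking looks like (the numbers here are just made up for illustration):

```python
import torch

# A tensor with requires_grad=True is tracked by autograd.
w = torch.tensor(3.0, requires_grad=True)

# autograd records each operation as it happens
loss = (w * 2 - 4) ** 2

# walk the recorded operations backwards to get d(loss)/dw
loss.backward()
print(w.grad)  # d(loss)/dw = 2 * (2w - 4) * 2 = 8.0 at w = 3
```

After `backward()`, the gradient lands in the `grad` attribute, and that is what you would use to update the parameter.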
Inside no_grad() the gradients aren’t calculated or stored.
This is used mainly during evaluation, to avoid unnecessary calculations/memory usage.
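It’s also used for the weight update itself. A rough sketch of one manual gradient-descent step (the learning rate value is arbitrary, just for illustration):

```python
import torch

w = torch.tensor(3.0, requires_grad=True)
loss = (w * 2 - 4) ** 2
loss.backward()          # w.grad is now 8.0

lr = 0.1                 # arbitrary learning rate for this example
with torch.no_grad():
    w -= lr * w.grad     # the update itself is NOT recorded by autograd
w.grad.zero_()           # clear the gradient before the next step

print(w)                 # 3.0 - 0.1 * 8.0 = 2.2
```

Without the `no_grad()` block, autograd would try to record the in-place update of `w` as part of the computation history, which is exactly what we don’t want during the update step.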
The notebook said we use torch.no_grad “to indicate to PyTorch that we shouldn’t track, calculate, or modify gradients while updating the weights and biases.” I don’t understand why the gradients would ever be affected if I just make a calculation on them.
If you use no_grad(), then the grad attribute won’t be calculated for operations run inside that block.
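You can see the difference directly (a small sketch):

```python
import torch

x = torch.tensor(2.0, requires_grad=True)

with torch.no_grad():
    y = x * 3            # not tracked: y has no gradient history
print(y.requires_grad)   # False
print(y.grad_fn)         # None

z = x * 3                # outside the block, tracked as usual
print(z.grad_fn)         # a MulBackward0 node in the graph
```

Since `y` has no `grad_fn`, calling `backward()` through it is impossible, and no `grad` will ever be computed from it.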
Ok, thank you very much!