How will not using no_grad affect future gradient calculations?

Hello guys, I don’t understand how multiplying the gradients by a number and then storing the result back in a variable will affect future gradient calculations.

An example explaining what you mean here would be beneficial.

About no_grad():
PyTorch tensors keep track of how they were calculated; this is part of the autograd module.
After all the calculations are done, you might need the gradients to modify the parameters and lower the error.

With no_grad(), gradients aren’t calculated or stored (grad stays None).
This is used mainly during evaluation, to avoid unnecessary calculations/memory usage.
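
A minimal sketch of the difference (my own toy example, not from the notebook):

```python
import torch

# Toy tensor that autograd will track
x = torch.randn(3, requires_grad=True)

y = (x * 2).sum()
y.backward()
print(x.grad)           # gradients were tracked and computed: tensor([2., 2., 2.])

with torch.no_grad():
    z = (x * 2).sum()   # this operation is not recorded in the graph

print(z.requires_grad)  # False -> calling z.backward() would raise an error
```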

We use torch.no_grad to indicate to PyTorch that we shouldn’t track, calculate, or modify gradients while updating the weights and biases.

This was in the notebook. I don’t understand why making a calculation with the gradients would ever affect them.
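
If it helps, this is roughly the step I mean (a rough sketch with my own toy tensors, not the exact notebook code):

```python
import torch

# Toy linear model, just to show the update step
w = torch.randn(2, 3, requires_grad=True)
b = torch.randn(2, requires_grad=True)
x = torch.randn(5, 3)
y = torch.randn(5, 2)

loss = ((x @ w.t() + b - y) ** 2).mean()
loss.backward()

with torch.no_grad():
    w -= w.grad * 1e-5   # multiply the gradients by a number...
    b -= b.grad * 1e-5   # ...and store the result back in the parameters
    w.grad.zero_()
    b.grad.zero_()
```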

If you use no_grad(), the operations inside the block aren’t tracked by autograd, so they won’t contribute to the grad attributes computed during the backward() call — in particular, the parameter update itself doesn’t become part of the computation graph.
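
For example (a quick sketch; the exact error message may differ between PyTorch versions):

```python
import torch

w = torch.randn(3, requires_grad=True)
loss = (w * 2).sum()
loss.backward()

# Without no_grad, an in-place update on a leaf tensor that requires
# grad is refused outright:
try:
    w -= w.grad * 1e-5
except RuntimeError as e:
    print(e)  # "a leaf Variable that requires grad is being used in an in-place operation."

# An out-of-place update runs, but the new tensor is no longer a leaf:
w2 = w - w.grad * 1e-5
print(w2.is_leaf, w2.grad_fn)  # False, <SubBackward0 ...>: the update is now
                               # part of the graph, so a later backward() would
                               # accumulate gradients into w, not into w2
```

Inside no_grad() the same in-place update is allowed and isn’t recorded, so the next backward() still computes gradients with respect to the parameters themselves.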

Ok thank you very much