Hi there! Just wanted to ask why we use:

- `detach()` in `validation_step()`
- `torch.stack()` in `validation_epoch_end()`

Link to the cell: https://jovian.ai/aakashns/housing-linear-minimal/v/2#C9
Both good questions.
Until you call `loss.backward()`, the final loss tensor holds a reference to the entire computation graph in memory, i.e. all the weights, biases, inputs & targets. As we keep recording losses batch by batch, all these stale & unused variables are kept in memory, and over time you may run out of RAM. `.detach()` simply drops the reference to the computation graph and just returns the value of the tensor. Check the documentation for more.

`torch.stack` converts a list of tensors like `[torch.tensor([1,2]), torch.tensor([3,4]), torch.tensor([5,6])]` into a single tensor `torch.tensor([[1,2],[3,4],[5,6]])`. Check the docs for more details.
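To make the `detach()` point concrete, here's a minimal sketch (the tensor shapes and names are made up for illustration, not taken from the notebook):

```python
import torch

# A tiny "model": one weight vector, one batch of inputs
w = torch.randn(3, requires_grad=True)
x = torch.randn(3)
loss = (w * x).sum()

# The loss is attached to the autograd graph, so keeping it
# around also keeps w, x and the graph nodes alive in memory
print(loss.requires_grad)   # True
print(loss.grad_fn)         # a SumBackward0 node, not None

# detach() returns the same value with no graph reference —
# this is what you want when just recording validation losses
val_loss = loss.detach()
print(val_loss.requires_grad)   # False
print(val_loss.grad_fn)         # None
```

Recording `val_loss` instead of `loss` batch after batch means only plain numbers accumulate, not whole computation graphs.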
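And a quick sketch of `torch.stack` using the same list from the answer, plus the typical follow-up step of averaging the stacked losses into one epoch-level metric (the averaging step is an assumption about what `validation_epoch_end` does with the stack, not code from the notebook):

```python
import torch

# Per-batch validation results recorded as separate tensors
batch_losses = [torch.tensor([1, 2]), torch.tensor([3, 4]), torch.tensor([5, 6])]

# Stack along a new leading dimension: shape (3, 2)
stacked = torch.stack(batch_losses)
print(stacked)          # tensor([[1, 2], [3, 4], [5, 6]])
print(stacked.shape)    # torch.Size([3, 2])

# Collapse to a single scalar for the epoch
print(stacked.float().mean())   # tensor(3.5000)
```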