When to use PyTorch's different dtypes? How do they affect precision and performance?

I'm confused about when to use dtype=torch.int32 vs. dtype=torch.float32. I know that integer tensors can't store fractions, while floating-point tensors can (though with limited precision). But if floating point is more flexible, why do we even bother with integers? I'd really appreciate it if anyone could explain, thanks!

In general, you can use float32 for almost everything: neural network weights and input/output tensors are almost always floating point. On some GPUs you can get a performance boost while training if you use the float16 data type, since it halves memory traffic and can use faster matrix-multiply hardware, but its narrower range means you have to use it a bit carefully.
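As a sketch of what "carefully" usually means in practice: the common pattern is automatic mixed precision via torch.autocast, which keeps weights in float32 but runs selected ops (like matmuls) in float16 on CUDA. The model and shapes below are just illustrative.

```python
import torch

# float32 is PyTorch's default floating-point dtype
x = torch.randn(4, 3)
print(x.dtype)  # torch.float32

model = torch.nn.Linear(3, 2)

# Mixed precision: weights stay float32, but autocast runs
# eligible ops (e.g. the matmul inside Linear) in float16 on GPU.
if torch.cuda.is_available():
    model = model.cuda()
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        y = model(x.cuda())
    print(y.dtype)  # torch.float16 inside the autocast region
else:
    y = model(x)    # on CPU this sketch just runs in float32
    print(y.dtype)
```

For full training loops you would typically pair autocast with a gradient scaler (torch.amp.GradScaler) so small float16 gradients don't underflow to zero.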

Finally, the int and long dtypes are generally useful for converting data from NumPy arrays to PyTorch tensors without changing the original type. And some tensor operations produce integers by design: indices from argmax or topk, class labels for classification losses, and index tensors for embedding lookups all have to be integer typed.
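To make those cases concrete, here is a small sketch (the example values are arbitrary):

```python
import numpy as np
import torch

# NumPy integer arrays keep their dtype when converted to tensors
a = np.array([1, 2, 3], dtype=np.int32)
t = torch.from_numpy(a)
print(t.dtype)  # torch.int32

# Some ops return integers by design: argmax yields int64 indices
scores = torch.tensor([0.1, 0.7, 0.2])
idx = scores.argmax()
print(idx.dtype, idx.item())  # torch.int64 1

# Class labels for CrossEntropyLoss, and index tensors for
# embeddings / fancy indexing, must be integer dtypes, not float
labels = torch.tensor([0, 2, 1], dtype=torch.long)
print(labels.dtype)  # torch.int64
```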