How does nn.NLLLoss work in PyTorch?

loss_object = torch.nn.NLLLoss()
lsoftmax = torch.nn.LogSoftmax(dim=-1)

loss = loss_object(lsoftmax(outputs), targets)

Traceback (most recent call last):
 File "source.py", line 60, in <module>
 loss = loss_object(lsoftmax(outputs), targets) 
 File "/home/m/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
 result = self.forward(*input, **kwargs)
 File "/home/m/.local/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 204, in forward
 return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
 File "/home/m/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1848, in nll_loss
 out_size, target.size()))
ValueError: Expected target size (64, 768), got torch.Size([64, 20])


outputs.shape
torch.Size([64, 20, 768])
targets.shape
torch.Size([64, 20])

Why is the expected target shape (64, 768), and not something that matches the shape of the targets I actually pass in?
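The reason is that for inputs with more than two dimensions, NLLLoss takes dim 1 as the class dimension: an input of shape (N, C, d1) = (64, 20, 768) is read as 20 classes over 768 positions, so the target must have shape (N, d1) = (64, 768). A small sketch (random tensors, shapes taken from the question) illustrating both that convention and how to make 768 the class dimension instead:

```python
import torch

# NLLLoss treats dim 1 as the class dimension. With input (64, 20, 768),
# PyTorch assumes 20 classes and expects a target of shape (64, 768).
log_probs = torch.randn(64, 20, 768).log_softmax(dim=1)
target = torch.randint(0, 20, (64, 768))
loss = torch.nn.NLLLoss()(log_probs, target)  # works: target is (64, 768)

# To treat 768 as the number of classes, move it into dim 1 first:
log_probs2 = torch.randn(64, 20, 768).log_softmax(dim=-1)
target2 = torch.randint(0, 768, (64, 20))
loss2 = torch.nn.NLLLoss()(log_probs2.permute(0, 2, 1), target2)
```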

How can I implement the equivalent of tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction='none') in PyTorch?
April 4th 20 at 12:56
1 answer
April 4th 20 at 12:58
Solution
Solved like this:

loss_object = torch.nn.CrossEntropyLoss()
loss = loss_object(outputs.view(-1, outputs.size(-1)),  # flatten to (batch*seq, classes)
                   targets.contiguous().view(-1))       # flatten to (batch*seq,)
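CrossEntropyLoss combines LogSoftmax and NLLLoss, so it takes raw logits, matching from_logits=True in Keras; passing reduction='none' gives per-element losses like the Keras reduction='none'. A runnable sketch with the shapes from the question (random tensors for illustration):

```python
import torch

# Assumed shapes from the question: batch 64, sequence length 20, 768 classes.
outputs = torch.randn(64, 20, 768)            # raw logits (no log-softmax needed)
targets = torch.randint(0, 768, (64, 20))

# CrossEntropyLoss = LogSoftmax + NLLLoss, i.e. it expects logits,
# like SparseCategoricalCrossentropy(from_logits=True).
loss_object = torch.nn.CrossEntropyLoss(reduction='none')
loss = loss_object(outputs.view(-1, outputs.size(-1)),  # (64*20, 768)
                   targets.contiguous().view(-1))       # (64*20,)
loss = loss.view(64, 20)  # per-token losses, like reduction='none' in Keras
```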
