
Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and target y (a 2D Tensor of target class indices). For each sample in the mini-batch, the loss is computed as described in Details.

Usage

nn_multilabel_margin_loss(reduction = "mean")

Arguments

reduction

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum of the output will be divided by the number of elements in the output; 'sum': the output will be summed. Default: 'mean'.
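
As a rough illustration of how the three modes relate (a minimal sketch, assuming the torch package is installed; the two-sample batch below is made up for demonstration): 'none' returns one loss value per sample, 'sum' adds those values up, and 'mean' divides that sum by the number of per-sample values.

x <- torch_tensor(rbind(c(0.1, 0.2, 0.4, 0.8),
                        c(0.8, 0.4, 0.2, 0.1)))
y <- torch_tensor(rbind(c(4L, 1L, -1L, -1L),
                        c(1L, -1L, -1L, -1L)), dtype = torch_long())
nn_multilabel_margin_loss(reduction = "none")(x, y)  # one loss value per sample
nn_multilabel_margin_loss(reduction = "sum")(x, y)   # sum of the per-sample values
nn_multilabel_margin_loss(reduction = "mean")(x, y)  # that sum divided by the batch size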

Details

$$\operatorname{loss}(x, y) = \sum_{ij} \frac{\max(0,\ 1 - (x[y[j]] - x[i]))}{\text{x.size}(0)}$$

where \(x \in \{0, \ldots, \text{x.size}(0) - 1\}\), \(y \in \{0, \ldots, \text{y.size}(0) - 1\}\), \(0 \le y[j] \le \text{x.size}(0) - 1\), and \(i \ne y[j]\) for all \(i\) and \(j\). y and x must have the same size.

The criterion only considers a contiguous block of non-negative targets that starts at the front. This allows for different samples to have variable amounts of target classes.
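
As a plain-R sketch of how the formula and the -1 truncation interact (using the same input values as the example below; the helper variables targets and others are purely illustrative):

x <- c(0.1, 0.2, 0.4, 0.8)                    # class scores, C = 4
y <- c(4, 1, -1, 2)                           # only 4 and 1 count: entries after the first -1 are ignored
targets <- y[seq_len(which(y == -1)[1] - 1)]  # contiguous block of non-negative targets
others  <- setdiff(seq_along(x), targets)     # all classes i with i != y[j]
# sum of max(0, 1 - (x[y[j]] - x[i])) over target/non-target pairs, divided by C
sum(outer(x[targets], x[others],
          function(t, o) pmax(0, 1 - (t - o)))) / length(x)
# equals 0.85, matching loss(x, y) in the Examples section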

Shape

  • Input: (C) or (N,C) where N is the batch size and C is the number of classes.

  • Target: (C) or (N,C), label targets padded by -1 ensuring same shape as the input.

  • Output: scalar. If reduction is 'none', then (N).
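
A brief sketch of the batched case (assuming torch is installed; the random input is for illustration only): with an (N, C) input and reduction = 'none' the output has shape (N), while the default 'mean' reduces it to a scalar. Padding the target rows with -1 lets samples carry different numbers of target classes.

x <- torch_randn(3, 5)                                # N = 3 samples, C = 5 classes
y <- torch_tensor(rbind(c(2L, 5L, -1L, -1L, -1L),     # sample 1: classes 2 and 5
                        c(1L, -1L, -1L, -1L, -1L),    # sample 2: class 1 only
                        c(3L, 4L, 5L, -1L, -1L)),     # sample 3: classes 3, 4 and 5
                  dtype = torch_long())
nn_multilabel_margin_loss(reduction = "none")(x, y)   # tensor of shape (3)
nn_multilabel_margin_loss()(x, y)                     # scalar: mean over the batch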

Examples

if (torch_is_installed()) {
loss <- nn_multilabel_margin_loss()
x <- torch_tensor(c(0.1, 0.2, 0.4, 0.8))$view(c(1, 4))
# for target y, only labels 4 and 1 are considered; entries after the first -1 are ignored
y <- torch_tensor(c(4, 1, -1, 2), dtype = torch_long())$view(c(1, 4))
loss(x, y)
}
#> torch_tensor
#> 0.85
#> [ CPUFloatType{} ]