Interface: Transforms
torchlive/torchvision.Transforms
Transforms are common image transformations available in the torchvision.transforms module.
https://pytorch.org/vision/0.12/transforms.html
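For reference, these transforms are typically reached through the torchvision export of the react-native-pytorch-core package (a minimal sketch; the import path is an assumption based on the usual PlayTorch setup):

```typescript
import {torchvision} from 'react-native-pytorch-core';

// Shorthand for the transforms module, used in the examples below
const T = torchvision.transforms;
```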
Methods
centerCrop
▸ centerCrop(size): Transform
Crops the image Tensor at the center. It is expected to have […, H, W] shape, where … means an arbitrary number of leading dimensions. If the image size is smaller than the output size along any edge, the image is padded with 0 and then center cropped.
https://pytorch.org/vision/0.12/generated/torchvision.transforms.CenterCrop.html
Parameters
Name | Type | Description |
---|---|---|
size | number \| [number] \| [number, number] | Desired output size of the crop. If size is an int instead of a sequence like (h, w), a square crop (size, size) is made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]). |
Returns
Transform
Defined in
torchlive/torchvision.ts:42
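A minimal sketch of centerCrop in use, assuming the react-native-pytorch-core setup where a Transform is invoked as a function on a Tensor:

```typescript
import {torch, torchvision} from 'react-native-pytorch-core';

const T = torchvision.transforms;

// Random stand-in for a 3-channel image tensor in [C, H, W] layout
const image = torch.rand([3, 480, 640]);

// An int size produces a square crop taken from the center
const centerCrop = T.centerCrop(224);
const cropped = centerCrop(image);
// cropped has shape [3, 224, 224]
```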
grayscale
▸ grayscale(numOutputChannels?): Transform
Convert image to grayscale. It is expected to have […, 3, H, W] shape, where … means an arbitrary number of leading dimensions.
https://pytorch.org/vision/0.12/generated/torchvision.transforms.Grayscale.html
Parameters
Name | Type | Description |
---|---|---|
numOutputChannels? | 1 \| 3 | Number of channels desired for the output image. |
Returns
Transform
Defined in
torchlive/torchvision.ts:52
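A minimal sketch of grayscale, under the same callable-Transform assumption as above:

```typescript
import {torch, torchvision} from 'react-native-pytorch-core';

const T = torchvision.transforms;

// Random stand-in for an RGB image tensor with shape [3, H, W]
const image = torch.rand([3, 224, 224]);

// Default output has a single channel: [1, 224, 224]
const gray = T.grayscale()(image);

// Request 3 output channels (r == g == b): [3, 224, 224]
const gray3 = T.grayscale(3)(image);
```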
normalize
▸ normalize(mean, std, inplace?): Transform
Normalize a tensor image with mean and standard deviation. Given mean: (mean[1],...,mean[n]) and std: (std[1],...,std[n]) for n channels, this transform will normalize each channel of the input torch.Tensor, i.e., output[channel] = (input[channel] - mean[channel]) / std[channel].
https://pytorch.org/vision/0.12/generated/torchvision.transforms.Normalize.html
Parameters
Name | Type | Description |
---|---|---|
mean | number[] | Sequence of means for each channel. |
std | number[] | Sequence of standard deviations for each channel. |
inplace? | boolean | Whether to perform this operation in-place. |
Returns
Transform
Defined in
torchlive/torchvision.ts:67
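A minimal sketch of normalize; the mean/std values below are the ImageNet statistics commonly used with pretrained torchvision models, not a requirement of the API:

```typescript
import {torch, torchvision} from 'react-native-pytorch-core';

const T = torchvision.transforms;

// Image tensor with values already scaled to [0, 1]
let tensor = torch.rand([3, 224, 224]);

// Normalize each channel: output = (input - mean) / std
const normalize = T.normalize(
  [0.485, 0.456, 0.406], // per-channel mean
  [0.229, 0.224, 0.225], // per-channel std
);
tensor = normalize(tensor);
```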
resize
▸ resize(size, interpolation?, maxSize?, antialias?): Transform
Resize the input tensor image to the given size. It is expected to have […, H, W] shape, where … means an arbitrary number of leading dimensions.
https://pytorch.org/vision/0.12/generated/torchvision.transforms.Resize.html
Parameters
Name | Type | Description |
---|---|---|
size | number \| [number] \| [number, number] | Desired output size. If size is a sequence like (h, w), the output size will be matched to it. If size is an int, the smaller edge of the image will be matched to this number, i.e., if height > width, the image will be rescaled to (size * height / width, size). |
interpolation? | InterpolationMode | Desired interpolation enum. |
maxSize? | number | The maximum allowed size for the longer edge of the resized image. |
antialias? | boolean | Antialias flag. The flag is false by default and can be set to true only for InterpolationMode.BILINEAR. |
Returns
Transform
Defined in
torchlive/torchvision.ts:86
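A minimal sketch of resize showing the two size forms, again assuming the callable-Transform convention:

```typescript
import {torch, torchvision} from 'react-native-pytorch-core';

const T = torchvision.transforms;

// Random stand-in for an image tensor in [C, H, W] layout
const image = torch.rand([3, 480, 640]);

// An int size matches the smaller edge (480 -> 240) and
// preserves the aspect ratio, so the result is [3, 240, 320]
const scaled = T.resize(240)(image);

// A [h, w] size sets the output shape exactly: [3, 224, 224]
const exact = T.resize([224, 224])(image);
```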