inferno.extensions.layers package¶
Submodules¶
inferno.extensions.layers.activations module¶
inferno.extensions.layers.convolutional module¶
- class inferno.extensions.layers.convolutional.ConvActivation(in_channels, out_channels, kernel_size, dim, activation, stride=1, dilation=1, groups=None, depthwise=False, bias=True, deconv=False, initialization=None)[source]¶
  Bases: torch.nn.modules.module.Module
  Convolutional layer with ‘SAME’ padding followed by an activation.
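The docstring doesn't spell out how the ‘SAME’ padding is computed. For stride-1 convolutions with odd kernels, the standard scheme (a sketch of the arithmetic, assuming inferno follows the usual convention) pads each side by `dilation * (kernel_size - 1) // 2`:

```python
def same_padding(kernel_size, dilation=1):
    # For stride-1 convolutions with an odd kernel, padding each side by
    # dilation * (kernel_size - 1) // 2 keeps the spatial size unchanged.
    return dilation * (kernel_size - 1) // 2

def conv_output_size(n, kernel_size, padding, dilation=1, stride=1):
    # Standard output-size formula for a 1D convolution.
    return (n + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

# A 3x3 conv with dilation 2 keeps a length-64 input at length 64:
p = same_padding(kernel_size=3, dilation=2)
print(conv_output_size(64, kernel_size=3, padding=p, dilation=2))  # 64
```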
- class inferno.extensions.layers.convolutional.ConvELU2D(in_channels, out_channels, kernel_size)[source]¶
  Bases: inferno.extensions.layers.convolutional.ConvActivation
  2D Convolutional layer with ‘SAME’ padding, ELU and orthogonal weight initialization.
- class inferno.extensions.layers.convolutional.ConvELU3D(in_channels, out_channels, kernel_size)[source]¶
  Bases: inferno.extensions.layers.convolutional.ConvActivation
  3D Convolutional layer with ‘SAME’ padding, ELU and orthogonal weight initialization.
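As a rough plain-torch equivalent (an illustration of the composition described above, not inferno's actual implementation), ConvELU2D amounts to a ‘SAME’-padded Conv2d with orthogonal weight initialization followed by an ELU:

```python
import torch
import torch.nn as nn

def conv_elu_2d(in_channels, out_channels, kernel_size):
    # 'SAME' padding for a stride-1 conv with an odd kernel.
    conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                     padding=(kernel_size - 1) // 2)
    nn.init.orthogonal_(conv.weight)
    return nn.Sequential(conv, nn.ELU())

layer = conv_elu_2d(3, 16, 3)
x = torch.randn(1, 3, 32, 32)
y = layer(x)
print(y.shape)  # torch.Size([1, 16, 32, 32]) -- spatial size preserved
```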
- class inferno.extensions.layers.convolutional.ConvSigmoid2D(in_channels, out_channels, kernel_size)[source]¶
  Bases: inferno.extensions.layers.convolutional.ConvActivation
  2D Convolutional layer with ‘SAME’ padding, Sigmoid and orthogonal weight initialization.
- class inferno.extensions.layers.convolutional.ConvSigmoid3D(in_channels, out_channels, kernel_size)[source]¶
  Bases: inferno.extensions.layers.convolutional.ConvActivation
  3D Convolutional layer with ‘SAME’ padding, Sigmoid and orthogonal weight initialization.
- class inferno.extensions.layers.convolutional.DeconvELU2D(in_channels, out_channels, kernel_size=2)[source]¶
  Bases: inferno.extensions.layers.convolutional.ConvActivation
  2D deconvolutional layer with ELU and orthogonal weight initialization.
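A plain-torch sketch of what a 2x-upsampling deconvolution of this shape does. Whether DeconvELU2D strides by its kernel size internally is an assumption; with `kernel_size=2` and `stride=2`, a ConvTranspose2d exactly doubles the spatial size:

```python
import torch
import torch.nn as nn

# Hypothetical equivalent of DeconvELU2D(8, 4): transposed conv then ELU.
# stride=2 is an assumption, not stated in the signature above.
deconv = nn.Sequential(
    nn.ConvTranspose2d(8, 4, kernel_size=2, stride=2),
    nn.ELU(),
)
x = torch.randn(1, 8, 16, 16)
y = deconv(x)
print(y.shape)  # torch.Size([1, 4, 32, 32]) -- spatial size doubled
```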
- class inferno.extensions.layers.convolutional.DeconvELU3D(in_channels, out_channels, kernel_size=2)[source]¶
  Bases: inferno.extensions.layers.convolutional.ConvActivation
  3D deconvolutional layer with ELU and orthogonal weight initialization.
- class inferno.extensions.layers.convolutional.StridedConvELU2D(in_channels, out_channels, kernel_size, stride=2)[source]¶
  Bases: inferno.extensions.layers.convolutional.ConvActivation
  2D strided convolutional layer with ‘SAME’ padding, ELU and orthogonal weight initialization.
- class inferno.extensions.layers.convolutional.StridedConvELU3D(in_channels, out_channels, kernel_size, stride=2)[source]¶
  Bases: inferno.extensions.layers.convolutional.ConvActivation
  3D strided convolutional layer with ‘SAME’ padding, ELU and orthogonal weight initialization.
- class inferno.extensions.layers.convolutional.DilatedConvELU2D(in_channels, out_channels, kernel_size, dilation=2)[source]¶
  Bases: inferno.extensions.layers.convolutional.ConvActivation
  2D dilated convolutional layer with ‘SAME’ padding, ELU and orthogonal weight initialization.
- class inferno.extensions.layers.convolutional.DilatedConvELU3D(in_channels, out_channels, kernel_size, dilation=2)[source]¶
  Bases: inferno.extensions.layers.convolutional.ConvActivation
  3D dilated convolutional layer with ‘SAME’ padding, ELU and orthogonal weight initialization.
- class inferno.extensions.layers.convolutional.Conv2D(in_channels, out_channels, kernel_size, dilation=1, activation=None)[source]¶
  Bases: inferno.extensions.layers.convolutional.ConvActivation
  2D convolutional layer with ‘SAME’ padding and orthogonal weight initialization. By default, this layer does not apply an activation function.
- class inferno.extensions.layers.convolutional.Conv3D(in_channels, out_channels, kernel_size, dilation=1, activation=None)[source]¶
  Bases: inferno.extensions.layers.convolutional.ConvActivation
  3D convolutional layer with ‘SAME’ padding and orthogonal weight initialization. By default, this layer does not apply an activation function.
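Because Conv2D and Conv3D apply no activation by default, they act as plain linear convolutions. A common use for such a layer is a final 1x1 projection to the number of output classes; a plain-torch sketch of that pattern (the channel counts here are illustrative, not from the source):

```python
import torch
import torch.nn as nn

# A 1x1 linear projection head: maps 16 feature channels to 2 class logits,
# with no nonlinearity after it (matching activation=None).
head = nn.Conv2d(16, 2, kernel_size=1)
logits = head(torch.randn(1, 16, 32, 32))
print(logits.shape)  # torch.Size([1, 2, 32, 32])
```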
- class inferno.extensions.layers.convolutional.BNReLUConv2D(in_channels, out_channels, kernel_size)[source]¶
  Bases: inferno.extensions.layers.convolutional.ConvActivation
  2D BN-ReLU-Conv layer with ‘SAME’ padding and He weight initialization.
- class inferno.extensions.layers.convolutional.BNReLUConv3D(in_channels, out_channels, kernel_size)[source]¶
  Bases: inferno.extensions.layers.convolutional.ConvActivation
  3D BN-ReLU-Conv layer with ‘SAME’ padding and He weight initialization.
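The name spells out the operation order: batch norm, then ReLU, then convolution (the pre-activation ordering, as in ResNet v2), rather than the more common Conv-BN-ReLU. A plain-torch sketch of the 2D variant under that reading, with ‘SAME’ padding for an odd kernel and He (Kaiming) initialization:

```python
import torch
import torch.nn as nn

def bn_relu_conv_2d(in_channels, out_channels, kernel_size):
    # Note the order: normalization and activation come BEFORE the conv.
    conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                     padding=(kernel_size - 1) // 2)
    nn.init.kaiming_normal_(conv.weight, nonlinearity='relu')  # He init
    return nn.Sequential(nn.BatchNorm2d(in_channels), nn.ReLU(), conv)

block = bn_relu_conv_2d(8, 16, 3)
y = block(torch.randn(2, 8, 32, 32))
print(y.shape)  # torch.Size([2, 16, 32, 32])
```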
- class inferno.extensions.layers.convolutional.BNReLUDepthwiseConv2D(in_channels, out_channels, kernel_size)[source]¶
  Bases: inferno.extensions.layers.convolutional.ConvActivation
  2D BN-ReLU-Conv layer with ‘SAME’ padding, He weight initialization and depthwise convolution. Note that depthwise convolutions require in_channels == out_channels.
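The in_channels == out_channels constraint follows from what a depthwise convolution is: each input channel is convolved with its own single filter (groups equal to the channel count). A plain-torch illustration of that constraint:

```python
import torch
import torch.nn as nn

# Depthwise convolution: groups == in_channels, so each of the 8 channels
# gets its own 3x3 filter, and the channel count cannot change.
depthwise = nn.Conv2d(8, 8, kernel_size=3, padding=1, groups=8)
y = depthwise(torch.randn(1, 8, 16, 16))
print(y.shape)                 # torch.Size([1, 8, 16, 16])
print(depthwise.weight.shape)  # torch.Size([8, 1, 3, 3]) -- one filter per channel
```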
- class inferno.extensions.layers.convolutional.ConvSELU2D(in_channels, out_channels, kernel_size)[source]¶
  Bases: inferno.extensions.layers.convolutional.ConvActivation
  2D Convolutional layer with SELU activation and the appropriate weight initialization.
- class inferno.extensions.layers.convolutional.ConvSELU3D(in_channels, out_channels, kernel_size)[source]¶
  Bases: inferno.extensions.layers.convolutional.ConvActivation
  3D Convolutional layer with SELU activation and the appropriate weight initialization.
inferno.extensions.layers.device module¶
- class inferno.extensions.layers.device.DeviceTransfer(target_device, device_ordinal=None, async=False)[source]¶
  Bases: torch.nn.modules.module.Module
  Layer to transfer variables to a specified device.
- class inferno.extensions.layers.device.OnDevice(module, target_device, device_ordinal=None, async=False)[source]¶
  Bases: torch.nn.modules.module.Module
  Moves a module to a device. The advantage of using this over torch.nn.Module.cuda is that the inputs are transferred to the same device as the module, enabling easy model parallelism.
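A minimal sketch of the idea (not inferno's actual implementation): wrap a module so inputs are moved to the module's device before the forward call. Each pipeline stage can then live on a different device, which is what makes model parallelism easy:

```python
import torch
import torch.nn as nn

class OnDeviceSketch(nn.Module):
    """Hypothetical re-implementation of the OnDevice idea."""
    def __init__(self, module, target_device):
        super().__init__()
        self.module = module.to(target_device)
        self.target_device = target_device

    def forward(self, x):
        # Transfer the input to the module's own device before computing.
        return self.module(x.to(self.target_device))

# 'cpu' here so the sketch runs anywhere; use 'cuda:0', 'cuda:1', ... to
# split a model across GPUs.
stage = OnDeviceSketch(nn.Linear(4, 2), 'cpu')
out = stage(torch.randn(3, 4))
print(out.shape)  # torch.Size([3, 2])
```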
inferno.extensions.layers.reshape module¶
- class inferno.extensions.layers.reshape.View(as_shape)[source]¶
  Bases: torch.nn.modules.module.Module
- class inferno.extensions.layers.reshape.As3D(channel_as_z=False, num_channels_or_num_z_slices=1)[source]¶
  Bases: torch.nn.modules.module.Module
- class inferno.extensions.layers.reshape.As2D(z_as_channel=True)[source]¶
  Bases: torch.nn.modules.module.Module
- class inferno.extensions.layers.reshape.Concatenate(dim=1)[source]¶
  Bases: torch.nn.modules.module.Module
  Concatenate input tensors along a specified dimension.
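In spirit, this is a module wrapper around torch.cat; a minimal sketch of such a layer (an illustration, not inferno's source), with dim=1 being the channel axis for NCHW tensors:

```python
import torch
import torch.nn as nn

class ConcatenateSketch(nn.Module):
    """Hypothetical sketch: concatenate all inputs along a fixed dimension."""
    def __init__(self, dim=1):
        super().__init__()
        self.dim = dim

    def forward(self, *inputs):
        return torch.cat(inputs, dim=self.dim)

cat = ConcatenateSketch(dim=1)
a = torch.randn(1, 8, 16, 16)
b = torch.randn(1, 4, 16, 16)
c = cat(a, b)
print(c.shape)  # torch.Size([1, 12, 16, 16]) -- channels add up
```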
- class inferno.extensions.layers.reshape.Cat(dim=1)[source]¶
  Bases: inferno.extensions.layers.reshape.Concatenate
  An alias for Concatenate. Hey, everyone knows who Cat is.
- class inferno.extensions.layers.reshape.ResizeAndConcatenate(target_size, pool_mode='average')[source]¶
  Bases: torch.nn.modules.module.Module
  Resize input tensors spatially (to a specified target size) before concatenating them along the channel dimension. The downsampling mode can be specified (‘average’ or ‘max’), but the upsampling is always ‘nearest’.
  - POOL_MODE_MAPPING = {'avg': 'avg', 'average': 'avg', 'mean': 'avg', 'max': 'max'}¶
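This mapping normalizes the user-facing pool_mode strings to the two internal modes, so ‘avg’, ‘average’, and ‘mean’ are all accepted spellings of average pooling:

```python
# The mapping as listed above: three aliases for average pooling, one for max.
POOL_MODE_MAPPING = {'avg': 'avg', 'average': 'avg', 'mean': 'avg', 'max': 'max'}

for alias in ('avg', 'average', 'mean'):
    assert POOL_MODE_MAPPING[alias] == 'avg'
print(POOL_MODE_MAPPING['max'])  # max
```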
- class inferno.extensions.layers.reshape.PoolCat(target_size, pool_mode='average')[source]¶
  Bases: inferno.extensions.layers.reshape.ResizeAndConcatenate
  Alias for ResizeAndConcatenate, just to annoy snarky web developers.