MyCaffe  1.12.2.41
Deep learning software for Windows C# programmers.
MyCaffe.layers Namespace Reference

The MyCaffe.layers namespace contains all layers that have a solidified code base, including the Layer class. More...

Namespaces

namespace  alpha
 The MyCaffe.layers.alpha namespace contains all experimental layers that have a fluid and changing code base.
 
namespace  beta
 The MyCaffe.layers.beta namespace contains all beta stage layers.
 
namespace  gpt
 The MyCaffe.layers.gpt namespace contains all GPT related layers.
 
namespace  hdf5
 The MyCaffe.layers.hdf5 namespace contains all HDF5 related layers.
 
namespace  lnn
 The MyCaffe.layers.lnn namespace contains all Liquid Neural Network (LNN) related layers.
 
namespace  nt
 The MyCaffe.layers.nt namespace contains all Neural Transfer related layers.
 
namespace  ssd
 The MyCaffe.layers.ssd namespace contains all Single-Shot MultiBox (SSD) related layers.
 
namespace  tft
 The MyCaffe.layers.tft namespace contains all TFT related layers.
 

Classes

class  AbsValLayer
 The AbsValLayer computes the absolute value of the input. More...
 
class  AccuracyLayer
 The AccuracyLayer computes the classification accuracy for a one-of-many classification task. This layer is initialized with the MyCaffe.param.AccuracyParameter. More...
 
class  ArgMaxLayer
 The ArgMaxLayer computes the index of the K max values for each datum across all dimensions $ (C \times H \times W) $. This layer is initialized with the MyCaffe.param.ArgMaxParameter. More...
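 A minimal standalone C# sketch of the top-K semantics (illustrative only, not the MyCaffe implementation; the method name TopKIndices is hypothetical and 'using System.Linq;' is assumed):

     // Returns the indices of the k largest values in one datum's
     // flattened C*H*W data, largest value first.
     static int[] TopKIndices(float[] datum, int k)
     {
         return datum
             .Select((v, i) => new { v, i })
             .OrderByDescending(p => p.v)
             .Take(k)
             .Select(p => p.i)
             .ToArray();
     }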
 
class  AttentionLayer
 [DEPRECATED] The AttentionLayer provides focus for LSTM-based encoder/decoder models. More...
 
class  BaseConvolutionLayer
 The BaseConvolutionLayer is an abstract base class that factors out BLAS code common to ConvolutionLayer and DeconvolutionLayer. More...
 
class  BaseDataLayer
 The BaseDataLayer is the base class for data Layers that feed Blobs of data into the Net. More...
 
class  BasePrefetchingDataLayer
 The BasePrefetchingDataLayer is the base class for data Layers that pre-fetch data before feeding the Blobs of data into the Net. More...
 
class  Batch
 The Batch contains both the data and label Blobs of the batch. More...
 
class  BatchNormLayer
 The BatchNormLayer normalizes the input to have 0-mean and/or unit (1) variance across the batch. This layer is initialized with the BatchNormParameter. More...
 
class  BatchReindexLayer
 The BatchReindexLayer selects, reorders, or replicates entries of the input Blob along its first axis according to a second index Blob. More...
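 A minimal standalone C# sketch of the reindexing semantics (illustrative only, not the MyCaffe implementation; rows stand in for items along the first axis):

     // Gathers rows of 'bottom' (shape [N, D]) according to 'index',
     // producing a top of shape [index.Length, D]; indices may repeat.
     static float[][] BatchReindex(float[][] bottom, int[] index)
     {
         var top = new float[index.Length][];
         for (int i = 0; i < index.Length; i++)
             top[i] = (float[])bottom[index[i]].Clone();
         return top;
     }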
 
class  BiasLayer
 The BiasLayer computes a sum of two input Blobs, with the shape of the latter Blob 'broadcast' to match the shape of the former. Equivalent to tiling the latter Blob, then computing the elementwise sum. This layer is initialized with the MyCaffe.param.BiasParameter. More...
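 A minimal standalone C# sketch of the broadcast-sum semantics for the common per-channel case (illustrative only, not the MyCaffe implementation; data is assumed flattened in N x C x S order with S = H*W):

     // Adds a per-channel bias (shape [C]) to data of shape [N, C, S],
     // broadcasting the bias across the N and S dimensions.
     static void BroadcastBias(float[] data, float[] bias, int n, int c, int s)
     {
         for (int i = 0; i < n; i++)
             for (int j = 0; j < c; j++)
                 for (int k = 0; k < s; k++)
                     data[(i * c + j) * s + k] += bias[j];
     }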
 
class  BNLLLayer
 The BNLLLayer computes the 'Binomial Normal Log Likelihood' non-linearity. More...
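 A minimal standalone C# sketch, assuming the classic Caffe BNLL formulation y = log(1 + exp(x)) evaluated with a numerically stable branch (illustrative only, not the MyCaffe implementation):

     // Numerically stable softplus: y = log(1 + exp(x)).
     static double Bnll(double x)
     {
         return x > 0 ? x + Math.Log(1.0 + Math.Exp(-x))
                      : Math.Log(1.0 + Math.Exp(x));
     }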
 
class  ClipLayer
 The ClipLayer provides a neuron layer that clips the data to fit within the [min,max] range. This layer is initialized with the MyCaffe.param.ClipParameter. More...
 
class  ConcatLayer
 The ConcatLayer takes at least two Blobs and concatenates them along either the num or channel dimension, outputting the result. This layer is initialized with the MyCaffe.param.ConcatParameter. More...
 
class  ConstantLayer
 The ConstantLayer provides a layer that just outputs a constant value. This layer is initialized with the MyCaffe.param.ConstantParameter. More...
 
class  ContrastiveLossLayer
 The ContrastiveLossLayer computes the contrastive loss $ E = \frac{1}{2N} \sum\limits_{n=1}^N \left(y\right) d^2 + \left(1-y\right) \max \left(margin-d, 0\right)^2 $ where $ d = \left| \left| a_n - b_n \right| \right|_2 $. This layer is initialized with the MyCaffe.param.ContrastiveLossParameter. More...
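 A minimal standalone C# sketch that evaluates the formula above directly (illustrative only, not the MyCaffe implementation; y[i] = 1 marks a similar pair, 0 a dissimilar one):

     // E = 1/(2N) * sum_n [ y_n * d^2 + (1 - y_n) * max(margin - d, 0)^2 ]
     // with d = ||a_n - b_n||_2.
     static double ContrastiveLoss(double[][] a, double[][] b, int[] y, double margin)
     {
         double e = 0;
         for (int i = 0; i < y.Length; i++)
         {
             double d2 = 0;
             for (int j = 0; j < a[i].Length; j++)
             {
                 double diff = a[i][j] - b[i][j];
                 d2 += diff * diff;
             }
             double d = Math.Sqrt(d2);
             e += (y[i] == 1) ? d2 : Math.Pow(Math.Max(margin - d, 0.0), 2);
         }
         return e / (2.0 * y.Length);
     }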
 
class  ConvolutionLayer
 The ConvolutionLayer convolves the input image with a bank of learned filters, and (optionally) adds biases. This layer is initialized with the MyCaffe.param.ConvolutionParameter. More...
 
class  CopyLayer
 The CopyLayer copies the src bottom to the dst bottom. The layer has no output. More...
 
class  CropLayer
 The CropLayer takes a Blob and crops it to the shape specified by the second input Blob, across all dimensions after the specified axis. More...
 
class  DataLayer
 The DataLayer loads data from the IXImageDatabase database. This layer is initialized with the MyCaffe.param.DataParameter. More...
 
class  DataNormalizerLayer
 The DataNormalizerLayer normalizes the input data (and optionally label) based on the normalization operations specified in the layer parameter. More...
 
class  DebugLayer
 The DebugLayer merely stores, up to max_stored_batches, batches of input which are then optionally used by various debug visualizers. This layer is initialized with the MyCaffe.param.DebugParameter. More...
 
class  DeconvolutionLayer
 The DeconvolutionLayer convolves the input with a bank of learned filters, and (optionally) adds biases, treating filters and convolution parameters in the opposite sense as ConvolutionLayer. This layer is initialized with the MyCaffe.param.ConvolutionParameter. More...
 
class  DropoutLayer
 The DropoutLayer, during training only, sets a random portion of $ x $ to 0, adjusting the rest of the vector magnitude accordingly. This layer is initialized with the MyCaffe.param.DropoutParameter. More...
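 A minimal standalone C# sketch, assuming the inverted-dropout convention (survivors are scaled at training time so the expected magnitude is unchanged; illustrative only, not the MyCaffe implementation):

     // Zeros each element with probability p and scales the survivors
     // by 1/(1-p); at test time the input passes through unchanged.
     static void Dropout(float[] x, double p, Random rng)
     {
         float scale = 1.0f / (1.0f - (float)p);
         for (int i = 0; i < x.Length; i++)
             x[i] = rng.NextDouble() < p ? 0.0f : x[i] * scale;
     }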
 
class  DummyDataLayer
 The DummyDataLayer provides data to the Net generated by a Filler. This layer is initialized with the MyCaffe.param.DummyDataParameter. More...
 
class  EltwiseLayer
 The EltwiseLayer computes elementwise operations, such as product and sum, across multiple input Blobs. This layer is initialized with the MyCaffe.param.EltwiseParameter. More...
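 A minimal standalone C# sketch of the SUM operation (illustrative only, not the MyCaffe implementation; PROD and MAX follow the same loop with '*' and Math.Max):

     // Elementwise sum of several equally-shaped inputs.
     static float[] EltwiseSum(params float[][] bottoms)
     {
         var top = new float[bottoms[0].Length];
         foreach (var b in bottoms)
             for (int i = 0; i < top.Length; i++)
                 top[i] += b[i];
         return top;
     }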
 
class  ELULayer
 The ELULayer computes the exponential linear unit non-linearity $ y = \left\{ \begin{array}{lr} x \: \mbox{if} \; x > 0 \\ \alpha (\exp(x)-1) \: \mbox{if} \; x \le 0 \end{array} \right. $. This layer is initialized with the MyCaffe.param.EluParameter. More...
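 A minimal standalone C# sketch that evaluates the formula above directly (illustrative only, not the MyCaffe implementation; the alpha default of 1 is the common choice):

     // y = x                   if x > 0
     // y = alpha * (e^x - 1)   if x <= 0
     static float Elu(float x, float alpha = 1.0f)
     {
         return x > 0 ? x : alpha * ((float)Math.Exp(x) - 1.0f);
     }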
 
class  EmbedLayer
 The EmbedLayer is a layer for learning 'embeddings' of one-hot vector input. This layer is initialized with the MyCaffe.param.EmbedParameter. More...
 
class  EuclideanLossLayer
 The EuclideanLossLayer computes the Euclidean (L2) loss $ E = \frac{1}{2N} \sum\limits_{n=1}^N \left| \left| \hat{y}_n - y_n \right| \right|_2^2 $ for real-valued regression tasks. More...
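 A minimal standalone C# sketch that evaluates the formula above directly (illustrative only, not the MyCaffe implementation):

     // E = 1/(2N) * sum_n ||yhat_n - y_n||^2 over a batch of N rows.
     static double EuclideanLoss(double[][] yhat, double[][] y)
     {
         double e = 0;
         for (int n = 0; n < y.Length; n++)
             for (int j = 0; j < y[n].Length; j++)
             {
                 double d = yhat[n][j] - y[n][j];
                 e += d * d;
             }
         return e / (2.0 * y.Length);
     }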
 
class  ExpLayer
 The ExpLayer computes the exponential of the input. This layer is initialized with the MyCaffe.param.ExpParameter. More...
 
class  FilterLayer
 The FilterLayer takes two or more Blobs, interprets the last Blob as a selector, and filters the remaining Blobs accordingly using the selector data (0 means the corresponding item is filtered out, non-zero means it is kept). More...
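 A minimal standalone C# sketch of the selector semantics for one filtered Blob (illustrative only, not the MyCaffe implementation; 'using System.Linq;' is assumed and rows stand in for batch items):

     // Keeps only the rows of 'data' whose selector entry is non-zero.
     static float[][] Filter(float[][] data, float[] selector)
     {
         return data.Where((row, i) => selector[i] != 0).ToArray();
     }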
 
class  FlattenLayer
 The FlattenLayer reshapes the input Blob into flat vectors. This layer is initialized with the MyCaffe.param.FlattenParameter. More...
 
class  GradientScaleLayer
 The GradientScaleLayer scales the deltas during the backpropagation. This layer is initialized with the MyCaffe.param.GradientScaleParameter. More...
 
class  HingeLossLayer
 The HingeLossLayer computes the hinge loss for a one-of-many classification task. This layer is initialized with the MyCaffe.param.HingeLossParameter. More...
 
class  Im2colLayer
 The Im2colLayer is a helper layer for image operations that rearranges image regions into column vectors. More...
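 A minimal standalone C# sketch for the simplest case of a single channel, stride 1 and no padding (illustrative only, not the MyCaffe implementation):

     // Each output column holds one k x k patch laid out row-major, so a
     // convolution becomes a matrix product of the filters with 'col'.
     static float[,] Im2Col(float[,] img, int k)
     {
         int h = img.GetLength(0), w = img.GetLength(1);
         int oh = h - k + 1, ow = w - k + 1;
         var col = new float[k * k, oh * ow];
         for (int y = 0; y < oh; y++)
             for (int x = 0; x < ow; x++)
                 for (int i = 0; i < k; i++)
                     for (int j = 0; j < k; j++)
                         col[i * k + j, y * ow + x] = img[y + i, x + j];
         return col;
     }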
 
class  ImageDataLayer
 The ImageDataLayer loads data from the image files located in the root directory specified. This layer is initialized with the MyCaffe.param.ImageDataParameter. More...
 
class  InfogainLossLayer
 The InfogainLossLayer is a generalization of SoftmaxWithLossLayer that takes an 'information gain' (infogain) matrix specifying the 'value' of all label pairs. This layer is initialized with the MyCaffe.param.InfogainLossParameter. More...
 
class  InnerProductLayer
 The InnerProductLayer, also known as a 'fully-connected' layer, computes the inner product with a set of learned weights, and (optionally) adds biases. This layer is initialized with the MyCaffe.param.InnerProductParameter. More...
 
class  InputLayer
 The InputLayer provides data to the Net by assigning top Blobs directly. This layer is initialized with the MyCaffe.param.InputParameter. More...
 
class  LabelMappingLayer
 [DEPRECATED - use the DataLayer's DataLabelMappingParameter instead] The LabelMappingLayer converts original labels to new labels specified by the label mapping. This layer is initialized with the MyCaffe.param.LabelMappingParameter. More...
 
class  LastBatchLoadedArgs
 Specifies the arguments sent to the OnBatchLoad event used when synchronizing between Data Layers. More...
 
class  Layer
 An interface for the units of computation which can be composed into a Net. More...
 
class  LayerParameterEx
 The LayerParameterEx class is used when sharing another Net to conserve GPU memory and extends the LayerParameter with shared Blobs for this purpose. More...
 
class  LogLayer
 The LogLayer computes the log of the input. This layer is initialized with the MyCaffe.param.LogParameter. More...
 
class  LossLayer
 The LossLayer provides an interface for Layers that take two Blobs as input – usually (1) predictions and (2) ground-truth labels – and output a singleton Blob representing the loss. This layer is initialized with the MyCaffe.param.LossParameter. More...
 
class  LRNLayer
 The "Local Response Normalization" LRNLayer is used to normalize the input in a local region across or within feature maps. This layer is initialized with the MyCaffe.param.LRNParameter. More...
 
class  LSTMAttentionLayer
 The LSTMAttentionLayer adds attention to the long short-term memory (LSTM) layer and is used in encoder/decoder models. To use attention, set 'enable_attention'=true. When disabled, this layer operates like a standard LSTM layer where inputs have the shape T,B,I with T=timesteps, B=batch and I=input. More...
 
class  LSTMLayer
 The LSTMLayer processes sequential inputs using a 'Long Short-Term Memory' (LSTM) [1] style recurrent neural network (RNN). Implemented by unrolling the LSTM computation through time. This layer is initialized with the MyCaffe.param.RecurrentParameter. More...
 
class  LSTMSimpleLayer
 [DEPRECATED - use LSTMAttentionLayer instead with enable_attention = false] The LSTMSimpleLayer is a simple version of the long short-term memory layer. This layer is initialized with the MyCaffe.param.LSTMSimpleParameter. More...
 
class  LSTMUnitLayer
 The LSTMUnitLayer is a helper for LSTMLayer that computes a single timestep of the non-linearity of the LSTM, producing the updated cell and hidden states. More...
 
class  MathLayer
 The MathLayer which computes various mathematical functions of the input. This layer is initialized with the MyCaffe.param.MathParameter. More...
 
class  MemoryDataLayer
 The MemoryDataLayer provides data to the Net from memory. This layer is initialized with the MyCaffe.param.MemoryDataParameter. More...
 
class  MemoryDataLayerGetDataArgs
 The MemoryDataLayerGetDataArgs class is passed to the OnGetData event. More...
 
class  MemoryDataLayerPackDataArgs
 The MemoryDataLayerPackDataArgs is passed to the OnDataPack event which fires each time the data received in AddDatumVector needs to be packed into a specific ordering as is the case when using an LSTM network. More...
 
class  MemoryLossLayer
 The MemoryLossLayer provides a method of performing a custom loss functionality. Similar to the MemoryDataLayer, the MemoryLossLayer supports an event used to get the loss value. This event is called OnGetLoss, which once retrieved is used for learning on the backward pass. More...
 
class  MemoryLossLayerGetLossArgs
 The MemoryLossLayerGetLossArgs class is passed to the OnGetLoss event. More...
 
class  MultinomialLogisticLossLayer
 The MultinomialLogisticLossLayer computes the multinomial logistic loss for a one-of-many classification task, directly taking a predicted probability distribution as input. More...
 
class  MVNLayer
 The "Mean-Variance Normalization" MVNLayer normalizes the input to have 0-mean and/or unit (1) variance. This layer is initialized with the MyCaffe.param.MVNParameter. More...
 
class  NeuronLayer
 The NeuronLayer is an interface for layers that take one Blob as input (x) and produce one equally-sized Blob as output (y), where each element of the output depends only on the corresponding input element. More...
 
class  ParameterLayer
 The ParameterLayer passes its blob[0] data and diff to the top[0]. More...
 
class  PoolingLayer
 The PoolingLayer pools the input image by taking the max, average, etc. within regions. This layer is initialized with the MyCaffe.param.PoolingParameter. More...
 
class  PowerLayer
 The PowerLayer computes the power of the input. This layer is initialized with the MyCaffe.param.PowerParameter. More...
 
class  PReLULayer
 The PReLULayer computes the "Parameterized Rectified Linear Unit" non-linearity. This layer is initialized with the MyCaffe.param.PReLUParameter. More...
 
class  QuantileLossLayer
 The QuantileLossLayer computes the quantile (pinball) loss $ E = \frac{1}{N} \sum\limits_{n=1}^N \max\left(q \left(y_n - \hat{y}_n\right), \left(q-1\right)\left(y_n - \hat{y}_n\right)\right) $ for a quantile $ q $ in real-valued regression tasks. More...
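 A minimal standalone C# sketch of the single-quantile (pinball) form above (illustrative only, not the MyCaffe implementation, which may average over several quantiles):

     // L = mean over n of max(q * e_n, (q - 1) * e_n), e_n = y_n - yhat_n.
     static double QuantileLoss(double[] yhat, double[] y, double q)
     {
         double e = 0;
         for (int n = 0; n < y.Length; n++)
         {
             double diff = y[n] - yhat[n];
             e += Math.Max(q * diff, (q - 1.0) * diff);
         }
         return e / y.Length;
     }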
 
class  RecurrentLayer
 The RecurrentLayer is an abstract class for implementing recurrent behavior inside of an unrolled network. This layer type cannot be instantiated – instead, you should use one of the implementations which defines the recurrent architecture, such as RNNLayer or LSTMLayer. This layer is initialized with the MyCaffe.param.RecurrentParameter. More...
 
class  ReductionLayer
 The ReductionLayer computes the 'reductions' – operations that return a scalar output Blob for an input Blob of arbitrary size, such as the sum, absolute sum, and sum of squares. This layer is initialized with the MyCaffe.param.ReductionParameter. More...
 
class  ReLULayer
 The ReLULayer computes the 'Rectified Linear Unit' (ReLU) non-linearity, a classic for neural networks. This layer is initialized with the MyCaffe.param.ReLUParameter. More...
 
class  ReshapeLayer
 The ReshapeLayer reshapes the input Blob into an arbitrary-sized output Blob. This layer is initialized with the MyCaffe.param.ReshapeParameter. More...
 
class  RNNLayer
 The RNNLayer processes time-varying inputs using a simple recurrent neural network (RNN). Implemented as a network unrolling the RNN computation in time. This layer is initialized with the MyCaffe.param.RecurrentParameter. More...
 
class  ScaleLayer
 The ScaleLayer computes the elementwise product of two input Blobs, with the shape of the latter Blob 'broadcast' to match the shape of the former. Equivalent to tiling the latter Blob, then computing the elementwise product. Note: for efficiency and convenience, this layer can additionally perform a 'broadcast' sum when 'bias_term: true' is set. This layer is initialized with the MyCaffe.param.ScaleParameter. More...
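 A minimal standalone C# sketch of the broadcast product, with the optional bias, for the common per-channel case (illustrative only, not the MyCaffe implementation; data is assumed flattened in N x C x S order with S = H*W, and bias may be null):

     // data[n,c,s] = data[n,c,s] * scale[c] + bias[c]
     static void BroadcastScale(float[] data, float[] scale, float[] bias,
                                int n, int c, int s)
     {
         for (int i = 0; i < n; i++)
             for (int j = 0; j < c; j++)
                 for (int k = 0; k < s; k++)
                 {
                     int idx = (i * c + j) * s + k;
                     data[idx] = data[idx] * scale[j] + (bias != null ? bias[j] : 0f);
                 }
     }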
 
class  SigmoidCrossEntropyLossLayer
 The SigmoidCrossEntropyLossLayer computes the cross-entropy (logistic) loss and is often used for predicting targets interpreted as probabilities. More...
 
class  SigmoidLayer
 The SigmoidLayer is a neuron layer that calculates the sigmoid function, a classic choice for neural networks. This layer is initialized with the MyCaffe.param.SigmoidParameter. More...
 
class  SilenceLayer
 The SilenceLayer ignores bottom blobs while producing no top blobs. (This is useful to suppress output during testing.) More...
 
class  SliceLayer
 The SliceLayer takes a blob and slices it along either the num or channel dimension, outputting multiple sliced Blob results. This layer is initialized with the MyCaffe.param.SliceParameter. More...
 
class  SoftmaxCrossEntropy2LossLayer
 The SoftmaxCrossEntropy2LossLayer computes the cross-entropy (logistic) loss and is often used for predicting targets interpreted as probabilities. More...
 
class  SoftmaxCrossEntropyLossLayer
 The SoftmaxCrossEntropyLossLayer computes the cross-entropy (logistic) loss and is often used for predicting targets interpreted as probabilities in reinforcement learning. More...
 
class  SoftmaxLayer
 The SoftmaxLayer computes the softmax function. This layer is initialized with the MyCaffe.param.SoftmaxParameter. More...
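 A minimal standalone C# sketch of the softmax over one vector, using the standard max-subtraction trick for numerical stability (illustrative only, not the MyCaffe implementation; 'using System.Linq;' is assumed):

     // y_i = exp(x_i - max(x)) / sum_j exp(x_j - max(x))
     static double[] Softmax(double[] x)
     {
         double max = x.Max();
         var exp = x.Select(v => Math.Exp(v - max)).ToArray();
         double sum = exp.Sum();
         return exp.Select(v => v / sum).ToArray();
     }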
 
class  SoftmaxLossLayer
 Computes the multinomial logistic loss for a one-of-many classification task, passing real-valued predictions through a softmax to get a probability distribution over classes. More...
 
class  SplitLayer
 The SplitLayer creates a 'split' path in the network by copying the bottom Blob into multiple top Blobs to be used by multiple consuming layers. More...
 
class  SPPLayer
 The SPPLayer performs spatial pyramid pooling on the input image by taking the max, average, etc. within regions, so that the resulting vectors for different sized images are of the same size. This layer is initialized with the MyCaffe.param.SPPParameter. More...
 
class  SwishLayer
 The SwishLayer computes the Swish non-linearity, a smooth activation function that often outperforms ReLU. This layer is initialized with the MyCaffe.param.SwishParameter. More...
 
class  TanhLayer
 The TanhLayer is a neuron layer that calculates the tanh function, popular with auto-encoders. This layer is initialized with the MyCaffe.param.TanhParameter. More...
 
class  ThresholdLayer
 The ThresholdLayer is a neuron layer that tests whether the input exceeds a threshold: outputs 1 for inputs above threshold; 0 otherwise. This layer is initialized with the MyCaffe.param.ThresholdParameter. More...
 
class  TileLayer
 The TileLayer copies a Blob along specified dimensions. This layer is initialized with the MyCaffe.param.TileParameter. More...
 

Detailed Description

The MyCaffe.layers namespace contains all layers that have a solidified code base, including the Layer class.