Deep Convolutional Auto-Encoders for MNIST Now Supported!

In our latest release, we now support deep convolutional auto-encoders with pooling as described by [1], and do so with the newly released CUDA 9.2.148/cuDNN 7.1.4.

Auto-encoders are models that learn to re-create the input fed into them.  In the example shown here, the MNIST dataset is fed into our model,…

Auto-Encoder Input

…a Deep Convolutional Auto-Encoder Model named AutoEncPool2.

Deep Convolutional Auto-Encoder Model with Pooling

The AutoEncPool2 model learns to re-create the MNIST inputs as shown below, visualized from the output of the last BATCHNORM layer, bn5.

Auto-Encoder Learned Output

The magic occurs in the ip2encode layer, which contains the 30-element encoding learned from the MNIST dataset.  To view the encoding, simply inspect the DEBUG layer attached to it.

Debug Layer

Inspecting the DEBUG layer produces a t-SNE analysis of the embedding that clearly shows data separation for each class within the learned encoding.
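
The same kind of analysis can be sketched outside the designer.  Assuming scikit-learn is available, a t-SNE projection of toy 30-element codes (three synthetic "classes" standing in for the learned encodings) looks like this; the designer computes the real projection for you from the DEBUG layer data.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Toy stand-in for the 30-element encodings: three well-separated
# clusters, one per "class".
centers = rng.normal(0.0, 5.0, (3, 30))
codes = np.vstack([c + rng.normal(0.0, 0.1, (20, 30)) for c in centers])
labels = np.repeat([0, 1, 2], 20)

# Project the 30-D codes down to 2-D; well-separated classes remain
# separated in the embedding, which is what the analysis visualizes.
emb = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(codes)
print(emb.shape)  # (60, 2)
```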

Auto-Encoder t-SNE Analysis

Such auto-encoders can be helpful for pre-training models and for data-reduction tasks.

If you would like to try the Auto-Encoder model yourself, just follow the easy steps in the Auto-Encoder tutorial.


[1] Volodymyr Turchenko, Eric Chalmers, Artur Luczak, A Deep Convolutional Auto-Encoder with Pooling-Unpooling Layers in Caffe. arXiv, 2017.

ResNet-56 for CIFAR-10 Now Supported!

In our latest release, we now support the ResNet-56 model trained on CIFAR-10 as described by [1], and do so with the newly released CUDA 9.2/cuDNN 7.1.4.

ResNet-56 Model for CIFAR-10

To try this out yourself, just follow the easy steps in the new ResNet tutorial!

New Features
  • New CUDA 9.2 support (requires driver 397.93 or above)
  • New cuDNN 7.1.4 support.
  • Easily switch between CUDA 9.2 and CUDA 9.1.
  • New ResNet-56 support with new model and solver templates.
  • Added ability to print models to image using Save As and selecting the Model image files (*.png) type.
  • Easily apply the settings of one node to all others like it in the Model Editor by right clicking the node and selecting the Apply Settings menu item.
Bug Fixes
  • New bug fixes in the BATCHNORM layer.
  • Solvers now correctly support blobs with no diff.
  • Previous stability issues now appear resolved (by NVIDIA) in the latest NVIDIA 397.93 driver.  We now recommend using the 397.93 driver or later.

Happy ‘deep’ learning!


[1] Yihui He, ResNet 20 32 44 56 110 for CIFAR10 with caffe. GitHub, 2016.

BATCHNORM and ELU now support cuDNN!

In our latest release, we have added cuDNN support to both the BATCHNORM and ELU layers, which provides speed improvements.  In addition, we have added the following new features and fixes.
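
For reference, a BATCHNORM layer's forward pass normalizes each channel over the batch and then applies a learned scale and shift.  A minimal NumPy sketch of that computation (an illustration, not the cuDNN implementation):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Forward pass of batch normalization: normalize each channel
    to zero mean / unit variance over the batch, then apply the
    learned scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# A batch of 64 samples with 8 channels, deliberately off-center.
x = np.random.default_rng(0).normal(3.0, 2.0, (64, 8))
y = batchnorm_forward(x, gamma=np.ones(8), beta=np.zeros(8))
# Each channel of y now has (near) zero mean and unit variance.
```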

New Features
  • BATCHNORM layer now supports the cuDNN engine.
  • ELU layer now supports the cuDNN engine.
  • Layer debugging added that allows for easy NAN/INF detection on each pass.
  • Warnings are now supported in the Output Window.
  • Mouse-wheel scrolling has been added to the Toolbox Window.
  • The Model Editor now supports single-stepping both forward and backward passes.
  • The Model Editor now supports drag-n-drop replacement of Neuron layers.
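
The NAN/INF layer debugging above can be pictured with a small NumPy sketch; this is an illustrative stand-in, not the designer's actual check.

```python
import numpy as np

def check_blob(name, data):
    """Scan a blob for NaN/Inf values and report any found, similar
    in spirit to the per-pass layer debugging described above."""
    nan_count = int(np.isnan(data).sum())
    inf_count = int(np.isinf(data).sum())
    if nan_count or inf_count:
        print(f"WARNING {name}: {nan_count} NaN, {inf_count} Inf values")
    return nan_count == 0 and inf_count == 0

blob = np.array([1.0, np.nan, np.inf, 2.0])
ok = check_blob("conv1", blob)  # prints a warning and returns False
```

Running a check like this after every forward and backward pass makes it easy to pinpoint the first layer at which a computation blows up.
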
Bug Fixes
  • BATCHNORM layer (CUDA version) had numerous bug fixes.
  • ACCURACY layer had bug fixes related to NAN output.
  • POOLING configuration dialog now allows global pooling and kernel sizes of zero.
  • Stream synchronization improvements added throughout MyCaffe.
  • t-SNE now properly uses the ‘t-SNE % of images’ setting.
  • Bug fixes added to resolve weight importing issues.
Known Issues

There appear to be several stability issues in NVIDIA driver version 397.31 (as noted here).  We have also experienced similar stability issues with this driver and with version 397.64 when running on Windows 10.  For these reasons, we recommend using NVIDIA driver version 391.35 at this time.

Happy ‘deep’ learning!

Domain-Adversarial Neural Network support added to the SignalPop AI Designer!

Our latest SignalPop AI Designer release now supports Domain-Adversarial Neural Networks (DANN) as described by [1].

With image overlay support added to the updated MNIST Dataset Creator, you can now create both source and target datasets.

Source MNIST Dataset
Target MNIST Dataset

Using the new source and target dataset support, you can now easily create DANN networks that use both.

An updated visual editor also supports multiple source and target datasets as shown below with the full DANN model.

DANN Model

The newly added GRADIENTSCALER layer allows for easy gradient reversal; it is attached to the bottleneck layer, shown above, to create an adversarial relationship between the two networks.
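
Gradient reversal itself is simple: the layer is the identity on the forward pass and scales the gradient by a negative factor on the backward pass, so the feature extractor is pushed to *confuse* the domain classifier.  A NumPy sketch (the class name and API here are illustrative, not MyCaffe's actual layer interface):

```python
import numpy as np

class GradientScaler:
    """Identity on the forward pass; scales (and, with the sign flip,
    reverses) the gradient on the backward pass, as in DANN's
    gradient-reversal trick."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x                      # features pass through unchanged

    def backward(self, grad_top):
        return -self.lam * grad_top   # flip the gradient flowing down

layer = GradientScaler(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
out = layer.forward(x)               # identical to x
grad = layer.backward(np.ones(3))    # [-0.5, -0.5, -0.5]
```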

Try creating a DANN yourself with the easy step-by-step Tutorials that show you how to get up and running with the latest version of the SignalPop AI Designer.

New General Features
  • Projects now optionally support both source and target datasets.
  • A new GRADIENTSCALER layer has been added for gradient reversals.
  • Full DANN solver and model templates have been added.
  • The MNIST Dataset Creator can now create datasets with an image overlay.
  • We now support the recently released NVIDIA cuDNN 7.1.3.
New Debugging Features
  • We have added single stepping support for both training and testing.
  • A new blob data debugger shows the contents of each blob passing between layers.
  • The model editor has been improved to show models viewable by phase (TRAIN, TEST and RUN).

Happy ‘deep’ learning!


[1] Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., . . . Lempitsky, V. (2016). Domain-Adversarial Training of Neural Networks. Journal of Machine Learning Research 17, 1-35.

New beta release, synced up with Caffe through 3/26/2018 with cuDNN 7.1.2 support!

Today, we posted a new beta release that is synced up with the native Caffe project through 3/26/2018 and supports the newly released cuDNN 7.1.2.

New Features

1.) Synced up with native Caffe through 3/26/2018 with the following highlights:

  • Added new Swish Layer.
  • Added minor changes and error checking.
  • At this time we are evaluating the new fine tuning changes, but have not added them just yet.

2.) We now support the latest cuDNN 7.1.2 library.

3.) The image evaluator now allows you to choose between the CAFFE and CUDNN engines for deconvolution.
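
For reference, the Swish activation added in the sync above computes f(x) = x · sigmoid(βx); a minimal NumPy sketch:

```python
import numpy as np

def swish(x, beta=1.0):
    """Swish activation: f(x) = x * sigmoid(beta * x).
    Written as x / (1 + exp(-beta * x)), which is the same thing."""
    return x / (1.0 + np.exp(-beta * x))

x = np.array([-2.0, 0.0, 2.0])
y = swish(x)  # smooth, non-monotonic near zero, ~linear for large x
```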

Happy ‘deep’ learning!

New beta release, with a new focus on Windows 10 (1709+)

We have just released a new beta that focuses on Windows 10 (specifically 1709 and above).  Moving forward, Windows 10 will be our primary platform, though we will continue to run our test cycles on Windows 7.

New Features and Fixes

This release includes the following main additions:

  • Support for newly released CUDA 9.1 (patches 1-3) and cuDNN 7.1.1
  • Fixed memory overwrites caused during the convolution backward pass when group > 1.
  • A new, faster installation process.
New Installation Notes

We have changed our installation process: the SignalPop In-Memory Database Service is now installed under the Local Service account.  Before using the LOAD_FROM_SERVICE image loading method, you will need to do one of the following:

1.) Make sure the Local Service account has access to your DNN database tables (Use the SQL Server Management Studio to make these changes).

2.) Alternatively, change the Service Account used by the SignalPop In-Memory Database service to an account that has access to the tables within the DNN database.  This link will show you how to do just that.

New Maintenance Release Available

We have just dropped a new maintenance release.  This release includes the following improvements:

1.) Dramatically improved start-up time.
2.) The t-SNE algorithm includes bug fixes related to very small ‘% of NN to circle’ settings.
3.) Dataset naming improved.
4.) First database creation improved.

For a list of all bugs fixed, see our bugs section in the Developer area.

NOTE: Your existing product license key will work with this new release, just install this version and you are ready to go!

If you don’t have the SignalPop AI Designer, you can download an evaluation version for free from the Products area.

Known Issues
  • IMPORTANT: When using the AlexNet (32×32) or (56×56) resource template, the second convolution layer ‘conv2’ uses a group setting of 2.  This causes a known CUDA error when using the CUDNN engine.  We recommend for now changing the ‘conv2’ group setting to 1 to work around the issue while we work on a fix.

New beta release now synced up with native Caffe through 2/1/2018!

We have just released a new beta release that is fully synced up with the native Caffe open-source project through 2/1/2018.

New features added in this version include the following:

1.) The deconvolution layer now supports the CUDNN engine.
2.) The BilinearFill has been updated.
3.) All NVIDIA cuDNN errors are now supported up through version 7.0.5.
4.) All NVIDIA CUDA errors are now supported up through version 9.1.
5.) The CUDA.9 low-level interface DLL now uses compute_35 and sm_35; for compute_30 and sm_30, use the CUDA.8 low-level interface DLL.
6.) NCCL has been updated to resolve issues caused when training in multi-GPU configuration.

Happy learning!

Export datasets and projects directly into your Docker containers!

The new release of the SignalPop AI Designer now allows you to easily export both your datasets and projects directly into your Docker containers!  With this feature, you can develop, edit and test your models (and datasets) locally in the visual SignalPop AI Designer and then quickly deploy them via SFTP into your production Docker container running native Caffe locally or in the cloud.

To get going, all you need to do is set up an SFTP Docker container (such as atmoz/sftp on DockerHub) and link it to your native Caffe Docker container (such as nvidia/caffe, also on DockerHub) via a shared Docker volume.

The following Docker commands will get you started:

$ docker volume create mycaffe-vol
$ docker container run \
    -v mycaffe-vol:/home/signalpop/mycaffe \
    -p 2222:22 \
    -d atmoz/sftp signalpop:password:1001::mycaffe/files
$ docker container run -it \
    -v mycaffe-vol:/workspace/mycaffe nvidia/caffe

Once up and running, you can then easily export datasets or projects from the SignalPop AI Designer right into your native Caffe Docker container!

To export your dataset (or project), simply right-click on the dataset and select the ‘Export’ menu item.

Exporting a dataset to a Docker container.

Once the export completes, just run ls in the /workspace/mycaffe/files/data directory on your native Caffe Docker container and you will see the set of images for both the test and training sets of the CIFAR-10 dataset.

For more information, see the “Exporting to Docker” section in the Getting Started document located in the Developers area and also shipped with the SignalPop AI Designer.