In our latest release, version 0.9.1.86, we have added cuDNN support to both the BATCHNORM and ELU layers, which provides speed improvements. In addition, we have added the following new features and fixes.
BATCHNORM layer now supports the cuDNN engine.
ELU layer now supports the cuDNN engine.
Layer debugging has been added that allows for easy NAN/INF detection on each pass.
Warnings are now supported in the Output Window.
Mouse-wheel scrolling has been added to the Toolbox Window.
The Model Editor now supports single-stepping both forward and backward passes.
The Model Editor now supports drag-n-drop replacement of Neuron layers.
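The NAN/INF layer debugging mentioned above boils down to scanning each layer's output for non-finite values after a pass. A minimal sketch of the idea in Python (the function and blob names are illustrative, not the designer's actual API):

```python
import numpy as np

def check_blob(name, blob):
    """Return a list of problems found in one layer's output blob."""
    problems = []
    if np.isnan(blob).any():
        problems.append(f"{name}: contains NaN")
    if np.isinf(blob).any():
        problems.append(f"{name}: contains Inf")
    return problems

# Example: simulate two layer outputs, one of which produced bad values.
outputs = {
    "conv1": np.array([0.5, 1.2, -0.3]),
    "fc1":   np.array([1.0, np.nan, np.inf]),
}
issues = [p for name, blob in outputs.items() for p in check_blob(name, blob)]
print(issues)  # ['fc1: contains NaN', 'fc1: contains Inf']
```

Running a check like this after each forward and backward pass makes it easy to pinpoint the first layer at which a value blows up.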
New Bug Fixes
The BATCHNORM layer (CUDA version) received numerous bug fixes.
The ACCURACY layer received bug fixes related to NAN output.
POOLING configuration dialog now allows global pooling and kernel sizes of zero.
Stream synchronization improvements added throughout MyCaffe.
T-SNE now properly honors the ‘T-SNE % of images’ setting.
Bug fixes added to resolve weight importing issues.
There appear to be several stability issues in NVIDIA driver version 397.31 (as noted here). We have also experienced similar stability issues with this driver and with version 397.64 when running on Windows 10. For these reasons, we recommend using NVIDIA driver version 391.35 at this time.
We have just released a new beta with a new focus on Windows 10 (specifically 1709 and above). Moving forward from this release, Windows 10 will be our primary platform of focus, though we will continue to run our test cycles on Windows 7.
New Features and Fixes
This release includes the following main additions:
Support for newly released CUDA 9.1 (patches 1-3) and cuDNN 7.1.1
Fixed memory overwrites caused during the convolution backward pass when group > 1.
A new, faster installation process.
New Installation Notes
We have changed our installation process: the SignalPop In-Memory Database Service is now installed under the Local Service account. Before using the LOAD_FROM_SERVICE image loading method, you will need to do one of the following:
1.) Make sure the Local Service account has access to your DNN database tables (Use the SQL Server Management Studio to make these changes).
2.) Alternatively, change the Service Account used by the SignalPop In-Memory Database service to an account that has access to the tables within the DNN database. This link will show you how to do just that.
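As an example of option 1, the grant can be made with a few T-SQL statements in SQL Server Management Studio (a sketch; the database name DNN and the role memberships shown are illustrative and should be adjusted to your setup):

```sql
-- Allow the Local Service account to log in to SQL Server.
CREATE LOGIN [NT AUTHORITY\LOCAL SERVICE] FROM WINDOWS;
GO
USE DNN;  -- your DNN database name may differ
GO
-- Map the login to a database user and grant read/write access.
CREATE USER [NT AUTHORITY\LOCAL SERVICE] FOR LOGIN [NT AUTHORITY\LOCAL SERVICE];
ALTER ROLE db_datareader ADD MEMBER [NT AUTHORITY\LOCAL SERVICE];
ALTER ROLE db_datawriter ADD MEMBER [NT AUTHORITY\LOCAL SERVICE];
GO
```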
We have just dropped a new maintenance release version 0.9.0.427. This release includes the following improvements:
1.) Dramatically improved start-up time.
2.) The t-SNE algorithm includes bug fixes related to very small ‘% of NN to circle’ settings.
3.) Dataset naming improved.
4.) First database creation improved.
For a list of all bugs fixed, see our bugs section in the Developer area.
NOTE: Your existing product license key will work with this new release, just install this version and you are ready to go!
If you don’t have the SignalPop AI Designer, you can download an evaluation version for free from the Products area.
IMPORTANT: When using the AlexNet (32×32) or (56×56) resource template, the second convolution layer ‘conv2’ uses a group setting of 2. This causes a known CUDA error when using the CUDNN engine. We recommend for now changing the ‘conv2’ group setting to 1 to work around the issue while we work on a fix.
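In Caffe prototxt form, the workaround described above is a one-line change to the ‘conv2’ layer definition (a sketch; the layer's other parameters are omitted here and may differ in your model):

```
layer {
  name: "conv2"
  type: "Convolution"
  convolution_param {
    # ... other parameters unchanged ...
    group: 1  # was 2; use 1 until the CUDNN group issue is fixed
  }
}
```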
We have just released a new beta release – version 0.9.0.409 that is fully synced up with the native Caffe open-source project up through 2/1/2018.
New features added in this version include the following:
1.) The deconvolution layer now supports the CUDNN engine.
2.) The BilinearFill has been updated.
3.) All NVIDIA cuDNN errors are now supported up through version 7.0.5.
4.) All NVIDIA CUDA errors are now supported up through version 9.1.
5.) The CUDA.9 low-level interface DLL now uses compute_35 and sm_35; for compute_30 and sm_30, use the CUDA.8 low-level interface DLL.
6.) NCCL has been updated to resolve issues caused when training in multi-GPU configurations.
We just released our latest beta release (version 0.9.0.398) with support for CUDA 9.1 and cuDNN 7.0.5 recently released by NVIDIA.
To get the latest and greatest AI development tools from SignalPop, just join our free beta program by selecting Beta user when you sign up. Beta users have full access to the latest SignalPop AI Designer for up to 3 months.
The new release of the SignalPop AI Designer (v. 0.9.0.391) now allows you to easily export both your datasets and projects directly into your Docker containers! With this feature, you can easily develop, edit and test your models (and datasets) locally in the visual SignalPop AI Designer and then quickly deploy them via SFTP into your production Docker container running native Caffe locally or in the cloud.
To get going, all you need to do is set up an SFTP Docker container (such as atmoz/sftp on DockerHub) and link it to your native Caffe Docker container (such as nvidia/caffe, also on DockerHub) via a shared Docker volume.
The following Docker commands will get you started:
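A sketch of the setup, assuming a shared volume named mycaffe-files and illustrative SFTP credentials (user/pass); see each image's DockerHub page for its full set of options:

```shell
# Create a shared volume for exported files.
docker volume create mycaffe-files

# Run the SFTP container (atmoz/sftp), exposing port 22 as 2222 and
# mounting the shared volume as the upload directory.
# 'user:pass:::upload' creates an SFTP user 'user' with password 'pass'.
docker run -d --name sftp -p 2222:22 \
    -v mycaffe-files:/home/user/upload \
    atmoz/sftp user:pass:::upload

# Run the native Caffe container (nvidia/caffe) with the same volume
# mounted where the exported files are expected.
docker run -d -it --name caffe \
    -v mycaffe-files:/workspace/mycaffe/files \
    nvidia/caffe
```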
Once up and running, you can then easily export datasets or projects from the SignalPop AI Designer right into your native Caffe Docker container!
To export your dataset (or project) simply right click on the dataset and select the ‘Export’ menu.
Once the export completes, just run ls in the /workspace/mycaffe/files/data directory on your native Caffe Docker container and you will see the set of images for both the test and training sets of the CIFAR-10 dataset.
For more information, see the “Exporting to Docker” section in the Getting Started document located in the Developers area and also shipped with the SignalPop AI Designer.