Introducing the SignalPop Universal Miner!

You can now easily mine Ethereum with our newly released SignalPop Universal Miner – all you need to do is control your ambient temperature, and we take care of the rest!

SignalPop Universal Miner

Built entirely for Windows-based systems (Windows 10 highly recommended), the Universal Miner carefully monitors your GPUs and keeps their temperatures within 5 degrees Celsius of the target temperature that you set.

Setting Temperature Targets

On hot days, we automatically ramp up the fans and on cold days we reduce them.
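To give a feel for what the temperature control does, here is a minimal, purely illustrative Python sketch of a thermostat-style fan loop. It assumes the pynvml package for reading the GPU temperature and uses a placeholder set_fan_speed() stub; it is not the Universal Miner's actual implementation.

# Illustrative thermostat-style fan loop (not the Universal Miner's code).
import time
import pynvml

TARGET_C = 65.0   # the target temperature you set
BAND_C = 5.0      # stay within +/- 5 degrees C of the target

def set_fan_speed(percent):
    # Placeholder: actually changing the fan duty cycle requires vendor tooling;
    # here we simply report the value the loop would request.
    print(f"requesting fan speed {percent:.0f}%")

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

fan = 50.0
while True:
    temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
    if temp > TARGET_C + BAND_C:       # running hot: ramp the fans up
        fan = min(100.0, fan + 5.0)
    elif temp < TARGET_C - BAND_C:     # running cool: back the fans off
        fan = max(20.0, fan - 5.0)
    set_fan_speed(fan)
    time.sleep(10)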

Active Hardware Monitoring

In addition to temperature control, the Universal Miner monitors the underlying mining software, making sure that it runs continually.

Our goal is to keep you mining 24/7!
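Conceptually this works like a small process watchdog: if the mining process exits for any reason, it is restarted. The Python sketch below is only an illustration of that idea (the miner command line shown is hypothetical), not the Universal Miner's actual code.

# Minimal process-watchdog sketch: relaunch the miner whenever it exits.
import subprocess
import time

MINER_CMD = ["ethminer.exe", "--pool", "eu1.ethermine.org:4444"]  # hypothetical example

while True:
    proc = subprocess.Popen(MINER_CMD)
    proc.wait()                                   # block until the miner stops
    print("miner exited with code", proc.returncode, "- restarting in 5 seconds")
    time.sleep(5)                                 # brief back-off before restarting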

Mining for Charity

Whether you operate a mining farm or run on a home computer, the SignalPop Universal Miner lets you optionally give a portion of your mining time to a charity.  During that time your mining work is done on behalf of the charity, and your compensation is sent to their Ethereum address.

Mining for Charity

We give you a link to each charity so that you can vet them and decide which ones you would like to donate your mining efforts to.

Monitor Your Account

The SignalPop Universal Miner uses the Ethermine Pool for all mining work.  To view your account, simply select the ‘Account’ button, which takes you directly to your account on their site, where you can easily track your earnings.

You can get the free download from our Products page.

Happy Mining!

Deep Convolutional Auto-Encoders for MNIST Now Supported!

In our latest release, version 0.9.2.122, we now support deep convolutional auto-encoders with pooling as described by [1], and do so with the newly released CUDA 9.2.148/cuDNN 7.1.4.

Auto-encoders are models that learn how to re-create the input fed into them.  In our example shown here, the MNIST dataset is fed into our model,…

Auto-Encoder Input

…a Deep Convolutional Auto-Encoder Model named AutoEncPool2.

Deep Convolutional Auto-Encoder Model with Pooling
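If you prefer reading code to diagrams, the following PyTorch sketch shows the general shape of such a model: convolution and pooling down to a small 30-element bottleneck, then unpooling and deconvolution back to the input size. It is an illustration only; the actual AutoEncPool2 layer configuration and sizes in the designer differ.

# Illustrative convolutional auto-encoder with pooling/unpooling and a 30-element
# bottleneck (layer sizes are assumptions for a 28x28 MNIST input).
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    def __init__(self, code_size=30):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)          # 28x28 -> 28x28
        self.pool1 = nn.MaxPool2d(2, return_indices=True)    # -> 14x14
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)         # -> 14x14
        self.pool2 = nn.MaxPool2d(2, return_indices=True)    # -> 7x7
        self.encode = nn.Linear(32 * 7 * 7, code_size)       # bottleneck encoding
        self.decode = nn.Linear(code_size, 32 * 7 * 7)
        self.unpool2 = nn.MaxUnpool2d(2)
        self.deconv2 = nn.ConvTranspose2d(32, 16, 3, padding=1)
        self.unpool1 = nn.MaxUnpool2d(2)
        self.deconv1 = nn.ConvTranspose2d(16, 1, 3, padding=1)

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x, idx1 = self.pool1(x)
        x = torch.relu(self.conv2(x))
        x, idx2 = self.pool2(x)
        code = self.encode(x.flatten(1))                      # 30-element encoding
        x = torch.relu(self.decode(code)).view(-1, 32, 7, 7)
        x = torch.relu(self.deconv2(self.unpool2(x, idx2)))
        x = torch.sigmoid(self.deconv1(self.unpool1(x, idx1)))
        return x, code

model = ConvAutoEncoder()
recon, code = model(torch.randn(8, 1, 28, 28))
print(recon.shape, code.shape)   # torch.Size([8, 1, 28, 28]) torch.Size([8, 30])

Training such a model typically minimizes a reconstruction loss (for example, mean squared error) between the output and the original input.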

The AutoEncPool2 model learns to re-create the MNIST inputs as shown below, which is the visualization of the last BATCHNORM output layer, bn5.

Auto-Encoder Learned Output

The magic occurs in the ip2encode layer, which contains the 30-element encoding learned from the MNIST dataset.  To view the encoding, merely inspect the DEBUG layer attached to it.

Debug Layer

Inspecting the DEBUG layer produces a t-SNE analysis of the embedding that clearly shows data separation for each class within the learned encoding.

Auto-Encoder t-SNE Analysis
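Outside of the designer you could produce a similar view yourself with scikit-learn. The sketch below assumes you have exported the 30-element encodings and their class labels; the random placeholder arrays simply stand in for that data.

# Sketch of a t-SNE view of a learned encoding using scikit-learn (the AI Designer
# computes this for you from the DEBUG layer data).
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

codes = np.random.randn(1000, 30)          # placeholder for real ip2encode outputs
labels = np.random.randint(0, 10, 1000)    # placeholder for the MNIST class labels

emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(codes)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="tab10", s=5)
plt.colorbar(label="digit class")
plt.title("t-SNE of the 30-element encoding")
plt.show()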

Such auto-encoders can be helpful in pre-training models and performing data-reduction tasks.

If you would like to try the Auto-Encoder model yourself, just follow the easy steps in the Auto-Encoder tutorial.

References

[1] Volodymyr Turchenko, Eric Chalmers, Artur Luczak, A Deep Convolutional Auto-Encoder with Pooling – Unpooling Layers in Caffe. arXiv, 2017.

ResNet-56 for CIFAR-10 Now Supported!

In our latest release, version 0.9.2.30, we now support the ResNet-56 model trained on CIFAR-10 as described by [1], and do so with the newly released CUDA 9.2/cuDNN 7.1.4.

ResNet-56 Model for CIFAR-10

To try this out yourself, just follow the easy steps in the new ResNet tutorial!

New Features
  • New CUDA 9.2 support (requires driver 397.93 or above).
  • New cuDNN 7.1.4 support.
  • Easily switch between CUDA 9.2 and CUDA 9.1.
  • New ResNet-56 support with new model and solver templates.
  • Added the ability to print models to an image by using Save As and selecting the Model image files (*.png) type.
  • Easily apply the settings of one node to all others like it in the Model Editor by right-clicking the node and selecting the Apply Settings menu item.
Bug Fixes
  • New bug fixes in the BATCHNORM layer.
  • Solvers now correctly support blobs with no diff.
  • Previous stability issues now appear resolved (by NVIDIA) in latest NVIDIA 397.93 driver.  We now recommend using the 397.93 driver or later.

Happy ‘deep’ learning!

References

[1] Yihui He, ResNet 20 32 44 56 110 for CIFAR10 with caffe. GitHub, 2016.

BATCHNORM and ELU now support cuDNN!

In our latest release, version 0.9.1.86, we have added cuDNN support to both the BATCHNORM and ELU layers, which provides speed improvements.  In addition, we have added the following new features and fixes.

New Features
  • BATCHNORM layer now supports the cuDNN engine.
  • ELU layer now supports the cuDNN engine.
  • Layer debugging added that allows for easy NAN/INF detection on each pass (see the sketch after this list).
  • Warnings are now supported in the Output Window.
  • Mouse-wheel scrolling has been added to the Toolbox Window.
  • The Model Editor now supports single-stepping both forward and backward passes.
  • The Model Editor now supports drag-n-drop replacement of Neuron layers.
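As a rough illustration of what the per-layer NAN/INF check does, the following PyTorch-based sketch attaches a finiteness check to every layer's forward pass. This is not the designer's implementation, just the general idea.

# Minimal sketch of per-layer NaN/INF detection using forward hooks.
import torch

def check_finite(name):
    def hook(module, inputs, output):
        if not torch.isfinite(output).all():
            print(f"WARNING: non-finite values detected in layer '{name}'")
    return hook

model = torch.nn.Sequential(torch.nn.Linear(10, 10), torch.nn.ReLU())
for name, module in model.named_modules():
    if name:                                   # skip the container itself
        module.register_forward_hook(check_finite(name))

model(torch.randn(4, 10))                      # hooks fire on every forward pass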
New Bug Fixes
  • BATCHNORM layer (CUDA version) had numerous bug fixes.
  • ACCURACY layer had bug fixes related to NAN output.
  • POOLING configuration dialog now allows global pooling and kernel sizes of zero.
  • Stream synchronization improvements added throughout MyCaffe.
  • T-SNE now properly uses the T-SNE % of images.
  • Bug fixes added to resolve weight importing issues.
Known Issues

There appear to be several stability issues in the NVIDIA driver version 397.31 (as noted here).  We have also experienced similar stability issues with this driver and version 397.64 when running on Windows 10.  For these reasons, we recommend using NVIDIA driver version 391.35 at this time.

Happy ‘deep’ learning!

Domain-Adversarial Neural Network support added to the SignalPop AI Designer!

Our latest SignalPop AI Designer release, version 0.9.1.70, now supports Domain-Adversarial Neural Networks (DANN) as described by [1].

With image overlay support added to the updated MNIST Dataset Creator, you can now create both source and target datasets.

Source MNIST Dataset
Target MNIST Dataset

Using the new source and target dataset support, you can now easily create DANN networks that use both.

An updated visual editor also supports multiple source and target datasets as shown below with the full DANN model.

DANN Model

The newly added GRADIENTSCALER layer allows for easy gradient reversal; it is attached to the bottleneck layer, shown above, to create an adversarial relationship between the two networks.
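The gradient-reversal idea is simple: the layer acts as an identity on the forward pass and multiplies the gradient by a negative scale on the backward pass. Here is a minimal PyTorch sketch of that idea (not the MyCaffe GRADIENTSCALER code itself):

# Illustrative gradient-reversal layer: identity forward, negated gradient backward.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, scale):
        ctx.scale = scale
        return x.view_as(x)                       # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.scale * grad_output, None     # reversed (scaled) gradient

x = torch.randn(4, 8, requires_grad=True)
y = GradReverse.apply(x, 1.0)    # insert between the bottleneck and the domain branch
y.sum().backward()
print(x.grad[0, :4])             # gradients flowing back through the layer are negated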

Try creating a DANN yourself with the easy step-by-step Tutorials that show you how to get up and running with the latest version of the SignalPop AI Designer.

New General Features
  • Projects now optionally support both source and target datasets.
  • A new GRADIENTSCALER layer has been added for gradient reversals.
  • Full DANN solver and model templates have been added.
  • The MNIST Dataset Creator can now create datasets with an image overlay.
  • We now support the recently released NVIDIA cuDNN 7.1.3.
New Debugging Features
  • We have added single stepping support for both training and testing.
  • A new blob data debugger shows the contents of each blob passing between layers.
  • The model editor has been improved to show models viewable by phase (TRAIN, TEST and RUN).

Happy ‘deep’ learning!

References

[1] Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., . . . Lempitsky, V. (2016). Domain-Adversarial Training of Neural Networks. Journal of Machine Learning Research 17, 1-35.

New beta release 0.9.1.31, synced up with Caffe through 3/26/2018 with cuDNN 7.1.2 support!

Today, we posted a new beta release (0.9.1.31) that is synced up with the native Caffe project through 3/26/2018 and supports the newly released cuDNN 7.1.2.

New Features

1.) Synced up with native Caffe through 3/26/2018 with the following highlights:

  • Added new Swish Layer (see the sketch after this list).
  • Added minor changes and error checking.
  • At this time we are evaluating the new fine-tuning changes, but have not added them just yet.
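For reference, in its basic form the Swish activation is simply swish(x) = x · sigmoid(x); a tiny NumPy sketch:

# Swish activation in its basic form: x * sigmoid(x).
import numpy as np

def swish(x):
    return x / (1.0 + np.exp(-x))    # equivalent to x * sigmoid(x)

print(swish(np.array([-2.0, 0.0, 2.0])))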

2.) We now support the latest cuDNN 7.1.2 library.

3.) The image evaluator now allows you to choose between the CAFFE and CUDNN engines for deconvolution.

Happy ‘deep’ learning!

New beta release 0.9.1.21, with a new focus on Windows 10 (1709+)

We have just released a new beta that focuses on Windows 10 (specifically 1709 and above).  From this release forward, Windows 10 will be our primary platform of focus, but we will continue to run our test cycles on Windows 7.

New Features and Fixes

This release includes the following main additions:

  • Support for the newly released CUDA 9.1 (patches 1-3) and cuDNN 7.1.1.
  • Fixed memory overwrites caused during the convolution backward pass when group > 1.
  • New, faster installation.
New Installation Notes

We have changed our installation process: the SignalPop In-Memory Database Service is now installed under the Local Service account.  Before using the LOAD_FROM_SERVICE image loading method, you will need to do one of the following:

1.) Make sure the Local Service account has access to your DNN database tables (use SQL Server Management Studio to make these changes).

2.) Alternatively, change the Service Account used by the SignalPop In-Memory Database service to an account that has access to the tables within the DNN database.  This link will show you how to do just that.

New Maintenance Release Available – version 0.9.0.427

We have just dropped a new maintenance release, version 0.9.0.427.  This release includes the following improvements:

1.) Dramatically improved start-up time.
2.) t-SNE algorithm bug fixes for very small ‘% of NN to circle’ values.
3.) Dataset naming improved.
4.) First database creation improved.

For a list of all bugs fixed, see our bugs section in the Developer area.

NOTE: Your existing product license key will work with this new release, just install this version and you are ready to go!

If you don’t have the SignalPop AI Designer, you can download an evaluation version for free from the Products area.

Known Issues
  • IMPORTANT: When using the AlexNet (32×32) or (56×56) resource template, the second convolution layer ‘conv2’ uses a group setting of 2.  This causes a known CUDA error when using the CUDNN engine.  For now, we recommend changing the ‘conv2’ group setting to 1 to work around the issue while we work on a fix.

New beta release now synced up with native Caffe through 2/1/2018!

We have just posted a new beta release, version 0.9.0.409, that is fully synced up with the native Caffe open-source project through 2/1/2018.

New features added in this version include the following:

1.) The deconvolution layer now supports the CUDNN engine.
2.) The BilinearFill has been updated.
3.) All NVIDIA cuDNN errors are now supported up through version 7.0.5.
4.) All NVIDIA CUDA errors are now supported up through version 9.1.
5.) The CUDA.9 low-level interface DLL now uses compute_35 and sm_35; for compute_30/sm_30, use the CUDA.8 low-level interface DLL.
6.) NCCL has been updated to resolve issues caused when training in a multi-GPU configuration.

Happy learning!