SNIPE - Scalable and Generalized Neural Information Processing Engine

Download

Choose among the following files:

With some browsers, downloads may fail to load. If this problem persists, try downloading by right-clicking the download link and selecting “save as”!

What is SNIPE?

SNIPE is a well-documented Java library that implements a framework for neural networks in a speedy, feature-rich and usable way. It is available at no cost for non-commercial purposes; if you plan to use it commercially, please get in touch with me.

SNIPE was originally designed for high-performance simulations with lots and lots of neural networks (even large ones) being trained simultaneously. Recently, I decided to give it away as a professional reference implementation that covers the network aspects handled in my manuscript "A Brief Introduction to Neural Networks", while at the same time being faster and more efficient than many other implementations, owing to the original high-performance design goal.

The most important features of SNIPE:

  1. Generalized, dynamic data structure for arbitrary network topologies, so that virtually all network structures can be realized or even easily hand-crafted. Topological editing at run-time is easily possible (see the sketch below). Roughly put, I use an adjacency list that was optimized and tailored to provide the following features regardless of the topology:
    1. In-situ processing. No preprocessing of the data structure needed when switching from editing to propagating and vice versa.
    2. Memory consumption in O(synapses)
    3. Batch-read and batch-write of all synaptic weights of a neuron in O(log(synapsesIncidentToNeuron)), therefore complete forward and backward propagations in O(synapses)
    4. Read/write single synaptic weights: O(log(neurons))
    5. Add/remove synapses: O(neurons*log(neurons))
    6. Add/remove neurons: O(neurons+synapses)
  2. Built-in, fast and easy-to-use learning operators for gradient descent and evolutionary learning.
  3. Mechanisms for the design and efficient control of even large populations of neural networks.
  4. Usage of only low-level data structures (arrays) for easy portability. The goal is not to squeeze the last tiny bit of asymptotic complexity out of the structure, but to make it usable, lightweight and fast in practice.
  5. No object-oriented overhead, like objects for every neuron or even every synapse.

For the sake of readability, the runtimes presented here are slightly pessimistic – for a more exact analysis, have a look at the JavaDoc of core.NeuralNetwork.
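
To make run-time editing concrete, here is a minimal, self-contained sketch that uses only calls also appearing in the full examples further down this page; the layer sizes and weight value are arbitrary.

import com.dkriesel.snipe.core.NeuralNetwork;
import com.dkriesel.snipe.core.NeuralNetworkDescriptor;

/**
 * Minimal sketch of run-time topological editing.
 */
public class TopologyEditingSketch {
	public static void main(String[] args) {
		// Outline feed-forward networks with 2 input, 2 hidden and 1 output
		// neurons; do not add the allowed synapses automatically.
		NeuralNetworkDescriptor desc = new NeuralNetworkDescriptor(2, 2, 1);
		desc.setSettingsTopologyFeedForward();
		desc.setInitializeAllowedSynapses(false);
		NeuralNetwork net = new NeuralNetwork(desc);

		// Edit the topology in place - no preprocessing step in between:
		int fresh = net.createNeuronInLayer(1); // add a hidden neuron: O(neurons+synapses)
		net.setSynapse(1, fresh, 0.5);          // add a synapse from input neuron 1: O(neurons*log(neurons))
		net.removeNeuron(fresh);                // remove it again, incident synapses included: O(neurons+synapses)
	}
}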

What SNIPE is not

SNIPE doesn't provide an eye-candy GUI or similar features. It is just a framework that enables you to use neural networks in your software in a professional, yet easy way – nothing more and nothing less.

Provide Feedback!

I really appreciate and make use of feedback I receive from users. If you have any complaints, bug fixes, suggestions, or acclamations :-) send me an email or place a comment in the discussion section at the bottom of this page.

News


2011-10-16: SNIPE Version 0.9 released

2010-11-14: SNIPE Version 0.82

2010-05-11: SNIPE Version 0.81

2010-03-28: SNIPE - Scalable and Generalized Neural Information Processing Engine

Getting started with SNIPE - a small TODO list

  1. Learn Java :-)
  2. Read this page completely for a brief introduction
  3. Download the JAR and make it accessible in your Java development environment
  4. Download the JavaDoc, unzip it and open it in your browser
  5. Read the documentation of the class com.dkriesel.snipe.core.NeuralNetworkDescriptor to learn the very basics
  6. Read as much as you wish of the documentation of the class com.dkriesel.snipe.core.NeuralNetwork to get detailed information and an idea of its features and their computational costs
  7. Have a closer look at the code examples on this page
  8. Try things out for yourself. SNIPE throws exceptions for lots of errors you may encounter and invalid arguments you may pass into methods, so learn by doing!

Brief Package and Class Description

SNIPE consists of four main packages, which in turn contain several classes. All package names include the prefix com.dkriesel.snipe., while all qualified class names include their respective package names as prefix. Packages and classes introduced in this section are ordered by their importance in your development, starting with the most fundamental ones and moving on to those needed for advanced development.

The core package (the most important)

This package contains the two core classes of SNIPE: NeuralNetworkDescriptor, which you use to outline and create instances of neural networks, and the NeuralNetwork class itself. Both classes' documentations, as well as the package documentation, contain fundamental information for getting used to SNIPE.

  • NeuralNetworkDescriptor: Read and instantiate first! A NeuralNetworkDescriptor object defines a set of high-level, general parameters that are used to create a possibly large group of neural network instances. To create a neural network, you first define a descriptor, and then create the network using the descriptor (see the sketch after this list). This layout is useful for creating even large populations of networks, for instance when using evolutionary algorithms. Moreover, some fundamental information about SNIPE usage is given in the documentation of this class.
  • NeuralNetwork: Instantiated second, using a NeuralNetworkDescriptor instance, this class represents the core functionality of SNIPE: a neural network of arbitrary topology with a lightweight, calculation-efficient data structure, the possibility to change the entire network topology as well as the synaptic weights, and lots of other features. The documentation of this class gives detailed information about the efficient data structure of the neural network, as well as about its features and their computational costs.
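
To illustrate this two-step layout, here is a minimal sketch using only descriptor and network calls that also appear in the full examples below (an 8-3-8 layout, matching the backprop example; imports as in those examples):

NeuralNetworkDescriptor desc = new NeuralNetworkDescriptor(8, 3, 8);
desc.setSettingsTopologyFeedForward();
NeuralNetwork net = new NeuralNetwork(desc);                 // a single network ...
NeuralNetwork[] population = desc.createNeuralNetworks(100); // ... or a whole population from one descriptor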

The training package

Contains a class in which to store the training data for your neural network, and one that provides several error measurement methods.

  • ErrorMeasurement: Contains several error measurement methods for static use, such as root mean square error, Euclidean error and others.
  • TrainingSampleLesson: Use this class for training data storage and optimization.
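
As a quick sketch of how these two classes interact (the same calls are used in the backprop example below; net is a NeuralNetwork instance as created in the core package sketch above):

// 8-dimensional encoder problem with 1 as positive and -1 as negative value
TrainingSampleLesson lesson = TrainingSampleLesson.getEncoderSampleLesson(8, 1, -1);
net.trainBackpropagationOfError(lesson, 1000, 0.2); // 1000 training steps, learning rate 0.2
double rms = ErrorMeasurement.getErrorRootMeanSquareSum(net, lesson);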

The neuronbehavior package

Contains several neuron behaviors. Neuron behaviors are generalized activity functions; some of them are introduced here, along with an interface for implementing customized ones.

  • NeuronBehavior: Implement this interface to create neuron behaviors of your own!
  • TangensHyperbolicus and Fermi: These classes represent the standard activity functions, namely the tangens hyperbolicus and the Fermi function.
  • TangensHyperbolicusAnguita: Represents a tuned approximation of the tangens hyperbolicus that is about 200 times faster to compute and incorporates other advantages as well. Other versions of the tangens hyperbolicus are at your disposal, too.
  • Different leaky integrators allow for neuron-individual dynamics.
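
Neuron behaviors are chosen through the descriptor, as in this fragment (the same two calls appear, commented out, in the backprop example below; desc is a NeuralNetworkDescriptor as above):

// Use the fast tangens hyperbolicus approximation in the hidden and output
// layers; the input layer keeps its identity activity function.
desc.setNeuronBehaviorHiddenNeurons(new TangensHyperbolicusAnguita());
desc.setNeuronBehaviorOutputNeurons(new TangensHyperbolicusAnguita());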

The util package

Contains some SNIPE utilities: a random number generator (which is not mine; see its class documentation for copyright information) and a class that generates DOT code for a neural net in order to visualize it using GraphViz.

  • GraphVizEncoder: Creates code for the GraphViz DOT engine to visualize a given neural network. Be careful – this tool easily generates code even for networks that are way too large for GraphViz to deal with :-|
  • MersenneTwisterFast: A fast pseudo random number generator used widely in SNIPE. It is not mine; have a look into its documentation for more copyright information.
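
Generating the DOT code takes two lines, exactly as in the hand-crafting example below (net is a NeuralNetwork instance):

GraphVizEncoder graph = new GraphVizEncoder();
String code = graph.getGraphVizCode(net, "NeuralNetwork"); // feed the output to GraphViz' dot tool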

Example Code Snippets

Here is some richly commented example code for you – enjoy! Copy it into files created in the package directories stated in each code title, so that it corresponds to the JavaDoc information.

Teach a multilayer perceptron the 8-3-8 problem using Backprop

This code snippet provides an introduction to the four most commonly used classes in SNIPE: NeuralNetworkDescriptor, NeuralNetwork, TrainingSampleLesson and ErrorMeasurement. It shows how to create a NeuralNetwork instance of a given size using a NeuralNetworkDescriptor instance, and how to create a small instance of TrainingSampleLesson using its predefined static method getEncoderSampleLesson. Further, it shows how to train the NeuralNetwork instance with backpropagation, how data is propagated through the network and how you can measure the network's error values.

com/dkriesel/snipe/examples/MultilayerPerceptron838ProblemBackprop.java

package com.dkriesel.snipe.examples;
 
import java.text.DecimalFormat;
 
import com.dkriesel.snipe.core.NeuralNetwork;
import com.dkriesel.snipe.core.NeuralNetworkDescriptor;
import com.dkriesel.snipe.neuronbehavior.TangensHyperbolicusAnguita;
import com.dkriesel.snipe.training.ErrorMeasurement;
import com.dkriesel.snipe.training.TrainingSampleLesson;
 
/**
 * Very simple example program that trains an 8-3-8 multilayer perceptron
 * encoder problem with backpropagation of error.
 * 
 * @author David Kriesel / dkriesel.com
 * 
 */
public class MultilayerPerceptron838ProblemBackprop {
 
	/**
	 * Executes the example.
	 * 
	 * @param args
	 *            no args are parsed.
	 */
	public static void main(String[] args) {
 
		/*
		 * Create and configure a descriptor for feed-forward networks without
		 * shortcut connections and with fastprop, identity activity functions
		 * in the input layer and tangens hyperbolicus functions in hidden and
		 * output layers. To learn about fastprop, have a look into the
		 * NeuralNetworkDescriptor documentation.
		 */
		NeuralNetworkDescriptor desc = new NeuralNetworkDescriptor(8, 3, 8);
		desc.setSettingsTopologyFeedForward();
 
		/*
		 * If you want, remove the comment slashes from the following two lines
		 * to use a tuned tangens hyperbolicus approximation that is computed
		 * faster and sometimes provides better learning.
		 */
		// desc.setNeuronBehaviorHiddenNeurons(new TangensHyperbolicusAnguita());
		// desc.setNeuronBehaviorOutputNeurons(new TangensHyperbolicusAnguita());
 
		/*
		 * Create a neural network using the descriptor (we could just as well
		 * generate thousands of similar networks at this point). All synapses
		 * allowed by the default settings in the descriptor are added to the
		 * network automatically.
		 */
		NeuralNetwork net = new NeuralNetwork(desc);
 
		/*
		 * Prepare Training Data: 8-Dimensional encoder problem with 1 as
		 * positive and -1 as negative value
		 */
		TrainingSampleLesson lesson = TrainingSampleLesson
				.getEncoderSampleLesson(8, 1, -1);
 
		/*
		 * Train that sucker with backprop in three phases with different
		 * learning rates. In between, display progress, and measure overall
		 * time.
		 */
		long startTime = System.currentTimeMillis();
		System.out.println("Root Mean Square Error before training:\t"
				+ ErrorMeasurement.getErrorRootMeanSquareSum(net, lesson));
		net.trainBackpropagationOfError(lesson, 250000, 0.2);
		System.out.println("Root Mean Square Error after phase 1:\t"
				+ ErrorMeasurement.getErrorRootMeanSquareSum(net, lesson));
		net.trainBackpropagationOfError(lesson, 250000, 0.05);
		System.out.println("Root Mean Square Error after phase 2:\t"
				+ ErrorMeasurement.getErrorRootMeanSquareSum(net, lesson));
		net.trainBackpropagationOfError(lesson, 250000, 0.01);
		System.out.println("Root Mean Square Error after phase 3:\t"
				+ ErrorMeasurement.getErrorRootMeanSquareSum(net, lesson));
		long endTime = System.currentTimeMillis();
		long time = endTime - startTime;
 
		/*
		 * Print out what the network learned (in a formatted way) and the time
		 * it took.
		 */
		DecimalFormat df = new DecimalFormat("#.#");
		System.out.println("\nNetwork output:");
		for (int i = 0; i < lesson.countSamples(); i++) {
			double[] output = net.propagate(lesson.getInputs()[i]);
			for (int j = 0; j < output.length; j++) {
				System.out.print(df.format(output[j]) + "\t");
			}
			System.out.println("");
		}
 
		System.out.println("\nTime taken: " + time + "ms");
	}
}

Hand-Craft a Network

This example performs several structural operations on a large population of neural networks. In the end, GraphViz DOT code for one of the networks is generated. Below the code, there are pictures that illustrate what happens to the networks from step 2 on. Note that the pictures would normally contain two BIAS neurons each – this is because every layer gets its own bias in order to obtain a better GraphViz layout. I cut one of those out of every picture.

com/dkriesel/snipe/examples/HandCraftNetwork.java

package com.dkriesel.snipe.examples;
 
import com.dkriesel.snipe.core.NeuralNetwork;
import com.dkriesel.snipe.core.NeuralNetworkDescriptor;
import com.dkriesel.snipe.util.GraphVizEncoder;
 
/**
 * Simple example program that shows how to hand-craft even large numbers of
 * networks.
 * 
 * @author David Kriesel / dkriesel.com
 * 
 */
public class HandCraftNetwork {
 
	/**
	 * Executes the example.
	 * 
	 * @param args
	 *            no args are parsed.
	 */
	public static void main(String[] args) {
 
		/*
		 * Create a NeuralNetworkDescriptor outlining networks with two input
		 * neurons, one hidden neuron and two output neurons. Tell the networks
		 * not to automatically initialize the allowed synapses. Tell them to
		 * choose synaptic weight values out of [-0.1;0.1] when initializing
		 * synapse weights randomly.
		 */
		NeuralNetworkDescriptor desc = new NeuralNetworkDescriptor(2, 1, 2);
		desc.setInitializeAllowedSynapses(false);
		desc.setSynapseInitialRange(0.1);
 
		/*
		 * Create ten thousand of those networks.
		 */
		NeuralNetwork[] net = desc.createNeuralNetworks(10000);
 
		/*
		 * for-loop that customizes each of the networks
		 */
		for (int i = 0; i < net.length; i++) {
 
			/*
			 * Step 0: We now have 10000 neural networks with 2 input neurons
			 * (numbered 1,2), one hidden neuron (3) and two output neurons (4
			 * and 5). There are no synapses yet. The input layer is numbered 0,
			 * the hidden layer 1 and the output layer is numbered 2. We will
			 * see that the NeuralNetwork class maintains the ascending
			 * numbering of neurons and layers.
			 */
 
			/*
			 * Step 1: Now, in each network create three additional hidden
			 * neurons. The hidden layer now contains neurons with indices 3, 4,
			 * 5 and 6, while the indices of the output neurons have been
			 * increased.
			 */
			net[i].createNeuronInLayer(1);
			net[i].createNeuronInLayer(1);
			int thirdOne = net[i].createNeuronInLayer(1);
 
			/*
			 * Step 2: Create a synapse from the BIAS to the hidden neuron just
			 * added.
			 */
			net[i].setSynapse(0, thirdOne, 2.0);
 
			/*
			 * Step 3: Remove the hidden neuron that was added last. This
			 * decreases all following neuron indices and removes the incident
			 * synapse we added as well.
			 */
			net[i].removeNeuron(thirdOne);
 
			/*
			 * Step 4: Now, create a full connection set from hidden to output
			 * layer. Connections will be initialized with weight values out of
			 * [-0.1;0.1], as we defined in the descriptor.
			 */
			net[i].createSynapsesFromLayerToLayer(1, 2);
 
			/*
			 * Step 5: Add three forward connections from input layer to hidden
			 * layer.
			 */
			net[i].setSynapse(1, 3, 5.0);
			net[i].setSynapse(1, 4, 2.0);
			net[i].setSynapse(2, 5, 3.0);
 
			/*
			 * Step 6: Add two forward shortcuts from input layer to output
			 * layer.
			 */
			net[i].setSynapse(1, 6, 4.0);
			net[i].setSynapse(2, 7, 5.0);
 
			/*
			 * Step 7: Add two self-connections in hidden layer.
			 */
			net[i].setSynapse(3, 3, 6.0);
			net[i].setSynapse(5, 5, 7.0);
 
			/*
			 * Step 8: Add two lateral connections between the two output
			 * neurons. Now, we're done.
			 */
			net[i].setSynapse(6, 7, 8.0);
			net[i].setSynapse(7, 6, 9.0);
		}
 
		/*
		 * Print out GraphViz code of one network. Weight labels of weak
		 * synapses (namely, those initialized with absolute weights smaller
		 * than or equal to 0.1) are suppressed. Those synapses are printed
		 * lighter as well.
		 */
		GraphVizEncoder graph = new GraphVizEncoder();
		String code = graph.getGraphVizCode(net[0], "NeuralNetwork");
		System.out.println(code);
	}
}

Disclaimer and Terms of Use

SNIPE is provided “as is” and any expressed or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the regents or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software, even if advised of the possibility of such damage.

SNIPE and its documentation are licensed under the Creative Commons Attribution-No Derivative Works 3.0 Unported License, except for some small portions of the work mentioned separately in the JavaDoc. Note that this license does not extend to the source files used to produce the JAR and documentation. Those are still mine (for now).



