SLFN Solvers

Background solvers for the Single Layer Feed-forward Network (SLFN) that do all the heavy-lifting computations; separate solvers accelerate computations on particular hardware (GPU, etc.). The interface is defined by the SLFN class.

Use a different solver by passing the optional parameter accelerator to ELM or HPELM, as in the sketch below.
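
A minimal usage sketch (the data shapes here are made up for illustration; accelerator="GPU" selects the Scikit-CUDA solver described below):

    import numpy as np
    from hpelm import ELM

    X = np.random.rand(1000, 20)   # 1000 samples, 20 input features
    T = np.random.rand(1000, 1)    # 1 output

    elm = ELM(20, 1)                         # basic solver
    elm_gpu = ELM(20, 1, accelerator="GPU")  # GPU-accelerated solver
    elm.add_neurons(50, "sigm")
    elm.train(X, T)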

Basic SLFN solver that follows the paper notation and defines the interface for all solvers.

Created on Sun Sep 6 11:18:55 2015 @author: akusok

class hpelm.nnets.slfn.SLFN(inputs, outputs, norm=None, precision=<type 'numpy.float64'>)[source]

Bases: object

Single Layer Feed-forward Network (SLFN), the neural network that ELM trains.

This implementation is not the fastest but very simple, and it defines the interface. It gives correct output; other solvers must produce the same output as this reference implementation.

Parameters:
  • inputs (int) – number of inputs (data features)
  • outputs (int) – number of outputs, or classes for classification
  • norm (double) – output weights normalization parameter (Tikhonov regularization, or ridge regression); larger values give smaller (= better) weights but worse model accuracy
  • precision (Numpy.float32/64) – solver precision; float32 is faster but may be less accurate, and most GPUs are fast only in float32.
neurons

list

a list of different types of neurons, initially empty. One neuron type is a tuple (number of neurons, function type, W, Bias); neurons is a list [neuron_type_1, neuron_type_2, ...], as illustrated below.
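
For illustration, a hypothetical hidden layer with two neuron types could look like this (shapes follow the W and B descriptions of add_neurons() below):

    import numpy as np

    inputs = 20
    # each entry: (count, function name, W of size (inputs, count), bias of size (count,))
    neurons = [
        (50, "sigm", np.random.randn(inputs, 50), np.random.randn(50)),
        (10, "tanh", np.random.randn(inputs, 10), np.random.randn(10)),
    ]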

func

dict

a dictionary of transformation functions; the key is a neuron type (= function name) and the value is the function itself. A single function takes input parameters X, W, B, and outputs the corresponding H for its neuron type, as sketched below.
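
A sketch of what such entries compute, assuming the standard sigmoid and linear forms (X is (N * inputs), W is (inputs * L), B is (L,), so H is (N * L)):

    import numpy as np

    func = {
        "sigm": lambda X, W, B: 1.0 / (1.0 + np.exp(-(X.dot(W) + B))),
        "lin": lambda X, W, B: X.dot(W) + B,
    }
    # H = func["sigm"](X, W, B)  # hidden layer representation for one neuron type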

HH, HT

matrix

intermediate covariance matrices used in the ELM solution. They can be computed and stored in GPU memory by an accelerated SLFN. They are not needed once the ELM is solved, and with a large number of neurons they can take a lot of memory, so they can be deleted with the reset() method. They are omitted when an ELM model is saved.

B

matrix

output solution matrix of SLFN. A trained ELM needs only neurons and B to predict outputs for new input data.

add_batch(X, T, wc=None)[source]

Add a batch of training data to an iterative solution, weighted if needed.

The batch is processed as a whole; the training data is split in the ELM.add_data() method. With parameters HH_out, HT_out, the output is put into those matrices instead of the model. A sketch of the batch update follows the parameter list.

Parameters:
  • X (matrix) – input data matrix size (N * inputs)
  • T (matrix) – output data matrix size (N * outputs)
  • wc (vector) – vector of weights for data samples, one weight per sample, size (N * 1)
  • HH_out, HT_out (matrix) – output matrices to add the batch result into; always given together
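
A minimal sketch of what one batch update accumulates, assuming a single sigmoid neuron type (simplified; the real method handles multiple neuron types, the sample weights wc, and the HH_out/HT_out redirection):

    import numpy as np

    def add_batch_sketch(X, T, W, B, HH, HT):
        """Accumulate correlation matrices over one batch of data."""
        H = 1.0 / (1.0 + np.exp(-(X.dot(W) + B)))  # hidden layer output, (N * L)
        HH += H.T.dot(H)  # (L * L) covariance of the hidden representation
        HT += H.T.dot(T)  # (L * outputs) correlation with the targets
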
add_neurons(number, func, W, B)[source]

Add prepared neurons to the SLFN, merge with existing ones.

Adds a number of neurons of a given type to the SLFN network; weights and biases must be provided for that function.

If neurons of such type already exist, the new ones are merged with them. A usage sketch follows the parameter list.

Parameters:
  • number (int) – the number of new neurons to add
  • func (str) – transformation function of hidden layer. Linear function creates a linear model.
  • W (matrix) – a 2-D matrix of neuron weights, size (inputs * number)
  • B (vector) – a 1-D vector of neuron biases, size (number * 1)
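
A hypothetical usage sketch with random weights, following the shapes given above:

    import numpy as np
    from hpelm.nnets.slfn import SLFN

    nn = SLFN(inputs=20, outputs=1)
    nn.add_neurons(50, "sigm", np.random.randn(20, 50), np.random.randn(50))
    nn.add_neurons(25, "sigm", np.random.randn(20, 25), np.random.randn(25))
    # both calls use the same function type, so the model now holds 75 "sigm" neurons
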
fix_affinity()[source]

Numpy processor core affinity fix.

Fixes a problem where all Numpy processes are pinned to CPU core 0.

get_B()[source]

Return B as a numpy array.

get_corr()[source]

Return current correlation matrices.

get_neurons()[source]

Return current neurons.

Returns: neurons – current neurons in the model
Return type: list of tuples (number/int, func/string, W/matrix, B/vector)
reset()[source]

Resets intermediate training results and releases the memory they use.

Keeps the ELM solution, so a trained ELM remains operational. Can be called to free memory after an ELM is trained.

set_B(B)[source]

Set B as a numpy array.

Parameters: B (matrix) – output layer weights matrix, size (L * outputs)
set_corr(HH, HT)[source]

Set pre-computed correlation matrices. A sketch of combining partial matrices follows the parameter list.

Parameters:
  • HH (matrix) – covariance matrix of hidden layer representation H, size (L * L)
  • HT (matrix) – correlation matrix between H and outputs T, size (L * outputs)
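
Because HH and HT are sums over data samples, partial matrices computed on separate solvers can simply be added before a single solve; a hypothetical sketch (the worker objects and data flow are assumed, not an exact hpelm parallel API):

    # each worker accumulated HH, HT over its own part of the data
    HH1, HT1 = solver_worker1.get_corr()
    HH2, HT2 = solver_worker2.get_corr()

    solver.set_corr(HH1 + HH2, HT1 + HT2)  # combine partial results
    solver.solve()                         # compute output weights B once
    B = solver.get_B()
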
solve()[source]

Redirects to solve_corr, to avoid duplication of code.

solve_corr(HH, HT)[source]

Compute output weights B for given HH and HT.

Simple but inefficient version; see a faster one in slfn_python. The underlying formula is sketched after the parameter list.

Parameters:
  • HH (matrix) – covariance matrix of hidden layer representation H, size (L * L)
  • HT (matrix) – correlation matrix between H and outputs T, size (L * outputs)
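
The underlying computation is the ridge-regression solution B = (HH + norm*I)^-1 * HT; a minimal sketch (the function name and default norm value are assumptions):

    import numpy as np

    def solve_corr_sketch(HH, HT, norm=1e-9):
        """Solve for output weights B with Tikhonov regularization."""
        L = HH.shape[0]
        return np.linalg.solve(HH + norm * np.eye(L), HT)  # (L * outputs)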

This is a fast Python implementation of SLFN.

Created on Sun Sep 6 11:18:55 2015 @author: akusok

class hpelm.nnets.slfn_python.SLFNPython(inputs, outputs, norm=None, precision=<type 'numpy.float64'>)[source]

Bases: hpelm.nnets.slfn.SLFN

Single Layer Feed-forward Network (SLFN), the neural network that ELM trains.

add_batch(X, T, wc=None)[source]

Add a batch using Symmetric Rank-K matrix update for HH.
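
A sketch of the rank-k idea using the BLAS ?syrk routine from Scipy, which fills only one triangle of H^T * H and roughly halves the work (a simplified illustration, not the exact hpelm code):

    import numpy as np
    from scipy.linalg.blas import dsyrk

    H = np.asfortranarray(np.random.rand(1000, 100))    # batch hidden output, (N * L)
    HH_upper = dsyrk(alpha=1.0, a=H, trans=1, lower=0)  # upper triangle of H^T H
    HH = HH_upper + np.triu(HH_upper, 1).T              # full symmetric matrix if needed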

get_corr()[source]

Return current correlation matrices.

solve_corr(HH, HT)[source]

Compute output weights B for given HH and HT.

Faster version of the basic solver in slfn.SLFN.

Parameters:
  • HH (matrix) – covariance matrix of hidden layer representation H, size (L * L)
  • HT (matrix) – correlation matrix between H and outputs T, size (L * outputs)

Nvidia GPU-accelerated solver based on Scikit-CUDA, works without compiling anything.

GPU computations run in asynchronous mode: the GPU processes one batch of data while the CPU prepares the next batch. This loads the GPU to 100% without waiting, making it very fast and efficient. The required Scikit-CUDA is a single-line install on Linux: pip install scikit-cuda. Tested on CUDA 7.

Created on Sat Sep 12 13:10:23 2015 @author: akusok

class hpelm.nnets.slfn_skcuda.SLFNSkCUDA(inputs, outputs, norm=None, precision=<type 'numpy.float64'>)[source]

Bases: hpelm.nnets.slfn.SLFN

Single Layer Feed-forward Network (SLFN) implementation on GPU with pyCUDA.

To choose a specific GPU, use the environment variable CUDA_DEVICE, for example CUDA_DEVICE=0 python myscript1.py & CUDA_DEVICE=1 python myscript2.py.

In single precision, only the upper triangular part of the HH matrix is computed to speed up the method; a GPU-side sketch of computing HH follows.
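
For illustration, computing HH on the GPU with Scikit-CUDA (this sketch uses a plain GEMM via skcuda.linalg.dot rather than the triangular-only update):

    import numpy as np
    import pycuda.autoinit
    import pycuda.gpuarray as gpuarray
    import skcuda.linalg as linalg

    linalg.init()
    H = np.random.rand(1000, 100).astype(np.float32)  # hidden layer output
    H_gpu = gpuarray.to_gpu(H)
    HH_gpu = linalg.dot(H_gpu, H_gpu, transa='T')     # H^T * H computed on the GPU
    HH = HH_gpu.get()                                 # copy the result back to host memory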

add_batch(X, T, wc=None)[source]

Add a batch of training data to an iterative solution, weighted if needed.

The batch is processed as a whole; the training data is split in the ELM.add_data() method. With parameters HH_out, HT_out, the output is put into those matrices instead of the model.

Parameters:
  • X (matrix) – input data matrix size (N * inputs)
  • T (matrix) – output data matrix size (N * outputs)
  • wc (vector) – vector of weights for data samples, one weight per sample, size (N * 1)
  • HH_out, HT_out (matrix) – output matrices to add the batch result into; always given together
add_neurons(number, func, W, B)[source]

Add prepared neurons to the SLFN, merge with existing ones.

Adds a number of neurons of a given type to the SLFN network; weights and biases must be provided for that function.

If neurons of such type already exist, the new ones are merged with them.

Parameters:
  • number (int) – the number of new neurons to add
  • func (str) – transformation function of hidden layer. Linear function creates a linear model.
  • W (matrix) – a 2-D matrix of neuron weights, size (inputs * number)
  • B (vector) – a 1-D vector of neuron biases, size (number * 1)
get_B()[source]

Return B as a numpy array.

get_corr()[source]

Return current correlation matrices.

get_neurons()[source]

Return current neurons.

Returns: neurons – current neurons in the model
Return type: list of tuples (number/int, func/string, W/matrix, B/vector)
reset()[source]

Resets intermediate training results and releases the memory they use.

Keeps the ELM solution, so a trained ELM remains operational. Can be called to free memory after an ELM is trained.

set_B(B)[source]

Set B as a numpy array.

Parameters: B (matrix) – output layer weights matrix, size (L * outputs)
set_corr(HH, HT)[source]

Set pre-computed correlation matrices.

Parameters:
  • HH (matrix) – covariance matrix of hidden layer representation H, size (L * L)
  • HT (matrix) – correlation matrix between H and outputs T, size (L * outputs)
solve()[source]

Compute output weights B, with a fix for an unstable solution; a sketch of one possible fix follows.
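
The docstring does not spell out the fix; one common stabilization strategy, shown here purely as an assumed illustration, is to retry the solve with stronger regularization until the result is finite:

    import numpy as np

    def solve_stable_sketch(HH, HT, norm=1e-9):
        """Retry the regularized solve with a larger norm if B is unstable."""
        L = HH.shape[0]
        while norm < 1e6:
            try:
                B = np.linalg.solve(HH + norm * np.eye(L), HT)
                if np.isfinite(B).all():
                    return B
            except np.linalg.LinAlgError:
                pass
            norm *= 100  # strengthen Tikhonov regularization and retry
        raise RuntimeError("could not obtain a stable solution")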

solve_corr(HH, HT)[source]

Compute output weights B for given HH and HT.

Simple but inefficient version; see a faster one in slfn_python.

Parameters:
  • HH (matrix) – covariance matrix of hidden layer representation H, size (L * L)
  • HT (matrix) – correlation matrix between H and outputs T, size (L * outputs)