HPELM

Created on Mon Oct 27 17:48:33 2014

@author: akusok

class hpelm.hp_elm.HPELM(inputs, outputs, classification='', w=None, batch=1000, accelerator=None, precision='double', norm=None, tprint=5)[source]

Bases: hpelm.elm.ELM

Interface for training High-Performance Extreme Learning Machines (HP-ELM).

Parameters:
  • inputs (int) – dimensionality of input data, or number of data features
  • outputs (int) – dimensionality of output data, or number of classes
  • classification ('c'/'wc'/'ml', optional) – train ELM for classification (‘c’), weighted classification (‘wc’) or multi-label classification (‘ml’). For weighted classification you can provide class weights in w. ELM will compute and use the corresponding classification error instead of Mean Squared Error.
  • w (vector, optional) – weights vector for weighted classification, length (outputs * 1).
  • batch (int, optional) – batch size for data processing in ELM, reduces memory requirements. Does not work for model structure selection (validation, cross-validation, Leave-One-Out). Can be changed later directly as a class attribute.
  • accelerator (string, optional) – type of accelerated ELM to use: None, ‘GPU’, ...
  • precision (optional) – data precision to use, supports single (‘single’, ‘32’ or numpy.float32) or double (‘double’, ‘64’ or numpy.float64) precision. Single precision is faster but may cause numerical errors; most GPUs work in single precision. Default: double.
  • norm (double, optional) – L2-normalization parameter; None gives the default value.
  • tprint (int, optional) – ELM reports its progress every tprint seconds or after every batch, whichever takes longer.
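
A minimal usage sketch; the file names, neuron count and data sizes below are illustrative, not part of the API:

    from hpelm import HPELM

    # 10 input features, 3 output classes
    model = HPELM(10, 3, classification='c', batch=1000)
    model.add_neurons(100, 'sigm')  # 100 sigmoid hidden neurons

    # 'X.h5' and 'T.h5' are hypothetical HDF5 files with inputs and targets
    model.train('X.h5', 'T.h5', 'c')
    model.predict('X.h5', 'Y.h5')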

Class attributes; attributes that simply store initialization or train() parameters are omitted.

nnet

object

Implementation of the neural network with computational methods, but without complex logic. Different implementations are given by different classes: for Python, for GPU, etc. See the hpelm.nnets folder for the particular files. You can implement your own computational algorithm by inheriting from hpelm.nnets.SLFN and overriding some methods.

flist

list of strings

Available types of neurons; use them when adding new neurons.

Note

The ‘hdf5’ type denotes a name of HDF5 file type with a single 2-dimensional array inside. HPELM uses PyTables interface to HDF5: http://www.pytables.org/. For HDF5 array examples, see http://www.pytables.org/usersguide/libref/homogenous_storage.html. Array name is irrelevant, but there must be only one array per HDF5 file.

A 2-dimensional Numpy.ndarray can also be used.
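
A sketch of creating such a file with PyTables; the file and array names are arbitrary, as noted above:

    import numpy
    import tables

    X = numpy.random.rand(1000, 10)
    with tables.open_file('X.h5', 'w') as f:
        # a single 2-dimensional array per file; its name does not matter
        f.create_array(f.root, 'data', X)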

add_data(fX, fT, istart=0, icount=inf, fHH=None, fHT=None)[source]

Feed new training data (X,T) to the HP-ELM model in batches; this does not solve the ELM itself.

This method prepares the intermediate solution data, which takes most of the time. After that, obtaining the final solution is fast.

The intermediate solution consists of two matrices: HH and HT. They can be in memory for a model computed at once, or stored on disk for a model computed in parts or in parallel.

For an iterative solution, provide file names for on-disk matrices in the input parameters fHH and fHT. They will be created if they don’t exist, or new results will be merged with the existing ones. This method is multiprocess-safe for parallel writing into the files fHH and fHT, which allows you to easily compute ELM in parallel; see the sketch after the parameter list. The multiprocess safety uses the Python module ‘fasteners’ and lock files named fHH+’.lock’ and fHT+’.lock’.

Parameters:
  • fX (hdf5) – (part of) input training data size (N * inputs)
  • fT (hdf5) – (part of) output training data, size (N * outputs)
  • istart (int, optional) – index of the first data sample to use from fX, istart < N. If not given, all data from fX is used. The sample with index istart is included in training; indexing is 0-based.
  • icount (int, optional) – number of data samples to use from fX, starting from istart, automatically adjusted so that istart + icount <= N. If not given, all data starting from istart is used. The last sample used for training is istart + icount - 1, so you can index data as: istart_1=0, icount_1=1000; istart_2=1000, icount_2=1000; istart_3=2000, icount_3=1000, ...
  • fHH, fHT (string, optional) – file names for storing the HH and HT matrices. Files are created if they don’t exist, or the new result is merged into the existing files. Parallel writing to the same fHH, fHT files is multiprocess-safe, made specially for parallel training of HP-ELM. Another use is to split a very long training of a huge ELM into smaller parts, so the training can be interrupted and resumed later.
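
A sketch of feeding data in parts with on-disk intermediate matrices; the file names, chunk size and total sample count are illustrative:

    # assumes 'model' is an HPELM instance with neurons already added,
    # and 'X.h5'/'T.h5' hold N = 3000 training samples
    for istart in (0, 1000, 2000):
        model.add_data('X.h5', 'T.h5', istart=istart, icount=1000,
                       fHH='HH.h5', fHT='HT.h5')

Each call merges its partial result into HH.h5 and HT.h5, so the three calls could equally run in separate processes in parallel thanks to the lock files.
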
add_data_async(fX, fT, istart=0, icount=inf, fHH=None, fHT=None)[source]

Version of add_data() with asynchronous I/O. See add_data() for reference.

Spawns new processes using Python’s multiprocessing module, and requires more memory than non-async version.

error(fT, fY, istart=0, icount=inf)[source]

Calculate the error of HPELM model predictions.

Computes Mean Squared Error (MSE) between model predictions Y and true outputs T. For classification, computes the mis-classification error. For multi-label classification, all classes with Y>0.5 are considered predicted as correct.

For weighted classification the error is an average weighted True Positive Rate, or percentage of correctly predicted samples for each class, multiplied by weight of that class and averaged. If you want something else, just write it yourself :) See https://en.wikipedia.org/wiki/Confusion_matrix for details.

Parameters:
  • fT (hdf5) – hdf5 filename with true outputs
  • fY (hdf5) – hdf5 filename with predicted outputs
  • istart (int, optional) – index of the first data sample to use, istart < N. If not given, all data is used. Indexing is 0-based.
  • icount (int, optional) – number of data samples to use, starting from istart, automatically adjusted so that istart + icount <= N. If not given, all data starting from istart is used. The last sample used is istart + icount - 1, so you can index data as: istart_1=0, icount_1=1000; istart_2=1000, icount_2=1000; istart_3=2000, icount_3=1000, ...
Returns:

e – MSE for regression / classification error for classification.

Return type:

double
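
For example, after predictions have been saved to disk (file names hypothetical):

    err = model.error('T.h5', 'Y.h5')                           # whole dataset
    err1 = model.error('T.h5', 'Y.h5', istart=0, icount=1000)   # first 1000 samples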

predict(fX, fY=None, istart=0, icount=inf)[source]

Iteratively predict outputs and save them to HDF5; a custom range of samples can be used.

Parameters:
  • fX (hdf5) – hdf5 filename or Numpy matrix with input data from which outputs are predicted
  • fY (hdf5) – hdf5 filename or Numpy matrix to store output data into; if None, a Numpy matrix is generated automatically.
  • istart (int, optional) – index of the first data sample to use from fX, istart < N. If not given, all data from fX is used. The sample with index istart is included; indexing is 0-based.
  • icount (int, optional) – number of data samples to use from fX, starting from istart, automatically adjusted so that istart + icount <= N. If not given, all data starting from istart is used. The last sample used is istart + icount - 1, so you can index data as: istart_1=0, icount_1=1000; istart_2=1000, icount_2=1000; istart_3=2000, icount_3=1000, ...
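
A short sketch, assuming predict() returns the generated Numpy matrix when fY is None (file names hypothetical):

    Y = model.predict('X.h5')                                 # in-memory result
    model.predict('X.h5', 'Y.h5', istart=2000, icount=1000)   # samples 2000..2999 to disk
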
predict_async(fX, fY, istart=0, icount=inf)[source]

Version of predict() with asynchronous I/O. See predict() for reference.

Spawns new processes using Python’s multiprocessing module, and requires more memory than non-async version.

project(fX, fH=None, istart=0, icount=inf)[source]

Iteratively project input data from HDF5 into the HPELM hidden layer, and save it in another HDF5 file.

Parameters:
  • fX (hdf5) – hdf5 filename or Numpy matrix with input data to project
  • fH (hdf5) – hdf5 filename or Numpy matrix to store the projected inputs; if None, a Numpy matrix is generated automatically.
  • istart (int, optional) – index of the first data sample to use from fX, istart < N. If not given, all data from fX is used. The sample with index istart is included; indexing is 0-based.
  • icount (int, optional) – number of data samples to use from fX, starting from istart, automatically adjusted so that istart + icount <= N. If not given, all data starting from istart is used. The last sample used is istart + icount - 1, so you can index data as: istart_1=0, icount_1=1000; istart_2=1000, icount_2=1000; istart_3=2000, icount_3=1000, ...
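
For example, assuming project() returns the generated Numpy matrix when fH is None (file names hypothetical):

    H = model.project('X.h5')        # hidden-layer representation in memory
    model.project('X.h5', 'H.h5')    # or saved to disk
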
solve_corr(fHH, fHT)[source]

Solves an ELM model with the given (covariance) fHH and (correlation) fHT HDF5 files.

Parameters:
  • fHH (hdf5) – an hdf5 file with intermediate solution data
  • fHT (hdf5) – an hdf5 file with intermediate solution data
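
Continuing the add_data() sketch above, the accumulated matrices yield the final model (file names hypothetical):

    model.solve_corr('HH.h5', 'HT.h5')
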
train(fX, fT, *args, **kwargs)[source]

Universal training interface for HP-ELM model.

Always trains a basic ELM model without model structure selection. L2-regularization is available as the norm parameter at HPELM initialization. Selecting the number of neurons with a validation set for a trained HPELM is available in the train_hpv() method.

Parameters:
  • fX (hdf5) – input data on disk, size (N * inputs)
  • fT (hdf5) – outputs data on disk, size (N * outputs)
  • 'c'/'wc'/'ml' (string, choose one) – train HPELM for classification (‘c’), classification with weighted classes (‘wc’) or multi-label classification (‘ml’) with several correct classes per data sample. In classification, the number of outputs is the number of classes; the correct class(es) for each sample have value 1 and incorrect classes have 0.
Keyword Arguments:
 
  • istart (int, optional) – index of the first data sample to use from fX, istart < N. If not given, all data from fX is used. The sample with index istart is included in training; indexing is 0-based.
  • icount (int, optional) – number of data samples to use from fX, starting from istart, automatically adjusted so that istart + icount <= N. If not given, all data starting from istart is used. The last sample used for training is istart + icount - 1, so you can index data as: istart_1=0, icount_1=1000; istart_2=1000, icount_2=1000; istart_3=2000, icount_3=1000, ...
  • batch (int, optional) – batch size for ELM, overrides the batch size from initialization
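
A sketch of training on a slice of the data with a custom batch size (file names and sizes hypothetical):

    model.train('X.h5', 'T.h5', 'c', istart=0, icount=10000, batch=5000)
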
train_async(fX, fT, *args, **kwargs)[source]

Training HPELM with asynchronous I/O, good for network drives, etc. See train() for reference.

Spawns new processes using Python’s multiprocessing module.

validation_corr(fHH, fHT, fXv, fTv, steps=10)[source]

Quick batch error evaluation with different numbers of neurons on a validation set.

The only feasible implementation of model structure selection for HP-ELM. This method makes a single pass over the validation data, computing errors for all numbers of neurons at once. It requires HDF5 files with the matrices HH and HT: fHH and fHT, obtained from the add_data(..., fHH, fHT) method.

The method writes the best solution to the HPELM model.

Parameters:
  • fHH (string) – name of HDF5 file with HH matrix
  • fHT (string) – name of HDF5 file with HT matrix
  • fXv (string) – name of HDF5 file with validation dataset inputs
  • fTv (string) – name of HDF5 file with validation dataset outputs
  • steps (int or vector) – number of different neuron counts to test, chosen uniformly on a logarithmic scale from 3 to the number of neurons in HPELM. Can also be given explicitly as a vector.
Returns:
  • Ls (vector) – numbers of neurons tested by the validation_corr() method
  • errs (vector) – corresponding errors for the numbers of neurons in Ls, with classification error if the model is run for classification
  • confs (list of matrices) – confusion matrices corresponding to the elements in Ls (empty for regression)
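
A sketch of the full model structure selection workflow (file names hypothetical):

    model.add_data('X.h5', 'T.h5', fHH='HH.h5', fHT='HT.h5')
    Ls, errs, confs = model.validation_corr('HH.h5', 'HT.h5',
                                            'Xv.h5', 'Tv.h5', steps=10)
    # the model now holds the best solution found
    print(list(zip(Ls, errs)))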