ReplicatedFocusingBeliefPropagation

class rfbp.ReplicatedFocusingBeliefPropagation.ReplicatedFocusingBeliefPropagation(mag=<class 'ReplicatedFocusingBeliefPropagation.rfbp.MagP64.MagP64'>, hidden=3, max_iter=1000, seed=135, damping=0.5, accuracy=('accurate', 'exact'), randfact=0.1, epsil=0.1, protocol='pseudo_reinforcement', size=101, nth=1, verbose=False)[source]

Bases: sklearn.base.BaseEstimator, sklearn.base.ClassifierMixin

ReplicatedFocusingBeliefPropagation classifier

Parameters:
  • mag (Enum Mag (default = MagP64)) – Switch between the available magnetization types
  • hidden (int (default = 3)) – Number of hidden units
  • max_iter (int (default = 1000)) – Maximum number of iterations
  • seed (int (default = 135)) – Random seed
  • damping (float (default = 0.5)) – Damping parameter
  • accuracy (pair of strings (default = ('accurate', 'exact'))) – Accuracy of the message computation at the hidden-unit level. Possible values are ('exact', 'accurate', 'approx', 'none')
  • randfact (float (default = 0.1)) – Random factor used for the initialization of the cavity messages
  • epsil (float (default = 0.1)) – Threshold for convergence
  • protocol (string (default = 'pseudo_reinforcement')) – Updating protocol. Possible values are [“scoping”, “pseudo_reinforcement”, “free_scoping”, “standard_reinforcement”]
  • size (int (default = 101)) – Number of updates
  • nth (int (default = max_num_of_cores)) – Number of threads to use in the computation
  • verbose (bool (default = False)) – Enable or disable verbose output on stdout

Example

>>> import numpy as np
>>> from ReplicatedFocusingBeliefPropagation import ReplicatedFocusingBeliefPropagation as rFBP
>>>
>>> N, M = (20, 101) # M must be odd
>>> data = np.random.choice([-1, 1], p=[.5, .5], size=(N, M))
>>> label = np.random.choice([-1, 1], p=[.5, .5], size=(N, ))
>>>
>>> rfbp = rFBP()
>>> rfbp.fit(data, label)
  ReplicatedFocusingBeliefPropagation(randfact=0.1, damping=0.5, accuracy=('accurate', 'exact'), nth=1, epsil=0.1, seed=135, size=101, hidden=3, verbose=False, protocol=pseudo_reinforcement, mag=<class 'ReplicatedFocusingBeliefPropagation.rfbp.MagP64.MagP64'>, max_iter=1000)
>>> predicted_labels = rfbp.predict(data)
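
The estimator can also be configured with non-default hyper-parameters; the sketch below is purely illustrative, uses only the parameters documented above, and the chosen values are not recommendations.

>>> clf = rFBP(hidden=5, max_iter=500, protocol='scoping', nth=2, verbose=True)
>>> clf.fit(data, label)
>>> predicted_labels = clf.predict(data)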

Notes

Note

The input data must be composed of binary variables encoded as [-1, 1], since the model works only with spin-like variables.
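
If the available features are real-valued, they have to be binarized before fitting; a minimal sketch using a per-feature median threshold (the thresholding strategy is only an assumption, any binarization consistent with your data works):

>>> import numpy as np
>>> X_real = np.random.uniform(size=(20, 101))          # real-valued features
>>> X_spin = np.where(X_real > np.median(X_real, axis=0), 1, -1)  # spin encoding in [-1, 1]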

References

    1. C. Baldassi, C. Borgs, J. T. Chayes, A. Ingrosso, C. Lucibello, L. Saglietti, and R. Zecchina. “Unreasonable effectiveness of learning neural networks: From accessible states and robust ensembles to basic algorithmic schemes”, Proceedings of the National Academy of Sciences, 113(48):E7655-E7662, 2016.
    2. C. Baldassi, A. Braunstein, N. Brunel, R. Zecchina. “Efficient supervised learning in networks with binary synapses”, Proceedings of the National Academy of Sciences, 104(26):11079-11084, 2007.
    3. C. Baldassi, F. Gerace, C. Lucibello, L. Saglietti, R. Zecchina. “Learning may need only a few bits of synaptic precision”, Physical Review E, 93, 2016.
    4. D. Dall’Olio, N. Curti, G. Castellani, A. Bazzani, D. Remondini. “Classification of Genome Wide Association data by Belief Propagation Neural network”, CCS Italy, 2019.
fit(X, y=None)[source]

Fit the ReplicatedFocusingBeliefPropagation model according to the given training data

Parameters:
  • X (array-like of shape (n_samples, n_features)) – The training input samples.
  • y (array-like, shape (n_samples,)) – The target values (class labels encoded as [-1, 1] spin variables).
Returns:

self

Return type:

ReplicatedFocusingBeliefPropagation object
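
Example

Since the class derives from sklearn.base.BaseEstimator and sklearn.base.ClassifierMixin, it should plug into the usual scikit-learn workflow; the sketch below with train_test_split assumes this compatibility rather than demonstrating a documented guarantee.

>>> import numpy as np
>>> from sklearn.model_selection import train_test_split
>>> from ReplicatedFocusingBeliefPropagation import ReplicatedFocusingBeliefPropagation as rFBP
>>>
>>> X = np.random.choice([-1, 1], p=[.5, .5], size=(40, 101))  # 101 features (odd)
>>> y = np.random.choice([-1, 1], p=[.5, .5], size=(40, ))
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
>>> clf = rFBP().fit(X_train, y_train)  # fit returns self, so chaining works
>>> predictions = clf.predict(X_test)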

load_weights(weightfile, delimiter='\t', binary=False)[source]

Load weights from file

Parameters:
  • weightfile (string) – Filename of weights
  • delimiter (char) – Separator for ASCII loading
  • binary (bool) – Switch between binary and ASCII loading style
Returns:

self

Return type:

ReplicatedFocusingBeliefPropagation object

Example

>>> from ReplicatedFocusingBeliefPropagation import ReplicatedFocusingBeliefPropagation as rFBP
>>>
>>> clf = rFBP()
>>> clf.load_weights('path/to/weights_filename.csv', delimiter=',', binary=False)
  ReplicatedFocusingBeliefPropagation(randfact=0.1, damping=0.5, accuracy=('accurate', 'exact'), nth=1, epsil=0.1, seed=135, size=101, hidden=3, verbose=False, protocol=pseudo_reinforcement, mag=<class 'ReplicatedFocusingBeliefPropagation.rfbp.MagP64.MagP64'>, max_iter=1000)
predict(X)[source]

Predict the labels computed by the fitted ReplicatedFocusingBeliefPropagation model

Parameters:X (array-like of shape (n_samples, n_features)) – The input samples.
Returns:y – The predicted target values.
Return type:array of shape (n_samples,)
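
Example

A quick check of the predictions against the ground truth, reusing the synthetic data and the fitted rfbp estimator from the class-level example (with random labels the accuracy is expected to hover around chance):

>>> y_pred = rfbp.predict(data)
>>> accuracy = np.mean(np.asarray(y_pred) == label)  # fraction of correctly predicted labels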
save_weights(weightfile, delimiter='\t', binary=False)[source]

Save weights to file

Parameters:
  • weightfile (string) – Filename to dump the weights
  • delimiter (char) – Separator for ASCII dump
  • binary (bool) – Switch between binary and ASCII dumping style

Example

>>> import numpy as np
>>> from ReplicatedFocusingBeliefPropagation import ReplicatedFocusingBeliefPropagation as rFBP
>>>
>>> N, M = (20, 101) # M must be odd
>>> data = np.random.choice([-1, 1], p=[.5, .5], size=(N, M))
>>> label = np.random.choice([-1, 1], p=[.5, .5], size=(N, ))
>>>
>>> rfbp = rFBP()
>>> rfbp.fit(data, label)
>>> rfbp.save_weights('path/to/weights_filename.csv', delimiter=',', binary=False)
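
The dumped file can then be reloaded into a fresh estimator via load_weights; the sketch below assumes that the loaded weights are sufficient for prediction without a further call to fit.

>>> clf = rFBP()
>>> clf.load_weights('path/to/weights_filename.csv', delimiter=',', binary=False)
>>> predicted_labels = clf.predict(data)  # predict with the reloaded weights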