Hardware Classes¶
template class xf::data_analytics::classification::logisticRegressionPredict¶
#include "logisticRegression.hpp"
Overview¶
Logistic regression predict.
Parameters:
MType | Data type of the regression; double and float are supported. |
D | Number of features processed each cycle. |
DDepth | DDepth * D is the maximum number of features supported. |
K | Number of weight vectors processed each cycle. |
KDepth | KDepth * K is the maximum number of weight vectors supported. |
RAMWeight | Type of RAM used to store the weights; can be LUTRAM, BRAM, or URAM. |
RAMIntercept | Type of RAM used to store the intercepts; can be LUTRAM, BRAM, or URAM. |
template <
    typename MType,
    int D,
    int DDepth,
    int K,
    int KDepth,
    RAMType RAMWeight,
    RAMType RAMIntercept
    >
class logisticRegressionPredict

// fields
static const int marginDepth
sl2 <MType, D, DDepth, K, KDepth, &funcMul <MType>, &funcSum <MType>, &funcAssign <MType>, AdditionLatency <MType>::value, RAMWeight, RAMIntercept> marginProcessor
pickMaxProcess <MType, K> pickProcessor
Methods¶
pickFromK¶
void pickFromK ( MType margin [K], ap_uint <32> counter, ap_uint <32> ws, MType& maxMargin, ap_uint <32>& maxIndex )
Pick the best weight vector for classification from K candidates.
Parameters:
margin | K margins generated by K weight vectors. |
counter | Start index of these K margins among all margins. |
ws | Total number of margins. |
maxMargin | Maximum of the K margins. |
maxIndex | Index at which the maximum margin sits. |
pick¶
void pick ( hls::stream <MType> marginStrm [K], hls::stream <bool>& eMarginStrm, hls::stream <ap_uint <32>>& retStrm, hls::stream <bool>& eRetStrm, ap_uint <32> ws )
Pick the best weight vector for classification.
Parameters:
marginStrm | Margin streams. To get a vector of L margins, marginStrm will be read (L + K - 1) / K times. Margins 0 to K-1 are read from marginStrm[0] to marginStrm[K-1] the first time, then margins K to 2*K - 1, and so on. If L is not divisible by K, the last round reads in padding data; it is not used and only serves to align the K streams. |
eMarginStrm | End flag of marginStrm. |
retStrm | Classification result stream. |
eRetStrm | End flag of retStrm. |
ws | Number of weight vectors used. |
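The chunked argmax that pick implements can be modeled in plain C++ as follows (a conceptual sketch, not the HLS code; pickMaxIndex is an illustrative name, and the margins arrive as one vector here instead of K streams):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Conceptual model of pick(): scan margins in chunks of K, keeping a
// running maximum. Only the first `ws` margins are valid; any trailing
// padding used to align the K streams is skipped by the bounds check.
template <int K>
uint32_t pickMaxIndex(const std::vector<double>& margins, uint32_t ws) {
    double maxMargin = margins[0]; // assumes ws >= 1
    uint32_t maxIndex = 0;
    for (uint32_t counter = 0; counter < ws; counter += K) {
        for (uint32_t k = 0; k < K && counter + k < ws; ++k) {
            if (margins[counter + k] > maxMargin) {
                maxMargin = margins[counter + k];
                maxIndex = counter + k;
            }
        }
    }
    return maxIndex;
}
```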
predict¶
void predict ( hls::stream <MType> opStrm [D], hls::stream <bool>& eOpStrm, ap_uint <32> cols, ap_uint <32> classNum, hls::stream <ap_uint <32>>& retStrm, hls::stream <bool>& eRetStrm )
Classification function of logistic regression.
Parameters:
opStrm | Feature input streams. To get a vector of L features, opStrm will be read (L + D - 1) / D times. Features 0 to D-1 are read from opStrm[0] to opStrm[D-1] the first time, then features D to 2*D - 1, and so on. If L is not divisible by D, the last round reads in padding data; it is not used and only serves to align the D streams. |
eOpStrm | End flag of opStrm. |
cols | Number of features. |
classNum | Number of classes. |
retStrm | Classification result stream. |
eRetStrm | End flag of retStrm. |
setWeight¶
void setWeight ( MType inputW [K][D][KDepth *DDepth], ap_uint <32> cols, ap_uint <32> classNum )
Set up weight parameters for prediction.
Parameters:
inputW | Weights. |
cols | Number of effective weights. |
classNum | Number of classes. |
setIntercept¶
void setIntercept ( MType inputI [K][KDepth], ap_uint <32> classNum )
Set up intercept parameters for prediction.
Parameters:
inputI | Intercepts; set to zero if not needed. |
classNum | Number of classes. |
template class xf::data_analytics::regression::linearLeastSquareRegressionPredict¶
#include "linearRegression.hpp"
Overview¶
Linear least squares regression predict.
Parameters:
MType | Data type of the regression; double and float are supported. |
D | Number of features processed each cycle. |
DDepth | DDepth * D is the maximum number of features supported. |
RAMWeight | Type of RAM used to store the weights; can be LUTRAM, BRAM, or URAM. |
RAMIntercept | Type of RAM used to store the intercept; can be LUTRAM, BRAM, or URAM. |
template <
    typename MType,
    int D,
    int DDepth,
    RAMType RAMWeight,
    RAMType RAMIntercept
    >
class linearLeastSquareRegressionPredict

// fields
sl2 <MType, D, DDepth, 1, 1, &funcMul <MType>, &funcSum <MType>, &funcAssign <MType>, AdditionLatency <MType>::value, RAMWeight, RAMIntercept> dotMulProcessor
Methods¶
setWeight¶
void setWeight ( MType inputW [D][DDepth], ap_uint <32> cols )
Set up weight parameters for prediction.
Parameters:
inputW | Weights. |
cols | Number of effective weights. |
setIntercept¶
void setIntercept (MType inputI)
Set up the intercept parameter for prediction.
Parameters:
inputI | Intercept; set to zero if not needed. |
predict¶
void predict ( hls::stream <MType> opStrm [D], hls::stream <bool>& eOpStrm, hls::stream <MType> retStrm [1], hls::stream <bool>& eRetStrm, ap_uint <32> cols )
Predict based on the input features and the preset weights and intercept.
Parameters:
opStrm | Feature input streams. To get a vector of L features, opStrm will be read (L + D - 1) / D times. Features 0 to D-1 are read from opStrm[0] to opStrm[D-1] the first time, then features D to 2*D - 1, and so on. If L is not divisible by D, the last round reads in padding data; it is not used and only serves to align the D streams. |
eOpStrm | End flag of opStrm. |
retStrm | Prediction result. |
eRetStrm | End flag of retStrm. |
cols | Number of effective features. |
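The computation behind predict is a dot product plus an intercept. A minimal sketch in plain C++ (linearPredict is an illustrative name; the hardware consumes D features per cycle from opStrm, which is not modeled here):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Conceptual model of linearLeastSquareRegressionPredict::predict():
// y = w . x + intercept over the `cols` effective features.
double linearPredict(const std::vector<double>& w, double intercept,
                     const std::vector<double>& x) {
    double y = intercept;
    for (size_t j = 0; j < x.size(); ++j) y += w[j] * x[j];
    return y;
}
```

The same dot-product core also serves the LASSO and ridge predictors below; those variants differ from this one only in how their weights are trained, not in how prediction is computed.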
template class xf::data_analytics::regression::LASSORegressionPredict¶
#include "linearRegression.hpp"
Overview¶
LASSO regression predict.
Parameters:
MType | Data type of the regression; double and float are supported. |
D | Number of features processed each cycle. |
DDepth | DDepth * D is the maximum number of features supported. |
RAMWeight | Type of RAM used to store the weights; can be LUTRAM, BRAM, or URAM. |
RAMIntercept | Type of RAM used to store the intercept; can be LUTRAM, BRAM, or URAM. |
template <
    typename MType,
    int D,
    int DDepth,
    RAMType RAMWeight,
    RAMType RAMIntercept
    >
class LASSORegressionPredict

// fields
sl2 <MType, D, DDepth, 1, 1, &funcMul <MType>, &funcSum <MType>, &funcAssign <MType>, AdditionLatency <MType>::value, RAMWeight, RAMIntercept> dotMulProcessor
Methods¶
setWeight¶
void setWeight ( MType inputW [D][DDepth], ap_uint <32> cols )
Set up weight parameters for prediction.
Parameters:
inputW | Weights. |
cols | Number of effective weights. |
setIntercept¶
void setIntercept (MType inputI)
Set up the intercept parameter for prediction.
Parameters:
inputI | Intercept; set to zero if not needed. |
predict¶
void predict ( hls::stream <MType> opStrm [D], hls::stream <bool>& eOpStrm, hls::stream <MType> retStrm [1], hls::stream <bool>& eRetStrm, ap_uint <32> cols )
Predict based on the input features and the preset weights and intercept.
Parameters:
opStrm | Feature input streams. To get a vector of L features, opStrm will be read (L + D - 1) / D times. Features 0 to D-1 are read from opStrm[0] to opStrm[D-1] the first time, then features D to 2*D - 1, and so on. If L is not divisible by D, the last round reads in padding data; it is not used and only serves to align the D streams. |
eOpStrm | End flag of opStrm. |
retStrm | Prediction result. |
eRetStrm | End flag of retStrm. |
cols | Number of effective features. |
template class xf::data_analytics::regression::ridgeRegressionPredict¶
#include "linearRegression.hpp"
Overview¶
Ridge regression predict.
Parameters:
MType | Data type of the regression; double and float are supported. |
D | Number of features processed each cycle. |
DDepth | DDepth * D is the maximum number of features supported. |
RAMWeight | Type of RAM used to store the weights; can be LUTRAM, BRAM, or URAM. |
RAMIntercept | Type of RAM used to store the intercept; can be LUTRAM, BRAM, or URAM. |
template <
    typename MType,
    int D,
    int DDepth,
    RAMType RAMWeight,
    RAMType RAMIntercept
    >
class ridgeRegressionPredict

// fields
sl2 <MType, D, DDepth, 1, 1, &funcMul <MType>, &funcSum <MType>, &funcAssign <MType>, AdditionLatency <MType>::value, RAMWeight, RAMIntercept> dotMulProcessor
Methods¶
setWeight¶
void setWeight ( MType inputW [D][DDepth], ap_uint <32> cols )
Set up weight parameters for prediction.
Parameters:
inputW | Weights. |
cols | Number of effective weights. |
setIntercept¶
void setIntercept (MType inputI)
Set up the intercept parameter for prediction.
Parameters:
inputI | Intercept; set to zero if not needed. |
predict¶
void predict ( hls::stream <MType> opStrm [D], hls::stream <bool>& eOpStrm, hls::stream <MType> retStrm [1], hls::stream <bool>& eRetStrm, ap_uint <32> cols )
Predict based on the input features and the preset weights and intercept.
Parameters:
opStrm | Feature input streams. To get a vector of L features, opStrm will be read (L + D - 1) / D times. Features 0 to D-1 are read from opStrm[0] to opStrm[D-1] the first time, then features D to 2*D - 1, and so on. If L is not divisible by D, the last round reads in padding data; it is not used and only serves to align the D streams. |
eOpStrm | End flag of opStrm. |
retStrm | Prediction result. |
eRetStrm | End flag of retStrm. |
cols | Number of effective features. |
template class xf::data_analytics::common::SGDFramework¶
#include "SGD.hpp"
Overview¶
Stochastic Gradient Descent framework.
Parameters:
Gradient | Gradient class that fits into this framework. |
template <typename Gradient>
class SGDFramework

// direct descendants
template < typename MType, int WAxi, int WData, int BurstLen, int D, int DDepth, RAMType RAMWeight, RAMType RAMIntercept, RAMType RAMAvgWeight, RAMType RAMAvgIntercept > class xf::data_analytics::regression::internal::LASSORegressionSGDTrainer
template < typename MType, int WAxi, int WData, int BurstLen, int D, int DDepth, RAMType RAMWeight, RAMType RAMIntercept, RAMType RAMAvgWeight, RAMType RAMAvgIntercept > class xf::data_analytics::regression::internal::linearLeastSquareRegressionSGDTrainer
template < typename MType, int WAxi, int WData, int BurstLen, int D, int DDepth, RAMType RAMWeight, RAMType RAMIntercept, RAMType RAMAvgWeight, RAMType RAMAvgIntercept > class xf::data_analytics::regression::internal::ridgeRegressionSGDTrainer

// typedefs
typedef Gradient::DataType MType

// fields
static const int WAxi
static const int D
static const int Depth
ap_uint <32> offset
ap_uint <32> rows
ap_uint <32> cols
ap_uint <32> bucketSize
float fraction
bool ifJump
MType stepSize
MType tolerance
bool withIntercept
ap_uint <32> maxIter
Gradient gradProcessor
Methods¶
seedInitialization¶
void seedInitialization (ap_uint <32> seed)
Initialize RNG for sampling data.
Parameters:
seed | Seed for the RNG. |
setTrainingConfigs¶
void setTrainingConfigs ( MType inputStepSize, MType inputTolerance, bool inputWithIntercept, ap_uint <32> inputMaxIter )
Set configs for SGD iteration.
Parameters:
inputStepSize | Step size of the SGD iteration. |
inputTolerance | Convergence tolerance of SGD. |
inputWithIntercept | Whether SGD includes an intercept. |
inputMaxIter | Maximum number of SGD iterations. |
setTrainingDataParams¶
void setTrainingDataParams ( ap_uint <32> inputOffset, ap_uint <32> inputRows, ap_uint <32> inputCols, ap_uint <32> inputBucketSize, float inputFraction, bool inputIfJump )
Set configs for loading training data.
Parameters:
inputOffset | Offset of the data in DDR. |
inputRows | Number of rows of training data. |
inputCols | Number of features of training data. |
inputBucketSize | Bucket size for jump sampling. |
inputFraction | Sample fraction. |
inputIfJump | Whether to perform jump sampling. |
initGradientParams¶
void initGradientParams (ap_uint <32> cols)
Set initial weights to zeros.
Parameters:
cols | Number of features. |
calcGradient¶
void calcGradient (ap_uint <WAxi>* ddr)
Calculate the gradient of the current weights.
Parameters:
ddr | Training data. |
updateParams¶
bool updateParams (ap_uint <32> iterationIndex)
Update the weights and intercept based on the gradient.
Parameters:
iterationIndex | Iteration index. |
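Put together, the framework's methods drive a standard SGD loop: calcGradient evaluates the gradient of the current weights, updateParams applies the step and checks convergence against the tolerance, and the loop stops at maxIter. A scalar sketch in plain C++ (sgdMinimize is an illustrative name, and the quadratic objective below stands in for the framework's actual gradient computation):

```cpp
#include <cassert>
#include <cmath>

// Minimize f(w) = (w - 3)^2 by gradient descent, mirroring the
// calcGradient / updateParams / convergence-check cycle of the
// framework on a one-dimensional toy problem.
double sgdMinimize(double w0, double stepSize, double tolerance, int maxIter) {
    double w = w0;
    for (int iter = 0; iter < maxIter; ++iter) {
        double grad = 2.0 * (w - 3.0);   // calcGradient: df/dw
        double update = stepSize * grad; // updateParams: step along -grad
        w -= update;
        if (std::fabs(update) < tolerance) break; // converged early
    }
    return w;
}
```

With a well-chosen step size the iterate converges geometrically toward the minimizer; too large a step size makes the update diverge, which is why inputStepSize and inputTolerance are exposed as configs.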