SHOGUN
4.1.0
Multiple Kernel Learning.
A support vector machine based method for use with multiple kernels. In Multiple Kernel Learning (MKL), the kernel weights \(\bf\beta\) are estimated during training in addition to the SVM coefficients \(\bf\alpha\) and the bias term \(b\). The resulting kernel method can be stated as
\[ f({\bf x})=\sum_{i=0}^{N-1} \alpha_i \sum_{j=1}^M \beta_j k_j({\bf x}, {\bf x_i})+b . \]
where \(N\) is the number of training examples, \(\alpha_i\) are the weights assigned to each training example, \(\beta_j\) are the weights assigned to each sub-kernel, \(k_j(x,x')\) are the sub-kernels, and \(b\) is the bias.
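As a usage illustration (not part of the generated class reference), the following C++ sketch trains a 1-norm MKL classifier on a combined kernel. It assumes the SHOGUN 4.1 modular API (init_shogun_with_defaults, CDenseFeatures, CCombinedFeatures, CCombinedKernel, CGaussianKernel, CMKLClassification); the tiny data set is a placeholder, and CCombinedKernel::get_subkernel_weights is assumed to return the learned \(\bf\beta\) as an SGVector.
```cpp
#include <shogun/base/init.h>
#include <shogun/features/DenseFeatures.h>
#include <shogun/features/CombinedFeatures.h>
#include <shogun/labels/BinaryLabels.h>
#include <shogun/kernel/CombinedKernel.h>
#include <shogun/kernel/GaussianKernel.h>
#include <shogun/classifier/mkl/MKLClassification.h>

using namespace shogun;

int main()
{
    init_shogun_with_defaults();

    // tiny placeholder data set: four 2-dimensional points with +/-1 labels
    SGMatrix<float64_t> X(2, 4);
    float64_t xvals[] = {-1.0, -1.0, -1.2, -0.8, 1.0, 1.0, 0.8, 1.2};
    for (int32_t i = 0; i < 8; i++)
        X.matrix[i] = xvals[i];
    SGVector<float64_t> y(4);
    y[0] = -1; y[1] = -1; y[2] = 1; y[3] = 1;

    CDenseFeatures<float64_t>* feats = new CDenseFeatures<float64_t>(X);
    CBinaryLabels* labels = new CBinaryLabels(y);

    // one feature object per sub-kernel
    CCombinedFeatures* cfeats = new CCombinedFeatures();
    cfeats->append_feature_obj(feats);
    cfeats->append_feature_obj(feats);

    // two Gaussian sub-kernels with different widths; beta is learned by MKL
    CCombinedKernel* ckernel = new CCombinedKernel();
    ckernel->append_kernel(new CGaussianKernel(10, 0.5));
    ckernel->append_kernel(new CGaussianKernel(10, 2.0));
    ckernel->init(cfeats, cfeats);

    CMKLClassification* mkl = new CMKLClassification();
    mkl->set_mkl_norm(1.0);   // 1-norm MKL: sparse kernel weights
    mkl->set_C(1.0, 1.0);
    mkl->set_kernel(ckernel);
    mkl->set_labels(labels);
    mkl->train();

    // learned kernel weights beta (assumed accessor on CCombinedKernel)
    SGVector<float64_t> beta = ckernel->get_subkernel_weights();
    beta.display_vector("beta");

    SG_UNREF(mkl);
    exit_shogun();
    return 0;
}
```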
Kernels have to be chosen a priori. In MKL, \(\alpha_i\), \(\beta\) and the bias are determined by solving the following optimization program:
\begin{eqnarray*} \mbox{min} && \gamma-\sum_{i=1}^N\alpha_i\\ \mbox{w.r.t.} && \gamma\in R, \alpha\in R^N \nonumber\\ \mbox{s.t.} && {\bf 0}\leq\alpha\leq{\bf 1}C,\;\;\sum_{i=1}^N \alpha_i y_i=0 \nonumber\\ && \frac{1}{2}\sum_{i,j=1}^N \alpha_i \alpha_j y_i y_j k_k({\bf x}_i,{\bf x}_j)\leq \gamma,\;\; \forall k=1,\ldots,K\nonumber\\ \end{eqnarray*}
where \(C\) is a pre-specified regularization parameter.
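For intuition (a hedged restatement that is not part of the generated reference): the constraints on \(\gamma\) are the epigraph form of the largest per-kernel quadratic term, so the program corresponds, up to the sign of the objective, to learning a convex combination of the given kernels via the saddle-point problem
\begin{eqnarray*} \min_{\beta}\;\max_{\alpha} && \sum_{i=1}^N\alpha_i-\frac{1}{2}\sum_{i,j=1}^N\alpha_i\alpha_j y_i y_j\sum_{k=1}^K\beta_k k_k({\bf x}_i,{\bf x}_j)\\ \mbox{s.t.} && {\bf 0}\leq\alpha\leq{\bf 1}C,\;\;\sum_{i=1}^N\alpha_i y_i=0,\;\;\beta_k\geq 0,\;\;\sum_{k=1}^K\beta_k=1\nonumber\\ \end{eqnarray*}
For fixed \(\beta\) the inner maximization is an ordinary SVM dual with the combined kernel \(k({\bf x},{\bf x}')=\sum_{k=1}^K\beta_k k_k({\bf x},{\bf x}')\), and the kernel weights \(\beta\) arise as the multipliers of the \(\gamma\)-constraints.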
Within SHOGUN this optimization problem is solved using semi-infinite programming. For 1-norm MKL, one of the two approaches described in the following paper is used:
Soeren Sonnenburg, Gunnar Raetsch, Christin Schaefer, and Bernhard Schoelkopf. Large Scale Multiple Kernel Learning. Journal of Machine Learning Research, 7:1531-1565, July 2006.
The first approach (also called the wrapper algorithm) wraps around a single-kernel SVM, alternatingly solving for \(\alpha\) and \(\beta\). It uses a traditional SVM to generate new violated constraints, so any single-kernel SVM contained in SHOGUN can be used. In the MKL step, a linear program is solved via glpk or cplex, or \(\beta\) is updated analytically or via a Newton step (for norms > 1).
The second approach, which is much faster but also more memory demanding, performs interleaved optimization and is integrated into the chunking-based SVMlight.
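As a hedged configuration sketch (continuing the C++ example above), the choice between the two strategies is exposed through set_interleaved_optimization_enabled, and the method used for the \(\beta\) step through set_solver_type; the ESolverType values named below are assumptions about Machine.h and should be verified there.
```cpp
#include <shogun/classifier/mkl/MKLClassification.h>

using namespace shogun;

// Continues the training sketch above: pick the MKL training strategy.
void configure_mkl(CMKLClassification* mkl, bool interleaved)
{
    // false: wrapper algorithm around a full single-kernel SVM solve;
    // true:  interleaved beta updates inside the chunking-based SVMlight.
    mkl->set_interleaved_optimization_enabled(interleaved);

    // How the beta step is computed (assumed enum values): ST_DIRECT (analytic),
    // ST_NEWTON (norms > 1), ST_GLPK or ST_CPLEX (linear program via an LP solver).
    mkl->set_solver_type(ST_DIRECT);
}
```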
In addition, the sparsity of MKL can be controlled by the choice of the \(L_p\)-norm used to regularize \(\beta\), as described in:
Marius Kloft, Ulf Brefeld, Soeren Sonnenburg, and Alexander Zien. Efficient and accurate lp-norm multiple kernel learning. In Advances in Neural Information Processing Systems 21. MIT Press, Cambridge, MA, 2009.
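A hedged sketch of the corresponding setter, set_mkl_norm, which is declared in this class (the particular values are illustrative):
```cpp
#include <shogun/classifier/mkl/MKLClassification.h>

using namespace shogun;

// Continues the sketch above: the Lp-norm on beta controls sparsity.
void set_mkl_sparsity(CMKLClassification* mkl, float64_t p)
{
    // p = 1.0   -> sparse kernel weights (many betas exactly zero)
    // p = 2.0   -> non-sparse, smoothly distributed weights
    // 1 < p < 2 -> intermediate sparsity/accuracy trade-off
    mkl->set_mkl_norm(p);
}
```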
An alternative way to control sparsity is elastic-net regularization, which leads to the following optimization problem:
\begin{eqnarray*} \mbox{min} && C\sum_{i=1}^N\ell\left(\sum_{k=1}^Kf_k(x_i)+b,y_i\right)+(1-\lambda)\left(\sum_{k=1}^K\|f_k\|_{\mathcal{H}_k}\right)^2+\lambda\sum_{k=1}^K\|f_k\|_{\mathcal{H}_k}^2\\ \mbox{w.r.t.} && f_1\in\mathcal{H}_1,f_2\in\mathcal{H}_2,\ldots,f_K\in\mathcal{H}_K,\,b\in R \nonumber\\ \end{eqnarray*}
where \(\ell\) is a loss function and \(\lambda\) controls the trade-off between the two regularization terms: \(\lambda=0\) corresponds to \(L_1\)-MKL, whereas \(\lambda=1\) corresponds to the uniformly weighted combination of kernels (\(L_\infty\)-MKL). This approach was studied by Shawe-Taylor, "Kernel Learning for Novelty Detection" (NIPS MKL Workshop 2008), and by Tomioka and Suzuki, "Sparsity-accuracy trade-off in MKL" (NIPS MKL Workshop 2009).
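A hedged sketch of how this mode is selected in code: set_elasticnet_lambda is declared in this class, while ST_ELASTICNET is an assumed ESolverType value that selects the elastic-net \(\beta\) update and should be checked against Machine.h.
```cpp
#include <shogun/classifier/mkl/MKLClassification.h>

using namespace shogun;

// Continues the sketch above: elastic-net regularized MKL.
void use_elasticnet_mkl(CMKLClassification* mkl, float64_t lambda)
{
    mkl->set_solver_type(ST_ELASTICNET);  // assumed solver selector for elastic-net MKL
    mkl->set_elasticnet_lambda(lambda);   // lambda in [0,1]: 0 ~ L1-MKL, 1 ~ uniform weights
}
```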
Static Public Member Functions
static bool | perform_mkl_step_helper (CMKL *mkl, const float64_t *sumw, const float64_t suma) |
static void * | apply_helper (void *p) |
Public Attributes
SGIO * | io |
Parallel * | parallel |
Version * | version |
Parameter * | m_parameters |
Parameter * | m_model_selection_parameters |
Parameter * | m_gradient_parameters |
ParameterMap * | m_parameter_map |
uint32_t | m_hash |
Protected Member Functions
virtual bool | train_machine (CFeatures *data=NULL) |
virtual void | init_training ()=0 |
void | perform_mkl_step (float64_t *beta, float64_t *old_beta, int num_kernels, int32_t *label, int32_t *active2dnum, float64_t *a, float64_t *lin, float64_t *sumw, int32_t &inner_iters) |
float64_t | compute_optimal_betas_via_cplex (float64_t *beta, const float64_t *old_beta, int32_t num_kernels, const float64_t *sumw, float64_t suma, int32_t &inner_iters) |
float64_t | compute_optimal_betas_via_glpk (float64_t *beta, const float64_t *old_beta, int num_kernels, const float64_t *sumw, float64_t suma, int32_t &inner_iters) |
float64_t | compute_optimal_betas_elasticnet (float64_t *beta, const float64_t *old_beta, const int32_t num_kernels, const float64_t *sumw, const float64_t suma, const float64_t mkl_objective) |
void | elasticnet_transform (float64_t *beta, float64_t lmd, int32_t len) |
void | elasticnet_dual (float64_t *ff, float64_t *gg, float64_t *hh, const float64_t &del, const float64_t *nm, int32_t len, const float64_t &lambda) |
float64_t | compute_optimal_betas_directly (float64_t *beta, const float64_t *old_beta, const int32_t num_kernels, const float64_t *sumw, const float64_t suma, const float64_t mkl_objective) |
float64_t | compute_optimal_betas_block_norm (float64_t *beta, const float64_t *old_beta, const int32_t num_kernels, const float64_t *sumw, const float64_t suma, const float64_t mkl_objective) |
float64_t | compute_optimal_betas_newton (float64_t *beta, const float64_t *old_beta, int32_t num_kernels, const float64_t *sumw, float64_t suma, float64_t mkl_objective) |
virtual bool | converged () |
void | init_solver () |
bool | init_cplex () |
void | set_qnorm_constraints (float64_t *beta, int32_t num_kernels) |
bool | cleanup_cplex () |
bool | init_glpk () |
bool | cleanup_glpk () |
bool | check_glp_status (glp_prob *lp) |
virtual float64_t * | get_linear_term_array () |
SGVector< float64_t > | apply_get_outputs (CFeatures *data) |
virtual void | store_model_features () |
virtual bool | is_label_valid (CLabels *lab) const |
virtual bool | train_require_labels () const |
virtual TParameter * | migrate (DynArray< TParameter * > *param_base, const SGParamInfo *target) |
virtual void | one_to_one_migration_prepare (DynArray< TParameter * > *param_base, const SGParamInfo *target, TParameter *&replacement, TParameter *&to_migrate, char *old_name=NULL) |
virtual void | load_serializable_pre () throw (ShogunException) |
virtual void | load_serializable_post () throw (ShogunException) |
virtual void | save_serializable_pre () throw (ShogunException) |
virtual void | save_serializable_post () throw (ShogunException) |
Protected Attributes
CSVM * | svm |
float64_t | C_mkl |
float64_t | mkl_norm |
float64_t | ent_lambda |
float64_t | mkl_block_norm |
float64_t * | beta_local |
int32_t | mkl_iterations |
float64_t | mkl_epsilon |
bool | interleaved_optimization |
float64_t * | W |
float64_t | w_gap |
float64_t | rho |
CTime | training_time_clock |
CPXENVptr | env |
CPXLPptr | lp_cplex |
glp_prob * | lp_glpk |
glp_smcp * | lp_glpk_parm |
bool | lp_initialized |
SGVector< float64_t > | m_linear_term |
bool | svm_loaded |
float64_t | epsilon |
float64_t | tube_epsilon |
float64_t | nu |
float64_t | C1 |
float64_t | C2 |
float64_t | objective |
int32_t | qpsize |
bool | use_shrinking |
bool(* | callback )(CMKL *mkl, const float64_t *sumw, const float64_t suma) |
CMKL * | mkl |
CKernel * | kernel |
CCustomKernel * | m_custom_kernel |
CKernel * | m_kernel_backup |
bool | use_batch_computation |
bool | use_linadd |
bool | use_bias |
float64_t | m_bias |
SGVector< float64_t > | m_alpha |
SGVector< int32_t > | m_svs |
float64_t | m_max_train_time |
CLabels * | m_labels |
ESolverType | m_solver_type |
bool | m_store_model_features |
bool | m_data_locked |
apply machine to data; if data is not specified, apply to the current features
data | (test)data to be classified |
Definition at line 160 of file Machine.cpp.
|
virtualinherited |
apply kernel machine to data for binary classification task
data | (test)data to be classified |
Reimplemented from CMachine.
Definition at line 249 of file KernelMachine.cpp.
apply get outputs
data | features to compute outputs |
Definition at line 255 of file KernelMachine.cpp.
|
staticinherited |
apply example helper, used in threads
p | params of the thread |
Definition at line 425 of file KernelMachine.cpp.
|
virtualinherited |
apply machine to data in means of latent problem
Reimplemented in CLinearLatentMachine.
Definition at line 240 of file Machine.cpp.
Applies a locked machine on a set of indices. Error if machine is not locked
indices | index vector (of locked features) that is predicted |
Definition at line 195 of file Machine.cpp.
|
virtualinherited |
Applies a locked machine on a set of indices. Error if machine is not locked. Binary case
indices | index vector (of locked features) that is predicted |
Reimplemented from CMachine.
Definition at line 519 of file KernelMachine.cpp.
Applies a locked machine on a set of indices. Error if machine is not locked
indices | index vector (of locked features) that is predicted |
Definition at line 532 of file KernelMachine.cpp.
|
virtualinherited |
applies a locked machine on a set of indices for latent problems
Definition at line 274 of file Machine.cpp.
|
virtualinherited |
applies a locked machine on a set of indices for multiclass problems
Definition at line 260 of file Machine.cpp.
|
virtualinherited |
Applies a locked machine on a set of indices. Error if machine is not locked. Binary case
indices | index vector (of locked features) that is predicted |
Reimplemented from CMachine.
Definition at line 525 of file KernelMachine.cpp.
|
virtualinherited |
applies a locked machine on a set of indices for structured problems
Definition at line 267 of file Machine.cpp.
|
virtualinherited |
apply machine to data in means of multiclass classification problem
Reimplemented in CNeuralNetwork, CCHAIDTree, CCARTree, CGaussianProcessClassification, CMulticlassMachine, CKNN, CC45ClassifierTree, CID3ClassifierTree, CDistanceMachine, CVwConditionalProbabilityTree, CGaussianNaiveBayes, CConditionalProbabilityTree, CMCLDA, CQDA, CRelaxedTree, and CBaggingMachine.
Definition at line 228 of file Machine.cpp.
|
virtualinherited |
apply kernel machine to one example
num | which example to apply to |
Reimplemented from CMachine.
Definition at line 406 of file KernelMachine.cpp.
|
virtualinherited |
apply kernel machine to data for regression task
data | (test)data to be classified |
Reimplemented from CMachine.
Definition at line 243 of file KernelMachine.cpp.
|
virtualinherited |
apply machine to data in means of SO classification problem
Reimplemented in CLinearStructuredOutputMachine.
Definition at line 234 of file Machine.cpp.
|
inherited |
Builds a dictionary of all parameters in SGObject as well as those of SGObjects that are parameters of this object. The dictionary maps parameters to the objects that own them.
dict | dictionary of parameters to be built. |
Definition at line 1244 of file SGObject.cpp.
|
protected |
|
protected |
|
protected |
|
virtualinherited |
Creates a clone of the current object. This is done via recursively traversing all parameters, which corresponds to a deep copy. Calling equals on the cloned object always returns true although none of the memory of both objects overlaps.
Definition at line 1361 of file SGObject.cpp.
float64_t compute_elasticnet_dual_objective()
|
virtual |
float64_t compute_mkl_primal_objective()
|
protected |
given the alphas, compute the corresponding optimal betas
beta | new betas (kernel weights) |
old_beta | old betas (previous kernel weights) |
num_kernels | number of kernels |
sumw | 1/2*alpha'*K_j*alpha for each kernel j |
suma | (sum over alphas) |
mkl_objective | the current mkl objective |
|
protected |
given the alphas, compute the corresponding optimal betas
beta | new betas (kernel weights) |
old_beta | old betas (previous kernel weights) |
num_kernels | number of kernels |
sumw | 1/2*alpha'*K_j*alpha for each kernel j |
suma | (sum over alphas) |
mkl_objective | the current mkl objective |
|
protected |
given the alphas, compute the corresponding optimal betas
beta | new betas (kernel weights) |
old_beta | old betas (previous kernel weights) |
num_kernels | number of kernels |
sumw | 1/2*alpha'*K_j*alpha for each kernel j |
suma | (sum over alphas) |
mkl_objective | the current mkl objective |
|
protected |
given the alphas, compute the corresponding optimal betas
beta | new betas (kernel weights) |
old_beta | old betas (previous kernel weights) |
num_kernels | number of kernels |
sumw | 1/2*alpha'*K_j*alpha for each kernel j |
suma | (sum over alphas) |
mkl_objective | the current mkl objective |
|
protected |
given the alphas, compute the corresponding optimal betas using a lp for 1-norm mkl, a qcqp for 2-norm mkl and an iterated qcqp for general q-norm mkl.
beta | new betas (kernel weights) |
old_beta | old betas (previous kernel weights) |
num_kernels | number of kernels |
sumw | 1/2*alpha'*K_j*alpha for each kernel j |
suma | (sum over alphas) |
inner_iters | number of internal iterations (for statistics) |
|
protected |
given the alphas, compute the corresponding optimal betas using a lp for 1-norm mkl
beta | new betas (kernel weights) |
old_beta | old betas (previous kernel weights) |
num_kernels | number of kernels |
sumw | 1/2*alpha'*K_j*alpha for each kernel j |
suma | (sum over alphas) |
inner_iters | number of internal iterations (for statistics) |
|
pure virtual |
compute the beta-independent term of the objective, e.g. \(\sum_i \alpha_i\) in the 2-class MKL case
Implemented in CMKLRegression, CMKLOneClass, and CMKLClassification.
|
virtual |
|
inherited |
|
inherited |
|
protectedvirtual |
|
inherited |
create new model
num | number of alphas and support vectors in new model |
Definition at line 195 of file KernelMachine.cpp.
Locks the machine on given labels and data. After this call, only train_locked and apply_locked may be called.
Computes kernel matrix to speed up train/apply calls
labs | labels used for locking |
features | features used for locking |
Reimplemented from CMachine.
Definition at line 624 of file KernelMachine.cpp.
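A hedged sketch of the locking workflow documented above (the index vectors and surrounding setup are illustrative):
```cpp
#include <shogun/classifier/mkl/MKLClassification.h>
#include <shogun/features/CombinedFeatures.h>
#include <shogun/labels/BinaryLabels.h>

using namespace shogun;

// Pre-compute the kernel once, then train/apply on index subsets
// (e.g. cross-validation folds) without recomputing it.
void locked_train_apply(CMKLClassification* mkl, CBinaryLabels* labels,
                        CCombinedFeatures* features,
                        SGVector<index_t> train_idx, SGVector<index_t> test_idx)
{
    mkl->data_lock(labels, features);     // computes and caches the kernel matrix
    mkl->train_locked(train_idx);         // train on a subset of the locked data
    CBinaryLabels* out = mkl->apply_locked_binary(test_idx);
    SG_UNREF(out);
    mkl->data_unlock();                   // restore the unlocked state
}
```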
|
virtualinherited |
Unlocks a locked machine and restores previous state
Reimplemented from CMachine.
Definition at line 655 of file KernelMachine.cpp.
|
virtualinherited |
A deep copy. All the instance variables will also be copied.
Definition at line 200 of file SGObject.cpp.
Recursively compares the current SGObject to another one. Compares all registered numerical parameters, recursion upon complex (SGObject) parameters. Does not compare pointers!
May be overwritten but please do with care! Should not be necessary in most cases.
other | object to compare with |
accuracy | accuracy to use for comparison (optional) |
tolerant | allows lenient check on float equality (within accuracy) |
Definition at line 1265 of file SGObject.cpp.
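A hedged sketch of the clone/equals pair documented above (the accuracy argument is left at its assumed default):
```cpp
#include <shogun/classifier/mkl/MKLClassification.h>

using namespace shogun;

// Deep-copy a trained machine and verify that all registered parameters match.
void clone_and_compare(CMKLClassification* mkl)
{
    CMKLClassification* copy = (CMKLClassification*) mkl->clone();
    bool same = mkl->equals(copy);        // true immediately after cloning
    SG_SPRINT("clone equals original: %d\n", same);
    SG_UNREF(copy);
}
```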
|
inherited |
get alpha at given index
idx | index of alpha |
Definition at line 141 of file KernelMachine.cpp.
Definition at line 190 of file KernelMachine.cpp.
|
inherited |
check if batch computation is enabled
Definition at line 100 of file KernelMachine.cpp.
|
inherited |
|
inherited |
|
virtualinherited |
get classifier type
Reimplemented in CLaRank, CDualLibQPBMSOSVM, CNeuralNetwork, CCCSOSVM, CLeastAngleRegression, CLDA, CKernelRidgeRegression, CLibLinearMTL, CBaggingMachine, CLibLinear, CGaussianProcessClassification, CKMeans, CLibSVR, CQDA, CGaussianNaiveBayes, CMCLDA, CLinearRidgeRegression, CKNN, CGaussianProcessRegression, CScatterSVM, CSGDQN, CSVMSGD, CSVMOcas, COnlineSVMSGD, CLeastSquaresRegression, CMKLRegression, CDomainAdaptationSVMLinear, CMKLMulticlass, CWDSVMOcas, CHierarchical, CMKLOneClass, CLibSVM, CStochasticSOSVM, CMKLClassification, CLPBoost, CPerceptron, CAveragedPerceptron, CFWSOSVM, CNewtonSVM, CLPM, CGMNPSVM, CSVMLin, CMulticlassLibSVM, CLibSVMOneClass, CMPDSVM, CGPBTSVM, CGNPPSVM, and CCPLEXSVM.
Definition at line 100 of file Machine.cpp.
|
inherited |
|
inherited |
|
inherited |
|
inherited |
bool get_interleaved_optimization_enabled()
|
inherited |
|
virtualinherited |
|
inherited |
check if linadd is enabled
Definition at line 110 of file KernelMachine.cpp.
|
protectedvirtualinherited |
|
virtualinherited |
returns type of problem machine solves
Reimplemented in CNeuralNetwork, CRandomForest, CCHAIDTree, CCARTree, and CBaseMulticlassMachine.
|
inherited |
float64_t get_mkl_epsilon()
int32_t get_mkl_iterations()
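A hedged sketch combining these accessors with the objective helper declared above, to inspect convergence after training:
```cpp
#include <shogun/classifier/mkl/MKLClassification.h>

using namespace shogun;

// Set the MKL stopping tolerance, train, then report the iteration count and
// the primal objective reached (accessor names as declared in this class).
void report_mkl_convergence(CMKLClassification* mkl)
{
    mkl->set_mkl_epsilon(1e-4);
    mkl->train();
    SG_SPRINT("MKL: %d beta updates, primal objective %f\n",
              mkl->get_mkl_iterations(), mkl->compute_mkl_primal_objective());
}
```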
|
inherited |
Definition at line 1136 of file SGObject.cpp.
|
inherited |
Returns description of a given parameter string, if it exists. SG_ERROR otherwise
param_name | name of the parameter |
Definition at line 1160 of file SGObject.cpp.
|
inherited |
Returns index of model selection parameter with provided name
param_name | name of model selection parameter |
Definition at line 1173 of file SGObject.cpp.
|
virtual |
Reimplemented from CSVM.
Reimplemented in CMKLRegression, CMKLOneClass, and CMKLClassification.
|
inherited |
get number of support vectors
Definition at line 170 of file KernelMachine.cpp.
|
inherited |
|
inherited |
|
inherited |
|
inherited |
get support vector at given index
idx | index of support vector |
Definition at line 135 of file KernelMachine.cpp.
|
inherited |
Definition at line 185 of file KernelMachine.cpp.
CSVM* get_svm()
|
inherited |
|
protected |
|
protected |
|
inherited |
initialise kernel optimisation
Definition at line 212 of file KernelMachine.cpp.
|
protected |
|
protectedpure virtual |
check run before starting training (e.g. to check whether the labeling is a two-class labeling in the classification case)
Implemented in CMKLRegression, CMKLOneClass, and CMKLClassification.
|
inherited |
|
virtualinherited |
If the SGSerializable is a class template then TRUE will be returned and GENERIC is set to the type of the generic.
generic | set to the type of the generic if returning TRUE |
Definition at line 298 of file SGObject.cpp.
|
protectedvirtualinherited |
check whether the labels are valid.
Subclasses can override this to implement their check of label types.
lab | the labels being checked, guaranteed to be non-NULL |
Reimplemented in CNeuralNetwork, CCARTree, CCHAIDTree, CGaussianProcessRegression, and CBaseMulticlassMachine.
|
inherited |
|
inherited |
maps all parameters of this instance to the provided file version and loads all parameter data from the file into an array, which is sorted (basically calls load_file_parameter(...) for all parameters and puts all results into a sorted array)
file_version | parameter version of the file |
current_version | version from which mapping begins (you want to use Version::get_version_parameter() for this in most cases) |
file | file to load from |
prefix | prefix for members |
Definition at line 705 of file SGObject.cpp.
|
inherited |
loads some specified parameters from a file with a specified version The provided parameter info has a version which is recursively mapped until the file parameter version is reached. Note that there may be possibly multiple parameters in the mapping, therefore, a set of TParameter instances is returned
param_info | information of parameter |
file_version | parameter version of the file, must be <= provided parameter version |
file | file to load from |
prefix | prefix for members |
Definition at line 546 of file SGObject.cpp.
|
virtualinherited |
Load this object from file. If loading fails (returning FALSE), this object will contain inconsistent data and should not be used!
file | where to load from |
prefix | prefix for members |
param_version | (optional) a parameter version different from the current one (this is mainly for testing; better do not use) |
Definition at line 375 of file SGObject.cpp.
|
protectedvirtualinherited |
Can (optionally) be overridden to post-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that at first the overridden method BASE_CLASS::LOAD_SERIALIZABLE_POST is called.
ShogunException | will be thrown if an error occurs. |
Reimplemented in CKernel, CWeightedDegreePositionStringKernel, CList, CAlphabet, CLinearHMM, CGaussianKernel, CInverseMultiQuadricKernel, CCircularKernel, and CExponentialKernel.
Definition at line 1063 of file SGObject.cpp.
|
protectedvirtualinherited |
Can (optionally) be overridden to pre-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that at first the overridden method BASE_CLASS::LOAD_SERIALIZABLE_PRE is called.
ShogunException | will be thrown if an error occurs. |
Reimplemented in CDynamicArray< T >, CDynamicArray< float64_t >, CDynamicArray< float32_t >, CDynamicArray< int32_t >, CDynamicArray< char >, CDynamicArray< bool >, and CDynamicObjectArray.
Definition at line 1058 of file SGObject.cpp.
|
inherited |
problem type
|
inherited |
Takes a set of TParameter instances (base) with a certain version and a set of target parameter infos and recursively maps the base level wise to the current version using CSGObject::migrate(...). The base is replaced. After this call, the base version containing parameters should be of same version/type as the initial target parameter infos. Note for this to work, the migrate methods and all the internal parameter mappings have to match
param_base | set of TParameter instances that are mapped to the provided target parameter infos |
base_version | version of the parameter base |
target_param_infos | set of SGParamInfo instances that specify the target parameter base |
Definition at line 743 of file SGObject.cpp.
|
protectedvirtualinherited |
creates a new TParameter instance, which contains migrated data from the version that is provided. The provided parameter data base is used for migration; this base is a collection of all parameter data of the previous version. Migration is done FROM the data in param_base TO the provided param info. Migration is always one version step. The method has to be implemented in subclasses; if no match is found, the base method has to be called.
If there is an element in the param_base which equals the target, a copy of the element is returned. This represents the case when nothing has changed and therefore, the migrate method is not overloaded in a subclass
param_base | set of TParameter instances to use for migration |
target | parameter info for the resulting TParameter |
Definition at line 950 of file SGObject.cpp.
|
protectedvirtualinherited |
This method prepares everything for a one-to-one parameter migration. One to one here means that only ONE element of the parameter base is needed for the migration (the one with the same name as the target). Data is allocated for the target (in the type as provided in the target SGParamInfo), and a corresponding new TParameter instance is written to replacement. The to_migrate pointer points to the single needed TParameter instance needed for migration. If a name change happened, the old name may be specified by old_name. In addition, the m_delete_data flag of to_migrate is set to true. So if you want to migrate data, the only thing to do after this call is converting the data in the m_parameter fields. If unsure how to use - have a look into an example for this. (base_migration_type_conversion.cpp for example)
param_base | set of TParameter instances to use for migration |
target | parameter info for the resulting TParameter |
replacement | (used as output) here the TParameter instance which is returned by migration is created into |
to_migrate | the only source that is used for migration |
old_name | with this parameter, a name change may be specified |
Definition at line 890 of file SGObject.cpp.
|
virtualinherited |
Definition at line 264 of file SGObject.cpp.
perform single mkl iteration
given the sum of alphas, the objectives for the current alphas for each kernel, and the current kernel weighting, compute the corresponding optimal kernel weighting (all via get/set_subkernel_weights in CCombinedKernel)
sumw | vector of 1/2*alpha'*K_j*alpha for each kernel j |
suma | scalar sum_i alpha_i etc. |
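To relate these arguments to the formulas at the top of the page (a hedged restatement): with \(\mathrm{sumw}_k=\frac{1}{2}\alpha^\top K_k\alpha\) and \(\mathrm{suma}=\sum_i\alpha_i\), the dual objective under the current kernel weighting \(\beta\) is
\[ \mathrm{suma}-\sum_{k=1}^K\beta_k\,\mathrm{sumw}_k=\sum_{i=1}^N\alpha_i-\frac{1}{2}\,\alpha^\top\Big(\sum_{k=1}^K\beta_k K_k\Big)\alpha , \]
and perform_mkl_step recomputes \(\beta\) from exactly these per-kernel quantities.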
|
protected |
perform single mkl iteration
given the alphas, compute the corresponding optimal betas
beta | new betas (kernel weights) |
old_beta | old betas (previous kernel weights) |
num_kernels | number of kernels |
label | (from svmlight label) |
active2dnum | (from svmlight active2dnum) |
a | (from svmlight alphas) |
lin | (from svmlight linear components) |
sumw | 1/2*alpha'*K_j*alpha for each kernel j |
inner_iters | number of required internal iterations |
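For the interleaved mode, the static helper perform_mkl_step_helper (declared above with the same signature as the callback member) is intended to be registered on the inner SVM. In the sketch below, set_callback_function is an assumption about the CSVM interface and should be checked against the CSVM reference.
```cpp
#include <shogun/classifier/mkl/MKL.h>
#include <shogun/classifier/svm/SVM.h>

using namespace shogun;

// Hedged sketch: let the inner SVM report (sumw, suma) back to MKL after each
// chunking iteration, so beta can be updated in an interleaved fashion.
void register_mkl_callback(CSVM* inner_svm, CMKL* mkl)
{
    // set_callback_function is assumed; CMKL::perform_mkl_step_helper matches
    // the callback signature bool(CMKL*, const float64_t*, const float64_t).
    inner_svm->set_callback_function(mkl, CMKL::perform_mkl_step_helper);
}
```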
|
inherited |
prints all parameters registered for model selection and their types
Definition at line 1112 of file SGObject.cpp.
|
virtualinherited |
prints registered parameters out
prefix | prefix for members |
Definition at line 310 of file SGObject.cpp.
|
inherited |
|
virtualinherited |
Save this object to file.
file | where to save the object; will be closed during returning if PREFIX is an empty string. |
prefix | prefix for members |
param_version | (optional) a parameter version different from the current one (this is mainly for testing; better do not use) |
Definition at line 316 of file SGObject.cpp.
|
protectedvirtualinherited |
Can (optionally) be overridden to post-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that at first the overridden method BASE_CLASS::SAVE_SERIALIZABLE_POST is called.
ShogunException | will be thrown if an error occurs. |
Reimplemented in CKernel.
Definition at line 1073 of file SGObject.cpp.
|
protectedvirtualinherited |
Can (optionally) be overridden to pre-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that at first the overridden method BASE_CLASS::SAVE_SERIALIZABLE_PRE is called.
ShogunException | will be thrown if an error occurs. |
Reimplemented in CKernel, CDynamicArray< T >, CDynamicArray< float64_t >, CDynamicArray< float32_t >, CDynamicArray< int32_t >, CDynamicArray< char >, CDynamicArray< bool >, and CDynamicObjectArray.
Definition at line 1068 of file SGObject.cpp.
|
inherited |
set alpha at given index to given value
idx | index of alpha vector |
val | new value of alpha vector |
Definition at line 160 of file KernelMachine.cpp.
set alphas to given values
alphas | float vector with all alphas to set |
Definition at line 175 of file KernelMachine.cpp.
|
inherited |
set batch computation enabled
enable | if batch computation shall be enabled |
Definition at line 95 of file KernelMachine.cpp.
|
inherited |
|
inherited |
set state of bias
enable_bias | if bias shall be enabled |
Definition at line 115 of file KernelMachine.cpp.
void set_constraint_generator(CSVM* s)
|
inherited |
void set_elasticnet_lambda(float64_t elasticnet_lambda)
|
inherited |
|
inherited |
Definition at line 42 of file SGObject.cpp.
|
inherited |
Definition at line 47 of file SGObject.cpp.
|
inherited |
Definition at line 52 of file SGObject.cpp.
|
inherited |
Definition at line 57 of file SGObject.cpp.
|
inherited |
Definition at line 62 of file SGObject.cpp.
|
inherited |
Definition at line 67 of file SGObject.cpp.
|
inherited |
Definition at line 72 of file SGObject.cpp.
|
inherited |
Definition at line 77 of file SGObject.cpp.
|
inherited |
Definition at line 82 of file SGObject.cpp.
|
inherited |
Definition at line 87 of file SGObject.cpp.
|
inherited |
Definition at line 92 of file SGObject.cpp.
|
inherited |
Definition at line 97 of file SGObject.cpp.
|
inherited |
Definition at line 102 of file SGObject.cpp.
|
inherited |
Definition at line 107 of file SGObject.cpp.
|
inherited |
Definition at line 112 of file SGObject.cpp.
|
inherited |
set generic type to T
|
inherited |
|
inherited |
set the parallel object
parallel | parallel object to use |
Definition at line 243 of file SGObject.cpp.
|
inherited |
set the version object
version | version object to use |
Definition at line 285 of file SGObject.cpp.
void set_interleaved_optimization_enabled(bool enable)
|
inherited |
|
virtualinherited |
set labels
lab | labels |
Reimplemented in CNeuralNetwork, CGaussianProcessMachine, CCARTree, CStructuredOutputMachine, CRelaxedTree, and CMulticlassMachine.
Definition at line 73 of file Machine.cpp.
|
inherited |
set linadd enabled
enable | if linadd shall be enabled |
Definition at line 105 of file KernelMachine.cpp.
|
inherited |
set maximum training time
t | maximum training time |
Definition at line 90 of file Machine.cpp.
void set_mkl_block_norm(float64_t q)
void set_mkl_epsilon(float64_t eps)
void set_mkl_norm(float64_t norm)
|
inherited |
|
inherited |
|
protected |
|
inherited |
|
inherited |
|
inherited |
|
virtualinherited |
Setter for store-model-features-after-training flag
store_model | whether model should be stored after training |
Definition at line 115 of file Machine.cpp.
|
inherited |
set support vector at given index to given value
idx | index of support vector |
val | new value of support vector |
Definition at line 150 of file KernelMachine.cpp.
|
inherited |
set support vectors to given values
svs | integer vector with all support vectors indexes to set |
Definition at line 180 of file KernelMachine.cpp.
void set_svm(CSVM* s)
|
inherited |
|
virtualinherited |
A shallow copy. All the SGObject instance variables will be simply assigned and SG_REF-ed.
Reimplemented in CGaussianKernel.
Definition at line 194 of file SGObject.cpp.
|
protectedvirtualinherited |
Stores feature data of the SV indices and sets it to the lhs of the underlying kernel. Then, all SV indices are set to identity.
May be overwritten by subclasses in case the model should be stored differently.
Reimplemented from CMachine.
Definition at line 454 of file KernelMachine.cpp.
|
virtualinherited |
Reimplemented from CMachine.
Definition at line 713 of file KernelMachine.cpp.
|
virtualinherited |
train machine
data | training data (parameter can be avoided if distance or kernel-based classifiers are used and distance/kernels are initialized with train data). If flag is set, model features will be stored after training. |
Reimplemented in CRelaxedTree, CAutoencoder, CSGDQN, and COnlineSVMSGD.
Definition at line 47 of file Machine.cpp.
Trains a locked machine on a set of indices. Error if machine is not locked
indices | index vector (of locked features) that is used for training |
Reimplemented from CMachine.
Definition at line 483 of file KernelMachine.cpp.
|
protectedvirtual |
|
protectedvirtualinherited |
returns whether the machine requires labels for training
Reimplemented in COnlineLinearMachine, CHierarchical, CLinearLatentMachine, CVwConditionalProbabilityTree, CConditionalProbabilityTree, and CLibSVMOneClass.
|
inherited |
unset generic type
this has to be called in classes specializing a template class
Definition at line 305 of file SGObject.cpp.
|
virtualinherited |
Updates the hash of current parameter combination
Definition at line 250 of file SGObject.cpp.
|
protected |
|
protected |
|
protected |
|
inherited |
io
Definition at line 496 of file SGObject.h.
|
protectedinherited |
kernel
Definition at line 311 of file KernelMachine.h.
coefficients alpha
Definition at line 332 of file KernelMachine.h.
|
protectedinherited |
bias term b
Definition at line 329 of file KernelMachine.h.
|
protectedinherited |
is filled with pre-computed custom kernel on data lock
Definition at line 314 of file KernelMachine.h.
|
protectedinherited |
|
inherited |
parameters wrt which we can compute gradients
Definition at line 511 of file SGObject.h.
|
inherited |
Hash of parameter values
Definition at line 517 of file SGObject.h.
|
protectedinherited |
old kernel is stored here on data lock
Definition at line 317 of file KernelMachine.h.
|
protectedinherited |
|
inherited |
model selection parameters
Definition at line 508 of file SGObject.h.
|
inherited |
map for different parameter versions
Definition at line 514 of file SGObject.h.
|
inherited |
parameters
Definition at line 505 of file SGObject.h.
|
protectedinherited |
|
protectedinherited |
|
protectedinherited |
array of "support vectors" (indices of feature objects)
Definition at line 335 of file KernelMachine.h.
|
protectedinherited |
|
protected |
|
protected |
|
inherited |
parallel
Definition at line 499 of file SGObject.h.
|
protected |
measures training time for use with get_max_train_time()
|
protectedinherited |
|
protectedinherited |
if batch computation is enabled
Definition at line 320 of file KernelMachine.h.
|
protectedinherited |
if bias shall be used
Definition at line 326 of file KernelMachine.h.
|
protectedinherited |
if linadd is enabled
Definition at line 323 of file KernelMachine.h.
|
protectedinherited |
|
inherited |
version
Definition at line 502 of file SGObject.h.