Saturday 20 June 2015

Pattern recognition using MPL, MIL, MCL

This blog is very stimulating, as the author likes to share her explanations of advanced vision algorithms and code.

This time it is about Multi-Instance Learning (MIL) and Multi-Pose Learning (MPL), proposed by Boris Babenko at UC San Diego. MIL refers to learning from a bag of instances of the object, instead of from a single labelled instance; during learning the training samples are effectively adjusted so that they lie in correspondence. MPL, on the other hand, means learning different poses with different classifiers: it separates the data into groups and trains a separate classifier for each group. In other words, divide and conquer.
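
As a quick illustration (my own toy sketch, not code from the paper), the two settings differ mainly in how the labels are organised: MIL attaches one label to a whole bag of instances, while MPL keeps per-sample labels but splits the data into pose groups, each with its own classifier.

```python
import numpy as np

# MIL: each object gives a bag of instances with a single bag-level label.
# The bag is positive if at least one instance inside it is positive.
def bag_label(instance_labels):
    """instance_labels: array of {0, 1} labels for one bag; returns their OR."""
    return int(np.max(instance_labels))

# MPL: split the data into pose groups and train one classifier per group
# (divide and conquer). `train_classifier` is a stand-in for any learner.
def train_per_pose(groups, train_classifier):
    """groups: {pose_name: (X, y)}; returns a classifier for each pose."""
    return {pose: train_classifier(X, y) for pose, (X, y) in groups.items()}
```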



The right figure refers to MIL, where every row constitutes the training samples for one object. The left one refers to MPL, where every row contains different poses. Each color represents a class.


Compared to MPL, MIL is more widely used. Unlike traditional boosting, MPL replaces the single label $y$ with a combination of per-pose labels $y^k$, where $k$ indexes the pose (class); the combined label is 1 as long as at least one $y^k$ is 1, i.e. $y = \max_k y^k$.

The iterative training procedures are similar, except that MPL requires an extra training pass for each label $y^k$.
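
To make this concrete, here is a small sketch of my own (not code from the paper) of how the per-pose outputs could be combined at prediction time: the sample is accepted as soon as a single pose classifier fires, and the firing pose tells you which group explains the sample best.

```python
def mpl_predict(pose_scores, x):
    """pose_scores: a list of functions x -> real-valued score H^k(x),
    one per pose classifier (a hypothetical interface for illustration).
    The MPL decision takes the maximum over poses (y = max_k y^k):
    the sample is positive as soon as one pose classifier fires."""
    scores = [h(x) for h in pose_scores]
    k_best = max(range(len(scores)), key=lambda k: scores[k])
    return scores[k_best] > 0, k_best

# Toy usage with three fake pose classifiers (linear scores on a 1-D feature).
classifiers = [lambda x: x - 1.0, lambda x: -x - 1.0, lambda x: 0.5 * x]
print(mpl_predict(classifiers, 2.0))   # (True, 0): the first pose fires
```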

Prof Cipolla proposed Multi-Class Learning (MCL), where multiple classes are learnt jointly. It uses the Noisy-OR model, in which the overall probability of sample $i$ is $p_i = 1 - \prod_k (1 - p_{ki})$, where $p_{ki}$ is the probability that sample $i$ belongs to class $k$.
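
As a quick sanity check of the Noisy-OR combination (my own sketch with made-up numbers), the combined probability shoots up as soon as a single class is confident, and stays low only when every class is unconfident:

```python
import numpy as np

def noisy_or(p_k):
    """Noisy-OR combination: p = 1 - prod_k (1 - p_k), where p_k is the
    probability that the sample belongs to class k according to classifier k."""
    p_k = np.asarray(p_k, dtype=float)
    return 1.0 - np.prod(1.0 - p_k)

print(noisy_or([0.1, 0.05, 0.9]))   # ~0.91: one confident class dominates
print(noisy_or([0.1, 0.05, 0.02]))  # ~0.16: no class is confident
```
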
The update stage in MCL is clearer than in MPL: a weight $w_{ki}$ is kept for every class $k$ and sample $i$. Note that instead of -1 and 1, MCL uses the labels 0 and 1. As a result, a sample that is negative for a class contributes little to that class in the next round of training, while the positive samples of the class get larger weights. Within each class $k$, the samples that are wrongly classified have their weights increased, similar to traditional boosting.
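
For what it is worth, here is a rough sketch of a weight update of this flavour, derived from the Noisy-OR likelihood under the assumption that each per-class probability is the sigmoid of that class's boosting score; it is my own illustration and not necessarily the exact rule from the paper. Negatives and well-explained positives end up with small weights, while positives the model currently misses get large weights.

```python
import numpy as np

def mcl_weights(p_ki, y):
    """Per-class boosting weights from a Noisy-OR likelihood (a sketch).
    p_ki: array of shape (K, N), probability of sample i under class k,
          assumed to be sigmoid(H^k(x_i)).
    y:    array of shape (N,), labels in {0, 1}.
    With p_i = 1 - prod_k (1 - p_ki), the gradient of the log-likelihood
    with respect to each class's score gives w_ki = p_ki * (y_i - p_i) / p_i."""
    p_ki = np.asarray(p_ki, dtype=float)
    p_i = 1.0 - np.prod(1.0 - p_ki, axis=0)          # Noisy-OR over classes
    return p_ki * (y - p_i) / np.clip(p_i, 1e-12, None)
```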
 
