These approaches have the disadvantage that, since ε = 0, the sparseness of the representation has been lost. The type of criterion optimised in all of the algorithms we have considered also arises in many other contexts, all of which lead to a solution with a dual representation. We can express these criteria in the general form

    min_f  ‖f‖²_𝓗 + C Σᵢ L(yᵢ, f(xᵢ)),

where L is a loss function, ‖·‖_𝓗 a regulariser, and C is the regularisation parameter. If L is the square loss, this gives rise to regularisation networks, of which Gaussian processes are a special case. For this type of problem the solution can always be expressed in the dual form. In the next chapter we will describe how these optimisation problems can be solved efficiently, frequently making use of the sparseness of the solution when deriving algorithms for very large datasets.
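As a concrete illustration of the square-loss case, the sketch below solves the resulting regularisation network (kernel ridge regression) directly in its dual form: the dual coefficients come from a single linear solve against the kernel matrix, α = (K + I/C)⁻¹y, and the learned function is f(x) = Σᵢ αᵢ k(xᵢ, x). The Gaussian kernel, the `gamma` width parameter, and the toy data are illustrative assumptions, not taken from the text.

```python
import numpy as np

def rbf_kernel(A, B, gamma=10.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    # The kernel choice and gamma are assumptions for this sketch.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def fit_dual(X, y, C=1e6, gamma=10.0):
    # Square-loss criterion:  min_f ||f||^2_H + C * sum_i (y_i - f(x_i))^2.
    # Setting the gradient to zero gives the dual solution
    #   alpha = (K + I/C)^{-1} y,
    # so the whole problem reduces to one linear system in the kernel matrix.
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, y)

def predict(X_train, alpha, X_new, gamma=10.0):
    # f(x) = sum_i alpha_i k(x_i, x): the solution lives in the span
    # of kernel evaluations at the training points (dual representation).
    return rbf_kernel(X_new, X_train, gamma) @ alpha
```

Note that, unlike the ε-insensitive loss, the square loss gives a dense α (every training point contributes), which is exactly the loss of sparseness the text refers to.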
This note was uploaded on 10/15/2011 for the course MBAHRM 565 taught by Professor Bhattacharya during the Spring '11 term at IIT Kanpur.
