Data-driven calibration of linear estimators with minimal penalties (Sylvain Arlot and Francis Bach)

Goals: choosing the kernel in multiple kernel learning (MKL); choosing the regularization parameter in kernel ridge regression.

Setting: select among multiple linear estimators Ŷ = A_λ Y, λ ∈ Λ.

Penalization approach: λ̂ ∈ argmin_{λ ∈ Λ} { ‖Y − A_λ Y‖² + pen(A_λ) }.
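For concreteness, here is a minimal sketch in Python of this setup, assuming kernel ridge regression so that the linear estimator is the usual smoothing matrix A_λ = K (K + nλI)^{-1}. The Gaussian kernel, the n·λ normalization, and all function names are illustrative assumptions of this sketch, not the authors' code.

```python
# Minimal sketch (illustrative, not the paper's code): kernel ridge regression
# viewed as a family of linear estimators Y_hat = A_lambda @ Y, plus a generic
# penalized selection rule.
import numpy as np

def gaussian_kernel(X, bandwidth=1.0):
    # Assumed kernel: K[i, j] = exp(-||x_i - x_j||^2 / (2 * bandwidth^2))
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def smoothing_matrix(K, lam):
    # Kernel ridge regression fit: Y_hat = K (K + n*lam*I)^{-1} Y = A_lam @ Y
    n = K.shape[0]
    return K @ np.linalg.solve(K + n * lam * np.eye(n), np.eye(n))

def select_lambda(Y, A_family, pen):
    # Penalization approach: argmin over lambda of ||Y - A_lam Y||^2 + pen(A_lam)
    crit = {lam: np.sum((Y - A @ Y) ** 2) + pen(A) for lam, A in A_family.items()}
    return min(crit, key=crit.get)
```

Here A_family would map each candidate λ (or each candidate kernel, in the MKL setting) to its matrix A_λ, and pen is whichever penalty is plugged in below.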

Ideal penalty: pen(A_λ) = 2σ² tr(A_λ) leads to optimal selection, but depends on the unknown noise variance σ².

Minimal penalty: pen(A_λ) = C [2 tr(A_λ) − tr(A_λ²)] leads to a sharp jump in tr(A_{λ̂(C)}) around C = σ².

This allows data-driven estimation of σ² and non-asymptotic oracle inequalities.
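The calibration step above can be sketched as follows, continuing the illustrative setup of the previous snippet: scan a grid of constants C, record tr(A_{λ̂(C)}), estimate σ² at the location of the sharp jump, and then plug σ̂² into the optimal penalty 2σ̂² tr(A_λ). The largest-drop rule for locating the jump and all names are assumptions of this sketch, not the authors' implementation.

```python
# Minimal sketch of minimal-penalty calibration (illustrative assumptions as above).
import numpy as np

def minimal_penalty_calibration(Y, A_family, C_grid):
    """Scan an increasing grid of C, track tr(A_{lambda_hat(C)}), estimate sigma^2."""
    dof = []
    for C in C_grid:
        # lambda_hat(C) minimizes ||Y - A Y||^2 + C * [2 tr(A) - tr(A^2)]
        crit = {lam: np.sum((Y - A @ Y) ** 2)
                     + C * (2.0 * np.trace(A) - np.trace(A @ A))
                for lam, A in A_family.items()}
        lam_hat = min(crit, key=crit.get)
        dof.append(np.trace(A_family[lam_hat]))
    dof = np.array(dof)
    # The sharp jump: largest drop of tr(A_{lambda_hat(C)}) between consecutive
    # grid values; it occurs around C = sigma^2.
    j = int(np.argmax(dof[:-1] - dof[1:]))
    return 0.5 * (C_grid[j] + C_grid[j + 1])

def select_with_optimal_penalty(Y, A_family, sigma2_hat):
    # Plug the estimate into the optimal penalty 2 * sigma^2 * tr(A_lambda).
    crit = {lam: np.sum((Y - A @ Y) ** 2) + 2.0 * sigma2_hat * np.trace(A)
            for lam, A in A_family.items()}
    return min(crit, key=crit.get)
```

Taking the midpoint of the grid interval at the largest drop is just one simple way to read off σ̂²; the point of the result above is that the jump concentrates around C = σ².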