Question

We consider the problem of learning a vector-valued function $f : \mathbb{R}^d \to \mathbb{R}^p$ from input-output training data $(x_1, y_1), \dots, (x_n, y_n)$, where each $x_i$ is a $d$-dimensional vector and each $y_i$ is a $p$-dimensional vector. We choose our hypothesis class to be the set of linear functions from $\mathbb{R}^d$ to $\mathbb{R}^p$, that is, functions satisfying $f(x) = Wx$ for some $p \times d$ regression matrix $W$, and we want to minimize the squared error loss over the training data. Let $\hat{W}$ be the minimizer of the empirical risk:

$$\hat{W} = \operatorname*{argmin}_{W \in \mathbb{R}^{p \times d}} \sum_{i=1}^n \| W x_i - y_i \|_2^2.$$

- Derive a closed-form solution for $\hat{W}$ as a function of the data matrices $X$ and $Y$.
- Show that solving the problem from the previous question is equivalent to independently solving $p$ classical linear regression problems (one for each component of the output vector), and give an example of a multivariate regression task where performing independent regressions for each output variable is not the best thing to do.
- The low-rank regression algorithm addresses the issue described in the previous question by imposing a low-rank constraint on the regression matrix $W$. Intuitively, the low-rank constraint encourages the model to capture linear dependencies among the components of the output vector. Propose an algorithm to minimize the squared error loss over the training data subject to a low-rank constraint on $W$:

  $$\min_{W \in \mathbb{R}^{p \times d}} \sum_{i=1}^n \| W x_i - y_i \|_2^2 \quad \text{subject to} \quad \operatorname{rank}(W) \le R.$$

  (Hint: there are different ways to do this. Leverage the fact that $\operatorname{rank}(W) \le R$ if and only if there exist $A \in \mathbb{R}^{p \times R}$ and $B \in \mathbb{R}^{R \times d}$ such that $W = AB$.)
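For the first two parts, here is a minimal NumPy sketch (variable names are my own) of the standard closed-form solution. Stacking the inputs row-wise into $X \in \mathbb{R}^{n \times d}$ and the outputs into $Y \in \mathbb{R}^{n \times p}$, the normal equations $X^\top X \hat{W}^\top = X^\top Y$ give $\hat{W} = \big((X^\top X)^{-1} X^\top Y\big)^\top$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 200, 5, 3
X = rng.normal(size=(n, d))        # design matrix: one input per row
W_true = rng.normal(size=(p, d))   # ground-truth regression matrix
Y = X @ W_true.T                   # noiseless targets: one output per row

# Normal equations: X^T X W^T = X^T Y, so W_hat = ((X^T X)^{-1} X^T Y)^T.
W_hat = np.linalg.solve(X.T @ X, X.T @ Y).T

# Column j of solve(X^T X, X^T Y) depends only on Y[:, j], so row j of
# W_hat is exactly the scalar regression of the j-th output on X:
w0 = np.linalg.solve(X.T @ X, X.T @ Y[:, 0])
```

The last line illustrates the decoupling asked for in the second part: the multivariate solution is just $p$ independent classical regressions, one per output coordinate.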
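For the low-rank part, one common approach following the hint's factorization $W = AB$ is alternating least squares: fix $A$ and solve for $B$ in closed form, then fix $B$ and solve for $A$, each step being an ordinary least-squares problem. A sketch under those assumptions (the function name, initialization, and iteration count are my own choices, not from the question):

```python
import numpy as np

def low_rank_regression(X, Y, R, n_iters=50, seed=0):
    """Alternating least squares for
        min_{A,B} ||Y - X (A B)^T||_F^2,   A: p x R,  B: R x d,
    so that W = A @ B satisfies rank(W) <= R."""
    p = Y.shape[1]
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(p, R))            # random init (an assumption)
    XtX, XtY = X.T @ X, X.T @ Y
    for _ in range(n_iters):
        # B-step (A fixed): B^T = (X^T X)^{-1} X^T Y A (A^T A)^{-1}
        Bt = np.linalg.solve(XtX, XtY @ A) @ np.linalg.pinv(A.T @ A)
        # A-step (B fixed): ordinary regression of Y on Z = X B^T
        Z = X @ Bt
        A = np.linalg.solve(Z.T @ Z, Z.T @ Y).T
    return A @ Bt.T                        # W = A B

# Sanity check on noiseless data whose true regression matrix has rank <= R.
rng = np.random.default_rng(1)
n, d, p, R = 300, 8, 5, 2
W_true = rng.normal(size=(p, R)) @ rng.normal(size=(R, d))
X = rng.normal(size=(n, d))
Y = X @ W_true.T
W_hat = low_rank_regression(X, Y, R)
```

Each half-step solves its subproblem exactly, so the training loss never increases across iterations. This ALS view matches the hint; reduced-rank regression also admits a direct closed-form solution via an SVD of the fitted values, which is another valid answer to the same question.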
