2.4.12 Norms and objectives

While we don’t want to get too far ahead of ourselves, we do want you to anticipate why these concepts are useful. In machine learning, we’re often trying to solve optimization problems: maximize the probability assigned to observed data; minimize the distance between predictions and the ground-truth observations; assign vector representations to items (like words, products, or news articles) such that the distance between similar items is minimized, and the distance between dissimilar items is maximized. Oftentimes, these objectives, perhaps the most important components of a machine learning algorithm (besides the data itself), are expressed as norms.

2.4.13 Intermediate linear algebra

If you’ve made it this far, and understand everything that we’ve covered, then honestly, you are ready to begin modeling. If you’re feeling antsy, this is a perfectly reasonable place to move on. You already know nearly all of the linear algebra required to implement many practically useful models, and you can always circle back when you want to learn more.

But there’s a lot more to linear algebra, even as concerns machine learning. At some point, if you plan to make a career of machine learning, you’ll need to know more than we’ve covered so far. In the rest of this chapter, we introduce some useful, more advanced concepts.

Basic vector properties

Vectors are useful beyond being data structures for carrying numbers. In addition to reading and writing values to the components of a vector, and performing some useful mathematical operations, we can analyze vectors in some interesting ways.
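To make the idea of a norm-as-objective concrete, here is a minimal sketch in NumPy. It fits a single scalar weight by gradient descent, where the objective being minimized is the (squared) L2 norm of the vector of residuals between predictions and ground-truth observations. The synthetic data and the learning rate are hypothetical choices for illustration, not anything prescribed by the text.

```python
import numpy as np

# Hypothetical synthetic data: targets follow y = 2x plus a little noise.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + 0.1 * rng.normal(size=100)

w = 0.0   # the scalar weight we want to learn
lr = 0.1  # learning rate (an illustrative choice)

for _ in range(100):
    residual = w * x - y                # predictions minus observations
    # Objective: the squared L2 norm of the residual vector (averaged).
    loss = np.linalg.norm(residual) ** 2 / len(x)
    grad = 2 * (residual @ x) / len(x)  # gradient of the objective w.r.t. w
    w -= lr * grad

print(w)  # after training, w lands near the true slope of 2.0
```

The key point is the middle line: the entire training objective is just a norm of a vector of errors, which is exactly the pattern many machine learning losses follow.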