5.1.4 Calculus and convex optimization

  1. Differentiable functions
    1. [E] What does it mean when a function is differentiable?
    2. [E] Give an example of when a function doesn’t have a derivative at a point.
    3. [M] Give an example of non-differentiable functions that are frequently used in machine learning. How do we do backpropagation if those functions aren’t differentiable?
  2. Convexity
    1. [E] What does it mean for a function to be convex or concave? Draw it.
    2. [E] Why is convexity desirable in an optimization problem?
    3. [M] Show that the cross-entropy loss function is convex.
  3. Given a logistic discriminant classifier:

    $p(y=1|x) = \sigma(w^T x)$

    where the sigmoid function is given by:

    $\sigma(z) = (1 + \exp(-z))^{-1}$

    The logistic loss for a training sample $x_i$ with class label $y_i$ is given by:

    $L(y_i, x_i; w) = -\log p(y_i|x_i)$

    1. Show that $p(y=-1|x) = \sigma(-w^T x)$.
    2. Show that $\nabla_w L(y_i, x_i; w) = -y_i(1 - p(y_i|x_i))\,x_i$ (a numerical sanity check of this formula appears after the list).
    3. Show that $L(y_i, x_i; w)$ is convex.
  4. Most ML algorithms we use nowadays use first-order derivatives (gradients) to construct the next training iteration.

    1. [E] How can we use second-order derivatives for training models?
    2. [M] Pros and cons of second-order optimization.
    3. [M] Why don’t we see more second-order optimization in practice?
  5. [M] How can we use the Hessian (second derivative matrix) to test for critical points?
  6. [E] Jensen’s inequality forms the basis for many algorithms for probabilistic inference, including Expectation-Maximization and variational inference. Explain what Jensen’s inequality is.
  7. [E] Explain the chain rule.
  8. [M] Let $x \in \mathbb{R}^n$ and $L = \text{crossentropy}(\text{softmax}(x), y)$, in which $y$ is a one-hot vector. Take the derivative of $L$ with respect to $x$.
  9. [M] Given the function $f(x, y) = 4x^2 - y$ with the constraint $x^2 + y^2 = 1$, find the function’s maximum and minimum values.
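
The derivations above are left to the reader, but question 3.2 lends itself to a quick numerical sanity check. The sketch below is not from the original text; it assumes the usual $\{-1, +1\}$ label convention, so that $p(y|x) = \sigma(y\, w^T x)$, and compares the closed-form gradient $-y(1 - p(y|x))\,x$ against finite differences.

```python
import numpy as np

# Sketch (not from the original text). Assumes labels y in {-1, +1},
# so p(y|x) = sigmoid(y * w.x) and L = -log p(y|x).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    return -np.log(sigmoid(y * (w @ x)))

rng = np.random.default_rng(0)
w, x, y = rng.normal(size=3), rng.normal(size=3), 1.0

# Closed-form gradient from question 3.2: grad_w L = -y * (1 - p(y|x)) * x
analytic = -y * (1.0 - sigmoid(y * (w @ x))) * x

# Central finite differences, one coordinate at a time
numeric = np.zeros_like(w)
eps = 1e-6
for i in range(len(w)):
    e = np.zeros_like(w)
    e[i] = eps
    numeric[i] = (loss(w + e, x, y) - loss(w - e, x, y)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-6))  # expect True
```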

On convex optimization

Convex optimization is important because it's the only type of optimization that we more or less understand. Some might argue that since many of the common objective functions in deep learning aren't convex, we don't need to know about convex optimization. However, even when the functions aren't convex, analyzing them as if they were convex often gives us meaningful bounds. If an algorithm doesn't work assuming that a loss function is convex, it definitely doesn't work when the loss function is non-convex.

Convexity is the exception, not the rule. If you're asked whether a function is convex and it isn't already in the list of commonly known convex functions, there's a good chance that it isn't convex. If you want to learn about convex optimization, check out Stephen Boyd's textbook.
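
As a quick illustration (mine, not the author's), the second-derivative test settles most simple one-dimensional cases:

$$f(x) = x \log x \;\; (x > 0): \quad f''(x) = \frac{1}{x} > 0 \;\Rightarrow\; \text{convex}, \qquad g(x) = \log x: \quad g''(x) = -\frac{1}{x^2} < 0 \;\Rightarrow\; \text{concave}.$$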


On Hessian matrix

The Hessian matrix or Hessian is a square matrix of second-order partial derivatives of a scalar-valued function.

Given a function $f: \mathbb{R}^n \rightarrow \mathbb{R}$, if all second partial derivatives of $f$ exist and are continuous over the domain of the function, then the Hessian matrix $H$ of $f$ is a square $n \times n$ matrix such that $H_{i,j} = \dfrac{\partial^2 f}{\partial x_i \partial x_j}$:

$$H_f = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial^2 f}{\partial x_n \partial x_1} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2} \end{bmatrix}$$
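
For concreteness, here is a small worked example (not in the original text): for $f(x, y) = x^2 y + y^3$,

$$H_f(x, y) = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x^2} & \dfrac{\partial^2 f}{\partial x \partial y} \\ \dfrac{\partial^2 f}{\partial y \partial x} & \dfrac{\partial^2 f}{\partial y^2} \end{bmatrix} = \begin{bmatrix} 2y & 2x \\ 2x & 6y \end{bmatrix}.$$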

The Hessian is used in large-scale optimization problems within Newton-type and quasi-Newton methods. It is also commonly used to express image processing operators in computer vision, for tasks such as blob detection and multi-scale signal representation.
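
To make the optimization use concrete, here is a minimal Python sketch (mine, using a hypothetical toy function, not anything from the original text) of a single Newton step, followed by the Hessian eigenvalue test for classifying a critical point (question 5 above):

```python
import numpy as np

# Sketch, not from the original text. Toy quadratic f(x, y) = x^2 + 3xy + 5y^2,
# whose gradient and Hessian are written out by hand below.

def grad(v):
    x, y = v
    return np.array([2 * x + 3 * y, 3 * x + 10 * y])

def hess(v):
    # Constant Hessian, since f is quadratic
    return np.array([[2.0, 3.0],
                     [3.0, 10.0]])

# One Newton step: v_next = v - H^{-1} grad(v).
# For a quadratic with a positive-definite Hessian, a single step reaches the minimizer.
v = np.array([1.0, -1.0])
v_next = v - np.linalg.solve(hess(v), grad(v))
print(v_next)  # expect (approximately) [0. 0.]

# Second-derivative test at the critical point: inspect the Hessian's eigenvalues.
eigvals = np.linalg.eigvalsh(hess(v_next))
if np.all(eigvals > 0):
    print("local minimum")         # all eigenvalues positive
elif np.all(eigvals < 0):
    print("local maximum")         # all eigenvalues negative
elif np.any(eigvals > 0) and np.any(eigvals < 0):
    print("saddle point")          # mixed signs
else:
    print("test is inconclusive")  # some eigenvalues are zero
```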
