6.2 Complexity and numerical analysis


Given that most of the recent breakthroughs in machine learning come from bigger models that require massive memory and computational power, it’s important not only to know how to implement a model but also to know how to scale it. To scale a model, we need to be able to estimate its memory requirements and computational cost, as well as mitigate numerical instability when training and serving it. Here are some questions that can be asked to evaluate your understanding of numerical stability and scalability.

  1. Matrix multiplication
    1. [E] You have three matrices: $A \in \mathbb{R}^{100 \times 5}$, $B \in \mathbb{R}^{5 \times 200}$, $C \in \mathbb{R}^{200 \times 20}$, and you need to calculate the product $ABC$. In what order would you perform your multiplication and why?
    2. [M] Now you need to calculate the product of $n$ matrices $A_1 A_2 \cdots A_n$. How would you determine the order in which to perform the multiplication? (A dynamic-programming sketch follows this list.)
  2. [E] What are some of the causes for numerical instability in deep learning?
  3. [E] In many machine learning techniques (e.g. batch norm), we often see a small term $\epsilon$ added to the calculation. What’s the purpose of that term? (A short demo follows this list.)
  4. [E] What made GPUs popular for deep learning? How do they compare to TPUs?
  5. [M] What does it mean when we say a problem is intractable?
  6. [H] What are the time and space complexity for doing backpropagation on a recurrent neural network?
  7. [H] Is knowing a model’s architecture and its hyperparameters enough to calculate the memory requirements for that model?
  8. [H] Your model works fine on a single GPU but gives poor results when you train it on 8 GPUs. What might be the cause of this? What would you do to address it?
  9. [H] What benefits do we get from reducing the precision of our model? What problems might we run into, and how would you solve them? (A mixed-precision sketch follows this list.)
  10. [H] How would you calculate the average of 1M floating-point numbers with minimal loss of precision? (See the compensated-summation sketch after this list.)
  11. [H] How should we implement batch normalization if a batch is spread out over multiple GPUs? (A PyTorch pointer follows this list.)
  12. [M] Consider the following code snippet. What might be a problem with it, and how would you improve it? Hint: this is an actual question asked on StackOverflow. (A vectorized sketch follows the snippet.)
import numpy as np

def within_radius(a, b, radius):
    # Return 1 if points a and b are within `radius` of each other, else 0.
    if np.linalg.norm(a - b) < radius:
        return 1
    return 0

def make_mask(volume, roi, radius):
    # Mark every voxel of `volume` that lies within `radius` of the point `roi`.
    mask = np.zeros(volume.shape)
    for x in range(volume.shape[0]):
        for y in range(volume.shape[1]):
            for z in range(volume.shape[2]):
                mask[x, y, z] = within_radius((x, y, z), roi, radius)
    return mask
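
For question 1, the order matters because multiplying an $m \times n$ matrix by an $n \times p$ matrix costs roughly $mnp$ scalar multiplications: with the dimensions above, $A(BC)$ costs $5 \cdot 200 \cdot 20 + 100 \cdot 5 \cdot 20 = 30{,}000$ multiplications, while $(AB)C$ costs $500{,}000$. For a general chain, the standard tool is the matrix-chain dynamic program. A minimal sketch (the function name and the `dims` encoding are illustrative):

def matrix_chain_cost(dims):
    # dims[i] and dims[i + 1] are the dimensions of matrix i, so a chain of
    # n matrices is described by n + 1 numbers. cost[i][j] holds the minimal
    # number of scalar multiplications for the product of matrices i..j.
    n = len(dims) - 1
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # length of the sub-chain
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)        # k is the split point
            )
    return cost[0][n - 1]

# With the dimensions from question 1, the best order is A(BC):
print(matrix_chain_cost([100, 5, 200, 20]))  # -> 30000

This runs in $O(n^3)$ time and $O(n^2)$ space; keeping a table of split points alongside `cost` recovers the optimal parenthesization itself.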
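
For question 3, batch norm normalizes activations as $\hat{x} = (x - \mu) / \sqrt{\sigma^2 + \epsilon}$, and the question is about the role of $\epsilon$. A tiny NumPy demo of what happens to the denominator when a feature has zero variance:

import numpy as np

x = np.zeros(4, dtype=np.float32)  # a constant feature: its variance is exactly 0
eps = 1e-5

unstable = (x - x.mean()) / np.sqrt(x.var())        # 0 / 0 -> nan, with a runtime warning
stable = (x - x.mean()) / np.sqrt(x.var() + eps)    # finite: 0 / sqrt(eps) -> 0.0
print(unstable, stable)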
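
For question 9, one common mitigation for reduced-precision problems is automatic mixed precision with dynamic loss scaling. A sketch using PyTorch’s `torch.cuda.amp` (the toy model, data, and hyperparameters are made up, and a CUDA device is assumed):

import torch

# Hypothetical toy setup; the autocast/GradScaler pattern is the point.
model = torch.nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()  # dynamic loss scaling

inputs = torch.randn(32, 128, device="cuda")
targets = torch.randint(0, 10, (32,), device="cuda")

optimizer.zero_grad()
with torch.cuda.amp.autocast():  # run eligible ops in float16
    loss = loss_fn(model(inputs), targets)
scaler.scale(loss).backward()    # scale the loss so float16 gradients don't underflow
scaler.step(optimizer)           # unscales gradients first; skips the step on inf/nan
scaler.update()                  # adapt the scale factor for the next iteration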
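
For question 10, naively accumulating a sum in floating point loses low-order bits once the running total dwarfs each new term. Kahan (compensated) summation carries that rounding error forward; a sketch (`kahan_mean` is an illustrative name):

def kahan_mean(xs):
    # Compensated summation: `comp` accumulates the low-order bits that the
    # running total would otherwise round away.
    total = 0.0
    comp = 0.0
    for x in xs:
        y = x - comp
        t = total + y
        comp = (t - total) - y  # the part of y that didn't make it into t
        total = t
    return total / len(xs)

Python’s `math.fsum` and the pairwise summation used inside NumPy’s `np.sum` tackle the same problem.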
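
For question 11, the usual fix is synchronized batch norm, which aggregates the mean and variance across devices before normalizing. In PyTorch this is a one-line conversion (shown standalone here; in practice it is applied before wrapping the model in `DistributedDataParallel`):

import torch

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3),
    torch.nn.BatchNorm2d(8),
)
# Swap every BatchNorm layer for SyncBatchNorm so that, under
# torch.distributed, batch statistics are reduced across all GPUs.
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)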
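
For question 12, one direction an answer might take: the triple Python loop calls `within_radius` once per voxel, paying Python-level overhead millions of times, while NumPy broadcasting can compute every distance at once. A sketch, assuming `roi` is an array-like of three coordinates (`make_mask_vectorized` is an illustrative name):

import numpy as np

def make_mask_vectorized(volume, roi, radius):
    # Open coordinate grids broadcast against each other to volume.shape
    # without materializing a full coordinate array per axis.
    x, y, z = np.ogrid[:volume.shape[0], :volume.shape[1], :volume.shape[2]]
    dist_sq = (x - roi[0]) ** 2 + (y - roi[1]) ** 2 + (z - roi[2]) ** 2
    # Compare squared distances to avoid a square root per voxel.
    return (dist_sq < radius ** 2).astype(float)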
