8.1.2 Questions

  1. [E] What are the basic assumptions to be made for linear regression?
  2. [E] What happens if we don’t apply feature scaling to logistic regression?
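
A minimal sketch for question 2, assuming scikit-learn and a synthetic dataset: it compares the lbfgs solver's iteration count with and without standardization after one feature's scale is artificially inflated.

```python
# Sketch: effect of feature scaling on logistic regression convergence.
# Assumes scikit-learn; the dataset and the inflated column are made up for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X[:, 0] *= 1000.0  # blow up one feature's scale to mimic unscaled raw data

for name, features in [("raw", X), ("scaled", StandardScaler().fit_transform(X))]:
    clf = LogisticRegression(solver="lbfgs", max_iter=100)
    clf.fit(features, y)
    # n_iter_ shows how hard the optimizer had to work; badly scaled features
    # typically need more iterations (or fail to converge within max_iter).
    print(name, "iterations:", clf.n_iter_[0], "accuracy:", clf.score(features, y))
```
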
  3. [E] What are the algorithms you’d use when developing the prototype of a fraud detection model?
  4. Feature selection.
    1. [E] Why do we use feature selection?
    2. [M] What are some of the algorithms for feature selection? Pros and cons of each.
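
A small sketch of three common families of feature-selection methods (filter, wrapper, embedded), assuming scikit-learn and a synthetic dataset; the specific choices here (f_classif, RFE around logistic regression, L1 regularization) are only illustrative.

```python
# Sketch of three common feature-selection approaches on the same data.
# Assumes scikit-learn; the dataset is synthetic.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

# Filter method: score each feature independently (fast, ignores interactions).
filter_idx = SelectKBest(f_classif, k=5).fit(X, y).get_support(indices=True)

# Wrapper method: recursively drop features using a model (slower, model-aware).
wrapper_idx = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5).fit(X, y).get_support(indices=True)

# Embedded method: L1 regularization zeroes out weak features during training.
l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
embedded_idx = (l1.coef_[0] != 0).nonzero()[0]

print(filter_idx, wrapper_idx, embedded_idx)
```
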
  5. k-means clustering.

    1. [E] How would you choose the value of k?
    2. [E] If the labels are known, how would you evaluate the performance of your k-means clustering algorithm?
    3. [M] How would you do it if the labels aren’t known?
    4. [H] Given the following dataset, can you predict how K-means clustering works on it? Explain.

      (Figure: the dataset for this k-means clustering question.)
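
For sub-questions 1-3 above, a minimal sketch assuming scikit-learn and a toy blob dataset: sweep k and look at inertia (the elbow heuristic) and the silhouette score when labels are unknown, and the adjusted Rand index when they are known.

```python
# Sketch: choosing k and evaluating k-means, with and without ground-truth labels.
# Assumes scikit-learn; make_blobs stands in for a real dataset.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score, silhouette_score

X, y_true = make_blobs(n_samples=500, centers=4, random_state=0)

for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(
        f"k={k}",
        f"inertia={km.inertia_:.1f}",                          # elbow heuristic: look for the bend
        f"silhouette={silhouette_score(X, km.labels_):.3f}",   # needs no labels
        f"ARI={adjusted_rand_score(y_true, km.labels_):.3f}",  # only if labels are known
    )
```
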
  6. k-nearest neighbor classification.
    1. [E] How would you choose the value of k?
    2. [E] What happens when you increase or decrease the value of k?
    3. [M] How does the value of k impact the bias and variance?
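
A small sketch for question 6, assuming scikit-learn and synthetic data: it reads the bias/variance trade-off off the gap between train and test accuracy as k grows.

```python
# Sketch: how k affects k-NN bias/variance, read off train vs. test accuracy.
# Assumes scikit-learn; the synthetic dataset is only for illustration.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1000, n_features=10, flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for k in (1, 5, 25, 125):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    # k=1 usually fits the training data (near-)perfectly but generalizes worse (high variance);
    # very large k over-smooths the decision boundary (high bias).
    print(f"k={k:3d} train={knn.score(X_train, y_train):.2f} test={knn.score(X_test, y_test):.2f}")
```
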
  7. k-means and GMM are both powerful clustering algorithms.

    1. [M] Compare the two.
    2. [M] When would you choose one over another?

      Hint: Here’s an example of how K-means and GMM algorithms perform on the artificial mouse dataset.

      (Figure: k-means clustering vs. Gaussian mixture model on the mouse dataset. Image from Mohamad Ghassany’s course on Machine Learning.)
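
To make the comparison concrete, a sketch assuming scikit-learn, with unequal-size, unequal-spread blobs as a rough stand-in for the mouse dataset:

```python
# Sketch: k-means vs. a Gaussian mixture on clusters of unequal size and spread,
# roughly imitating the mouse dataset. Assumes scikit-learn.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture

# One big "face" cluster and two small "ears": sizes and spreads differ on purpose.
X, y_true = make_blobs(
    n_samples=[600, 100, 100],
    centers=[(0, 0), (-3, 3), (3, 3)],
    cluster_std=[2.0, 0.5, 0.5],
    random_state=0,
)

km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
gmm_labels = GaussianMixture(n_components=3, random_state=0).fit(X).predict(X)

# k-means implicitly assumes equally sized, spherical clusters, so it tends to carve up
# the big cluster; the GMM can fit different covariances and usually matches better here.
print("k-means ARI:", adjusted_rand_score(y_true, km_labels))
print("GMM ARI:   ", adjusted_rand_score(y_true, gmm_labels))
```
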
  8. Bagging and boosting are two popular ensemble methods. Random forest is an example of bagging, while XGBoost is an example of boosting.
    1. [M] What are some of the fundamental differences between bagging and boosting algorithms?
    2. [M] How are they used in deep learning?
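
A minimal sketch contrasting the two on the same data, assuming scikit-learn; scikit-learn's GradientBoostingClassifier stands in for XGBoost here.

```python
# Sketch: bagging (random forest) vs. boosting (gradient boosting) on the same data.
# Assumes scikit-learn; swap in XGBoost/LightGBM for the boosting side if installed.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

bagging = RandomForestClassifier(n_estimators=200, random_state=0)        # independent trees on bootstrap samples
boosting = GradientBoostingClassifier(n_estimators=200, random_state=0)   # sequential trees fit to residual errors

for name, model in [("bagging (RF)", bagging), ("boosting (GB)", boosting)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```
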
  9. Given this directed graph.
    (Figure: the directed graph.)
    1. [E] Construct its adjacency matrix.
    2. [E] How would this matrix change if the graph is now undirected?
    3. [M] What can you say about the adjacency matrices of two isomorphic graphs?
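
Since the figure isn't reproduced here, a sketch with a hypothetical edge list (numpy) showing how the directed adjacency matrix is built, how symmetrizing it gives the undirected version, and how isomorphism shows up as a row/column permutation:

```python
# Sketch: adjacency matrix of a directed graph, then its undirected counterpart.
# The edge list below is hypothetical; substitute the edges from the figure.
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]  # (source, destination) pairs -- made up

A = np.zeros((n, n), dtype=int)
for src, dst in edges:
    A[src, dst] = 1          # directed: row = source, column = destination

A_undirected = ((A + A.T) > 0).astype(int)  # undirected: the matrix becomes symmetric

print(A)
print(A_undirected)
# For two isomorphic graphs, the adjacency matrices are related by relabeling the
# vertices, i.e. A2 = P @ A1 @ P.T for some permutation matrix P.
```
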
  10. Imagine we build a user-item collaborative filtering system to recommend to each user items similar to the items they’ve bought before.
    1. [M] You can build either a user-item matrix or an item-item matrix. What are the pros and cons of each approach?
    2. [E] How would you handle a new user who hasn’t made any purchases in the past?
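
A small sketch of the item-item route, assuming numpy/scikit-learn and a made-up purchase matrix; the comment at the end relates to the cold-start issue in sub-question 2.

```python
# Sketch: item-item similarity from a hypothetical user-item purchase matrix.
# Assumes numpy and scikit-learn; rows = users, columns = items, 1 = purchased.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

R = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
])  # 3 users x 4 items, made-up data

item_sim = cosine_similarity(R.T)   # item-item similarity matrix: (n_items, n_items)

# Score unseen items for user 0 by summing similarities to the items they bought.
bought = R[0] > 0
scores = item_sim[:, bought].sum(axis=1)
scores[bought] = -np.inf            # don't re-recommend purchased items
print("recommend item:", scores.argmax())

# A brand-new user has an all-zero row, so similarity-based scores are all zero:
# fall back to popularity, content features, or onboarding questions (cold start).
```
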
  11. [E] Is feature scaling necessary for kernel methods?
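
A sketch for question 11, assuming scikit-learn: distance-based kernels such as the RBF are sensitive to feature scales, which this makes visible by inflating one feature.

```python
# Sketch: RBF-kernel SVM with vs. without feature scaling. Distances drive the kernel,
# so one large-scale feature can dominate. Assumes scikit-learn and synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X[:, 0] *= 1000.0  # exaggerate one feature's scale

for name, model in [
    ("unscaled", SVC(kernel="rbf")),
    ("scaled", make_pipeline(StandardScaler(), SVC(kernel="rbf"))),
]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```
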
  12. Naive Bayes classifier.

    1. [E] How is the Naive Bayes classifier naive?
    2. [M] Let’s try to construct a Naive Bayes classifier to classify whether a tweet has a positive or negative sentiment. We have four training samples:

      Tweet                               Label
      This makes me so upset              Negative
      This puppy makes me happy           Positive
      Look at this happy hamster          Positive
      No hamsters allowed in my house     Negative

    According to your classifier, what's the sentiment of the sentence "The hamster is upset with the puppy"?
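
A sketch that reproduces the setup with scikit-learn's bag-of-words multinomial Naive Bayes (the interview answer itself would be worked out by hand with Bayes' rule and Laplace smoothing); note that the default tokenizer treats "hamster" and "hamsters" as different words, which matters for the query sentence.

```python
# Sketch: the four training tweets with a bag-of-words multinomial Naive Bayes.
# Assumes scikit-learn; alpha=1.0 corresponds to Laplace smoothing.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = [
    "This makes me so upset",
    "This puppy makes me happy",
    "Look at this happy hamster",
    "No hamsters allowed in my house",
]
labels = ["Negative", "Positive", "Positive", "Negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB(alpha=1.0))
model.fit(tweets, labels)

query = "The hamster is upset with the puppy"
print(model.predict([query])[0])
print(dict(zip(model.classes_, model.predict_proba([query])[0])))
```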

  13. Two popular algorithms for winning Kaggle solutions are LightGBM and XGBoost. They are both gradient boosting algorithms.

    1. [E] What is gradient boosting?
    2. [M] What problems is gradient boosting good for?
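
A hand-rolled sketch of what gradient boosting does for squared error, assuming scikit-learn's decision trees: each tree is fit to the current residuals (the negative gradient) and added with a learning rate.

```python
# Sketch: gradient boosting by hand for squared error -- each tree fits the
# current residuals and is added with a small learning rate. Assumes scikit-learn.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)

prediction = np.full_like(y, y.mean())   # start from a constant model
learning_rate, trees = 0.1, []

for _ in range(100):
    residuals = y - prediction                        # negative gradient of squared error
    tree = DecisionTreeRegressor(max_depth=3).fit(X, residuals)
    prediction += learning_rate * tree.predict(X)     # take a small step toward the residuals
    trees.append(tree)                                # keep the ensemble for inference

print("training MSE:", float(np.mean((y - prediction) ** 2)))
```
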
  14. SVM.

    1. [E] What’s linear separation? Why is it desirable when we use SVM?
    2. [M] How well would vanilla SVM work on this dataset?

      (Figure: dataset for this sub-question.)
    3. [M] How well would vanilla SVM work on this dataset?

      (Figure: dataset for this sub-question.)
    4. [M] How well would vanilla SVM work on this dataset?

      (Figure: dataset for this sub-question.)
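
Since the three figures aren't reproduced here, a sketch assuming scikit-learn and two toy datasets (one linearly separable, one not) that shows where a vanilla linear SVM does and doesn't work well:

```python
# Sketch: vanilla (linear) SVM on a linearly separable vs. a non-separable dataset.
# Assumes scikit-learn; these toy datasets only stand in for the figures above.
from sklearn.datasets import make_blobs, make_circles
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

datasets = {
    "well-separated blobs": make_blobs(n_samples=400, centers=2, cluster_std=1.0, random_state=0),
    "concentric circles": make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0),
}

for name, (X, y) in datasets.items():
    linear = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
    rbf = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
    # A linear SVM needs a separating hyperplane; on the circles it can't do much
    # better than chance, while a kernel (or a feature transform) handles it easily.
    print(f"{name}: linear={linear:.2f}, rbf={rbf:.2f}")
```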

