Exam 1 Part 3
Machine Learning
Question | Answer |
---|---|
What are examples of regression? | Anything that involves predicting a continuous numeric value, e.g., predicting the price of a house given its features. |
What are examples of classification? | Anything that involves assigning items to discrete categories based on their features and attributes, e.g., classifying whether an email is spam or not. |
What algorithms are used for regression? | Linear, ridge, lasso, decision tree, and neural network regression; KNN and SVM in their regression variants. |
What algorithms are used for classification? | Logistic regression, naive Bayes, KNN, decision trees, SVM. |
What are examples of supervised learning? | Optimizing newspaper distribution, predicting how a user will rate an item on Amazon, predicting if a hotel is likely to sell out, predicting how users will rate a movie, predicting viral content. |
What are examples of unsupervised learning? | Google's PageRank; discovering patterns in reading and writing behavior. |
What are examples of reinforcement learning? | Uplift modeling; finding an optimal policy for an environment where the transition probabilities are known. |
What algorithms are used with supervised learning? | Linear and logistic regression, neural networks, LDA, decision trees (for both classification and regression). |
What algorithms are used with unsupervised learning? | K-means clustering, hierarchical (tree-based) clustering, topic models, Gaussian mixture models. |
What algorithms are used with reinforcement learning? | Algorithms that maximize the expected reward conditioned on user attributes and action; they are actionable and can learn live or from logged data, e.g., A/B tests where we can measure the causal effect of an action on an outcome. |
What is cross-validation? | A method of evaluating learning algorithms by repeatedly splitting the dataset into training and testing parts, training on one part and evaluating on the held-out part. |
Why is cross-validation important? | It helps prevent over-fitting by giving an honest estimate of performance on unseen data. |
What is hyperparameter optimization? | Finding the set of hyperparameters (parameters set by the user rather than learned from data) that leads to the best performance on a validation dataset. |
What are the 5 steps of approaching an application for machine learning? | 1. Define the problem to be solved. 2. Collect labeled data. 3. Choose an algorithm class. 4. Choose an optimization metric for the learning model. 5. Choose a metric for evaluating the model. |
What is an objective function in ML? | A real-valued function whose value is to be either minimized or maximized over the set of feasible alternatives. |
What is the role of an objective function? | It encodes the quantity to be maximized or minimized (e.g., profit or loss) subject to a set of constraints, as a function of one or more decision variables. |
What is eager learning? | Begins learning as soon as it receives training data, without waiting for test data; takes a long time learning and less time classifying. |
What is lazy learning? | Stores the dataset without learning from it and only classifies when it receives test data; takes less time learning and more time classifying. |
What is a parametric learning algorithm? Give examples. | Has a fixed number of parameters; computationally faster, but makes stronger assumptions about the data. Example: linear regression. |
What is a nonparametric learning algorithm? Give examples. | Uses a flexible number of parameters that grows as it learns from more data; computationally slower. Example: KNN. |
What is a discriminative learning algorithm? Give examples. | Focuses on learning the decision boundary between different class labels in a dataset. Examples: logistic regression, KNN, SVM, neural networks. |
What is a generative learning algorithm? Give examples. | Focuses on modeling the underlying probability distribution of the data in order to generate new examples similar to the original data. Examples: naive Bayes, Gaussian mixture models, hidden Markov models. |
Venn diagram of AI, Machine Learning, Deep Learning, and Data Science interaction | Machine learning is a subset of AI; deep learning is a subset of machine learning; data science overlaps all three. |
Explain differences between eager and lazy | Eager learning immediately starts learning from training data and takes a long time learning; lazy learning waits for the test data and takes less time learning. |
Explain differences between parametric and nonparametric | Parametric has a fixed number of parameters while nonparametric does not; parametric is fast, nonparametric is slow. |
Explain differences between discriminative and generative | Discriminative learning algorithms only need the input features and the corresponding class labels, while generative learning algorithms model the joint probability distribution of the features and the labels. Discriminative models perform best where the boundary is well defined; generative models can be used where the boundary is not clear. |
Examples of eager learning | Decision trees, naive Bayes, artificial neural networks |
Examples of lazy learning | KNN, case-based reasoning |
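
The cross-validation cards above can be sketched in code. This is a minimal k-fold cross-validation loop in plain Python (no ML libraries), using a 1-nearest-neighbor classifier on a tiny 1-D toy dataset; the dataset values and fold count are illustrative assumptions, not from the cards.

```python
def nn_predict(train, query):
    """Lazy 1-NN: predict the label of the training point nearest to `query`."""
    return min(train, key=lambda p: abs(p[0] - query))[1]

def k_fold_accuracy(data, k):
    """Split `data` into k folds; each fold serves once as the held-out test set."""
    fold_size = len(data) // k
    scores = []
    for i in range(k):
        test = data[i * fold_size:(i + 1) * fold_size]
        train = data[:i * fold_size] + data[(i + 1) * fold_size:]
        correct = sum(nn_predict(train, x) == y for x, y in test)
        scores.append(correct / len(test))
    return sum(scores) / k  # average accuracy across folds

# Toy 1-D dataset: (feature, label); well separated, so CV accuracy is high
data = [(1, 0), (2, 0), (3, 0), (4, 0), (10, 1), (11, 1), (12, 1), (13, 1)]
print(k_fold_accuracy(data, k=4))  # -> 1.0 on this separable toy data
```

Because every point is evaluated exactly once on a model that never saw it, the averaged score estimates performance on unseen data, which is why cross-validation helps detect over-fitting.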
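
The parametric/eager vs. nonparametric/lazy distinction can also be shown side by side. Below is a hedged sketch contrasting simple linear regression (parametric and eager: exactly two parameters, learned once up front) with 1-NN regression (nonparametric and lazy: the "model" is the stored training set, and all work happens at prediction time). The data is an illustrative assumption.

```python
def fit_linear(xs, ys):
    """Eager, parametric: closed-form least squares for y = w*x + b.
    The model is always exactly two numbers, regardless of dataset size."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

def knn_regress(train, query):
    """Lazy, nonparametric: no training step; scan the stored data at query time."""
    return min(train, key=lambda p: abs(p[0] - query))[1]

xs, ys = [1, 2, 3, 4], [2, 4, 6, 8]          # perfectly linear: y = 2x
w, b = fit_linear(xs, ys)                     # long "learning", instant prediction
print(w, b)                                   # -> 2.0 0.0 for this data
print(knn_regress(list(zip(xs, ys)), 2.4))    # nearest x is 2 -> predicts 4
```

Note how `fit_linear` does all its computation before any query arrives (eager), while `knn_regress` keeps the raw data and defers computation (lazy), matching the trade-off described in the cards.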
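
Finally, the generative-learning card can be illustrated with the spam example from the classification card. This is a minimal word-presence naive Bayes sketch: it models per-class word probabilities (a generative view of the data) and picks the class with the higher posterior. The messages, vocabulary, and add-one (Laplace) smoothing are toy assumptions.

```python
from collections import Counter
import math

def train_nb(docs):
    """docs: list of (set_of_words, label) with labels 0 = ham, 1 = spam.
    Returns per-class word counts, per-class doc counts, and total docs."""
    counts = {0: Counter(), 1: Counter()}
    totals = Counter(label for _, label in docs)
    for words, label in docs:
        counts[label].update(words)
    return counts, totals, len(docs)

def predict_nb(model, words):
    """Score each class by log prior + log likelihood with add-one smoothing."""
    counts, totals, n = model
    best, best_lp = None, -math.inf
    for label in (0, 1):
        lp = math.log(totals[label] / n)  # class prior P(label)
        for w in words:                   # P(word | label), smoothed
            lp += math.log((counts[label][w] + 1) / (totals[label] + 2))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [({"win", "money", "now"}, 1), ({"free", "money"}, 1),
        ({"meeting", "tomorrow"}, 0), ({"lunch", "tomorrow"}, 0)]
model = train_nb(docs)
print(predict_nb(model, {"free", "money"}))    # -> 1 (spam)
print(predict_nb(model, {"meeting", "lunch"}))  # -> 0 (ham)
```

A discriminative counterpart (e.g., logistic regression) would instead learn a boundary between the two label sets directly, without modeling how spam or ham messages are generated.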