Community Meeting (Jan 22, 2020): Machine Learning: What it can and can't do
Description
Presenter: Ted Thompson
An explanation of how machines learn and the intrinsic limitations of this method of learning. The talk will cover how machine learning is connected to probability and statistics, optimization, and generative vs. discriminative models. Examples of different types of machine learning models and what they can and cannot do will be included.
Notes
- Machine learning is a subset of AI.
- Simplistic description: a pattern recognition system.
- Probability and Statistics: use a data sample to do:
- Inference
- Prediction
- Clustering
- Classification (a short sketch of classification vs. clustering follows this list)
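- A minimal Python sketch (not part of the talk notes), assuming scikit-learn is installed, contrasting two of the tasks above: classification learns from labeled examples, while clustering groups the same data without ever seeing the labels.

```python
# Hypothetical illustration, not from the talk: classification vs. clustering
# on the classic iris dataset (requires scikit-learn).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Classification (supervised): learn a mapping from features X to known labels y.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("classification accuracy on the training data:", clf.score(X, y))

# Clustering (unsupervised): group the same data without using the labels at all.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster assignments for the first 5 samples:", km.labels_[:5])
```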
- Optimization
- example
- y = mx + b
- in machine learning, you are trying to find the best value for m from the data
- b is the bias (intercept) term, which accounts for offsets in the model
- x is the input
- y is the result (see the fitting sketch after this list)
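- A minimal Python sketch (not part of the talk notes) of the y = mx + b example above, using made-up noisy data: gradient descent searches for the m and b that minimize the squared error between the line's predictions and the observed points.

```python
# Hypothetical illustration, not from the talk: "learning" m and b in y = m*x + b
# by minimizing mean squared error with gradient descent (requires only numpy).
import numpy as np

rng = np.random.default_rng(0)
true_m, true_b = 2.0, -1.0
x = rng.uniform(0, 10, size=100)
y = true_m * x + true_b + rng.normal(0, 1.0, size=100)  # noisy observations

m, b = 0.0, 0.0   # start with a bad guess
lr = 0.01         # learning rate (step size)
for _ in range(5000):
    err = (m * x + b) - y
    m -= lr * (2 * err * x).mean()  # gradient of MSE with respect to m
    b -= lr * (2 * err).mean()      # gradient of MSE with respect to b

print(f"learned m={m:.2f}, b={b:.2f} (true m={true_m}, b={true_b})")
```

- In practice a closed-form least-squares fit (e.g. np.polyfit with degree 1) gives the same answer; the loop just makes the "search for the best m and b" idea explicit.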
- two main ways of learning (a comparison sketch follows this list)
- Discriminative model
- decision boundary
- uses:
- Regressions
- Support Vector Machines
- most deep neural nets are discriminative models
- Generative model
- probability distributions of the data set
- uses:
- Naive Bayes (https://en.wikipedia.org/wiki/Bayes%27_theorem)
- Generative Adversarial Network
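- A minimal Python sketch (not part of the talk notes), assuming scikit-learn is available, contrasting the two families on the same synthetic data: logistic regression fits a decision boundary directly (discriminative), while Gaussian naive Bayes models each class's feature distribution and classifies via Bayes' theorem (generative).

```python
# Hypothetical illustration, not from the talk: a discriminative and a
# generative classifier trained on the same data (requires scikit-learn).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

disc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # discriminative: models p(y | x) directly
gen = GaussianNB().fit(X_tr, y_tr)                        # generative: models p(x | y), then applies Bayes' theorem

print("logistic regression accuracy:", disc.score(X_te, y_te))
print("naive Bayes accuracy:", gen.score(X_te, y_te))
```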
- What ML can't do
- learn without sufficient data
- explain itself
- reason
- cannot understand that if a < b and b < c, then a < c
- "Do the right thing" that is change break from the task to do something that is important but out of "character" e.g. a protestor stopping to help an injured police officer.
- What ML can do
- Repetitive tasks really fast
- Drive you crazy (takes a lot of effort to make your model work correctly)
- Generate synthetic data
- including pictures, video, art, poems, etc.
- Example: https://www.thispersondoesnotexist.com/
- Wikipedia article about the above site and the GAN network it uses: https://en.wikipedia.org/wiki/StyleGAN
- Question about the machine not being able to explain what it is doing.
- Discussion about this question brought up this link:
- https://cacm.acm.org/magazines/2020/1/241703-techniques-for-interpretable-machine-learning/fulltext
- Bias vs Variance tradeoff (see the sketch after this list)
- to minimize the error rate, you want both bias and variance to be low, but they trade off against each other
- less bias tends to mean more variance
- less variance tends to mean more bias
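- A minimal Python sketch (not part of the talk notes), using only numpy, of the tradeoff above: polynomials of increasing degree are fit to 20 noisy samples of a sine curve. A low-degree fit underfits (high bias), while a high-degree fit chases the noise (high variance), which shows up as a much larger error on held-out points.

```python
# Hypothetical illustration, not from the talk: bias vs. variance seen as
# train/test error for polynomial fits of increasing degree (requires only numpy).
import numpy as np

rng = np.random.default_rng(1)
x_train = np.sort(rng.uniform(0, 2 * np.pi, 20))
y_train = np.sin(x_train) + rng.normal(0, 0.3, size=x_train.size)  # noisy samples
x_test = np.linspace(0, 2 * np.pi, 200)
y_test = np.sin(x_test)                                            # noise-free target

for degree in (1, 4, 15):
    p = np.polynomial.Polynomial.fit(x_train, y_train, degree)
    train_err = np.mean((p(x_train) - y_train) ** 2)
    test_err = np.mean((p(x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```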