This session will focus on the essential mathematics underlying the most important machine learning concepts and algorithms. This makes the algorithms easier to understand, and further helps in fine-tuning the hyper-parameters involved in them.
Duration: 2.0 - 2.5 hours, whiteboard session (no coding).
Meetup Location: Office No. 401, 4th Floor, Unity Gold Building, near Deccan PMT Bus Stop, Opp. Z-Bridge, Deccan, Pune.
Please read these blog posts prior to the session; they cover material we have already discussed in the 1st part of Essential Mathematics for ML.
1. For Machine Learning Basics: https://www.analyticsvidhya.com/blog/2015/06/machine-learning-basics
2. Basics of Machine Learning Algorithms (first-timers may ignore the code): https://www.analyticsvidhya.com/blog/2017/09/common-machine-learning-algorithms
3. The Bias-Variance Trade-off: https://www.kdnuggets.com/2016/08/bias-variance-tradeoff-overview.html
4. Linear Regression & Regularisation: https://www.analyticsvidhya.com/blog/2017/06/a-comprehensive-guide-for-linear-ridge-and-lasso-regression
THE CONTENTS GIVEN BELOW WILL BE STRICTLY FOLLOWED; no general introduction to machine learning will be given. Participants are expected to know supervised and unsupervised learning; the basic meaning of classification, regression, and clustering; and the concepts of overfitting, underfitting, bias, and variance. See the links above.
Part – A: Basic Probability
- Basic Definitions
- Odds of an event
- Bayes' Theorem & applications
- Probability Distribution Functions
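Although the session itself is whiteboard-only, a minimal Python sketch of the Bayes' Theorem topic above may help as preparation. The numbers (prevalence, sensitivity, false-positive rate) are hypothetical, chosen only to illustrate the computation:

```python
# Bayes' Theorem sketch with hypothetical numbers: a diagnostic test with
# 99% sensitivity and a 5% false-positive rate, for a 1% prevalence disease.
p_disease = 0.01            # P(D), prior
p_pos_given_d = 0.99        # P(+|D), sensitivity
p_pos_given_not_d = 0.05    # P(+|~D), false-positive rate

# Law of total probability: P(+) = P(+|D)P(D) + P(+|~D)P(~D)
p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)

# Bayes' Theorem: P(D|+) = P(+|D)P(D) / P(+)
p_d_given_pos = p_pos_given_d * p_disease / p_pos
print(round(p_d_given_pos, 3))  # → 0.167
```

Note how a positive result from a fairly accurate test still gives only about a 17% chance of disease, because the prior is low; this counter-intuitive outcome is a standard application of the theorem.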
Part – B: Basic Statistics
- Mean, Mode, Median
- Standard Deviation, Variance
- Correlation and Correlation-coefficient
- Standard Statistical Distributions
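As an optional preview of the descriptive-statistics topics above, here is a small sketch using only the Python standard library; the data values are made up for illustration:

```python
import statistics

data_x = [2.0, 4.0, 6.0, 8.0, 10.0]
data_y = [1.0, 3.0, 5.0, 7.0, 9.0]

mean_x = statistics.mean(data_x)      # 6.0
median_x = statistics.median(data_x)  # 6.0
var_x = statistics.pvariance(data_x)  # population variance: 8.0
std_x = statistics.pstdev(data_x)     # sqrt(8) ~ 2.828

# Pearson correlation coefficient from first principles:
# r = cov(x, y) / (std_x * std_y)
n = len(data_x)
mean_y = statistics.mean(data_y)
cov_xy = sum((x - mean_x) * (y - mean_y)
             for x, y in zip(data_x, data_y)) / n
r = cov_xy / (statistics.pstdev(data_x) * statistics.pstdev(data_y))
print(r)  # → 1.0 (the two lists are perfectly linearly related)
```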
Part – C: Linear Algebra
1. Matrix Multiplication
2. Operations and Properties
a. Identity Matrix and Diagonal Matrices
b. Transpose, Inverse, Trace, Norms and Determinant of Matrices
c. Symmetric & Orthogonal Matrices
d. Linear Independence and Rank
e. Eigenvalues and Eigenvectors of Symmetric Matrices
3. Matrix Calculus
a. Gradients and Hessians of Quadratic and Linear Functions
b. Least Squares
c. Gradients of the Determinant
d. Eigenvalues as Optimization
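The least-squares item above has a closed-form solution via the normal equations, w = (XᵀX)⁻¹Xᵀy. A minimal NumPy sketch on a noise-free toy dataset (values chosen for illustration):

```python
import numpy as np

# Fit y = w0 + w1*x by the closed-form least-squares solution
# w = (X^T X)^{-1} X^T y  (the normal equations).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 + 3.0 * x  # noise-free line, so the fit recovers it exactly

X = np.column_stack([np.ones_like(x), x])  # design matrix with bias column
w = np.linalg.solve(X.T @ X, X.T @ y)      # solve, rather than invert, X^T X
print(w)  # → [2. 3.]
```

Using `np.linalg.solve` instead of explicitly forming the inverse is the numerically preferred way to solve the normal equations.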
Part – D: Applications to Machine Learning with worked examples
- Linear Regression: Least Squares Solution
- Stochastic Gradient Descent (SGD) for Linear Regression (batch and mini-batch GD will also be discussed)
- Quick Regularisation Revision: Ridge, Lasso, ElasticNet
- Logistic Regression
- Linear Discriminant Analysis (if time permits)
- Naïve Bayes Classifier (if time permits)
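For the SGD item above, a minimal pure-Python sketch of stochastic gradient descent on a tiny synthetic dataset; the learning rate, epoch count, and data are illustrative choices, not part of the session material:

```python
import random

# SGD for a single-feature linear model y = w*x + b, squared-error loss.
random.seed(0)
data = [(x, 2.0 * x + 1.0) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

w, b = 0.0, 0.0
lr = 0.1  # learning rate (illustrative)
for epoch in range(200):
    random.shuffle(data)         # "stochastic": update on one sample at a time
    for x, y in data:
        err = (w * x + b) - y    # derivative of 0.5*err^2 w.r.t. the prediction
        w -= lr * err * x        # dL/dw = err * x
        b -= lr * err            # dL/db = err
print(round(w, 2), round(b, 2))  # converges to roughly 2.0, 1.0
```

Batch GD would instead average the gradient over all samples before each update, and mini-batch GD over a small subset; the update rules are otherwise identical.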
Please note there is a small fee for this meetup (Rs. 200/-), to cover the hosting expenses and to ensure serious attendees. The registration link will be uploaded on the AllEvents website tonight.
Prashant Sahu (https://www.linkedin.com/in/prashantksahu)