A practical course on ensemble models in machine learning using the Python programming language
In this practical course, we focus on ensemble models for supervised machine learning using the Python programming language.
Ensemble models are a family of machine learning models that combine several base models into one. The general idea is that a team of models can outperform any single model, both in terms of stability (i.e. lower variance) and in terms of accuracy (i.e. lower bias). The most common ensemble models are Random Forests and Gradient Boosting Decision Trees, both of which are covered extensively in the lessons of this course. Other types of ensembles are voting and stacking, more elaborate procedures that combine the predictions of several models to further improve performance.
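As a taste of what this looks like in practice, here is a minimal sketch (not taken from the course materials; it uses a synthetic dataset and default hyperparameters) that compares a single decision tree with the ensemble models mentioned above, using scikit-learn:

```python
# Compare a single decision tree with Random Forest, Gradient Boosting,
# and a soft-voting ensemble on a synthetic classification dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import (
    RandomForestClassifier,
    GradientBoostingClassifier,
    VotingClassifier,
)

# Synthetic data stands in for a real problem.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

models = {
    "Single decision tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(random_state=42),
    "Gradient Boosting": GradientBoostingClassifier(random_state=42),
    "Voting ensemble": VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("rf", RandomForestClassifier(random_state=42)),
            ("gb", GradientBoostingClassifier(random_state=42)),
        ],
        voting="soft",  # average the predicted probabilities
    ),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```

On data like this, the ensembles typically score higher than the lone tree, which is exactly the effect the course explores in depth.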
With this course, you are going to learn:
What bias-variance tradeoff is and how to deal with it
Bagging and some bagging models (like Random Forest)
Boosting and some boosting models (like XGBoost or AdaBoost)
Every lesson starts with a brief theoretical introduction and ends with a practical example in Python using its powerful scikit-learn library. The working environment is Jupyter, a standard in the data science industry. All the Jupyter notebooks are downloadable.
This course is part of my Supervised Machine Learning in Python online course, so you'll find some lessons that are already included in the larger course.