link_to_notebook
In Topic 5, Part 1: Bagging, under the heading 4. Out-of-Bag Error, it starts with:
> Looking ahead, in case of Random Forest, there is no need to use cross-validation or hold-out samples in order to get an unbiased error estimation. Why? Because, in ensemble techniques, the error estimation takes place internally.
From what I understood, this should refer to Bagging, not Random Forest. In bagged trees, we use bootstrapping as the sampling method, so approximately 1/3 of the data is not used by any single tree, and these left-out rows act like an internal validation set. Random Forest, on the other hand, is about randomly selecting a subset of features at each split so that the stronger features don't dominate the first splits of every tree.
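For what it's worth, a Random Forest still bootstraps rows the way plain bagging does, so the OOB estimate is available there too. A minimal sketch with scikit-learn (the dataset sizes are made up for illustration; `oob_score=True` is the relevant switch):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy dataset: 500 samples, 10 features (hypothetical sizes for illustration).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Random Forest = bagging of trees + a random feature subset at each split,
# so each tree still leaves out roughly 1/3 of the rows (its OOB sample),
# and sklearn can report an internal OOB accuracy estimate from them.
rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)

print(f"OOB accuracy estimate: {rf.oob_score_:.3f}")
```

So the OOB mechanism comes from the bootstrap sampling, which Random Forest inherits; the per-split feature subsampling is an additional decorrelation trick on top of it.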
If I am wrong, could you please correct me? Thanks.