A small guide to Random Forest - part 1

I've recently started playing with Kaggle and got curious about one of the most famous classification/regression frameworks, Random Forest. In a classification or regression problem, several random decision trees (a "forest") are built and their outputs are combined at the end ("bagging"). The intuition is that randomness and a sufficient number of trees will avoid both over- and underfitting.

One possible bagging technique is the majority vote. Take the case of predicting a binary outcome, say a random variable Y which can only assume values in \{-1, 1\}, with respect to some features X_1, ..., X_m (observed events). We assume there exists a correct answer - the "right model" - which we have to predict. The intuition behind the majority vote is that if such a "divine truth" exists and we build several "quite reasonable" models, most of them will give the right prediction. If the right value is Y=1 and we make n "reasonable" predictions Y_1, ..., Y_n, most of them will be equal to 1 and only a minority will be equal to -1. In mathematical terms, we'll choose the following prediction:

\hat{Y}=\text{sign}\left(\sum_{i=1}^n Y_i\right)
Bagging can be done in other ways, but to me the majority vote example is an easy way to understand the fundamental concept.
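As a toy illustration (the numbers here are made up, not from the post), the majority vote over predictions in \{-1, 1\} reduces to taking the sign of their sum:

```python
import numpy as np

# Hypothetical example: n = 7 "reasonable" predictions in {-1, 1}.
predictions = np.array([1, 1, -1, 1, 1, -1, 1])

# Majority vote: the sign of the sum picks the most frequent value.
y_hat = np.sign(predictions.sum())
print(y_hat)  # 1
```

Since five of the seven predictions are 1, the sum is positive and the vote returns 1.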
The Random Forest framework was introduced by the statistician Leo Breiman in 2001 in his seminal paper. Even though implementations have been released in many languages (R, MATLAB, Python, Java...), it's important to learn the basics in order to tune the parameters well.

Decision trees

The elements of a Random Forest are usually decision trees (there are variants of the framework, though). Assume we have the following database:
training data =\left[\begin{array}{ccccc}a_1&b_1&c_1&d_1&e_1\\a_2&b_2&c_2&d_2&e_2\\a_3&b_3&c_3&d_3&e_3\end{array}\right]
Each column is a sample, and each row corresponds to a feature. We consider a binary output: [0\text{ }0\text{ }1\text{ }0\text{ }1]. We will now choose m=2 random features (so that we can represent the problem in the plane) and start building a decision tree. Assume our random sample is:
random sample =\left[\begin{array}{ccccc}a_1&b_1&c_1&d_1&e_1\\a_3&b_3&c_3&d_3&e_3\end{array}\right],
meaning that we randomly selected features 1 and 3. Let's represent these points on a plane, assigning a different color based on the associated output.
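The random selection of feature rows can be sketched as follows (a minimal illustration with made-up values, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 3 features (rows) x 5 samples (columns), made-up values.
training_data = np.array([
    [1.0, 2.0, 3.0, 4.0, 5.0],   # feature 1
    [0.5, 1.5, 2.5, 3.5, 4.5],   # feature 2
    [9.0, 8.0, 7.0, 6.0, 5.0],   # feature 3
])

m = 2  # number of random features per tree
rows = rng.choice(training_data.shape[0], size=m, replace=False)
random_sample = training_data[rows, :]
print(random_sample.shape)  # (2, 5)
```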



Notice the distribution of points in this universal region:

Frequency of output value in the universal region: red corresponds to value 0, blue to 1.

Now a hyperplane x=x_0 is selected (randomly or according to some criterion, for instance by maximising information gain) and the points are separated into two regions:


Our decision tree starts and we have the following split and new frequency distributions in the two new regions:
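To make the "maximising information gain" criterion concrete, here is a rough sketch (with a hypothetical helper and made-up data, not the post's code) that scores candidate thresholds x_0 by the entropy reduction of the split:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a binary (0/1) label array."""
    if len(labels) == 0:
        return 0.0
    p = np.mean(labels)
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def information_gain(x, y, x0):
    """Entropy reduction from splitting the samples on x <= x0."""
    left, right = y[x <= x0], y[x > x0]
    n = len(y)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(y) - weighted

# Made-up feature values and the binary outputs of the 5 samples.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0, 0, 1, 0, 1])

# Try each midpoint between consecutive x values as a candidate x_0.
candidates = (x[:-1] + x[1:]) / 2
best_x0 = max(candidates, key=lambda x0: information_gain(x, y, x0))
print(best_x0)  # 2.5
```

Here the threshold x_0 = 2.5 wins because it leaves the left region pure (only 0-labels), which maximises the gain.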


Now the idea is to iterate this procedure separately on each branch. For instance, we consider only Region 1 (x \leq x_0) and draw another hyperplane, say y=y_1:



On the other branch, we draw another hyperplane x=x_1:


Summing up, we built the following tree.





At this point we can clearly stop: we have divided the plane into regions which completely classify our training data.
To summarise, here are the steps of Random Forest:

  1. For k = 1, 2, ..., Ntrees:
    --> select a bootstrap sample S from training data
    --> grow a decision tree T_k (with a stopping criterion for the depth)
  2. Bagging on \{T_k\}_{k}
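The steps above can be sketched end to end. This is a toy implementation under simplifying assumptions (depth-one "stumps" instead of full trees, made-up data, and a hand-rolled fit function), not the reference algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy dataset: 20 samples, 3 features, binary labels (all values made up).
X = rng.normal(size=(20, 3))
y = (X[:, 0] + X[:, 2] > 0).astype(int)

def fit_stump(X, y):
    """Depth-1 'tree': pick the feature/threshold minimising misclassifications."""
    best = None
    for j in range(X.shape[1]):
        for x0 in np.unique(X[:, j]):
            left, right = y[X[:, j] <= x0], y[X[:, j] > x0]
            # Predict the majority label on each side of the split.
            pl = int(left.mean() > 0.5) if len(left) else 0
            pr = int(right.mean() > 0.5) if len(right) else 0
            errors = np.sum(left != pl) + np.sum(right != pr)
            if best is None or errors < best[0]:
                best = (errors, j, x0, pl, pr)
    return best[1:]

def predict_stump(stump, X):
    j, x0, pl, pr = stump
    return np.where(X[:, j] <= x0, pl, pr)

# Step 1: for each k, draw a bootstrap sample S and grow a tree T_k on it.
forest = []
for _ in range(25):                                # Ntrees = 25
    idx = rng.integers(0, len(X), size=len(X))     # bootstrap sample S
    forest.append(fit_stump(X[idx], y[idx]))

# Step 2: bagging on {T_k} by majority vote over the trees' predictions.
votes = np.stack([predict_stump(s, X) for s in forest])
y_hat = (votes.mean(axis=0) > 0.5).astype(int)
```

In a real forest each T_k would be a deeper tree grown on a random subset of features at every split; the stump keeps the sketch short while preserving the bootstrap-then-vote structure.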

Next, I plan to show the use of some variables and features of the randomForest R package and to make some observations on the algorithm. For instance, how to choose Ntrees? How to determine a reasonable stopping criterion for the tree depth?

The featured image is an excuse to introduce a great visualisation resource for Random Forests: check it out.


Paola Elefante

