A decision tree is a supervised machine learning algorithm used mainly for classification and regression. It breaks a dataset down into smaller and smaller subsets while, at the same time, incrementally building up an associated tree. The final result is a tree made of decision nodes and leaf nodes. A decision tree can handle both categorical and numerical data.
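To make the "smaller and smaller subsets" idea concrete, here is a minimal sketch of one induction step: picking the threshold on a numeric feature that minimizes the weighted Gini impurity of the two resulting subsets. The feature values and labels are made up for illustration; real implementations repeat this recursively over many features.

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(xs, ys):
    """Return (threshold, weighted_impurity) of the best binary split x <= t."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # a split must put at least one sample on each side
        w = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if w < best[1]:
            best = (t, w)
    return best

# Toy data: one numeric feature and two classes
xs = [1, 2, 3, 10, 11, 12]
ys = ["A", "A", "A", "B", "B", "B"]
print(best_split(xs, ys))  # splitting at x <= 3 separates the classes perfectly
```

A full tree builder would call `best_split` on each subset in turn, stopping when a node is pure enough or too small, at which point it becomes a leaf node.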
Businesses often use linear regression to understand the relationship between advertising spending and revenue. For example, they might fit a simple linear regression model using advertising spending as the predictor variable and revenue as the response variable. The regression model would take the following form:

revenue = β0 + β1(ad spending)

The coefficient β0 represents the expected revenue when ad spending is zero. The coefficient β1 represents the average change in revenue when ad spending is increased by one unit (e.g. one dollar). If β1 is negative, more ad spending is associated with less revenue. If β1 is close to zero, ad spending has little effect on revenue. And if β1 is positive, more ad spending is associated with more revenue. Depending on the value of β1, a company may decide to either decrease or increase its ad spending.
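The coefficients β0 and β1 can be estimated in closed form by ordinary least squares. Below is a short sketch of that fit; the spending and revenue figures are fabricated illustration data chosen to lie exactly on the line revenue = 100 + 2 × spending.

```python
def fit_simple_ols(x, y):
    """Return (b0, b1) minimizing the sum of squared residuals."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # b1 = covariance(x, y) / variance(x); b0 shifts the line through the means
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx
    return b0, b1

ad_spending = [10, 20, 30, 40, 50]    # e.g. thousands of dollars
revenue = [120, 140, 160, 180, 200]   # exactly 100 + 2 * spending

b0, b1 = fit_simple_ols(ad_spending, revenue)
print(b0, b1)  # 100.0 2.0 — b1 > 0, so more spending is associated with more revenue
```

With a positive β1 of 2.0, each extra unit of ad spending is associated with two extra units of revenue on this toy data, matching the interpretation above.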
Bias: Bias is error introduced into your model by oversimplification of the machine learning algorithm, and it can lead to underfitting. When you train such a model, it makes simplified assumptions to make the target function easier to learn. Low-bias machine learning algorithms: decision trees, k-NN, and SVM. High-bias machine learning algorithms: linear regression and logistic regression.

Variance: Variance is error introduced into your model by an overly complex machine learning algorithm: the model learns the noise in the training data as well, and so performs badly on the test data. It can lead to high sensitivity and overfitting.

Normally, as you increase the complexity of your model, you will see a reduction in error due to lower bias. However, this only happens up to a particular point. As you continue to make your model more complex, you end up overfitting, and your model starts to suffer from high variance.
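A toy illustration of this tradeoff: a 1-nearest-neighbor model (low bias, high variance) memorizes the training noise, while a constant mean predictor (high bias, low variance) oversimplifies. All data points below are fabricated for the example; the training targets are roughly y = 2x plus noise, and the test targets are noise-free.

```python
train = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8), (5, 10.3)]  # y ≈ 2x + noise
test = [(1.5, 3.0), (2.5, 5.0), (3.5, 7.0), (4.5, 9.0)]      # noise-free y = 2x

def predict_1nn(x):
    # high-variance model: copy the label of the closest training point
    return min(train, key=lambda p: abs(p[0] - x))[1]

def predict_mean(x):
    # high-bias model: ignore x entirely and predict the training mean
    return sum(y for _, y in train) / len(train)

def mse(model, data):
    """Mean squared error of a model over (x, y) pairs."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print("1-NN  train:", mse(predict_1nn, train), "test:", mse(predict_1nn, test))
print("mean  train:", mse(predict_mean, train), "test:", mse(predict_mean, test))
# 1-NN gets zero training error but nonzero test error (overfitting);
# the mean model has large error on both sets (underfitting).
```

The gap between the 1-NN model's training and test error is the variance showing up; the mean model's uniformly large error is the bias.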