Eigenvectors and Eigenvalues

Eigenvectors are used to understand linear transformations. In data analysis, we usually compute the eigenvectors of a correlation or covariance matrix. Eigenvectors are the directions along which a particular linear transformation acts by flipping, compressing, or stretching. The eigenvalue can be thought of as the strength of the transformation in the direction of its eigenvector, i.e. the factor by which the stretching or compression occurs.
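As a minimal sketch, here is how the eigenvalues and eigenvectors of a small covariance matrix can be computed with NumPy (the matrix values are illustrative, not from any real dataset):

```python
import numpy as np

# Hypothetical 2x2 covariance matrix (illustrative values only)
cov = np.array([[2.0, 1.0],
                [1.0, 2.0]])

# np.linalg.eigh is the right choice for symmetric matrices
# such as covariance and correlation matrices
eigenvalues, eigenvectors = np.linalg.eigh(cov)

print(eigenvalues)   # strengths of the transformation along each direction
print(eigenvectors)  # columns are the corresponding eigenvector directions

# Verify the defining property: A v = lambda v
v = eigenvectors[:, 0]
assert np.allclose(cov @ v, eigenvalues[0] * v)
```

For this matrix the eigenvalues are 1 and 3, so the transformation stretches by a factor of 3 along one direction and leaves the other direction's length unchanged.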

Univariate, bivariate and multivariate analysis

Univariate analysis comprises descriptive statistical techniques involving a single variable at a given point in time. For example, a pie chart of sales by territory involves only one variable, so the analysis is univariate. Bivariate analysis attempts to understand the relationship between two variables at a time, as in a scatterplot; for example, analyzing sales volume against spending is bivariate analysis. Multivariate analysis deals with more than two variables, to understand the combined effect of the variables on the responses.
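A short sketch of the univariate/bivariate distinction on made-up sales data (the numbers are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical spending and sales-volume figures for five territories
spending = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
sales    = np.array([12.0, 18.0, 24.0, 28.0, 35.0])

# Univariate: summarize one variable at a time
print(sales.mean(), sales.std())

# Bivariate: examine the relationship between two variables,
# e.g. via the Pearson correlation coefficient
r = np.corrcoef(spending, sales)[0, 1]
print(r)  # close to 1.0, i.e. a strong positive linear relationship
```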

Softmax non-linearity function

Softmax is called a non-linearity because it takes in a vector of real numbers and returns a probability distribution. Its definition is as follows. Let x be a vector of real numbers (positive or negative; there are no constraints). Then the i-th component of Softmax(x) is

Softmax(x)_i = exp(x_i) / Σ_j exp(x_j)

It should be clear that the output is a probability distribution: each element is non-negative and the sum over all components is 1.
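The definition above translates directly into a few lines of NumPy; this sketch also subtracts the maximum component first, a standard trick to avoid overflow in the exponentials:

```python
import numpy as np

def softmax(x):
    # Subtracting the max is for numerical stability only; the result is
    # unchanged because softmax is invariant to adding a constant to every
    # component of x.
    z = np.exp(x - np.max(x))
    return z / z.sum()

p = softmax(np.array([1.0, 2.0, 3.0]))
print(p)        # every element is non-negative
print(p.sum())  # the components sum to 1
```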

TF/IDF Vectorization

TF-IDF, short for term frequency-inverse document frequency, is a numerical statistic intended to reflect how important a word is to a document in a collection or corpus. It is often used as a weighting factor in information retrieval and text mining. The TF-IDF value increases proportionally with the number of times a word appears in the document, but is offset by the frequency of the word in the corpus, which adjusts for the fact that some words appear more frequently in general.
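A minimal sketch of one common TF-IDF variant (raw term frequency times idf = log(N/df)) on a toy corpus. Real libraries such as scikit-learn's TfidfVectorizer apply additional smoothing and normalization, so exact values will differ:

```python
import math
from collections import Counter

# Toy corpus; three short "documents"
docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)

def tf_idf(term, doc_tokens):
    # term frequency: share of the document taken up by the term
    tf = Counter(doc_tokens)[term] / len(doc_tokens)
    # document frequency: how many documents contain the term
    df = sum(1 for d in tokenized if term in d)
    return tf * math.log(N / df)

print(tf_idf("the", tokenized[0]))  # frequent across the corpus: low weight
print(tf_idf("cat", tokenized[0]))  # rare in the corpus: higher weight
```

Note that "the" occurs twice in the first document but still gets a lower weight than "cat", because it also appears in other documents; that is exactly the offsetting effect described above.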

Law of Large Numbers

The law of large numbers is a theorem that describes the result of performing the same experiment a large number of times, and it forms the basis of frequency-style thinking. It says that the sample mean, the sample variance, and the sample standard deviation converge to the quantities they are trying to estimate.
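The theorem is easy to see empirically; in this sketch, the sample mean of simulated fair coin flips approaches the true expected value of 0.5 as the sample grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 100,000 fair coin flips (0 = tails, 1 = heads)
flips = rng.integers(0, 2, size=100_000)

print(flips[:10].mean())  # a small sample may sit far from 0.5
print(flips.mean())       # a large sample lands very close to 0.5
```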

K-Fold Cross Validation

Cross-validation, sometimes called rotation estimation or out-of-sample testing, is any of various similar model validation techniques for assessing how the results of a statistical analysis will generalize to an independent data set. It is mainly used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice. In a prediction problem, a model is usually given a dataset of known data on which training is run (the training dataset), and a dataset of unknown data (or first-seen data) against which the model is tested (called the validation dataset or test set). The goal of cross-validation is to test the model's ability to predict new data that was not used in estimating it, in order to flag problems like overfitting or selection bias and to give an insight into how the model will generalize to an independent dataset (i.e., an unknown dataset, for instance from a real problem).

K-Fold CV is where a given data set is split into k subsets, called folds. The model is trained on k − 1 folds and validated on the remaining fold, and the process is repeated until each fold has served as the validation set exactly once; the k validation scores are then averaged.
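The fold rotation can be sketched with a minimal splitter; this is an illustration only, and libraries such as scikit-learn's KFold provide a more complete implementation:

```python
import numpy as np

def k_fold_indices(n_samples, k, seed=0):
    # Shuffle the sample indices, split them into k folds, and yield
    # (train, validation) index pairs, rotating the validation fold.
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_samples)
    folds = np.array_split(indices, k)
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, val_idx

# With 10 samples and k=5, every sample lands in the validation set
# exactly once across the five rotations.
for train_idx, val_idx in k_fold_indices(10, k=5):
    print(len(train_idx), len(val_idx))  # 8 train, 2 validation per fold
```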

Over-fitting and under-fitting

In statistics and machine learning, one of the most common tasks is to fit a model to a set of training data, so as to be able to make reliable predictions on general, unseen data. In overfitting, a statistical model describes random error or noise instead of the underlying relationship. Overfitting occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model that has been overfitted has poor predictive performance, as it overreacts to minor fluctuations in the training data. Underfitting occurs when a statistical model or machine learning algorithm cannot capture the underlying trend of the data; it would occur, for example, when fitting a linear model to non-linear data. Such a model also has poor predictive performance. To combat overfitting and underfitting, you can resample the data to estimate model accuracy (k-fold cross-validation) and hold out a validation dataset to evaluate the model.
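Both failure modes can be demonstrated by fitting polynomials of different degrees to noisy samples from a quadratic; the data here is synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples from y = x^2 (the true underlying relationship)
x = np.linspace(-1, 1, 20)
y = x**2 + rng.normal(scale=0.1, size=x.size)

# Held-out points from the noise-free true curve
x_test = np.linspace(-1, 1, 50)
y_test_true = x_test**2

results = {}
for degree in (1, 2, 15):
    coeffs = np.polyfit(x, y, degree)
    train_err = np.mean((np.polyval(coeffs, x) - y) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test_true) ** 2)
    results[degree] = (train_err, test_err)
    print(degree, round(train_err, 4), round(test_err, 4))

# Degree 1 underfits: high error on both sets (a line cannot follow a
# parabola). Degree 15 drives training error down by chasing the noise,
# which is exactly the overfitting behavior described above.
```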