t-SNE
Why reduce dimensions in data?
The general rule is: the higher the number of dimensions, the more data is needed to train a model effectively. As the number of dimensions grows, the feature space becomes sparser (emptier), so we need more and more data, and even the nearest neighbor of a data point ends up far away. This problem is often called the curse of dimensionality.
Dimensionality reduction is the process of removing the less useful features (dimensions) from a data set. By eliminating dimensions, we have fewer relationships between features to observe. The advantages of dimensionality reduction are that the data can be analyzed and visualized more easily, and model overfitting becomes less likely.
Here you can see an example of reducing a 2D plot to a 1D plot.
We normally use dimensionality reduction techniques such as t-SNE or PCA when we have many dimensions (possibly thousands). Because it is difficult to visualize, say, 10,000-dimensional data, we can use t-SNE to reduce it to a number of dimensions we can easily comprehend, for example 2 or 3.
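As a rough illustration, here is a minimal sketch of that use case, assuming synthetic data and scikit-learn (neither of which the text prescribes):

```python
# Minimal sketch: compress synthetic 100-dimensional data down to 2D with t-SNE.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 100))    # 500 samples, 100 features (toy stand-in)

# Reduce to 2 components so the result can be drawn as a scatter plot.
X_2d = TSNE(n_components=2, perplexity=30, random_state=42).fit_transform(X)
print(X_2d.shape)                  # (500, 2)
```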
There are several ways to reduce dimensionality by reducing the number of features in the data set:
Eliminating features:
We simply drop some features from the data set. The problem is that once we eliminate a feature, the information it carried is lost as well.
Selecting features:
Not all features are equally useful; some are less necessary than others, so we keep only the most informative ones. Again, reducing the number of features always comes with some information loss. (A small sketch of both approaches follows the list of criteria below.)
There are some criteria for choosing a good feature:
- a feature appears frequently in the data set, which means that the model will probably see it in different settings.
- a feature has an obvious meaning and is not ambiguous
- a feature is uncorrelated with other features
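To make the two approaches concrete, here is a minimal sketch with made-up column names and scikit-learn's VarianceThreshold (an assumption; the text does not name a specific tool) showing both dropping a feature by hand and selecting features automatically:

```python
# Minimal sketch: eliminate a feature by hand, then select features automatically.
import pandas as pd
from sklearn.feature_selection import VarianceThreshold

df = pd.DataFrame({
    "age":       [23, 45, 31, 52, 40],
    "height_cm": [170, 165, 180, 175, 172],
    "constant":  [1, 1, 1, 1, 1],          # carries no information at all
    "id":        [101, 102, 103, 104, 105],
})

# 1) Eliminate a feature we decide is not useful (its information is lost with it).
df = df.drop(columns=["id"])

# 2) Select features automatically: remove near-constant columns.
selector = VarianceThreshold(threshold=0.0)
X_selected = selector.fit_transform(df)
print(selector.get_feature_names_out())    # ['age' 'height_cm']
```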
T-SNE
t-SNE, an unsupervised, non-linear technique, takes the original data set, which lives in many dimensions, and reduces it to a low-dimensional graph that preserves most of the original information. The main goal of t-SNE is to project the data into a low-dimensional space in such a way that the clustering present in the high-dimensional space is maintained. The method is not just a projection onto one of the axes, but a technique for finding the right group each data point belongs to.
If we simply project the data onto one of the axes, we get a mess, e.g.:
As we can see, the clustering was not preserved and we just threw everything into one basket.
Let's understand how the t-SNE calculation works:
First of all, we determine how similar each point is to the others in the original high-dimensional space. Then we turn these similarities into probabilities.
We transform the high-dimensional distances between data points into conditional probabilities. These conditional probabilities represent the similarity of a data point x1 to another data point x2 in the n-dimensional space.
We measure the distance between two data points and then measure the similarity between them:
We continue with this procedure for each of the other neighboring points.
Because we use a normal distribution (a bell-shaped curve), the data points that are most distant from each other have low similarity scores. On the other hand, data points that are close together have high similarity values:
Repeating this measurement for every neighboring data point gives each point a full set of similarity scores, as in the sketch below.
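The following is a minimal sketch of this step, using toy 2D points and a fixed bandwidth instead of the per-point bandwidth t-SNE actually tunes:

```python
# Minimal sketch: turn pairwise distances into bell-curve similarity scores.
import numpy as np

X = np.array([[0.0, 0.0],
              [0.1, 0.2],
              [5.0, 5.0]])                 # two close points and one far away

sigma = 1.0                                # assumed fixed bandwidth for simplicity
# Squared Euclidean distance between every pair of points.
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
# Gaussian similarity: large distance -> score near zero, small distance -> high score.
similarities = np.exp(-sq_dists / (2 * sigma ** 2))
np.fill_diagonal(similarities, 0.0)        # a point is not its own neighbor
print(np.round(similarities, 3))
```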
Now we have to normalize the similarity scores we plotted on the curve. We normalize them so that they don't depend on the density of data points within the cluster. For example:
Data density around the starting point is high
Data density around the starting point is low
We can conclude that the width of the bell-shaped curve depends on the data density near the point we are calculating distances from. It is therefore important to scale the similarity scores so that data points are treated independently of their density (i.e. of the curve width). When the similarities are properly scaled, they sum up to 1.
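Continuing the same toy construction (the data and bandwidth are again assumptions, not from the text), here is a minimal sketch of scaling each point's similarity scores so that they sum to 1:

```python
# Minimal sketch: scale similarity scores so each point's scores sum to 1.
import numpy as np

X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0]])
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
similarities = np.exp(-sq_dists / 2.0)
np.fill_diagonal(similarities, 0.0)

# Divide each row by its sum; the scores no longer depend on how dense
# the neighborhood of that point is.
p_conditional = similarities / similarities.sum(axis=1, keepdims=True)
print(p_conditional.sum(axis=1))           # [1. 1. 1.]
```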
As we now know, t-SNE is at its core a probabilistic technique that tries to bring two distributions together. One distribution measures the similarities between pairs of data points in the input space, and the other measures the similarities between the corresponding pairs of points in the low-dimensional embedding space.
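As a toy illustration (the numbers below are made up), the quantity t-SNE drives towards zero is the Kullback-Leibler divergence between these two distributions:

```python
# Toy sketch: KL divergence between a "high-dimensional" distribution P and a
# "low-dimensional" distribution Q; it is 0 only when the two distributions match.
import numpy as np

P = np.array([0.70, 0.20, 0.10])   # made-up similarities in the original space
Q = np.array([0.65, 0.25, 0.10])   # made-up similarities in the embedding

kl = np.sum(P * np.log(P / Q))
print(kl)                          # small positive number; 0 would mean a perfect match
```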
Fundamentally, this means that the t-SNE algorithm analyses the initial data and then tries to represent it in lower dimensions. t-SNE accomplishes this by minimizing the divergence between the two distributions. The way t-SNE does this minimization is computationally expensive, so there are some significant restrictions. For example, it is not always recommended to apply t-SNE directly. With very high-dimensional data, we might need to apply another dimensionality reduction technique first (such as PCA for dense data or TruncatedSVD for sparse data) and only then use t-SNE.
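A minimal sketch of that recommendation, assuming dense synthetic data and scikit-learn (not prescribed by the text), could look like this:

```python
# Minimal sketch: PCA first (1000 -> 50 dimensions), then t-SNE (50 -> 2).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 1000))   # toy stand-in for very wide, dense data

X_pca = PCA(n_components=50).fit_transform(X)                     # 1000 -> 50
X_2d = TSNE(n_components=2, random_state=0).fit_transform(X_pca)  # 50 -> 2
print(X_2d.shape)                  # (300, 2)
```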
Conclusion on t-SNE
t-Distributed Stochastic Neighbor Embedding (t-SNE) is one of the methods for dimensionality reduction. This non-linear technique is well suited to visualizing high-dimensional datasets. t-SNE is widely used in fields like Natural Language Processing, speech processing and image processing. There are also other techniques for reducing dimensionality in data, such as Singular Value Decomposition (SVD) and Principal Component Analysis (PCA).
Feel free to experiment with PCA and t-SNE here.
Further recommended readings: