Visualizing complex data has always been a major challenge in data science: complex datasets often lack comprehensibility, which makes them extremely difficult to interpret for people who are not data scientists. Training deep learning models on these datasets aggravates the problem further, as the models tend to be opaque and hard to interpret.
This problem is especially acute with medical datasets, where the black-box nature of deep learning algorithms is a cause for concern and doctors have difficulty trusting the model's predictions. It is much easier to interpret the models and build trust if they are coupled with visualizations that can be readily explained to doctors.
Visualizing complex datasets, however, is an arduous task because of the huge number of features and the sparsity of the data. This problem can be tackled with variants of deep autoencoders, VAEs, VAE-GANs, etc., which can learn the latent structure of a complex dataset and map it to a 3-D space that can be readily understood.
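To make the idea concrete, here is a minimal sketch of the core autoencoder mechanism: high-dimensional data is squeezed through a 3-dimensional bottleneck, and the bottleneck activations become the coordinates for a 3-D visualization. This toy version uses a single linear layer for the encoder and decoder and a synthetic random dataset, purely for illustration; a real deep autoencoder or VAE would stack nonlinear layers (e.g. in Keras or PyTorch) and train on the actual medical data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a high-dimensional dataset: 200 samples, 50 features.
X = rng.normal(size=(200, 50))

n_in, n_latent = X.shape[1], 3  # 3-D bottleneck for visualization

# Encoder and decoder weight matrices, initialized small.
W_enc = rng.normal(scale=0.1, size=(n_in, n_latent))
W_dec = rng.normal(scale=0.1, size=(n_latent, n_in))

lr = 1e-3
losses = []
for _ in range(500):
    Z = X @ W_enc            # encode: project to 3-D latent space
    X_hat = Z @ W_dec        # decode: reconstruct the original features
    err = X_hat - X          # reconstruction error
    losses.append(float((err ** 2).mean()))
    # Gradient descent on mean squared reconstruction error.
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

Z = X @ W_enc  # final 3-D embedding, ready for a 3-D scatter plot
print(Z.shape)
```

Each row of `Z` is a 3-D point summarizing one sample, so the whole dataset can be shown in a single scatter plot that a non-specialist can inspect; a VAE would additionally regularize `Z` toward a smooth prior, which tends to produce better-organized embeddings.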