Over the last decades, several powerful neural network and deep learning architectures have been proposed, including convolutional neural networks (CNN), stacked autoencoders, deep Boltzmann machines (DBM), deep generative models and generative adversarial networks (GAN). On the other hand, with support vector machines (SVM) and kernel machines, solid foundations in learning theory and optimization have been achieved. What's next? Could one combine the best of both worlds? In this talk, we outline a unifying picture and show several new synergies, in which model representations and duality principles play an important role. A recent example is restricted kernel machines (RKM), which connect least squares support vector machines (LS-SVM) to restricted Boltzmann machines (RBM). New developments will be shown for deep learning, generative models, multi-view and tensor-based models, latent space exploration, robustness and explainability.
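To give a flavour of the RKM connection, a rough sketch (the symbols $\lambda$, $\eta$, $h_i$, $W$, $\varphi$ are notational assumptions introduced here for illustration, not taken from the abstract): the LS-SVM least squares loss can be upper-bounded by introducing conjugate hidden features $h_i$ through a Fenchel–Young type inequality,

```latex
\frac{1}{2\lambda}\, e^\top e \;\geq\; e^\top h - \frac{\lambda}{2}\, h^\top h,
\qquad \forall\, e, h ,
```

which, applied to the residuals $e_i = y_i - W^\top \varphi(x_i)$, yields an objective of the schematic form

```latex
\bar{J} \;=\; \sum_i \big( y_i - W^\top \varphi(x_i) \big)^\top h_i
\;-\; \frac{\lambda}{2} \sum_i h_i^\top h_i
\;+\; \frac{\eta}{2}\, \mathrm{Tr}\!\left( W^\top W \right).
```

The pairing term $-\varphi(x_i)^\top W h_i$ between the (feature-mapped) visible variables and the hidden features $h_i$ is what mirrors the RBM energy term $-v^\top W h$, making the link between the kernel-based and energy-based views.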