Dense Associative Memories are generalizations of Hopfield networks to higher-order (higher than quadratic) interactions between the spins/neurons. I will describe a relationship between these models and the neural networks commonly used in deep learning. From the perspective of associative memory, such models deserve attention because they can store a much larger number of memories than the quadratic case. From the perspective of deep learning, they make it possible to control the kind of representation that a network learns from a given dataset: small powers of the interaction vertex lead to feature-based representations, while large powers lead to prototype-based representations. These Dense Associative Memories can also be driven by images processed with the convolutional neural networks commonly used in image analysis. I will discuss the potential of this idea for mitigating the problem of adversarial images (very small changes to an input image that lead to a gross misclassification) in computer vision.
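The higher-order interactions can be made concrete with a small numerical sketch. The code below assumes the polynomial energy function E(σ) = −Σ_μ (ξ^μ · σ)^n from the Dense Associative Memory literature, with n = 3 as the interaction power; the function and parameter names (`dam_recall`, `sweeps`) are illustrative, not taken from the talk. A stored pattern corrupted in 15 of 100 spins is recovered by asynchronous spin updates:

```python
import numpy as np

def dam_recall(sigma, memories, n=3, sweeps=10):
    """Set each spin, asynchronously, to the sign that lowers
    the energy E(sigma) = -sum_mu (xi_mu . sigma)**n."""
    sigma = sigma.copy()
    for _ in range(sweeps):
        prev = sigma.copy()
        for i in range(sigma.size):
            # overlap of every memory with the state, excluding spin i
            partial = memories @ sigma - memories[:, i] * sigma[i]
            e_plus = np.sum((partial + memories[:, i]) ** n)   # spin i = +1
            e_minus = np.sum((partial - memories[:, i]) ** n)  # spin i = -1
            sigma[i] = 1.0 if e_plus >= e_minus else -1.0
        if np.array_equal(sigma, prev):   # converged to a fixed point
            break
    return sigma

rng = np.random.default_rng(0)
N, K = 100, 20                                   # 100 spins, 20 stored patterns
memories = rng.choice([-1.0, 1.0], size=(K, N))
probe = memories[0].copy()
probe[:15] *= -1.0                               # corrupt 15 of the 100 spins
recovered = dam_recall(probe, memories, n=3)
```

Raising n sharpens the energy minima around individual memories, which is what allows many more than N patterns to be stored and, as noted above, pushes the learned representations from feature-like toward prototype-like.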
Published on March 14, 2018 by Microsoft Research