Attribute Embedding using Neural Networks

A popular and powerful approach employed in Natural Language Processing (NLP) applications is word embedding. By constructing representations of words within a Euclidean vector space of a pre-specified dimension, individual words are endowed with semantic as well as syntactic information through their relative position in space [1]. But what if the problem at hand does not involve natural language? Can these models still be of any benefit? The purpose of this blog post is to demonstrate how some of the ideas behind word embedding models can be used to learn useful and more general attribute embeddings.
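To make the key idea concrete, here is a minimal sketch of how relative position in an embedding space encodes similarity. The vectors below are made up for illustration (real word embeddings are learned from large text corpora), but the geometry is the same: related words end up pointing in similar directions, which we can measure with cosine similarity.

```python
import numpy as np

# Hand-picked, illustrative 3-dimensional "embeddings" -- in practice
# these vectors would be learned, and have hundreds of dimensions.
embeddings = {
    "king":  np.array([0.90, 0.80, 0.10]),
    "queen": np.array([0.85, 0.90, 0.15]),
    "apple": np.array([0.10, 0.20, 0.95]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: close to 1 means
    the vectors point in nearly the same direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Semantically related words sit closer together in the space:
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low
```

Nothing in this picture is specific to words: any discrete attribute that co-occurs with other attributes can, in principle, be placed in such a space, which is the idea this post builds on.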