Learning Multi-Modal Word Representation Grounded in Visual Context

Authors

  • Éloi Zablocki, LIP6; UPMC Univ Paris 06, UMR 7606, CNRS, Sorbonne Universités; F-75005 Paris
  • Benjamin Piwowarski, LIP6; UPMC Univ Paris 06, UMR 7606, CNRS, Sorbonne Universités; F-75005 Paris
  • Laure Soulier, LIP6; UPMC Univ Paris 06, UMR 7606, CNRS, Sorbonne Universités; F-75005 Paris
  • Patrick Gallinari, LIP6; UPMC Univ Paris 06, UMR 7606, CNRS, Sorbonne Universités; F-75005 Paris

DOI:

https://doi.org/10.1609/aaai.v32i1.11939

Keywords:

multimodal representations, representation learning, word similarity, feature-norm prediction, concreteness, word embeddings, visual context, spatial information

Abstract

Representing the semantics of words is a long-standing problem for the natural language processing community. Most methods compute word semantics from the textual context of words in large corpora. More recently, researchers have attempted to integrate perceptual and visual features. Most of these works consider the visual appearance of objects to enhance word representations, but they ignore the visual environment and context in which objects appear. We propose to unify text-based and vision-based techniques by simultaneously leveraging textual and visual context to learn multimodal word embeddings. We explore various choices for what can serve as a visual context and present an end-to-end method to integrate visual context elements into a multimodal skip-gram model. We report experiments and an extensive analysis of the results.
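To make the idea of a "multimodal skip-gram" concrete, the sketch below shows one way a skip-gram objective can be augmented with a visual-context term: the textual part is standard skip-gram with negative sampling, and the visual part pushes a word's embedding toward features of the scene in which its referent appears. This is not the authors' code; the class and method names, the dimensions, the linear projection into visual space, and the margin-based visual loss are illustrative assumptions.

    # Minimal sketch (not the paper's implementation) of a skip-gram loss
    # combined with a visual-context alignment term. All names, dimensions,
    # and the margin loss are assumptions made for illustration.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultimodalSkipGram(nn.Module):
        def __init__(self, vocab_size, dim=300, visual_dim=2048):
            super().__init__()
            self.in_embed = nn.Embedding(vocab_size, dim)    # target word embeddings
            self.out_embed = nn.Embedding(vocab_size, dim)   # context word embeddings
            # hypothetical mapping from word space into the visual feature space
            self.to_visual = nn.Linear(dim, visual_dim)

        def textual_loss(self, target, context, negatives):
            # standard skip-gram with negative sampling
            v = self.in_embed(target)                          # (B, d)
            pos = (v * self.out_embed(context)).sum(-1)        # (B,)
            neg = torch.bmm(self.out_embed(negatives),         # (B, K, d)
                            v.unsqueeze(-1)).squeeze(-1)       # (B, K)
            return -(F.logsigmoid(pos) + F.logsigmoid(-neg).sum(-1)).mean()

        def visual_loss(self, target, visual_ctx, margin=0.5):
            # align the predicted visual vector with features of the visual
            # context (the scene around the object), contrasted with shuffled
            # in-batch negatives
            pred = F.normalize(self.to_visual(self.in_embed(target)), dim=-1)
            visual_ctx = F.normalize(visual_ctx, dim=-1)
            pos = (pred * visual_ctx).sum(-1)
            neg = (pred * visual_ctx.roll(1, dims=0)).sum(-1)
            return F.relu(margin - pos + neg).mean()

        def forward(self, target, context, negatives, visual_ctx, alpha=1.0):
            # alpha trades off the textual and visual objectives
            return self.textual_loss(target, context, negatives) \
                 + alpha * self.visual_loss(target, visual_ctx)

In such a setup, visual_ctx would be a feature vector describing the surroundings of the object named by the target word (for example, pooled CNN features of the scene with the object masked out); the choice of what constitutes the visual context is exactly the design space the paper explores.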

Published

2018-04-27

How to Cite

Zablocki, É., Piwowarski, B., Soulier, L., & Gallinari, P. (2018). Learning Multi-Modal Word Representation Grounded in Visual Context. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11939