Authors
Xiaoxiang Zhu, Mengshu Hou, Xiaoyang Zeng and Hao Zhu, University of Electronic Science and Technology of China, China
Abstract
Most supervised event detection (ED) systems rely heavily on manual annotations and incur high human-effort costs when applied to new event types. To tackle this problem, we turn our attention to few-shot learning (FSL). As a typical solution to FSL, frameworks based on cross-modal feature generation achieve promising performance on image classification, which inspires us to adapt this approach to the ED task. In this work, we propose a model that extracts latent semantic features from event mentions, type structures, and type names, then maps these three modalities into a shared low-dimensional latent space with a modality-specific aligned variational autoencoder enhanced by adversarial training. We evaluate the quality of our latent representations by training a CNN classifier to perform the ED task. Experiments conducted on the ACE2005 dataset show an improvement of 12.67% in F1-score when adversarial training is introduced into the VAE model, and our method is comparable with an existing transfer learning framework for ED.
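The core idea summarized above (modality-specific encoders mapping paired inputs into one shared latent space, trained with within- and cross-modality reconstruction plus a KL term) can be illustrated with a toy numpy sketch. All dimensions, weight matrices, and the exact loss terms here are illustrative assumptions, not the paper's implementation; in particular, the adversarial discriminator is omitted and alignment is approximated by a simple squared distance between posterior means.

```python
import numpy as np

# Hypothetical dimensions: d_text for event-mention features, d_name for
# type-name features, k for the shared latent space. Weights are random
# placeholders; a real model would learn them by gradient descent.
d_text, d_name, k = 16, 8, 4
rng = np.random.default_rng(0)

W = {
    "mu_text": rng.standard_normal((d_text, k)) * 0.1,
    "lv_text": rng.standard_normal((d_text, k)) * 0.1,
    "mu_name": rng.standard_normal((d_name, k)) * 0.1,
    "lv_name": rng.standard_normal((d_name, k)) * 0.1,
    "dec_text": rng.standard_normal((k, d_text)) * 0.1,
    "dec_name": rng.standard_normal((k, d_name)) * 0.1,
}

def reparameterize(mu, logvar):
    # z = mu + sigma * eps: the standard VAE reparameterization trick.
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def kl_term(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), averaged over the batch.
    return 0.5 * np.mean(np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1))

def aligned_vae_loss(x_text, x_name):
    mu_t, lv_t = x_text @ W["mu_text"], x_text @ W["lv_text"]
    mu_n, lv_n = x_name @ W["mu_name"], x_name @ W["lv_name"]
    z_t, z_n = reparameterize(mu_t, lv_t), reparameterize(mu_n, lv_n)

    # Within-modality reconstruction.
    recon = (np.mean((z_t @ W["dec_text"] - x_text) ** 2)
             + np.mean((z_n @ W["dec_name"] - x_name) ** 2))
    # Cross-modality reconstruction: decode each latent code with the
    # OTHER modality's decoder, encouraging a shared latent space.
    cross = (np.mean((z_t @ W["dec_name"] - x_name) ** 2)
             + np.mean((z_n @ W["dec_text"] - x_text) ** 2))
    # Distribution alignment: pull the paired posterior means together.
    align = np.mean((mu_t - mu_n) ** 2)

    return recon + cross + align + kl_term(mu_t, lv_t) + kl_term(mu_n, lv_n)

# Toy batch of 5 paired (event mention, type name) feature vectors.
x_text = rng.standard_normal((5, d_text))
x_name = rng.standard_normal((5, d_name))
print(aligned_vae_loss(x_text, x_name))
```

A latent representation learned under such an objective can then be fed to any downstream classifier (a CNN in the paper) for event-type prediction on novel types.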
Keywords
Event detection, Few-shot learning, Cross-modal generation, Variational autoencoder, GAN.