Document worth reading: “Word Sense Disambiguation with LSTM: Do We Really Need 100 Billion Words”
Recently, Yuan et al. (2016) demonstrated the effectiveness of Long Short-Term Memory (LSTM) networks for Word Sense Disambiguation (WSD). Their proposed method outperformed the prior state of the art on several benchmarks, but neither the training data nor the source code was released. This paper presents the results of a reproduction study of that method using only openly available datasets (GigaWord, SemCor, OMSTI) and software (TensorFlow). It emerged that state-of-the-art results can be obtained with much less data than suggested by Yuan et al. All code and trained models are made freely available.
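To give a feel for the kind of method being reproduced: in Yuan et al.'s approach, an LSTM language model produces a context vector for each occurrence of an ambiguous word, and a new occurrence is labeled by comparing its context vector against those of sense-annotated examples (e.g. from SemCor). The sketch below illustrates only the final nearest-centroid labeling step with placeholder vectors; in the actual method these vectors would come from the trained LSTM, and all function names here are illustrative, not from the paper's code.

```python
import math

def sense_centroids(context_vecs, sense_labels):
    """Average the context vectors of each sense's labeled examples."""
    sums, counts = {}, {}
    for vec, sense in zip(context_vecs, sense_labels):
        acc = sums.setdefault(sense, [0.0] * len(vec))
        for i, x in enumerate(vec):
            acc[i] += x
        counts[sense] = counts.get(sense, 0) + 1
    return {s: [x / counts[s] for x in acc] for s, acc in sums.items()}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def disambiguate(query_vec, centroids):
    """Label a context with the sense whose centroid is most similar to it."""
    return max(centroids, key=lambda s: cosine(query_vec, centroids[s]))
```

For example, with toy 2-d "context vectors" where financial uses of "bank" cluster near one axis and river uses near the other, `disambiguate` picks the nearest sense centroid:

```python
centroids = sense_centroids(
    [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]],
    ["bank%finance", "bank%finance", "bank%river"],
)
disambiguate([0.8, 0.2], centroids)  # -> "bank%finance"
```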