Document worth reading: “An Interdisciplinary Comparison of Sequence Modeling Methods for Next-Element Prediction”
Data of a sequential nature arise in many application domains in forms of, e.g., textual data, DNA sequences, and software execution traces. Different research disciplines have developed methods to learn sequence models from such datasets: (i) in the machine learning field, methods such as (hidden) Markov models and recurrent neural networks have been developed and successfully applied to a wide range of tasks, (ii) in process mining, process discovery techniques aim to generate human-interpretable descriptive models, and (iii) in the grammar inference field, the focus is on finding descriptive models in the form of formal grammars. Despite their different focuses, these fields share a common goal – learning a model that accurately describes the behavior in the underlying data. These sequence models are generative, i.e., they can predict what elements are likely to occur after a given unfinished sequence. So far, these fields have developed mainly in isolation from each other and no comparison exists. This paper presents an interdisciplinary experimental evaluation that compares sequence modeling techniques on the task of next-element prediction on four real-life sequence datasets. The results indicate that machine learning techniques, which generally do not aim at interpretability, outperform techniques from the process mining and grammar inference fields, which aim to yield interpretable models, in terms of accuracy.
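To make the next-element prediction task concrete, here is a minimal sketch (not from the paper) of one of the generative sequence models the abstract mentions: a first-order Markov model that counts transitions between consecutive elements and predicts the most likely successor of an unfinished sequence. The toy traces are invented for illustration.

```python
# Minimal sketch, assuming a first-order Markov model over discrete symbols;
# the paper's actual experimental setup and datasets are not reproduced here.
from collections import Counter, defaultdict

def fit_markov(sequences):
    """Count transitions between consecutive elements in the training sequences."""
    transitions = defaultdict(Counter)
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, prefix):
    """Predict the most likely element to follow a given unfinished sequence."""
    counts = transitions.get(prefix[-1])
    if not counts:
        return None  # unseen context: the model makes no prediction
    return counts.most_common(1)[0][0]

# Toy traces standing in for, e.g., software execution logs.
traces = [list("abcabd"), list("abcabc"), list("abd")]
model = fit_markov(traces)
print(predict_next(model, list("ab")))  # -> 'c' ('b'->'c' seen more often than 'b'->'d')
```

Accurate methods such as recurrent neural networks would replace the transition table with learned hidden state, at the cost of the human-readable structure that process discovery and grammar inference methods aim for.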