Document worth reading: “On the Robustness of Projection Neural Networks For Efficient Text Representation: An Empirical Study”

Recently, there has been strong interest in developing natural language applications that live on personal devices such as mobile phones, watches and IoT, with the goal of preserving user privacy and keeping memory footprints low. Advances in Locality-Sensitive Hashing (LSH)-based projection networks have demonstrated state-of-the-art performance without any embedding lookup tables, instead computing text representations on the fly. However, previous work has not investigated "What makes projection neural networks effective at capturing compact representations for text classification?" and "Are these projection models resistant to perturbations and misspellings in input text?". In this paper, we analyze and answer these questions through perturbation analyses and by running experiments on multiple dialog act prediction tasks. Our results show that the projections are resistant to perturbations and misspellings compared with widely-used recurrent architectures that use word embeddings. On the ATIS intent prediction task, when evaluated with perturbed input data, we observe that the performance of recurrent models that use word embeddings drops significantly, by more than 30%, compared with just 5% for projection networks, showing that LSH-based projection representations are robust and consistently yield high-quality performance.
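To make the core idea concrete, here is a minimal sketch (not the paper's implementation) of an LSH-style projection: character trigrams of the input are hashed directly into a fixed-size bit vector, so no embedding table is needed, and a small misspelling only flips the few bits tied to the affected trigrams. The function name, dimensions, and use of MD5 as the hash are illustrative assumptions.

```python
import hashlib


def lsh_projection(text, dim=64, n_hashes=4):
    """Hash character trigrams of `text` into a fixed-size bit vector.

    Illustrative sketch of an LSH-based projection: no embedding lookup
    table is stored; the representation is computed on the fly, and
    similar strings (e.g. with a single misspelling) share most bits.
    MD5 stands in for whatever hash family a real projection layer uses.
    """
    bits = [0] * dim
    padded = f"#{text.lower()}#"  # pad so edge characters form trigrams too
    trigrams = {padded[i:i + 3] for i in range(len(padded) - 2)}
    for gram in trigrams:
        for seed in range(n_hashes):  # several hashes per feature, as in LSH
            h = hashlib.md5(f"{seed}:{gram}".encode()).hexdigest()
            bits[int(h, 16) % dim] = 1
    return bits


clean = lsh_projection("book a flight to boston")
typo = lsh_projection("book a flihgt to boston")
# Fraction of the clean representation's set bits preserved under the typo:
overlap = sum(a & b for a, b in zip(clean, typo)) / sum(clean)
```

Because only the trigrams touching the transposed letters change, most bits survive the perturbation, which is the intuition behind the robustness results reported above.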