Document worth reading: “On the Transferability of Representations in Neural Networks Between Datasets and Tasks”
Deep networks, composed of multiple layers of hierarchical distributed representations, tend to learn low-level features in initial layers and transition to high-level features towards final layers. Paradigms such as transfer learning, multi-task learning, and continual learning leverage this notion of generic hierarchical distributed representations to share knowledge across datasets and tasks. Herein, we study the layer-wise transferability of representations in deep networks across several datasets and tasks and note some interesting empirical observations.
On the Transferability of Representations in Neural Networks Between Datasets and Tasks
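
As a rough illustration of the layer-wise sharing these paradigms rely on, here is a minimal pure-Python sketch (all names and the toy network structure are hypothetical, not from the paper) that copies a source network's first k layers into a target network and freezes them, the basic move behind layer-wise transfer learning:

```python
import random

def make_network(layer_sizes, seed=0):
    """Toy stand-in for a deep network: one weight matrix per layer."""
    rng = random.Random(seed)
    layers = []
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        weights = [[rng.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)]
        layers.append({"weights": weights, "frozen": False})
    return layers

def transfer_first_k(source, target, k):
    """Copy the first k layers from a source network into a target
    network and freeze them; later layers remain trainable."""
    for i in range(k):
        target[i]["weights"] = [row[:] for row in source[i]["weights"]]
        target[i]["frozen"] = True  # early, generic layers are not fine-tuned
    return target

# Transfer the two earliest (most generic) layers of a 3-layer network.
src = make_network([4, 8, 8, 2], seed=1)
tgt = make_network([4, 8, 8, 2], seed=2)
tgt = transfer_first_k(src, tgt, k=2)
print([layer["frozen"] for layer in tgt])  # → [True, True, False]
```

The choice of k is exactly what the paper probes empirically: transferring only early layers shares generic low-level features, while transferring deeper layers shares increasingly task-specific ones.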