Document worth reading: “Deep Learning Works in Practice. But Does it Work in Theory?”

Deep learning relies on a very specific kind of neural network: one obtained by superposing several neural layers. In the last few years, deep learning has achieved major breakthroughs in many tasks such as image analysis, speech recognition, and natural language processing. Yet there is no theoretical explanation of this success. In particular, it is not clear why the deeper the network, the better it performs. We argue that the explanation is intimately connected to a key feature of the data collected from our surrounding universe to feed machine learning algorithms: large non-parallelizable logical depth. Roughly speaking, we conjecture that the shortest computational descriptions of the universe are algorithms with inherently large computation times, even when a large number of computers are available for parallelization. Interestingly, this conjecture, combined with the folklore conjecture in theoretical computer science that $P \neq NC$, explains the success of deep learning.

Deep Learning Works in Practice. But Does it Work in Theory?
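
For readers new to the central notion, here is a minimal sketch of logical depth in Bennett's standard formulation, which is presumably what the abstract refers to (the paper may use a variant). The depth of a string $x$ at significance level $s$ is the running time of the fastest program whose length is within $s$ bits of the shortest description of $x$:

$$\mathrm{depth}_s(x) \;=\; \min\bigl\{\, T_U(p) \;:\; U(p) = x,\ |p| \le K(x) + s \,\bigr\},$$

where $U$ is a universal prefix machine, $T_U(p)$ the running time of $U$ on input $p$, and $K(x)$ the Kolmogorov complexity of $x$. The folklore conjecture $P \neq NC$ says that some problems solvable in polynomial time admit no polylogarithmic-depth, polynomial-size circuits, i.e. they are inherently sequential. Roughly, the abstract's argument is that real-world data carries this kind of irreducibly sequential depth, which a deep (many-layer) computation can reproduce but a shallow, highly parallel one cannot.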