Deep Learning: A Critical Appraisal (Marcus, 2018).

Summary

Gary Marcus is a professor of Psychology and Neural Science at New York University and a co-author of (Davis & Marcus, 2015). In this paper, he reflects on the current limitations of deep learning and emphasizes that DL is not a universal tool: we need to understand what it is, and is not, good for.

Limits on the scope of deep learning

  • Generalization comes in two flavors: interpolation between known examples, and extrapolation, which requires going beyond the space of known training examples. (See the first sketch after this list.)
  • Deep learning works best on classification problems with abundant training data in stable domains, where examples map onto a limited set of categories in a consistent way.
  • Deep learning currently lacks a mechanism for learning abstractions through explicit, verbal definition.
  • Many adversarial-example studies show that what deep learning learns is often extremely superficial: small, targeted perturbations can flip a model's decision. (See the second sketch after this list.)
  • Language has a hierarchical structure that deep learning so far has no natural way to represent. This is a good point and might be helpful for model design. A related question: can PCE learn hierarchical structure?
  • Other issues: open-ended inference, lack of transparency, poor integration with prior knowledge, conflating correlation with causation (a limit shared by other statistical techniques), the assumption of a stable world, answers that are only approximations and cannot be fully trusted, and no sound guarantees about performance.
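
The interpolation/extrapolation distinction can be seen in a toy experiment (a minimal sketch, assuming scikit-learn is available; this is not an experiment from the paper): an MLP trained on y = x² over [-1, 1] is accurate inside the training range but fails far outside it, because a ReLU network can only extrapolate linearly.

```python
# Toy illustration of interpolation vs. extrapolation
# (a sketch assuming scikit-learn; not from Marcus's paper).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(-1.0, 1.0, size=(2000, 1))  # training range: [-1, 1]
y_train = X_train.ravel() ** 2                    # target function: y = x^2

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

print(net.predict([[0.5]]))  # interpolation: should be close to 0.25
print(net.predict([[4.0]]))  # extrapolation: typically far from 16.0,
                             # since a ReLU net extrapolates linearly
```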
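
The adversarial point can likewise be shown in a few lines (a hand-crafted sketch with a fixed logistic model and made-up weights, not a result from the paper): a small, sign-of-gradient perturbation of the input flips the classification even though the input barely changes.

```python
# FGSM-style adversarial perturbation on a fixed logistic model
# (a minimal sketch with handcrafted weights, not from the paper).
import numpy as np

w = np.array([1.0, -2.0, 0.5])  # hypothetical "trained" weights
b = 0.1

def predict_prob(x):
    # Sigmoid of the linear score w.x + b
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.3, -0.2, 0.4])  # clean input, classified positive
eps = 0.3

# For a linear model the gradient of the score w.r.t. the input is just w,
# so stepping against sign(w) lowers the score fastest per coordinate.
x_adv = x - eps * np.sign(w)

print(predict_prob(x))      # ~0.73 -> class 1
print(predict_prob(x_adv))  # ~0.49 -> class flips to 0
```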

Future Direction

Integrate symbolic systems, which excel at inference and abstraction, with deep learning, which excels at perceptual classification.

  • symbolic systems: explicit representations of abstract relationships (a toy hybrid sketch is the second one below this list).
  • convolution: translational invariance built in as a prior.
  • hierarchical structure in natural language; for the inability of seq2seq RNNs to generalize compositionally, see (Lake & Baroni, 2018) and the first sketch below.
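
On the compositional point: Lake & Baroni (2018) test seq2seq RNNs on SCAN, which maps commands like "jump twice" to action sequences. The ground-truth mapping is trivially compositional for a symbolic program (below is a simplified sketch of a SCAN-like fragment, not their actual grammar), yet the networks in the paper failed to generalize systematically, e.g. when "jump" appeared only in isolation during training.

```python
# A tiny interpreter for a SCAN-like command fragment (a simplified
# sketch, not Lake & Baroni's full grammar): trivially compositional
# as a symbolic program, yet hard for seq2seq RNNs to induce from data.
PRIMITIVES = {"jump": ["JUMP"], "walk": ["WALK"], "run": ["RUN"]}

def interpret(command):
    words = command.split()
    actions = PRIMITIVES[words[0]][:]  # base action
    if len(words) > 1:                 # apply one repetition modifier
        if words[1] == "twice":
            actions = actions * 2
        elif words[1] == "thrice":
            actions = actions * 3
    return actions

print(interpret("jump"))        # ['JUMP']
print(interpret("jump twice"))  # ['JUMP', 'JUMP']
print(interpret("run thrice"))  # ['RUN', 'RUN', 'RUN']
```

The symbolic interpreter generalizes to any primitive–modifier combination by construction; that systematicity is exactly what the networks failed to acquire.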
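
And a toy sketch of the proposed hybrid (illustrative only; the names, stub model, and rules are hypothetical, not an architecture from the paper): a stand-in "perception" module outputs a discrete symbol, and a small rule base chains explicit rules over that symbol, an abstraction step performed outside the network.

```python
# Toy hybrid pipeline (illustrative sketch, not from the paper):
# a "neural" perception stub produces a discrete symbol, and a
# symbolic rule base performs the inference step over symbols.
def perceive(image):
    # Stand-in for a trained classifier; here a hypothetical lookup.
    fake_model = {"img1": {"cat": 0.9, "dog": 0.1},
                  "img2": {"dog": 0.8, "cat": 0.2}}
    probs = fake_model[image]
    return max(probs, key=probs.get)  # discrete symbol out

RULES = {("cat",): "mammal", ("dog",): "mammal", ("mammal",): "animal"}

def infer(symbol):
    # Chain explicit rules to a fixed point: an abstraction step
    # the perceptual model never had to learn.
    derived = {symbol}
    changed = True
    while changed:
        changed = False
        for premise, conclusion in RULES.items():
            if set(premise) <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer(perceive("img1")))  # {'cat', 'mammal', 'animal'}
```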

References

  1. Marcus, G. (2018). Deep learning: A critical appraisal. ArXiv Preprint ArXiv:1801.00631.
  2. Davis, E., & Marcus, G. (2015). Commonsense reasoning and commonsense knowledge in artificial intelligence. Communications of the ACM, 58(9), 92–103.
  3. Lake, B., & Baroni, M. (2018). Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. International Conference on Machine Learning, 2873–2882.