"Deep Declarative Networks: a new hope"
Stephen Gould, Richard Hartley, Dylan Campbell
We explore a new class of end-to-end learnable models wherein data processing nodes (or network layers) are defined in terms of desired behaviour rather than an explicit forward function. Specifically, the forward function is implicitly defined as the solution to a mathematical optimization problem. Consistent with nomenclature in the programming languages community, we name these models deep declarative networks. Importantly, we show that the class of deep declarative networks subsumes current deep learning models. Moreover, invoking the implicit function theorem, we show how gradients can be back-propagated through many declaratively defined data processing nodes thereby enabling end-to-end learning. We show how these declarative processing nodes can be implemented in the popular PyTorch deep learning software library allowing declarative and imperative nodes to co-exist within the same network. We also provide numerous insights and illustrative examples of declarative nodes and demonstrate their application for image and point cloud classification tasks.
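The core idea — a layer whose forward pass is the solution of an optimization problem, differentiated via the implicit function theorem — can be illustrated with a minimal NumPy sketch. This is not the authors' PyTorch library; the objective f(x, u) = u⁴/4 − xu and all function names here are illustrative choices. At a minimizer, stationarity gives ∂f/∂u = 0, and the implicit function theorem yields dy/dx = −H⁻¹B, where H = ∂²f/∂u² and B = ∂²f/∂u∂x:

```python
import numpy as np

def solve(x, u0=1.0, iters=50):
    """Forward pass of a declarative node: minimise f(x, u) = u**4/4 - x*u
    over u with Newton's method. Stationarity df/du = u**3 - x = 0 gives
    the minimiser y(x) = x**(1/3)."""
    u = u0
    for _ in range(iters):
        grad = u**3 - x      # df/du
        hess = 3.0 * u**2    # d2f/du2
        u -= grad / hess     # Newton step
    return u

def backward(x, u):
    """Backward pass via the implicit function theorem: dy/dx = -H^{-1} B,
    with H = d2f/du2 = 3u^2 and B = d2f/(du dx) = -1 at the optimum."""
    H = 3.0 * u**2
    B = -1.0
    return -B / H

x = 2.0
y = solve(x)                 # forward: approx 2**(1/3)
dy_dx = backward(x, y)       # gradient without unrolling the solver

# sanity check against central finite differences
eps = 1e-6
fd = (solve(x + eps) - solve(x - eps)) / (2.0 * eps)
```

Note that the backward pass never differentiates through the Newton iterations themselves — only the optimality condition at the solution matters, which is what lets declarative nodes coexist with ordinary imperative layers.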
💻SUBSCRIBE AND FOLLOW:
🎧Subscribe on your favourite podcast app: https://talking.papers.podcast.itzikbs.com
📧Subscribe to our mailing list: http://eepurl.com/hRznqb
🐦Follow us on Twitter: https://twitter.com/talking_papers
🎥YouTube Channel: https://bit.ly/3eQOgwP
TUTORIALS AND WORKSHOPS:
ECCV 2020 Tutorial
CVPR 2020 Workshop
"Deep Declarative Networks: a new hope" Preprint
"Deep Declarative Networks"
📚"On differentiating parameterized argmin and argmax problems with application to bi-level optimization"
📚"OptNet: Differentiable Optimization as a Layer in Neural Networks"
If you would like to be a guest, sponsor or just share your thoughts, feel free to reach out via email: email@example.com
#talkingpapers #TPAMI2021 #deepdeclarativenetworks
#machinelearning #deeplearning #AI #neuralnetworks #research #computervision #artificialintelligence
Recorded on March 31st, 2021.