Deep learning has achieved remarkable successes in recent years, but much work remains to understand why such over-parameterized models generalize so well. In this presentation, Samy will cover recent work showing empirically interesting relations between learned internal representations and generalization.