Computer Science/Discrete Mathematics Seminar II
Topic: A practical guide to deep learning
Affiliation: University of Toronto; Visitor, School of Mathematics
Date: Tuesday, November 21
Time/Room: 10:30am - 12:30pm / S-101
Neural networks have been around for many decades; an important question is what has led to their recent surge in performance and popularity. I will start with an introduction to deep neural networks, covering the terminology and the standard approaches to constructing them. I will focus on the two most prominent and successful families of networks: deep convolutional nets, originally developed for vision problems, and recurrent networks, used for speech and language tasks. In each case I will discuss some of the ideas believed to underlie the current successes, and their history. I will then describe how these two approaches come together in combined vision/text applications, such as image captioning.
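As a rough illustration of the kind of building block the talk will cover (not the speaker's material), the sketch below shows the core operation of a convolutional layer: a small kernel slid over an image, producing a feature map that is then passed through a ReLU nonlinearity. The image, kernel, and helper names are all hypothetical; real frameworks add padding, strides, channels, and learned kernel weights.

```python
# Minimal sketch of a convolutional layer's core operation:
# a "valid"-mode 2D cross-correlation (commonly called "convolution"
# in deep learning) followed by an elementwise ReLU.
# Illustrative only -- real layers handle channels, padding, stride,
# and learn their kernels from data.

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation on lists of lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = 0.0
            for di in range(kh):
                for dj in range(kw):
                    s += image[i + di][j + dj] * kernel[di][dj]
            row.append(s)
        out.append(row)
    return out

def relu(feature_map):
    """Elementwise rectified linear unit: max(0, x)."""
    return [[max(0.0, v) for v in row] for row in feature_map]

# A hand-crafted vertical-edge detector applied to a tiny image
# whose right half is bright: the edge shows up in the middle column.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_kernel = [
    [-1, 1],
    [-1, 1],
]
features = relu(conv2d(image, edge_kernel))
```

Stacking many such layers, with learned kernels, is what gives a deep convolutional net its ability to build up progressively more abstract visual features.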