Time: April 24, 2026 (Friday, 13:00–15:00)
Location: Wan-Rong Lecture Hall (萬榮講堂), B1, Yin-Lu Electrical Engineering and Computer Science Building (音律電機資訊大樓), National Taipei University, Sanxia Campus
Outline:
Extensive experimental studies have demonstrated the great success of deep learning (DL) in practical applications. However, DL remains by and large a mysterious “black box”, spurring recent theoretical research into its underlying principles. In this talk, we share our reflections on, and the results we have obtained about, the mathematics of DL for data classification. Since the Euclidean metric on the network weight space typically fails to discriminate between DL networks according to their classification performance, we propose, from a probabilistic point of view, a meaningful distance measure under which DL networks with similar performance are close. The proposed distance measure induces an equivalence relation on network weights such that networks with identical classification performance belong to the same equivalence class. This allows us to construct the associated quotient set, on which the proposed distance measure is provably a metric. We then show that the resulting metric quotient space is compact, apart from a vanishingly small subset. Our study contributes to a fundamental understanding of DL, though its implications for practical algorithmic analysis and design remain to be explored.
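As a minimal illustration of the phenomenon the abstract describes (this is a hypothetical sketch, not the speaker's actual construction), consider measuring the distance between two networks by the probability that they classify a random input differently. Two weight matrices that are far apart in Euclidean distance can then have distance zero whenever they induce identical classification decisions; for instance, positively rescaling the weights of an argmax classifier changes its Euclidean position but not a single prediction. The function `disagreement_distance` below is an assumed name for this Monte Carlo estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(W, X):
    """Toy linear 'network': predicted class = argmax of the scores X @ W."""
    return np.argmax(X @ W, axis=1)

def disagreement_distance(W1, W2, X):
    """Monte Carlo estimate of P[ f_{W1}(x) != f_{W2}(x) ] over sampled inputs x.

    This is a pseudometric on weight space: it is zero exactly when the two
    networks make identical decisions on every sampled input, so weights with
    identical classification behavior collapse into one equivalence class.
    """
    return float(np.mean(predict(W1, X) != predict(W2, X)))

# Random inputs and a random 5-feature, 3-class weight matrix.
X = rng.normal(size=(10_000, 5))
W = rng.normal(size=(5, 3))

W_scaled = 2.0 * W                  # far from W in Euclidean distance ...
W_other = rng.normal(size=(5, 3))   # ... unlike an unrelated network

print(disagreement_distance(W, W_scaled, X))  # 0.0: same decisions everywhere
print(disagreement_distance(W, W_other, X))   # typically > 0
print(np.linalg.norm(W - W_scaled))           # yet Euclidean distance is large
```

The first comparison shows why the Euclidean metric is uninformative here: `W` and `2.0 * W` yield identical classifications (scaling all scores by a positive constant preserves the argmax), so a performance-based distance places them in the same equivalence class even though their Euclidean distance is large.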
