Multiscale Training of Convolutional Neural Networks


Category: Invited Talk
Date: Tuesday, March 24, 2026, 16:00–17:00
Room: Building 27, Room 220
Speaker: Eldad Haber (University of British Columbia)
Host Professor: 홍영준

Convolutional Neural Networks (CNNs) are the backbone of many deep learning methods, but optimizing them remains computationally expensive. To address this, we explore multiscale training frameworks and mathematically identify key challenges, particularly when dealing with noisy inputs. Our analysis reveals that, in the presence of noise, the gradient of standard CNNs in multiscale training may fail to converge as the mesh size approaches zero, undermining the optimization process. This insight drives the development of Mesh-Free Convolutions (MFCs), which are independent of the input scale and avoid the pitfalls of traditional convolution kernels. We demonstrate that MFCs, with their robust gradient behavior, ensure convergence even with noisy inputs, enabling more efficient neural network optimization in multiscale settings. To validate the generality and effectiveness of our multiscale training approach, we show that (i) MFCs can deliver substantial computational speedups without sacrificing performance in practice, and (ii) standard convolutions also benefit from our multiscale training framework.
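As a rough illustration of the coarse-to-fine idea behind multiscale training (a minimal NumPy sketch, not the speaker's method or implementation — the toy 1D model, the average-pooling downsampler, and the schedule of factors are all assumptions for illustration), one can fit a small convolution kernel first on heavily downsampled data and then warm-start progressively finer meshes with the kernel learned at the coarser one:

```python
import numpy as np

def downsample(x, factor):
    # Average-pool by `factor` to emulate the signal on a coarser mesh.
    return x[: len(x) // factor * factor].reshape(-1, factor).mean(axis=1)

def train_at_scale(x, y, w, lr=0.1, steps=200):
    # Least-squares fit of a 3-tap convolution kernel w by gradient descent.
    for _ in range(steps):
        pred = np.convolve(x, w, mode="same")
        # d(pred)/d(w_k) is x convolved with the k-th unit kernel (linearity).
        grad = np.array([
            np.convolve(x, np.eye(3)[k], mode="same") @ (pred - y)
            for k in range(3)
        ]) / len(x)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([0.25, 0.5, 0.25])       # target smoothing kernel (toy)
x = rng.standard_normal(1024)
y = np.convolve(x, true_w, mode="same")

w = np.zeros(3)
for factor in (8, 4, 2, 1):                # coarse-to-fine mesh schedule
    xs, ys = downsample(x, factor), downsample(y, factor)
    w = train_at_scale(xs, ys, w)          # warm-start from the coarser mesh
```

Most optimization steps here run on signals 2–8x shorter than the full-resolution data, which is the source of the computational savings; the talk's analysis concerns when the coarse-mesh gradients used in such a schedule remain reliable as the mesh is refined, especially under input noise.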
