Robustness of deep neural networks to adversarial attack: from heuristic methods to certified methods


Department: Department of Mathematical Sciences, Seoul National University
Category: PhD thesis defense
Date: 2021-05-27 (Thu) 17:00-19:00
Room: Bldg. 27, Room 116
Speaker: 이성윤 (Department of Mathematical Sciences, Seoul National University)
※ Presentation time: 17:00-18:00

Abstract: Deep learning has shown successful results in many applications. However, deep neural networks have been shown to be vulnerable to small but adversarially designed perturbations of the input that can mislead a network into predicting a wrong label. Many such adversarial attacks, and defenses against them, have been studied. However, Athalye et al. (2018) showed that most defenses rely on specific predefined adversarial attacks and can be completely broken by stronger adaptive attacks. Certified methods have therefore been proposed to guarantee stable prediction for every input within a perturbation set. We present this transition from heuristic defenses to certified defenses and investigate two key features of certified defenses: tightness and smoothness.
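To illustrate what "guaranteeing stable prediction within a perturbation set" means, the following is a minimal sketch of interval bound propagation (IBP), one simple certified bound; the two-layer ReLU network, its random weights, and the toy input are hypothetical and are not taken from the thesis.

```python
# Minimal IBP sketch: certify that no L-infinity perturbation of radius eps
# around x can change the predicted label of a small ReLU network.
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate an elementwise box [l, u] through x -> W x + b."""
    c, r = (l + u) / 2.0, (u - l) / 2.0      # center and radius of the box
    c_out = W @ c + b
    r_out = np.abs(W) @ r                    # radius grows by |W|
    return c_out - r_out, c_out + r_out

def certify(x, eps, W1, b1, W2, b2, label):
    """Return True if the label is provably unchanged on the whole box."""
    l, u = x - eps, x + eps                  # input perturbation set
    l, u = ibp_affine(l, u, W1, b1)
    l, u = np.maximum(l, 0.0), np.maximum(u, 0.0)   # ReLU is monotone
    l, u = ibp_affine(l, u, W2, b2)          # bounds on the output logits
    # Certified if the worst-case logit of the true class still beats the
    # best-case logit of every other class (a loose but sound check).
    return l[label] > np.delete(u, label).max()

# Toy usage with random weights; a trained network would be used in practice.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
x = rng.normal(size=4)
print(certify(x, eps=0.01, W1=W1, b1=b1, W2=W2, b2=b2, label=0))
```

Tighter bounds than this interval relaxation, and training methods that make the network smooth enough for such bounds to be useful, are the kind of questions the talk's "tightness and smoothness" refer to.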
