| Category | Ph.D. Dissertation Defense |
| --- | --- |
| Date | 2021-10-14 (Thu), 16:00–18:30 |
| Seminar Room | Bldg. 27, Room 220 |
| Speaker | Jaewoong Choi (Seoul National University) |
| Advisor | Myungjoo Kang |
| Notes | |
This thesis investigates how to find representations of data that satisfy two desired properties: disentanglement and transform equivariance. In representation learning, a representation is called disentangled if each axis corresponds to exactly one generative factor of the data while being invariant to the others. A representation is transform-equivariant if it encodes not only the presence of a feature but also its variation.
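To make the transform-equivariance condition concrete, here is a minimal sketch (an illustration of the general definition, not code from the thesis): a circular 1D convolution commutes with translation, so shifting the input and then encoding gives the same representation as encoding first and then shifting.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Circular 1D convolution: a classic translation-equivariant map."""
    n = len(x)
    return np.array([np.dot(w, np.roll(x, -i)[:len(w)]) for i in range(n)])

x = rng.normal(size=16)   # toy "data"
w = rng.normal(size=3)    # toy encoder weights

shift = 5
lhs = encode(np.roll(x, shift), w)   # transform the input, then encode
rhs = np.roll(encode(x, w), shift)   # encode, then apply the same transform
assert np.allclose(lhs, rhs)         # equivariance: the two paths agree
```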
In this thesis, we propose three methods for finding better representations of data: two for disentanglement and one for transform equivariance. First, we propose a Variational Autoencoder (VAE) model called Discond-VAE. Discond-VAE introduces a private latent variable to disentangle the class-dependent continuous generative factors of the data. Second, we suggest a method for finding latent traversal directions on pretrained Generative Adversarial Networks (GANs) that disentangle the generative factors of the data. We call these directions Local Basis. Local Basis discovers how the black-box latent space is spanned locally. This local-geometry-aware property provides more stable image traversals and an evaluation method for the global warpage of the latent space. Finally, we propose a capsule network architecture called AR CapsNet, which introduces Attention Routing and Capsule Activation. AR CapsNet outperforms CapsNet with fewer than half the parameters while preserving the transform-equivariance property.
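The abstract does not spell out how Local Basis is computed; as a hedged illustration of the local-geometry idea, the sketch below estimates local latent directions of a toy black-box generator from the SVD of a finite-difference Jacobian. The toy generator, the dimensions, and the Jacobian-SVD mechanism are assumptions made for this example, not the thesis' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 8))

def generator(z):
    """Stand-in for a pretrained black-box generator (hypothetical toy model)."""
    return np.tanh(W @ z)

def local_directions(g, z0, eps=1e-4):
    """Estimate the latent directions along which g varies most around z0,
    via the SVD of a finite-difference Jacobian (an assumed reading of
    'how the latent space spans locally')."""
    d = len(z0)
    J = np.stack([(g(z0 + eps * e) - g(z0 - eps * e)) / (2 * eps)
                  for e in np.eye(d)], axis=1)        # Jacobian, shape (out_dim, d)
    _, sigma, Vt = np.linalg.svd(J, full_matrices=False)
    return Vt, sigma  # rows of Vt: latent directions, ordered by local effect

z0 = rng.normal(size=8)
V, sigma = local_directions(generator, z0)
z_step = z0 + 0.5 * V[0]  # traverse along the strongest local direction
```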
Presentation time: 2021-10-14, 5:00 PM