Generative Adversarial Network
■ minimax problem
$\underset{G}{min}\, \underset{D}{max}\; V(D, G) = \mathbb{E}_{x \sim P_{\text{data}}(x)} \big[ \log D(x) \big] + \mathbb{E}_{z \sim P_{z}(z)} \big[ \log (1-D(G(z))) \big]$
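As a sanity check, the value function $V(D, G)$ can be estimated by Monte Carlo sampling. Below is a minimal sketch with toy stand-ins (the data distribution, `D`, and `G` are all illustrative assumptions, not the paper's networks): the discriminator tries to push this quantity up, the generator to push it down.

```python
import numpy as np

rng = np.random.default_rng(0)

def D(x):
    # hypothetical discriminator: sigmoid score, higher = "looks real"
    return 1.0 / (1.0 + np.exp(-x))

def G(z):
    # hypothetical generator: a simple shift of the noise
    return z - 2.0

x_real = rng.normal(loc=2.0, scale=1.0, size=10000)  # x ~ p_data (toy choice)
z = rng.normal(size=10000)                           # z ~ p_z

# V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]
v = np.mean(np.log(D(x_real))) + np.mean(np.log(1.0 - D(G(z))))
print(v)
```

Since both $\log D(x)$ and $\log(1 - D(G(z)))$ are logs of values in $(0, 1)$, the estimate is always negative; training moves it toward its equilibrium value.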
■ Discriminator
$maximize\quad J^{(D)}= \mathbb{E}_{x \sim P_{\text{data}}(x)} \big[ \log D(x) \big] + \mathbb{E}_{z \sim P_{z}(z)} \big[ \log (1-D(G(z))) \big]$
With labels $y=1$ for real samples and $y=0$ for generated samples, maximizing $J^{(D)}$ is the same as minimizing the binary cross-entropy:

$\frac{1}{N}\sum_{i=1}^{N} \big[ y^{(i)}\,\log(\hat{y}^{(i)}) + (1-y^{(i)})\,\log(1- \hat{y}^{(i)}) \big] = -H(p, D)$

$minimize\quad H(p,D) = - \frac{1}{N}\sum_{i=1}^{N} \big[ y^{(i)}\,\log(\hat{y}^{(i)}) + (1-y^{(i)})\,\log(1- \hat{y}^{(i)}) \big]$
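The equivalence is easy to verify numerically. A minimal sketch (the discriminator outputs below are made-up values, not trained ones): with an equal number of real and fake samples, $J^{(D)}$ and the batch cross-entropy $H(p, D)$ differ only by sign and a factor of 2.

```python
import numpy as np

def binary_cross_entropy(y, y_hat):
    # H(p, D) = -1/N * sum[ y*log(y_hat) + (1-y)*log(1-y_hat) ]
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# Hypothetical discriminator outputs: D(x) on real data, D(G(z)) on fakes.
d_real = np.array([0.9, 0.8])   # should be driven toward 1
d_fake = np.array([0.1, 0.2])   # should be driven toward 0

# J^(D) = E[log D(x)] + E[log(1 - D(G(z)))]
j_d = np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# Cross-entropy over the combined batch, labels 1 for real, 0 for fake.
y_hat = np.concatenate([d_real, d_fake])
y = np.array([1.0, 1.0, 0.0, 0.0])
h = binary_cross_entropy(y, y_hat)

# With equal real/fake counts: J^(D) = -2 * H(p, D)
print(j_d, h)
```

So maximizing $J^{(D)}$ and minimizing the cross-entropy loss move the discriminator in exactly the same direction.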
■ Generator
$minimize\quad J^{(G)} = \mathbb{E}_{z \sim P_{z}(z)} \big[ \log (1-D(G(z))) \big]$
[ heuristic method ]
Early in training, the discriminator learns much faster than the generator, so $D(G(z))$ is close to 0 for most samples. As a result, the generator's gradient vanishes:

$\nabla_{\theta_{g}} \frac{1}{m} \sum_{i=1}^{m}\log\big(1-D(G(z^{(i)}))\big)\approx 0$

To work around this, the following objective is widely used instead:
$maximize\quad J^{(G)} = \mathbb{E}_{z \sim P_{z}(z)} \Big[\log D(G(z))\Big]$
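A quick numerical comparison shows why the heuristic helps. Treating the loss as a function of the scalar $D(G(z))$ (a simplification: the full gradient also includes the backprop terms through $D$ and $G$), the saturating loss has a nearly flat slope when the discriminator confidently rejects the fake, while the non-saturating loss has a very steep one.

```python
# When the discriminator is winning, D(G(z)) is close to 0.
d_gz = 0.01  # discriminator confidently rejects the fake sample

# Saturating loss: minimize log(1 - D)  ->  d/dD = -1 / (1 - D)
grad_saturating = -1.0 / (1.0 - d_gz)    # about -1.01: weak learning signal

# Heuristic (non-saturating): maximize log D  ->  d/dD = 1 / D
grad_heuristic = 1.0 / d_gz              # 100.0: strong learning signal

print(abs(grad_heuristic) / abs(grad_saturating))  # ratio of about 99
```

Both objectives have the same fixed point (the generator still wants $D(G(z)) \to 1$), but the heuristic keeps the gradient large exactly where the original loss saturates.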
Reference

- GAN — Why it is so hard to train Generative Adversarial Networks! (medium.com)