Despite their remarkable empirical success, the training dynamics of generative adversarial networks (GANs), which involve solving a minimax game with stochastic gradients, are still poorly understood. In this work, we present variants of stochastic gradient descent and analyze their last-iterate convergence under the assumption of convex-concavity. Our analyses of the discrete algorithms are inspired by continuous-time analyses based on differential equations.
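To illustrate why last-iterate convergence is the interesting criterion here, the following is a minimal sketch (not code from this work) comparing plain simultaneous gradient descent-ascent (GDA) with optimistic GDA, one well-known gradient-descent variant, on the toy convex-concave bilinear game f(x, y) = x * y, whose unique saddle point is (0, 0). Deterministic gradients and the step size 0.3 are illustrative simplifications; on this game, the last iterate of plain GDA spirals away from the saddle point, while optimistic GDA's last iterate converges to it.

```python
def gda(x, y, eta, steps):
    """Simultaneous gradient descent-ascent on f(x, y) = x * y.

    x takes a descent step along df/dx = y; y takes an ascent step
    along df/dy = x, both evaluated at the current iterate.
    """
    for _ in range(steps):
        x, y = x - eta * y, y + eta * x
    return x, y


def ogda(x, y, eta, steps):
    """Optimistic GDA: each step extrapolates using the previous gradient."""
    xp, yp = x, y  # previous iterate, where the lagged gradient is taken
    for _ in range(steps):
        x_new = x - 2 * eta * y + eta * yp
        y_new = y + 2 * eta * x - eta * xp
        xp, yp, x, y = x, y, x_new, y_new
    return x, y


if __name__ == "__main__":
    gx, gy = gda(1.0, 1.0, eta=0.3, steps=200)
    ox, oy = ogda(1.0, 1.0, eta=0.3, steps=200)
    # Plain GDA's last iterate diverges; optimistic GDA's converges to (0, 0).
    print("GDA  last-iterate norm:", (gx * gx + gy * gy) ** 0.5)
    print("OGDA last-iterate norm:", (ox * ox + oy * oy) ** 0.5)
```

The averaged iterate of plain GDA does converge on this game, but the last iterate does not; guaranteeing convergence of the final iterate itself is what motivates analyzing modified dynamics such as the optimistic correction above.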