
I would like to train a generative model that produces artificial handwritten text as output. Which architectures would you recommend?

The training input could be either images of handwritten letters (not words) or sequences of points for each letter. I thought of using some combination of a GAN with an LSTM/GRU. I have already found:

  1. http://blog.otoro.net/2015/12/12/handwriting-generation-demo-in-tensorflow/

  2. https://distill.pub/2016/handwriting/

Would appreciate any further recommendations.
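For the point-sequence variant, the linked demos follow the Graves-style approach: an RNN that predicts one pen point per step. A minimal sketch of that idea (assuming PyTorch; layer sizes are illustrative, not tuned, and a full model would add a mixture-density output head as in the references above):

```python
import torch
import torch.nn as nn

class PenSequenceGenerator(nn.Module):
    """Toy sketch: an LSTM that emits one pen point per step as
    (dx, dy, pen-lift) values, in the spirit of Graves-style
    handwriting models. Hidden size is an arbitrary choice."""
    def __init__(self, hidden_size=256):
        super().__init__()
        # Each input point is (dx, dy, pen_lift), hence input_size=3.
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 3)  # predict the next point

    def forward(self, points):
        # points: (batch, seq_len, 3)
        out, _ = self.lstm(points)
        return self.head(out)

gen = PenSequenceGenerator()
dummy = torch.zeros(2, 50, 3)   # batch of 2 sequences, 50 points each
print(gen(dummy).shape)         # torch.Size([2, 50, 3])
```

At sampling time you would feed the model's own output back in step by step; the demos above additionally model the output distribution with a Gaussian mixture rather than a plain linear head.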

wacax
GrozaiL

3 Answers


I found an implementation of an LSTM-based handwriting generator. Maybe I will use some parts of it.

GrozaiL

I suggest implementing a "simple" GAN with convolutional layers. In my opinion, adding LSTM layers is unnecessary: they are an additional layer of complexity, while you can achieve state-of-the-art results with convolutional layers alone (and also save training time).

You can train your model(s) on the EMNIST Dataset of handwritten letters.
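To make the suggestion concrete, here is a minimal DCGAN-style generator sketch for 28x28 EMNIST letter images (assuming PyTorch; layer sizes are illustrative, and a real setup needs a matching convolutional discriminator plus the usual adversarial training loop):

```python
import torch
import torch.nn as nn

class ConvGenerator(nn.Module):
    """Sketch of a transposed-convolution generator that maps a
    latent vector to a 28x28 single-channel image (EMNIST size)."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, kernel_size=7),   # 1x1 -> 7x7
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 7x7 -> 14x14
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),    # 14x14 -> 28x28
            nn.Tanh(),  # outputs in [-1, 1], matching normalized images
        )

    def forward(self, z):
        # Reshape the latent vector to a (latent_dim, 1, 1) feature map.
        return self.net(z.view(z.size(0), -1, 1, 1))

g = ConvGenerator()
fake = g(torch.randn(8, 100))
print(fake.shape)   # torch.Size([8, 1, 28, 28])
```

EMNIST is available via `torchvision.datasets.EMNIST` (the `letters` split), so the data pipeline is the same as for MNIST.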

Leevo

This paper uses images of words for training:

Adversarial Generation of Handwritten Text Images Conditioned on Sequences https://arxiv.org/abs/1903.00277

brian