karpathy's min-char-rnn, Enhanced
The original min-char-rnn is this demonstration in Python of a recurrent neural network (RNN) applied to the task of mimicking text.
My version (source code here) adds comments and a few options. Given the comedies of Shakespeare as training input, its output might start out like this:
hUntsae UG s osU
a rps wr gOtroseTtUsTt nSeaNMo.
and eventually work its way up to this:
Nats dome usiall
What igpetty me, machape lienenter, am will showiot?
What chore and-kee do the how, knersel ars, wat is bonete thee Srure IMILUF. (PEYRIS
Dey everttard sats.
Pleves hast wolt tainy forcherss.
Bope Velites, have.
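Output like the samples above is produced by repeatedly sampling one character at a time from the network's softmax distribution and feeding it back in as the next input. A minimal sketch of that sampling loop, with hypothetical (untrained, randomly initialized) weights and made-up sizes, might look like this:

```python
import numpy as np

# Hypothetical sizes; the real script derives vocab_size from its input text.
vocab_size, hidden_size = 65, 100
rng = np.random.default_rng(0)

# Randomly initialized (i.e. untrained) single-layer RNN weights.
Wxh = rng.standard_normal((hidden_size, vocab_size)) * 0.01  # input -> hidden
Whh = rng.standard_normal((hidden_size, hidden_size)) * 0.01  # hidden -> hidden
Why = rng.standard_normal((vocab_size, hidden_size)) * 0.01  # hidden -> output
bh = np.zeros((hidden_size, 1))
by = np.zeros((vocab_size, 1))

def sample(h, seed_ix, n):
    """Draw n character indices from the model, starting from seed_ix."""
    x = np.zeros((vocab_size, 1))
    x[seed_ix] = 1                                 # one-hot encode the seed char
    ixes = []
    for _ in range(n):
        h = np.tanh(Wxh @ x + Whh @ h + bh)        # hidden-state recurrence
        y = Why @ h + by                           # logits over the vocabulary
        p = np.exp(y) / np.sum(np.exp(y))          # softmax -> probabilities
        ix = rng.choice(vocab_size, p=p.ravel())   # sample the next character
        x = np.zeros((vocab_size, 1))
        x[ix] = 1                                  # feed the sample back in
        ixes.append(int(ix))
    return ixes

indices = sample(np.zeros((hidden_size, 1)), 0, 20)
```

With untrained weights the distribution is close to uniform, which is why early output is pure noise; training shifts the probabilities toward plausible character sequences.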
If you compare this to karpathy's RNN blog examples, you can see my version is lacking a bit. That's because it only implements a single-layer RNN and lacks most of the sophistication of modern RNN implementations (e.g. dropout, a variable learning rate, simulated annealing, model validation during training, etc.).
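"Single-layer" here means there is just one hidden-state recurrence between input and output. A sketch of that forward pass and its cross-entropy training loss, with hypothetical sizes and randomly initialized weights, might look like this:

```python
import numpy as np

# Hypothetical sizes; the real script derives vocab_size from its input text.
vocab_size, hidden_size = 65, 100
rng = np.random.default_rng(1)
Wxh = rng.standard_normal((hidden_size, vocab_size)) * 0.01
Whh = rng.standard_normal((hidden_size, hidden_size)) * 0.01
Why = rng.standard_normal((vocab_size, hidden_size)) * 0.01
bh, by = np.zeros((hidden_size, 1)), np.zeros((vocab_size, 1))

def forward_loss(inputs, targets, hprev):
    """Sum of per-step cross-entropy losses over one input sequence."""
    h, loss = hprev, 0.0
    for t, ix in enumerate(inputs):
        x = np.zeros((vocab_size, 1))
        x[ix] = 1                               # one-hot encode input char
        h = np.tanh(Wxh @ x + Whh @ h + bh)     # the single recurrent layer
        y = Why @ h + by                        # logits for the next char
        p = np.exp(y) / np.sum(np.exp(y))       # softmax over the vocabulary
        loss += -np.log(p[targets[t], 0])       # cross-entropy vs. true next char
    return loss, h

# Loss on one short (made-up) sequence of character indices.
loss, h = forward_loss([1, 2, 3], [2, 3, 4], np.zeros((hidden_size, 1)))
```

A deeper network would stack several such recurrences; features like dropout and a decaying learning rate then modify how this loss is computed and optimized, which is what the enhanced version adds on top of this core.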
This page is here to present my improved version of the Python/NumPy code, which adds some of the missing features and brings it into compliance with Python 3. I have also added a lot of comments and instructions, to the extent that it's no longer what you'd call a 'gist'.
It is useful mainly because many users cannot get @karpathy's Torch/nn version working, whether because they lack the necessary hardware, because of dependency problems and outright bugs in the various nn projects, or because it works but makes their system overheat (!).
This page was written in the "embarrassingly readable" markup language RHTF, and was last updated on 2017 Feb 02.