Deep Learning and Text Generation
Learn about seq2seq and LSTM neural networks, which are commonly used in NLP, and how to implement them for machine translation.
Key Concepts
Review core concepts you need to learn to master this subject
Generating text with seq2seq
One-hot vectors
Seq2Seq Timesteps
Teacher forcing for seq2seq
Deep Learning with TensorFlow
Improving seq2seq
Generating text with seq2seq
The seq2seq (sequence-to-sequence) model is an encoder-decoder deep learning architecture, commonly used in natural language processing, that relies on recurrent neural networks such as LSTMs to generate output. A seq2seq model can generate output token by token or character by character. In machine translation, the encoder accepts text in the source language and outputs state vectors, and the decoder accepts the encoder's final state and outputs possible translations.
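As a concrete illustration, here is a minimal sketch of this encoder-decoder setup in TensorFlow's Keras API, trained with teacher forcing on one-hot encoded sequences. The vocabulary sizes and latent_dim below are illustrative placeholders, not values from this lesson; you would derive them from your own dataset.

```python
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

num_encoder_tokens = 71   # size of the source one-hot vocabulary (assumed)
num_decoder_tokens = 92   # size of the target one-hot vocabulary (assumed)
latent_dim = 256          # dimensionality of the LSTM state vectors

# Encoder: reads the source sequence and keeps only its final states.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)
encoder_states = [state_h, state_c]

# Decoder: initialized with the encoder's final states, it predicts the
# target sequence one timestep at a time.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_states)
decoder_outputs = Dense(num_decoder_tokens,
                        activation="softmax")(decoder_outputs)

# Training uses teacher forcing: decoder_inputs is the ground-truth target
# sequence offset by one timestep relative to the labels.
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy")
```

At inference time, the decoder instead feeds each predicted token back in as its next input, since ground-truth targets are no longer available.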
How you'll master it
Stress-test your knowledge with quizzes that help commit syntax to memory