Language GAN (Generative Adversarial Network)


Notes:

A language GAN (Generative Adversarial Network) is a machine learning model designed to generate natural language text. It trains two neural networks in tandem: a generator, which produces text, and a discriminator, which tries to distinguish the generated text from real human-written text. The training is adversarial: the generator learns to produce text the discriminator cannot tell apart from real text, while the discriminator learns to separate the two more and more accurately. A minimal code sketch of the two components follows.
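To make the generator/discriminator interplay concrete, here is a minimal sketch in PyTorch. It is illustrative only, not code from any of the papers listed below; the class names, the start-token convention, and the tiny sizes are all assumptions made for the example.

import torch
import torch.nn as nn

# Illustrative sizes; real systems use much larger values.
VOCAB_SIZE, EMB_DIM, HID_DIM, MAX_LEN = 5000, 64, 128, 20

class Generator(nn.Module):
    # Autoregressive LSTM language model that samples token sequences.
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.lstm = nn.LSTM(EMB_DIM, HID_DIM, batch_first=True)
        self.out = nn.Linear(HID_DIM, VOCAB_SIZE)

    def sample(self, batch_size):
        tok = torch.zeros(batch_size, 1, dtype=torch.long)  # assumed <bos> id 0
        state, toks, log_probs = None, [], []
        for _ in range(MAX_LEN):
            h, state = self.lstm(self.emb(tok), state)
            dist = torch.distributions.Categorical(logits=self.out(h[:, -1]))
            nxt = dist.sample()                    # discrete choice per sequence
            log_probs.append(dist.log_prob(nxt))   # kept for the adversarial update
            tok = nxt.unsqueeze(1)
            toks.append(tok)
        return torch.cat(toks, dim=1), torch.stack(log_probs, dim=1)

class Discriminator(nn.Module):
    # Binary classifier over whole sequences: outputs P(input is real text).
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.lstm = nn.LSTM(EMB_DIM, HID_DIM, batch_first=True)
        self.clf = nn.Linear(HID_DIM, 1)

    def forward(self, tokens):
        _, (h, _) = self.lstm(self.emb(tokens))
        return torch.sigmoid(self.clf(h[-1])).squeeze(-1)

The generator records the log-probability of every sampled token because, for discrete text, those log-probabilities are the only differentiable handle the adversarial update has (see the training sketch further below).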

In a Generative Adversarial Network (GAN), the discriminator is a classifier used to distinguish generated data from real data. In a language GAN, the discriminator is trained on a corpus of real text to label each input sequence as either generated or real.

The role of the discriminator is to provide a training signal for the generator. As the generator produces text, the discriminator scores how closely each sample resembles real text, and that score drives updates to the generator's parameters. Because text is a sequence of discrete tokens, the score usually cannot be backpropagated through the sampling step; in practice it is often treated as a reward in a reinforcement-learning update, as in SeqGAN-style training, sketched below.
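Continuing the hypothetical sketch above (and reusing its Generator and Discriminator classes and the VOCAB_SIZE / MAX_LEN constants), one adversarial round could look as follows. The discriminator is trained as an ordinary binary classifier; the generator is updated with a REINFORCE-style policy gradient in which the discriminator's P(real) serves as the reward, which is roughly how SeqGAN-style methods handle the discrete sampling step. The batch of "real" text is faked with random token ids purely to keep the sketch self-contained.

import torch
import torch.nn.functional as F

gen, disc = Generator(), Discriminator()
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)

# Stand-in for a minibatch of real text (token ids).
real = torch.randint(0, VOCAB_SIZE, (32, MAX_LEN))

# 1) Discriminator step: push real toward 1, generated toward 0.
fake, _ = gen.sample(32)
d_loss = (F.binary_cross_entropy(disc(real), torch.ones(32)) +
          F.binary_cross_entropy(disc(fake.detach()), torch.zeros(32)))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# 2) Generator step: the discriminator's score becomes a per-sequence reward.
fake, log_probs = gen.sample(32)
reward = disc(fake).detach()        # no gradient through the discriminator
baseline = reward.mean()            # simple variance-reduction baseline
g_loss = -((reward - baseline) * log_probs.sum(dim=1)).mean()
g_opt.zero_grad()
g_loss.backward()
g_opt.step()

In practice this loop is notoriously unstable, which is exactly the weakness examined by several of the papers below (e.g. "Language GANs Falling Short" and ARAML).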

  • Generative Adversarial Networks (GANs) are machine learning models designed to generate data that is indistinguishable from real data. A generator produces candidate data while a discriminator tries to tell generated samples from real ones; trained adversarially, each network improves in response to the other.
  • A natural language GAN applies this framework specifically to natural language: the generator produces text, and the discriminator tries to distinguish it from human-written text.
  • A text GAN is the same idea aimed at text generation in general; like a natural language GAN, it pairs a generator with a discriminator to produce text that is difficult to tell apart from real text.

Wikipedia:

See also:

100 Best Generative Adversarial Network Videos


A review on generative adversarial networks: Algorithms, theory, and applications
J Gui, Z Sun, Y Wen, D Tao, J Ye – arXiv preprint arXiv:2001.06937, 2020 – arxiv.org
… Generative adversarial networks (GANs) have become a hot research topic recently …

On accurate evaluation of gans for language generation
S Semeniuta, A Severyn, S Gelly – arXiv preprint arXiv:1806.04936, 2018 – arxiv.org
… The recently proposed Generative Adversarial Networks (GAN) framework [12] goes beyond optimizing a manually designed objective by leveraging a discriminator that learns to distinguish … It is thus possible that a well-behaved language GAN was not included in our search …

Evaluating text gans as language models
G Tevet, G Habib, V Shwartz, J Berant – arXiv preprint arXiv:1810.12686, 2018 – arxiv.org
… Models trained in this manner often struggle to overcome previous prediction errors. Generative Adversarial Networks (Goodfellow et al., 2014) offer a solution for exposure bias … 2018. Language gans falling short …

Language gans falling short
M Caccia, L Caccia, W Fedus, H Larochelle… – arXiv preprint arXiv …, 2018 – arxiv.org
… algorithm is superior, as evidenced by Figure 2, because no model simultaneously outperforms the other on both metrics. It is now standard for language GANs to evaluate simultaneously quality and diversity …

Latent code and text-based generative adversarial networks for soft-text generation
M Haidar, M Rezagholizadeh, A Do-Omri… – arXiv preprint arXiv …, 2019 – arxiv.org
… Text generation with generative adversarial networks (GANs) can be divided into the text-based and code-based categories according to the type of signals used for discrimination …

Learning representations of natural language texts with generative adversarial networks at document, sentence, and aspect level
A Vlachostergiou, G Caridakis, P Mylonas… – Algorithms, 2018 – mdpi.com
… Algorithms 2018, 11(10), 164; https://doi.org/10.3390/a11100164 … 2.3. Generative Adversarial Networks for NLP Tasks …

Training language gans from scratch
CM d’Autume, M Rosca, J Rae, S Mohamed – arXiv preprint arXiv …, 2019 – arxiv.org
… We have shown that large batch sizes, dense rewards and discriminator regularization remove the need for maximum likelihood pre-training in language GANs. To the best of our knowledge, we are the first to use Generative Adversarial Networks to train word-level language …

Training language gans from scratch
C de Masson d’Autume, S Mohamed… – Advances in Neural …, 2019 – papers.nips.cc
… Abstract: Generative Adversarial Networks (GANs) enjoy great success at image generation, but have proven difficult to train in the domain of natural language … We show it is in fact possible to train a language GAN from scratch, without maximum likelihood pre-training …

Cot: Cooperative training for generative modeling of discrete data
S Lu, L Yu, S Feng, Y Zhu, W Zhang, Y Yu – arXiv preprint arXiv …, 2018 – arxiv.org
… To tackle the exposure bias problem inherent in maximum likelihood estimation (MLE), generative adversarial networks (GANs) are introduced to penalize the unrealistic generated samples … 2.2. Sequence Generative Adversarial Network …

MaskGAN: better text generation via filling in the ______
W Fedus, I Goodfellow, AM Dai – arXiv preprint arXiv:1801.07736, 2018 – arxiv.org
… Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are a framework for training generative models in an adversarial setup, with a generator … Designing error attribution per time step has been noted to be important in prior natural language GAN research (Yu et al …

Text to Game Characterization: A Starting Point for Generative Adversarial Video Composition
D Lee, H Choi – 2018 IEEE International Conference on Big …, 2018 – ieeexplore.ieee.org
… natural videos, many interesting properties exist for video generation that can be further analyzed. Furthermore, the proposed Language-GAN model can be applied to a chat-bot system for verifying its practical usability in the future … Generative adversarial nets …

Generative Adversarial Networks with Memory for Text Generation
E Sheetz – esheetz.github.io
… storytelling capabilities of artificial intelligence. This project will provide an opportunity to explore what generative adversarial networks can do, evaluate how well neural networks can … The GAN framework has provided a way for computers to generate language. GANs were …

Discriminative Adversarial Search for Abstractive Summarization
T Scialom, PA Dray, S Lamprier, B Piwowarski… – arXiv preprint arXiv …, 2020 – arxiv.org
… Inspired by Generative Adversarial Networks (GANs), wherein a discriminator is used to improve the generator, our method differs from GANs in that the generator parameters are not updated at training time and the discriminator is only used to drive sequence generation at …

TextGAIL: Generative Adversarial Imitation Learning for Text Generation
Q Wu, L Li, Z Yu – arXiv preprint arXiv:2004.13796, 2020 – arxiv.org
… 2018. Language gans falling short. CoRR, abs/1811.02549. Tong Che, Yanran Li, Ruixiang Zhang, R. Devon Hjelm, Wenjie Li, Yangqiu Song, and Yoshua Bengio. 2017. Maximum-likelihood augmented discrete generative adversarial networks. CoRR, abs/1702.07983 …

Best Student Forcing: A Simple Training Mechanism in Adversarial Language Generation
J Sauder, T Hu, X Che, G Mordido, H Yang… – Proceedings of The 12th …, 2020 – aclweb.org
… Meanwhile, Generative Adversarial Nets (GANs) (Goodfellow et al., 2014) have been introduced into NLG (Yu et al., 2017; Che et al … However, various reports indicate that language GANs have shown shortcomings in terms of training stability and sample diversity, which …

CatGAN: Category-aware Generative Adversarial Networks with Hierarchical Evolutionary Learning for Category Text Generation
Z Liu, J Wang, Z Liang – arXiv preprint arXiv:1911.06641, 2019 – arxiv.org
… 2018) and dialogue generation (Li et al. 2017). Recently, generative adversarial net (GAN) (Goodfellow et al …

DGSAN: Discrete Generative Self-Adversarial Network
E Montahaei, D Alihosseini, MS Baghshah – arXiv preprint arXiv …, 2019 – arxiv.org
… 2018. Language gans falling short. CoRR, abs/1811.02549. [3] Tong Che, Yanran Li, Ruixiang Zhang, R. Devon Hjelm, Wenjie Li, Yangqiu Song, and Yoshua Bengio. 2017. Maximum-likelihood augmented discrete generative adversarial networks. CoRR, abs/1702.07983 …

Imitation Learning for Sentence Generation with Dilated Convolutions Using Adversarial Training
JW Peng, MC Hu, CW Chang – 2019 IEEE International …, 2019 – ieeexplore.ieee.org
… [3] P. Abbeel and AY Ng, “Apprenticeship learning via inverse reinforcement learning,” ICML, 2004. [4] J. Ho and S. Ermon, “Generative adversarial imitation learning,” NIPS, pp. 4565–4573, 2016 … al, “Language gans falling short,” arXiv preprint arXiv:1811.02549, 2018.

ARAML: A Stable Adversarial Training Framework for Text Generation
P Ke, F Huang, M Huang, X Zhu – arXiv preprint arXiv:1908.07195, 2019 – arxiv.org
… Abstract: Most of the existing generative adversarial networks (GAN) for text generation suffer from the instability of reinforcement learning training algorithms such as policy gradient, leading to unstable performance. To …

Analyzing Natural Language Context in Human-Machine Teaming using Supervised Machine Learning
BA Barrows, L Le Vie, EL Meszaros, JE Ecker… – AIAA Scitech 2020 …, 2020 – arc.aiaa.org
… CIDEr = Consensus-based Image Description Evaluation DRM = Design Mission Reference GAN = Generative Adversarial Network GUI = Graphical User Interface HINGE = Human Informed Natural-language GANs Evaluation HMI = Human-Machine Interface NLP = Natural …

ColdGANs: Taming Language GANs with Cautious Sampling Strategies
T Scialom, PA Dray, S Lamprier, B Piwowarski… – arXiv preprint arXiv …, 2020 – arxiv.org
… under the Generative Adversarial Network (GAN) paradigm [14], which has been used successfully for image generation [3]. For text, modeled as a sequence of discrete symbols, a naive computation of the gradients is however intractable. Hence, Language GANs are based on …

Meta-CoTGAN: A Meta Cooperative Training Paradigm for Improving Adversarial Text Generation
H Ying, D Li, X Li, P Li – The Thirty-Fourth AAAI Conference on Artificial …, 2020 – aaai.org
… Recently, generative adversarial models have been applied extensively on text generation tasks, where the adversarially trained generators alleviate the exposure bias experienced by conventional maximum likelihood approaches and result in promising generation quality …

Style Example-Guided Text Generation using Generative Adversarial Transformers
KH Zeng, M Shoeybi, MY Liu – arXiv preprint arXiv:2003.00674, 2020 – arxiv.org

Adversarial inference for multi-sentence video description
JS Park, M Rohrbach, T Darrell… – Proceedings of the …, 2019 – openaccess.thecvf.com
… Some works aim to overcome this issue by using the adversarial learning [9, 53]. While Generative Adversarial Networks [14] have achieved impressive results for image and even video generation [21, 43, 63, 77], their success in language generation has been limited [55, 71] …

Towards Informing an Intuitive Mission Planning Interface for Autonomous Multi-Asset Teams via Image Descriptions
LR Le Vie, MC Last, B Barrows, BD Allen – 2018 Aviation Technology …, 2018 – arc.aiaa.org
… CAS = Convergent Aeronautics Solutions CIDEr = Consensus-based Image Description Evaluation DRM = Design Reference Mission GAN = Generative Adversarial Network GUI = Graphical User Interface HINGE = Human Informed Natural-language GANs Evaluation HMI …

Adversarial Semantic Alignment for Improved Image Captions
P Dognin, I Melnyk, Y Mroueh… – Proceedings of the …, 2019 – openaccess.thecvf.com

Rethinking Exposure Bias In Language Modeling
Y Xu, K Zhang, H Dong, Y Sun, W Zhao… – arXiv preprint arXiv …, 2019 – arxiv.org
… The distinctive increment in recall indicates less mode collapse, which is a common problem in language GANs and ACs. Figure 1 demonstrates the road exam results on EMNLP WMT News … 2017. Seqgan: Sequence generative adversarial nets with policy gradient …

Quantifying exposure bias for neural language generation
T He, J Zhang, Z Zhou, J Glass – arXiv preprint arXiv:1905.10617, 2019 – arxiv.org
… M. Caccia, L. Caccia, W. Fedus, H. Larochelle, J. Pineau, and L. Charlin. Language gans falling short. CoRR, abs/1811.02549, 2018. IJ Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets …

Improved Adversarial Image Captioning
P Dognin, I Melnyk, Y Mroueh, J Ross, T Sercu – 2019 – openreview.net
… Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Charlin. Language gans falling short … R. Devon Hjelm, Athul Paul Jacob, Tong Che, Kyunghyun Cho, and Yoshua Bengio. Boundary-seeking generative adversarial networks …

Evaluating Communication Modality for Improved Human/Autonomous System Teaming
EL Meszaros, LR Le Vie, MC Last, BA Barrows… – AIAA Scitech 2019 …, 2019 – arc.aiaa.org
… to facilitate such heterogeneous teams [1]. As part of ATTRACTOR, the human-machine interaction team has focused on evaluating communication as part of the Human Informed Natural-language GANs (Generative Adversarial Networks) Evaluation (HINGE) exploration …

Improving Adversarial Text Generation by Modeling the Distant Future
R Zhang, C Chen, Z Gan, W Wang, D Shen… – arXiv preprint arXiv …, 2020 – arxiv.org
Auto-regressive text generation models usually focus on local fluency, and may cause inconsistent semantic meaning in long text generation. Further, automatically generating words with similar semantics is challenging, and hand-crafted linguistic rules are difficult to …

Unifying human and statistical evaluation for natural language generation
TB Hashimoto, H Zhang, P Liang – arXiv preprint arXiv:1904.02792, 2019 – arxiv.org
… Human discriminators cannot capture diversity effectively, and learned discriminators (e.g., from a Generative Adversarial Network (Goodfellow et al., 2014) or one trained on human judgments (Lowe et al., 2017)) are too unreliable to use for rigorous evaluation …

The Detection of Distributional Discrepancy for Text Generation
X Chen, P Cai, P Jin, H Du, H Wang, X Dai… – arXiv preprint arXiv …, 2019 – arxiv.org
… This means that their distributions are different. Generative Adversarial Nets (GAN) are used to alleviate it … Experimenting on two existing language GANs, the distributional discrepancy between real text and generated text increases with more adversarial learning rounds …

A Multi-language Platform for Generating Algebraic Mathematical Word Problems
V Liyanage, S Ranathunga – arXiv preprint arXiv:1912.01110, 2019 – arxiv.org
… In AAAI, 2017. [21] T. Che, Y. Li, R. Zhang, RD Hjelm, W. Li, Y. Song, and Y. Bengio. Maximum-likelihood augmented discrete generative adversarial networks. In arXiv:1702.07983, 2017 … 2018. Language GANs Falling Short. arXiv:1811.02549 …

The curious case of neural text degeneration
A Holtzman, J Buys, L Du, M Forbes, Y Choi – arXiv preprint arXiv …, 2019 – arxiv.org
… Generative Adversarial Networks (GANs) have been a prominent research direction (Yu et al., 2017; Xu et al., 2018), but recent work has shown that when quality and diversity are considered jointly, GAN-generated text fails to outperform generations from language models …

Learning Implicit Text Generation via Feature Matching
I Padhi, P Dognin, K Bai, CN Santos… – arXiv preprint arXiv …, 2020 – arxiv.org
… Tong Che, Yanran Li, Ruixiang Zhang, R. Devon Hjelm, Wenjie Li, Yangqiu Song, and Yoshua Bengio. 2017. Maximum-likelihood augmented discrete generative adversarial networks. CoRR, abs/1702.07983. Jacob Devlin …

A Discriminator Improves Unconditional Text Generation without Updating the Generator
X Chen, P Cai, P Jin, H Wang, X Dai, J Chen – arXiv preprint arXiv …, 2020 – arxiv.org
… The language GANs try to narrow the gap by updating the parameters of the generator Gθ directly according to the detected discrepancy signals from the discriminator (illustrated by the left in Figure 1(a)). Unfortunately, recent work demonstrates that these approaches do not work well …

Jointly measuring diversity and quality in text generation models
E Montahaei, D Alihosseini, MS Baghshah – arXiv preprint arXiv …, 2019 – arxiv.org

Nested-Wasserstein Distance for Sequence Generation
R Zhang, C Chen, Z Gan, Z Wen, W Wang, L Carin – bayesiandeeplearning.org
Reinforcement learning (RL) has been widely studied for improving sequence-generation models. However, the conventional rewards used for RL training typically cannot capture sufficient semantic information and therefore render model bias. Further, the sparse and …

Jointly Measuring Diversity and Quality in Text Generation Models
D Alihosseini, E Montahaei, MS Baghshah – Proceedings of the …, 2019 – aclweb.org
… Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation (NeuralGen), pages 90–98, Minneapolis, Minnesota, USA, June 6, 2019 …

Trading Off Diversity and Quality in Natural Language Generation
H Zhang, D Duckworth, D Ippolito… – arXiv preprint arXiv …, 2020 – arxiv.org
… For open-ended language generation tasks such as storytelling …

On the Weaknesses of Reinforcement Learning for Neural Machine Translation
L Choshen, L Fox, Z Aizenbud, O Abend – arXiv preprint arXiv:1907.01752, 2019 – arxiv.org

Improving Maximum Likelihood Training for Text Generation with Density Ratio Estimation
Y Song, N Miao, H Zhou, L Yu, M Wang, L Li… – lantaoyu.com

Electra: Pre-training text encoders as discriminators rather than generators
K Clark, MT Luong, QV Le, CD Manning – arXiv preprint arXiv:2003.10555, 2020 – arxiv.org
… Published as a conference paper at ICLR 2020 …

Distributional Discrepancy: A Metric for Unconditional Text Generation
P Cai, X Chen, P Jin, H Wang, T Li – arXiv preprint arXiv:2005.01282, 2020 – arxiv.org
… Using Text Classifier to Detect Discrepancy Generative adversarial networks (GAN) [7] improved a trained neural language model by fine-tuning it [27, 6, 19]. In these language GANs, a discriminator which works as a classifier to detect the discrepancy between real sentences …

Improved Natural Language Generation via Loss Truncation
D Kang, T Hashimoto – arXiv preprint arXiv:2004.14589, 2020 – arxiv.org
… of each model. Finally, on Gigaword, we also compared against a recent generative adversarial network (GAN) model with a publicly available implementation (Wang and Lee, 2018). Human-evaluation metrics. We evaluate …

Judge the Judges: A Large-Scale Evaluation Study of Neural Language Models for Online Review Generation
C Garbacea, S Carton, S Yan, Q Mei – arXiv preprint arXiv:1901.00398, 2019 – arxiv.org
… Generative Adversarial Networks (Goodfellow et al., 2014), or GANs, train generative models through an adversarial process … Notably, maximizing the adversarial error is consistent to the objective of the generator in generative adversarial networks …

Automated Content Generation with Semantic Analysis and Deep Learning
M Morisio, THM VAN – webthesis.biblio.polito.it
… Table of contents excerpt: 2.7.2 Word Embeddings; 2.8 Generative Adversarial Networks; 2.9 SeqGAN … 2.13 Generative Adversarial Network; 2.14 SeqGAN architecture …

Autoregressive Text Generation Beyond Feedback Loops
F Schmidt, S Mandt, T Hofmann – arXiv preprint arXiv:1908.11658, 2019 – arxiv.org
… 2016. Generating sentences from a continuous space. In ACL. Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Charlin. 2018. Language gans falling short. CoRR, abs/1811.02549. Justin Domke. 2013 …

Extractive Summary as Discrete Latent Variables
A Komatsuzaki – arXiv preprint arXiv:1811.05542, 2018 – arxiv.org
… Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Charlin. Language GANs Falling Short … O. Press, A. Bar, B. Bogin, J. Berant, and L. Wolf. Language Generation with Recurrent Generative Adversarial Networks without Pre-training …

-Recurrent Neural Networks for Discriminative and Generative Learning
S Semeniuta – 2019 – d-nb.info
… We then discuss the evaluation of Generative Adversarial Networks when applied to language generation … This is followed by a discussion of the evaluation methods of Generative Adversarial Networks in the context of language generation …

A Universal Approximation Theorem of Deep Neural Networks for Expressing Distributions
Y Lu, J Lu – arXiv preprint arXiv:2004.08867, 2020 – arxiv.org
… Typical generative models include Variational Autoencoders [29], Normalizing Flows [46] and Generative Adversarial Networks (GANs) [19], just to name a … In the mathematical language, GANs can be formulated as the following minimization problem: inf_{g ∈ G_NN} D(g_# p_z, π) (3.1) …

BERT as a Teacher: Contextual Embeddings for Sequence-Level Reward
F Schmidt, T Hofmann – arXiv preprint arXiv:2003.02738, 2020 – arxiv.org
… Abstract: Measuring the quality of a generated sequence against a set of references is a central problem …

Towards Holistic and Automatic Evaluation of Open-Domain Dialogue Generation
B Pang, E Nijkamp, W Han, L Zhou, Y Liu, K Tu – faculty.sist.shanghaitech.edu.cn

Sparse Text Generation
PH Martins, Z Marinho, AFT Martins – arXiv preprint arXiv:2004.02644, 2020 – arxiv.org
… In addition to new decoding methods, models that aim to increase word diversity and diminish repetition have also been introduced. Xu et al. (2018) proposed a diversity-promoting generative adversarial network, which rewards novel and fluent text. Holtzman et al …

Evaluating the Evaluation of Diversity in Natural Language Generation
G Tevet, J Berant – arXiv preprint arXiv:2004.02990, 2020 – arxiv.org
… off quality and diversity by ignoring some of the LM probability mass (Holtzman et al., 2019). Last, some NLG models, such as Generative Adversarial Networks (GANs) (Yu et al., 2017), are not based on a LM at all. While …

Multi-lingual Mathematical Word Problem Generation using Long Short Term Memory Networks with Enhanced Input Features
V Liyanage, S Ranathunga – … of The 12th Language Resources and …, 2020 – aclweb.org
… More recently, neural models such as Recurrent Neural Networks/LSTMs (Graves, 2013), Auto-encoders (Fabius and van Amersfoort, 2014), Reinforcement learning techniques (Guo, 2015), and Generative Adversarial Networks (Goodfellow et al … Language gans falling short …

Clinically accurate chest X-ray report generation
G Liu, TMH Hsu, M McDermott, W Boag… – arXiv preprint arXiv …, 2019 – arxiv.org
… reach human-level quality. Alternatively, Rajeswar et al. (2017) and Fedus et al. (2018) have tried using Generative Adversarial Neural Networks (GANs) for text generation. However, Caccia et al. (2018) observed problems with …

MLE-guided parameter search for task loss minimization in neural sequence modeling
S Welleck, K Cho – arXiv preprint arXiv:2006.03158, 2020 – arxiv.org
… Related methods use policy gradient with generative adversarial networks (GAN) (Yu et al., 2017; de Masson d’Autume et al., 2019) … Language GANs Falling Short. In International Conference on Learning Representations (ICLR) …

Non-monotonic sequential text generation
S Welleck, K Brantley, H Daumé III, K Cho – arXiv preprint arXiv …, 2019 – arxiv.org
… Abstract: Standard sequential generation methods assume a pre-specified generation order, such …

Adaptive Correlated Monte Carlo for Contextual Categorical Sequence Generation
X Fan, Y Zhang, Z Wang, M Zhou – arXiv preprint arXiv:1912.13151, 2019 – arxiv.org
… Published as a conference paper at ICLR 2020 …