
Text to Image Translation using Cycle GAN
Kambhampati Monica¹, Duvvada Rajeswara Rao²

¹Kambhampati Monica*, CSE, V R Siddhartha Engineering College, Vijayawada, India.
²Dr. Duvvada Rajeswara Rao, CSE, V R Siddhartha Engineering College, Vijayawada, India.

Manuscript received on March 30, 2020. | Revised Manuscript received on April 05, 2020. | Manuscript published on April 30, 2020. | PP: 1294-1297 | Volume-9 Issue-4, April 2020. | Retrieval Number: D8703049420/2020©BEIESP | DOI: 10.35940/ijeat.D8703.049420
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Text-to-image translation has recently been an active field of research. The ability of a network to understand a sentence's context and to create a picture that represents that sentence demonstrates the model's ability to reason more like a human. Common text-to-image translation methods employ Generative Adversarial Networks to generate high-quality images from text, but the images produced do not always represent the meaning of the phrase given to the model as input. We tackle this problem by using a captioning network to caption the generated images, and we exploit the gap between the ground-truth captions and the generated captions to further improve the network. We present a detailed comparison between our system and existing methods. Despite current state-of-the-art results, text-to-image synthesis remains a difficult problem with plenty of room for progress. Images synthesized by current methods give a rough sketch of the described scene but do not capture the true essence of what the text describes. The recent success of Generative Adversarial Networks (GANs) shows that they are a good candidate architecture for approaching this problem.
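As a rough illustration of the idea described in the abstract (not the authors' implementation), the Python sketch below shows a single generator update in which a captioning network closes the text-to-image-to-text cycle. The names G (generator), D (conditional discriminator), C (captioning network), embed (text encoder), and the loss weight lam are all hypothetical placeholders; in particular, C is assumed to return a differentiable sentence embedding of its predicted caption rather than discrete tokens.

import torch
import torch.nn.functional as F

def generator_step(G, D, C, embed, caption_tokens, optimizer_G, lam=10.0):
    # Encode the ground-truth caption into a sentence embedding.
    text_emb = embed(caption_tokens)
    # Synthesize images conditioned on the text embedding.
    fake_images = G(text_emb)
    # Adversarial term: the generator tries to make D score its fakes as real.
    logits = D(fake_images, text_emb)
    adv_loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    # Caption-consistency term: C maps the generated image back to a caption
    # embedding; the gap to the ground-truth caption embedding is penalized.
    cycle_loss = F.mse_loss(C(fake_images), text_emb.detach())
    loss = adv_loss + lam * cycle_loss
    optimizer_G.zero_grad()
    loss.backward()
    optimizer_G.step()
    return loss.item()

The mean-squared gap between the two caption embeddings plays the role of the cycle-consistency loss in CycleGAN: an image that truly reflects the input sentence should caption back to something close to that sentence.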
Keywords: Generative Adversarial Networks, Image, Synthesis, Text, Translation.