FGTD: Face Generation from Textual Description

By Kalpana Deorukhkar, Kevlyn Kadamala and Elita Menezes in Journal Papers

January 11, 2022

The majority of current text-to-image generation work is limited to creating images such as flowers (Oxford 102 Flower), birds (CUB-200-2011), and common objects (COCO) from captions. Existing face datasets such as Labeled Faces in the Wild and MegaFace lack descriptions, while datasets like CelebA provide associated attributes but no feature descriptions. In this paper we therefore build upon an existing algorithm to create captions from the attributes provided in the CelebA dataset; the method not only generates a single caption but can also be extended to generate N captions per image. We utilise Sentence BERT to encode these descriptions into sentence embeddings. We then perform a comparative study of three models - DCGAN, SAGAN and DFGAN - using these sentence embeddings together with latent noise as inputs to the different architectures. Finally, we calculate Inception Scores and FID values to compare the output images across the architectures.
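As a rough illustration of the conditioning step described above, the sketch below encodes a face description with Sentence-BERT and concatenates the resulting embedding with a latent noise vector to form a generator input. The checkpoint name ("all-MiniLM-L6-v2"), the noise dimension (100) and the example caption are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: encode a face description with Sentence-BERT and build a GAN input
# by concatenating the sentence embedding with latent noise.
# Model name and noise dimension are illustrative assumptions only.
import torch
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any SBERT checkpoint could be used

caption = "The woman has blond hair, high cheekbones and is wearing lipstick."
embedding = encoder.encode([caption], convert_to_tensor=True)  # shape: (1, 384)

noise = torch.randn(1, 100)                             # latent noise vector
generator_input = torch.cat([embedding, noise], dim=1)  # shape: (1, 484)

# generator_input would then be fed to the DCGAN / SAGAN / DFGAN generator.
print(generator_input.shape)
```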

Cite:

@incollection{deorukhkar2022fgtd,
  title={FGTD: Face Generation from Textual Description},
  author={Deorukhkar, Kalpana and Kadamala, Kevlyn and Menezes, Elita},
  booktitle={Inventive Communication and Computational Technologies},
  pages={547--562},
  year={2022},
  publisher={Springer}
}