Crossref Citations
This article has been cited by the following publications. This list is generated based on data provided by Crossref.
Muramoto, Naoki and Yu, Haitao 2019. Image Caption Generation Using Hint-words. Joho Chishiki Gakkaishi, Vol. 29, Issue 2, p. 153.
Guo, Longteng, Liu, Jing, Yao, Peng, Li, Jiangwei and Lu, Hanqing 2019. MSCap: Multi-Style Image Captioning With Unpaired Stylized Text. p. 4199.
Dang, Tien X., Oh, Aran, Na, In-Seop and Kim, Soo-Hyung 2019. The Role of Attention Mechanism and Multi-Feature in Image Captioning. p. 170.
Orăsan, Constantin 2019. Automatic summarisation: 25 years On. Natural Language Engineering, Vol. 25, Issue 6, p. 735.
Tanti, Marc, Gatt, Albert and Muscat, Adrian 2019. Computer Vision – ECCV 2018 Workshops. Vol. 11132, p. 114.
Liu, Xiaoxiao, Xu, Qingyang and Wang, Ning 2019. A survey on deep neural network-based image captioning. The Visual Computer, Vol. 35, Issue 3, p. 445.
Tanti, Marc, Gatt, Albert and Camilleri, Kenneth P. 2019. Computer Vision – ECCV 2018 Workshops. Vol. 11132, p. 124.
Sharma, Grishma, Kalena, Priyanka, Malde, Nishi, Nair, Aromal and Parkar, Saurabh 2019. Visual Image Caption Generator Using Deep Learning. SSRN Electronic Journal.
Alsharid, Mohammad, Sharma, Harshita, Drukker, Lior, Chatelain, Pierre, Papageorghiou, Aris T. and Noble, J. Alison 2019. Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. Vol. 11767, p. 338.
Deb, Tonmoay, Ali, Mohammad Zariff Ahsham, Bhowmik, Sanchita, Firoze, Adnan, Ahmed, Syed Shahir, Tahmeed, Muhammad Abeer, Rahman, N.S.M. Rezaur, Rahman, Rashedur M., Nguyen, Ngoc Thanh, Szczerbicki, Edward, Trawiński, Bogdan and Nguyen, Van Du 2019. Oboyob: A sequential-semantic Bengali image captioning engine. Journal of Intelligent & Fuzzy Systems, Vol. 37, Issue 6, p. 7427.
Donnyson, Jessin and Khodra, Masayu Leylia 2020. Contextual Caption Generation Using Attribute Model. p. 1.
Kharchevnikova, A.S. and Savchenko, A.V. 2020. Visual preferences prediction for a photo gallery based on image captioning methods. Computer Optics, Vol. 44, Issue 4.
Adak, Chandranath, Chaudhuri, Bidyut B., Lin, Chin-Teng and Blumenstein, Michael 2020. Why Not? Tell us the Reason for Writer Dissimilarity. p. 1.
Miebs, Grzegorz, Mochol-Grzelak, Małgorzata, Karaszewski, Adam and Bachorz, Rafał A. 2020. Efficient Strategies of Static Features Incorporation into the Recurrent Neural Network. Neural Processing Letters, Vol. 51, Issue 3, p. 2301.
Wang, Junbo, Wang, Wei, Wang, Liang, Wang, Zhiyong, Feng, David Dagan and Tan, Tieniu 2020. Learning visual relationship and context-aware attention for image captioning. Pattern Recognition, Vol. 98, p. 107075.
Rathi, Ankit 2020. Deep learning apporach for image captioning in Hindi language. p. 1.
Zhang, Ji, Mei, Kuizhi, Zheng, Yu and Fan, Jianping 2021. Integrating Part of Speech Guidance for Image Captioning. IEEE Transactions on Multimedia, Vol. 23, p. 92.
Adithya Praveen, T. and Angel Arul Jothi, J. 2021. Advances in Machine Learning and Computational Intelligence. p. 805.
Keskin, Rumeysa, Çaylı, Özkan, Moral, Özge Taylan, Kılıç, Volkan and Onan, Aytuğ 2021. A Benchmark for Feature-injection Architectures in Image Captioning. European Journal of Science and Technology.
Kılıç, Volkan 2021. Deep Gated Recurrent Unit for Smartphone-Based Image Captioning. Sakarya University Journal of Computer and Information Sciences, Vol. 4, Issue 2, p. 181.