
Application of computer vision and deep learning models to automatically classify medically important mosquitoes in North Borneo, Malaysia

Published online by Cambridge University Press:  01 April 2024

Song-Quan Ong*
Affiliation:
Institute for Tropical Biology and Conservation, Universiti Malaysia Sabah, Jalan UMS, Kota Kinabalu, Sabah Malaysia
Abdul Hafiz Ab Majid
Affiliation:
Household & Structural Urban Entomology Laboratory, Vector Control Research Unit, School of Biological Sciences, Universiti Sains Malaysia, 11800 Penang, Malaysia
Wei-Jun Li
Affiliation:
Laboratory of Invasion Biology, School of Agricultural Sciences, Jiangxi Agricultural University, Nanchang 330045, China
Jian-Guo Wang
Affiliation:
Laboratory of Invasion Biology, School of Agricultural Sciences, Jiangxi Agricultural University, Nanchang 330045, China
*
Corresponding author: Song-Quan Ong; Email: [email protected]; [email protected]

Abstract

Mosquito-borne diseases have emerged in North Borneo, Malaysia, owing to rapid changes in the forest landscape, and mosquito surveillance is key to understanding disease transmission. However, surveillance programmes involving sampling and taxonomic identification require well-trained personnel and are time-consuming and labour-intensive. In this study, we aimed to use a deep learning (DL) model to develop an application capable of automatically detecting mosquito vectors collected from urban and suburban areas in North Borneo, Malaysia. Specifically, a DL model based on MobileNetV2 was developed using a total of 4880 images of Aedes aegypti, Aedes albopictus and Culex quinquefasciatus mosquitoes, which are widely distributed in Malaysia. More importantly, the model was deployed as an application that can be used in the field. The model was fine-tuned with learning rates of 0.0001, 0.0005, 0.001 and 0.01, and its performance was tested for accuracy, precision, recall and F1 score. Inference time was also considered during development to assess the feasibility of the model as an app in the real world. The model showed an accuracy of at least 97%, a precision of 96% and a recall of 97% on the test set. When used as an app in the field to detect mosquitoes against varied background environments, the model achieved an accuracy of 76% with an inference time of 47.33 ms. Our results demonstrate the practicality of computer vision and DL in real-world vector and pest surveillance programmes. In the future, more image data and more robust DL architectures can be explored to improve the prediction results.

Type
Research Paper
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

Introduction

In the dynamic landscapes of North Borneo, Malaysia, the juxtaposition of rapid urbanisation and the persistent threat of mosquito-borne diseases has created a challenging public health problem (Bin Said et al., 2022). The increasing urbanisation of the region, including the city of Kota Kinabalu, Sabah, Malaysia, and expanding suburbs such as Sandakan and Tawau, has created numerous breeding grounds for various mosquito species (Li et al., 2014). This is particularly true for mosquitoes that breed mostly in human-created breeding sites, such as Aedes aegypti (L.) (Diptera: Culicidae), Aedes albopictus (Skuse) (Diptera: Culicidae) and Culex quinquefasciatus Say (Diptera: Culicidae), which transmit medically important diseases such as dengue, chikungunya, Zika, filariasis and Japanese encephalitis and pose a constant threat to public welfare (Nitatpattana et al., 2005; Ong, 2016). More specifically, a mosquito like Ae. albopictus, which is highly adapted to a wide range of environments, including forests, rural and suburban areas, could bridge disease transmission between the sylvatic cycle (the transmission cycle associated with wildlife) and the urban cycle (the transmission cycle associated with humans) (Pereira-dos-Santos et al., 2020). In fact, arbovirus cases have increased at least 30-fold in the region over the last 50 years (Schaffner and Mathis, 2014); this incapacitates communities, burdens healthcare systems and even leads to fatalities. These arbovirus diseases include dengue and the re-emergence of chikungunya and Zika, which have been severely neglected for a variety of reasons, including the recent COVID-19 pandemic (Ong et al., 2021b). On the other hand, Japanese encephalitis (JE) and filariasis, which have been eradicated in many New World countries, continue to plague local communities in North Borneo and strain local health systems (Maluda et al., 2020). This is because the rural and forested areas where disease vectors may reside are rarely accessible and rarely surveyed for vector composition and disease prevalence. However, as urbanisation changes the landscape and shifts the distribution of vectors towards suburban and urban areas, the risk of disease transmission ultimately increases.

As Borneo is undergoing rapid urbanisation, a mosquito surveillance programme in urban areas is of utmost importance to control and manage outbreaks of mosquito-borne diseases. However, surveillance is always hampered by the tedious, labour-intensive and time-consuming process of identification. Computer vision and deep learning (DL) models offer an excellent alternative to solve this global problem. Okayasu et al. (2019) compared three DL architectures for classifying three mosquito genera (Aedes, Anopheles and Culex) and achieved 95.5% accuracy with ResNet. Kittichai et al. (2021) developed a DL model using the You-Only-Look-Once (YOLO) algorithm to classify the species and sex of Aedes, Anopheles, Culex, Armigeres and Mansonia mosquitoes, achieving an average precision and sensitivity of 99% and 92.4%, respectively, on their internal test set. Ong et al. (2021a) showed how a lightweight DL architecture, MobileNetV2, could classify two closely related Aedes mosquitoes, Ae. aegypti and Ae. albopictus, with 98% accuracy on the validation set. However, most of these previous studies used an internally split dataset for testing and validation, and the use of these DL models in the real world has yet to be evaluated. Therefore, this study aims to construct an image dataset covering three common mosquito vectors in suburban and urban areas of North Borneo, Malaysia, and to deploy the developed DL model as an app to classify mosquitoes collected in the field.

Materials and methods

Mosquito

The selection of mosquito species was based on the vectors of endemic mosquito-borne diseases in the Borneo region of Malaysia, namely Ae. aegypti and Ae. albopictus (vectors of dengue and chikungunya) and Cx. quinquefasciatus (vector of filariasis and Japanese encephalitis). Adult females of Ae. aegypti, Ae. albopictus and Cx. quinquefasciatus were collected from three urban and suburban areas of Kota Kinabalu, Sabah, North Borneo, Malaysia. The mosquitoes were identified by two taxonomists based on external key morphological characters and maintained in insectaria at 25 ± 1 °C and 70 ± 5% relative humidity, and provided with 10% sucrose mixed with vitamin B complex as an energy source. The female mosquitoes were killed by freezing in a container and used for image acquisition. The mosquitoes used were bred in the laboratory, and the study was approved by the Animal Ethics Committee of Universiti Malaysia Sabah (AEC 007/2023).

Image data acquisition and pre-processing

Mosquito images were captured using a camera module (Pi NoIR v2, 8 megapixels, Sony IMX219 image sensor) on a microcomputer (Raspberry Pi 4 Model B, quad-core Cortex-A72, 2 GB LPDDR4-3200 SDRAM). The focal length was set to 2.5 cm, supported with lighting (15 white LEDs, RGB visible light) and a white background.
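The following is a minimal capture sketch in Python, assuming the legacy picamera library on Raspberry Pi OS; the file-naming scheme, per-specimen loop and number of views are illustrative assumptions rather than the authors' actual acquisition script.

```python
# Minimal capture sketch (assumption: legacy picamera library; file names and the
# per-specimen loop are hypothetical, not taken from the paper).
from datetime import datetime
from time import sleep

from picamera import PiCamera

camera = PiCamera(resolution=(3280, 2464))  # native resolution of the IMX219 sensor
camera.start_preview()
sleep(2)  # allow auto-exposure and white balance to settle under the LED lighting

def capture_specimen(species: str, specimen_id: int, n_views: int = 5) -> None:
    """Capture several views of one frozen specimen placed on the white background."""
    for view in range(n_views):
        stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        camera.capture(f"{species}_{specimen_id:03d}_{view}_{stamp}.jpg")
        sleep(1)  # time to reposition the specimen between shots

capture_specimen("aedes_aegypti", 1)
camera.stop_preview()
camera.close()
```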

The data were partitioned to train and test the model in a ratio of 85% training to 15% testing. Most modern DL architectures typically require much more training data for stable performance; we therefore applied data augmentation to the training images in the form of 0, 90, 180 and 270° rotations, which quadrupled the number of samples. However, to avoid data leakage and an overly optimistic evaluation, augmentation was performed after data partitioning, ensuring that augmented versions of the training images did not appear in the test set.
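A sketch of this split-then-augment pipeline is given below. Only the 85/15 split, the four rotation angles and the augment-after-splitting order come from the text; the directory layout and file handling are assumptions.

```python
# Split-then-augment sketch (assumptions: directory names and file layout are
# hypothetical; the 85/15 split and rotation angles follow the paper).
from pathlib import Path

from PIL import Image
from sklearn.model_selection import train_test_split

paths = sorted(Path("raw_images").glob("*/*.jpg"))   # e.g. raw_images/aedes_aegypti/001.jpg
labels = [p.parent.name for p in paths]

# 85% training / 15% testing, stratified so each species keeps its proportion.
train_paths, test_paths = train_test_split(
    paths, test_size=0.15, stratify=labels, random_state=42
)

# Augmentation is applied to the training partition only, so no rotated copy of a
# test image can leak into training.
for src in train_paths:
    img = Image.open(src)
    for angle in (0, 90, 180, 270):
        out = Path("train_aug") / src.parent.name / f"{src.stem}_rot{angle}.jpg"
        out.parent.mkdir(parents=True, exist_ok=True)
        img.rotate(angle, expand=True).save(out)
```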

Development of a deep learning model

To develop a lightweight DL model, we used the MobileNetV2 architecture, which has the smallest size of the pretrained models in the Keras library at 14 MB (https://keras.io/api/applications/). We used the architecture for transfer learning, with the feature-extraction layers initialised with weights learned from ImageNet training. In this way, we reduced the training time, the mathematical computation and the consumption of available hardware resources (Qi et al., 2017). More specifically, we adopted the first 29 convolutional layers of MobileNet (Howard et al., 2017) for feature extraction; deeper convolutional layers reduce the resolution of the feature map and extract more abstract, high-level features (Tang et al., 2019). The softmax layer of MobileNet was truncated and the output of the model was set as the final tensor representation of the image. The web-based platform then allowed us to create two additional layers – a first dense layer and a final softmax layer – with three classes (images of Ae. aegypti, Ae. albopictus and Cx. quinquefasciatus). The first dense layer must have the same input size as the output of MobileNet. The transformation of the data into tensors was performed by MobileNet, and the training images were normalised to pixel values in the range [0, 1]. The model was fine-tuned with four learning rates (0.0001, 0.001, 0.005 and 0.01) and evaluated in terms of test accuracy, precision, recall and F1 score. We used Python with the TensorFlow and Keras deep learning frameworks on an NVIDIA Tesla V100-PCIE GPU, running on the Google Colab cloud platform (https://colab.research.google.com/github/data-psl/lectures2020/blob/master/notebooks/01_python_basics.ipynb).
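The following Keras sketch illustrates this transfer-learning setup. MobileNetV2 with ImageNet weights, frozen feature extraction, [0, 1] pixel scaling, a dense layer followed by a three-class softmax and learning-rate tuning are taken from the text; the size of the dense layer, the optimiser, the number of epochs and the data-loading step are assumptions.

```python
# Transfer-learning sketch (assumptions: dense-layer width, optimiser, epochs and
# dataset objects are illustrative, not the authors' exact configuration).
import tensorflow as tf

IMG_SIZE = (224, 224)
NUM_CLASSES = 3  # Ae. aegypti, Ae. albopictus, Cx. quinquefasciatus

def build_model() -> tf.keras.Model:
    """MobileNetV2 feature extractor (ImageNet weights, frozen) + dense + softmax head."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet", pooling="avg"
    )
    base.trainable = False  # keep the pretrained feature-extraction layers fixed
    return tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),  # [0, 1] pixels
        base,
        tf.keras.layers.Dense(128, activation="relu"),        # first dense layer (width assumed)
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def train_at_learning_rate(lr: float, train_ds, val_ds, epochs: int = 30):
    """Train a fresh model at one candidate learning rate and return it with its history."""
    model = build_model()
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
        loss="sparse_categorical_crossentropy",   # assumes integer class labels
        metrics=["accuracy"],
    )
    history = model.fit(train_ds, validation_data=val_ds, epochs=epochs)
    return model, history

# Example: compare the candidate learning rates, where train_ds/val_ds are tf.data
# datasets of (image, label) pairs, e.g. from tf.keras.utils.image_dataset_from_directory.
# for lr in (0.0001, 0.001, 0.005, 0.01):
#     model, history = train_at_learning_rate(lr, train_ds, val_ds)
```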

Using deep learning models as apps in practice

In previous studies, the performance of a DL model in classifying mosquitoes was usually evaluated using an internally split test or validation set. Such internal evaluations do not take into account two important factors that occur in the real world: mosquito images collected in the field have random and varying backgrounds, and inference takes time to yield a result. To validate the model, we applied the MobileNetV2 model to mosquitoes collected in the field. The model was deployed as a simple web-based application built with the p5.js platform (https://p5js.org/) that operates an external web camera (Logitech C922 Pro) connected to a computer. We ran inference on 100 field-collected mosquitoes of each species and evaluated performance in terms of prediction accuracy and the inference time required per prediction.
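The deployed app itself was a p5.js web application; the Python sketch below only illustrates the kind of measurement reported, namely the predicted class and per-image inference time, using the same Keras model and an attached webcam. The model file name, class-name order and camera index are assumptions.

```python
# Inference-time measurement sketch (assumptions: saved-model file name, class-name
# order and camera index are hypothetical; the actual field app was built in p5.js).
import time

import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("mosquito_mobilenetv2.h5")  # hypothetical file name
class_names = ["Ae. aegypti", "Ae. albopictus", "Cx. quinquefasciatus"]

cap = cv2.VideoCapture(0)  # external webcam
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("Could not read a frame from the webcam")

# Resize to the model input size; the Rescaling layer inside the model handles [0, 1] scaling.
x = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB), (224, 224))
x = np.expand_dims(x, axis=0).astype("float32")

start = time.perf_counter()
probs = model.predict(x, verbose=0)[0]
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"{class_names[int(np.argmax(probs))]}  ({elapsed_ms:.2f} ms)")
```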

Statistical analysis

To select a statistically supported hyperparameter for the model, the performance metrics were averaged and compared using one-way ANOVA to identify the learning rate that produced significantly higher performance. Similarly, for prediction with the app, the accuracy and inference time for each mosquito species were averaged and compared using ANOVA. Where significant differences were found, both hypothesis tests were followed by post hoc comparisons using Tukey's test at P < 0.05, conducted in SPSS 22.0.
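The analysis was run in SPSS 22.0; an equivalent sketch in Python using SciPy and statsmodels is given below, with placeholder accuracy values standing in for the study's data.

```python
# One-way ANOVA with Tukey's post hoc test (accuracy values below are placeholders,
# not the study's data; the paper performed the same tests in SPSS 22.0).
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Replicate accuracies (placeholder values) for each learning rate.
acc = {
    "0.0001": [0.95, 0.96, 0.95],
    "0.001":  [0.97, 0.98, 0.97],
    "0.005":  [0.96, 0.97, 0.96],
    "0.01":   [0.93, 0.94, 0.92],
}

f_stat, p_value = f_oneway(*acc.values())
print(f"one-way ANOVA: F = {f_stat:.3f}, P = {p_value:.4f}")

if p_value < 0.05:  # proceed to the post hoc comparison only if the ANOVA is significant
    values = np.concatenate(list(acc.values()))
    groups = np.repeat(list(acc.keys()), [len(v) for v in acc.values()])
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```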

Results

Image dataset

Since we wanted to enable automatic classification of mosquitoes in a real situation, we captured the images manually from different angles. We captured 510 images each from 100 female Ae. aegypti and 100 female Ae. albopictus, and 200 images from 50 female Cx. quinquefasciatus. Through this manual image acquisition process and subsequent augmentation, we obtained a total of 4880 images, of which the three classes of the DL model – Ae. aegypti, Ae. albopictus and Cx. quinquefasciatus – had 2040, 2040 and 800 images, respectively (table 1). The original images had a resolution of 5184 × 3456 pixels with 24-bit RGB channels at 72 dpi and were resized to 224 × 224 px before being fed into the neural network of the DL model. To alleviate the imbalance in the image dataset, we augmented the images of Cx. quinquefasciatus and downsampled the images of the Ae. aegypti and Ae. albopictus classes. The dataset and code are available on GitHub at https://github.com/songguan26/Deep-learning-models-to-automatically-classify-medically-important-mosquitoes-in-North-Borneo
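As a small illustration of the pre-processing described above, the sketch below resizes each image to 224 × 224 px before training; the directory names are assumptions.

```python
# Resize sketch (assumption: directory names are hypothetical; the 224 x 224 target
# size follows the paper).
from pathlib import Path

from PIL import Image

for src in Path("train_aug").glob("*/*.jpg"):
    img = Image.open(src).convert("RGB")
    dst = Path("train_224") / src.parent.name / src.name
    dst.parent.mkdir(parents=True, exist_ok=True)
    img.resize((224, 224)).save(dst)
```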

Table 1. Examples of the images used to develop the deep learning model

Deep learning model performance

MobileNetV2 achieved significantly higher performance with a learning rate of 0.001 than with the other learning rates tested (P < 0.05, fig. 1). As seen with the smaller learning rate of 0.0001, a smaller step size at each training iteration results in lower losses for the DL model and a more stable learning process (table 2). The fine-tuned models were subsequently deployed in the app and used to infer images of the mosquitoes collected in the field.

Figure 1. Comparison of the different learning rates on the performance of the DL model in classifying mosquitoes.

Table 2. Learning curve and confusion matrix in relation to the learning rate hyperparameter

Using deep learning as apps in practice

As can be seen from the prediction trend (fig. 2), app inference accuracy increased with the learning rate. The learning rate of 0.005 gave significantly higher prediction accuracy than the learning rates of 0.001 and 0.0001. Based on the test-set performance and the learning curves in the previous section, the learning rate of 0.005 was considered the optimal hyperparameter for the app to predict mosquitoes. For inference time, there were no significant differences among the learning rates (fig. 3). This could be because the hardware used to deploy the model – an external webcam (Logitech C922 Pro) connected to a computer – was the same throughout the experiment; inference time would differ if different deployment devices were used (Han, 2017).

Figure 2. Prediction accuracy of the app with the deep learning model.

Figure 3. Inference time for the prediction of mosquitoes collected in the field.

Discussion

Increasing attention has been paid to the reliability of DL models used in the real world. This has been shown in many experiments with mosquitoes collected in the field, which present deformations and uncertainties (Minakshi, 2018) not found in high-resolution images from DSLRs. Our experiment was designed with this in mind, using training data obtained from field-collected mosquitoes in North Borneo, Malaysia. We also wanted to assess the practical accuracy of the model as a mobile app for predicting mosquitoes in the field, an assessment that is usually excluded from such experiments. Our results on the test set are consistent with those of Isawasan et al. (2023) and Siddiqua et al. (2021), who showed that MobileNetV2 is able to discriminate between Ae. aegypti and Ae. albopictus with more than 95% accuracy. This study also extends our previous studies, which used software to classify Ae. aegypti and Ae. albopictus, by describing a pipeline for using the model as an app to classify mosquitoes.

The trend in the literature shows that more and more experiments are using DL to classify mosquitoes and to improve prediction in real-world situations. Although we used a different DL architecture, our prediction results are comparable to those of Siddiqui and Kayte (2023), who classified six mosquito species with 85.75% and 97.18% accuracy using VGG16-based models. Lee et al. (2023), who achieved an F1 score of 97.1% using YOLOv5, demonstrated the potential of automatically measuring mosquito species and populations in the field, including Ae. albopictus. Regarding the use of apps for mosquito classification, our result is consistent with several studies that used ready-made apps. For example, Asmai et al. (2020) developed a mobile app – Intelligent Detection of Mosquito Larvae (iMOLAP) – to predict mosquito species and stages.

As far as we know, a pipeline that uses DL models as a framework for app development has rarely been explored, possibly because it is difficult to embed the model in apps intended for mosquito classification. Our study therefore extends the work of Cheong et al. (2021), who developed a mobile app – PesTrapp – from scratch using PHP and a MySQL database that is able to record, map and identify mosquitoes based on ovitraps, but did not include DL algorithms in their pipeline.

However, there are differences between performance on the test set and the external use of the DL model as an app to predict mosquitoes: the accuracy decreases by 22%. This decrease in accuracy is referred to as model degradation. Vela et al. (2022) analysed and described the phenomenon of AI model quality degradation using datasets from four different industries, including healthcare. One of the main reasons is feature divergence (Cioffi et al., 2020; Vela et al., 2022), especially when the app is used in a different environment from the training data, because the model cannot keep up with the evolving relationship: either the model or the features evolve, where the features could be the shape, pattern and background of the object to be classified. Software development is therefore not the same as model development, and the deployment should be treated differently (Harshit, 2021). One way to mitigate this is to include more diverse data reflecting the real-world situations in which inference will be performed. In addition, the difference in performance after app deployment could be because the model was trained and tested on a cloud-based NVIDIA Tesla V100 GPU, which offers far higher performance than the machine used to deploy the model – a computer with an external webcam.

We focused on one important hyperparameter, the learning rate, which regulates the rate at which an algorithm updates or learns a parameter estimate – in our case, the weights of the neural network with respect to the loss gradient – and has been considered the most important hyperparameter (Maclaurin et al., 2015). Therefore, to simplify the optimisation process and to generalise the result when using the model as an app, we standardised the other hyperparameters, such as the number of epochs, number of layers and optimiser. These hyperparameters were standardised because a change in any of them could also influence model performance, as shown by Ong et al. (2022). In addition, only the MobileNetV2 architecture was used in this study because of its small size, which is also a limitation of the study. For example, the NASNetMobile and EfficientNetB0 architectures, at 23 MB and 29 MB respectively, are DL architectures of less than 30 MB that could also be deployed as apps on mobile devices such as phones and tablets.

Our study is limited by the number of images, owing to the cost and time required to collect an adequate number of mosquitoes in the field, in contrast to laboratory-reared mosquitoes, which could provide a larger amount of training data. The generalisation of the model, and especially of the app, could be improved with better-quality images taken in real situations. Our study and results contribute to the continuous improvement of DL studies for the classification of medically important insects. In the future, it may be necessary to develop the apps following a software development pipeline rather than a model development pipeline, which could greatly improve the performance of mosquito prediction apps. The pipeline could be integrated with pest control so that, after classification, the data could be linked to geographical data and time-series predictions. This would ultimately be one of the key elements of integrated pest management, where timely and accurate decision-making is required. In addition, a pipeline that includes human validation to cross-check the results, as discussed under the trend of augmented intelligence (Au.I), is also crucial to ensure the validity of the results (Cerf, 2013). Involving humans, especially medical entomologists and taxonomists, in Au.I would improve data quality by removing mismatched data and would improve the learning of features by processing the mosquito images to magnify the key morphology of the mosquito.

Author contributions

Song-Quan Ong: conceptualisation, methodology, validation, formal analysis, investigation, resources, data curation, writing – original draft, writing – review and editing, visualisation, project administration. Abdul Hafiz Ab Majid: conceptualisation, methodology, validation, supervision, writing – original draft, writing – review and editing. Wei-Jun Li: conceptualisation, methodology, validation, supervision, project administration. Jian-Guo Wang: conceptualisation, methodology, validation, supervision, project administration.

Competing interests

None.

References

Asmai, SA, Abidin, ZZ and Nizam AFNAR, MAM (2020) Aedes mosquito larvae recognition with a mobile app. International Journal of Advanced Trends in Computer Science and Engineering 9, 5059–5065.
Bin Said, I, Kouakou, YI, Omorou, R, Bienvenu, AL, Ahmed, K, Culleton, R and Picot, S (2022) Systematic review of Plasmodium knowlesi in Indonesia: a risk of emergence in the context of capital relocation to Borneo? Parasites & Vectors 15, 258.
Cerf, VG (2013) Augmented intelligence. IEEE Internet Computing 17, 96.
Cheong, YL, Rosilawati, R, Mohd-Khairuddin, CI, Siti-Futri, FF, Nur-Ayuni, N, Lim, KH, Khairul-Asuad, M, Mohd-Zahari, TH, Mohd-Izral, YU, Mohd-Zainuldin, T, Nazni, WA and Lee, HL (2021) PesTrapp mobile app: a trap setting application for real-time entomological field and laboratory study. Tropical Biomedicine 38, 171–179.
Cioffi, R, Travaglioni, M, Piscitelli, G, Petrillo, A and De Felice, F (2020) Artificial intelligence and machine learning applications in smart production: progress, trends, and directions. Sustainability 12, 492.
Han, S (2017) Efficient Methods and Hardware for Deep Learning. Doctoral dissertation, Stanford University.
Harshit, D (2021) Different architectures of machine learning model deployment! Available at https://medium.com/mlearning-ai/different-architectures-of-machine-learning-model-deployment-250a4a3a37b4
Howard, AG, Zhu, M, Chen, B, Kalenichenko, D, Wang, W, Weyand, T, Andreetto, M and Adam, H (2017) MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
Isawasan, P, Abdullah, ZI, Ong, SQ and Salleh, KA (2023) A protocol for developing a classification system of mosquitoes using transfer learning. MethodsX 10, 101947.
Kittichai, V, Pengsakul, T, Chumchuen, K, Samung, Y, Sriwichai, P, Phatthamolrat, N, Tongloy, T, Jaksukam, K, Chuwongin, S and Boonsang, S (2021) Deep learning approaches for challenging species and gender identification of mosquito vectors. Scientific Reports 11, 4838.
Lee, S, Kim, H and Cho, BK (2023) Deep learning-based image classification for major mosquito species inhabiting Korea. Insects 14, 526.
Li, Y, Kamara, F, Zhou, G, Puthiyakunnon, S, Li, C, Liu, Y, Zhou, Y, Yao, L, Yan, G and Chen, XG (2014) Urbanization increases Aedes albopictus larval habitats and accelerates mosquito development and survivorship. PLoS Neglected Tropical Diseases 8, e3301.
Maclaurin, D, Duvenaud, D and Adams, R (2015) Gradient-based hyperparameter optimization through reversible learning. In International Conference on Machine Learning (pp. 2113–2122). PMLR.
Maluda, MCM, Jelip, J, Ibrahim, MY, Suleiman, M, Jeffree, MS, Aziz, AFB, Jani, J, Yahiro, T and Ahmed, K (2020) Nineteen years of Japanese encephalitis surveillance in Sabah, Malaysian Borneo. The American Journal of Tropical Medicine and Hygiene 103, 864.
Minakshi, M (2018) A Machine Learning Framework to Classify Mosquito Species From Smart-Phone Images. USF Tampa Graduate Theses and Dissertations. https://digitalcommons.usf.edu/etd/7340
Nitatpattana, N, Apiwathnasorn, C, Barbazan, P, Leemingsawat, S, Yoksan, S and Gonzalez, J (2005) First isolation of Japanese encephalitis from Culex quinquefasciatus in Thailand. Southeast Asian Journal of Tropical Medicine and Public Health 36, 875.
Okayasu, K, Yoshida, K, Fuchida, M and Nakamura, A (2019) Vision-based classification of mosquito species: comparison of conventional and deep learning methods. Applied Sciences 9, 3935.
Ong, SQ (2016) Dengue vector control in Malaysia: a review for current and alternative strategies. Sains Malaysiana 45, 777–785.
Ong, SQ, Ahmad, H, Nair, G, Isawasan, P and Majid, AHA (2021a) Implementation of a deep learning model for automated classification of Aedes aegypti (Linnaeus) and Aedes albopictus (Skuse) in real time. Scientific Reports 11, 9908.
Ong, SQ, Ahmad, H and Mohd Ngesom, AM (2021b) Implications of the COVID-19 lockdown on dengue transmission in Malaysia. Infectious Disease Reports 13, 148–160.
Ong, SQ, Nair, G, Yusof, UK and Ahmad, H (2022) Community-based mosquito surveillance: an automatic mosquito-on-human-skin recognition system with a deep learning algorithm. Pest Management Science 78, 4092–4104.
Pereira-dos-Santos, T, Roiz, D, Lourenço-de-Oliveira, R and Paupy, C (2020) A systematic review: is Aedes albopictus an efficient bridge vector for zoonotic arboviruses? Pathogens 9, 266.
Qi, H, Liu, W and Liu, L (2017) An efficient deep learning hashing neural network for mobile visual search. In 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP) (pp. 701–704). IEEE.
Schaffner, F and Mathis, A (2014) Dengue and dengue vectors in the WHO European region: past, present, and scenarios for the future. The Lancet Infectious Diseases 14, 1271–1280.
Siddiqua, R, Rahman, S and Uddin, J (2021) A deep learning-based dengue mosquito detection method using faster R-CNN and image processing techniques. Annals of Emerging Technologies in Computing (AETiC) 5, 11–23.
Siddiqui, AA and Kayte, C (2023) Transfer learning for mosquito classification using VGG16. In First International Conference on Advances in Computer Vision and Artificial Intelligence Technologies (ACVAIT 2022) (pp. 471–484). Atlantis Press.
Tang, G, Liang, R, Xie, Y, Bao, Y and Wang, S (2019) Improved convolutional neural networks for acoustic event classification. Multimedia Tools and Applications 78, 15801–15816.
Vela, D, Sharp, A, Zhang, R, Nguyen, T, Hoang, A and Pianykh, OS (2022) Temporal quality degradation in AI models. Scientific Reports 12, 11654.