Introduction
Weeds, considered to be any unwanted plants in the field, not only affect the crops around them but can also jeopardize entire agricultural areas. Weeds compete with crops for nutrients, soil, water, and space and should be detected and eliminated at an early stage. As the most important crop protection strategy, weed control can lead to a 20% increase in yield (Buddenhagen et al. Reference Buddenhagen, Gunnarsson, Rolston, Chynoweth, Bourdot and James2020). Traditional chemical weed control is expensive, and herbicide use can be reduced by more than 50% if novel technologies are employed (Gerhards et al. Reference Gerhards, Andujar Sanchez, Hamouz, Peteinatos, Christensen and Fernandez-Quintanilla2022). The use of herbicides also has environmental impacts, including the potential pollution of soil, surface water, and groundwater (Agüera-Vega et al. Reference Agüera-Vega, Agüera-Puntas, Agüera-Vega, Martínez-Carricondo and Carvajal-Ramírez2021; Akbarzadeh et al. Reference Akbarzadeh, Paap, Ahderom, Apopei and Alameh2018; Le et al. Reference Le, Apopei and Alameh2019; Sabzi and Abbaspour-Gilandeh Reference Sabzi and Abbaspour-Gilandeh2018; Slaven et al. Reference Slaven, Koch and Borger2023; Sunil et al. Reference Sunil, Koparan, Ahmed, Zhang, Howatt and Sun2022). However, weed detection and control constitute a complicated task, as crops and weeds are quite similar in many respects, including color features, leaf shapes and forms, leaf patterns, and leaf/plant dimensions (Iqbal et al. Reference Iqbal, Khaliq and Cheema2020; Liu et al. Reference Liu, Li, Li, You, Yan and Tong2019; Sodjinou et al. Reference Sodjinou, Mohammadi, Mahama and Gouton2021). Recently, weed detection and separation from crops have advanced rapidly and benefited from modern solutions. These recent solutions include satellite-based detection (Rasmussen et al. Reference Rasmussen, Azim and Nielsen2021; Shanmugam et al. Reference Shanmugam, Assunção, Mesquita, Veiros and Gaspar2020; Shendryk et al.
Reference Shendryk, Rossiter-Rachor, Setterfield and Levick2020), drone-based detection (Esposito et al. Reference Esposito, Crimaldi, Cirillo, Sarghini and Maggio2021; Liang et al. Reference Liang, Yang and Chao2019; Revanasiddappa et al. Reference Revanasiddappa, Arvind and Swamy2020), hyperspectral imaging (Che’Ya et al. Reference Che’Ya, Dunwoody and Gupta2021; Li et al. Reference Li, Al-Sarayreh, Irie, Hackell, Bourdot, Reis and Ghamkhar2021; Pignatti et al. Reference Pignatti, Casa, Harfouche, Huang, Palombo and Pascucci2019; Sulaiman et al. Reference Sulaiman, Che’Ya, Mohd-Roslim, Juraimi, Mohd-Noor and Fazlil-Ilahi2022), and multispectral imaging (Barrero and Perdomo Reference Barrero and Perdomo2018; Osorio et al. Reference Osorio, Puerto, Pedraza, Jamaica and Rodríguez2020).
Spectral detection is a promising solution for crop/weed separation, based on the concept that every object in nature has its own spectral signature (Falcioni et al. Reference Falcioni, Moriwaki, Pattaro, Furlanetto, Nanni and Antunes2020; Putra Reference Putra2020). This signature arises from a plant's physical properties and its nutrient, chemical, and water content. These properties determine how much electromagnetic energy is absorbed and reflected, which can be used to distinguish crops from weeds. The use of spectral data in agricultural applications has been extensively researched. Applications include crop/weed discrimination (Fletcher et al. Reference Fletcher, Reddy and Turley2016; Gómez-Casero et al. Reference Gómez-Casero, Castillejo-González, García-Ferrer, Peña-Barragán, Jurado-Expósito, García-Torres and López-Granados2010; Kamath et al. Reference Kamath, Balachandra and Prabhu2020; Subeesh et al. Reference Subeesh, Bhole, Singh, Chandel, Rajwade, Rao, Kumar and Jat2022), disease detection (Cordon et al. Reference Cordon, Andrade, Barbara and Romero2021; Mahlein et al. Reference Mahlein, Steiner, Dehne and Oerke2010, Reference Mahlein, Rumpf, Welke, Dehne, Plümer, Steiner and Oerke2013; Shafri et al. Reference Shafri, Anuar, Seman and Noor2011), ripeness estimation (Silalahi et al. Reference Silalahi, Reaño, Lansigan, Panopio and Bantayan2016), estimation of plant nutrient deficiencies (Abdulridha et al. Reference Abdulridha, Ampatzidis, Ehsani and de Castro2018; Ayala-Silva and Beyl Reference Ayala-Silva and Beyl2005), classification of grass-dominated habitats (Bradter et al. Reference Bradter, O’Connell, Kunin, Boffey, Ellis and Benton2020), plant species/varieties discrimination (Manevski et al. Reference Manevski, Manakos, Petropoulos and Kalaitzidis2011; Prospere et al. Reference Prospere, McLaren and Wilson2014; Ullah et al. Reference Ullah, Schlerf, Skidmore and Hecker2012; Vaiphasa et al.
Reference Vaiphasa, Skidmore, de Boer and Vaiphasa2007; Yu et al. Reference Yu, Schumann, Sharpe, Li and Boyd2020), distinguishing herbicide-resistant plants (Jones et al. Reference Jones, Austin, Dunne, Leon and Everman2023), and classifying forest logging residue (Acquah et al. Reference Acquah, Via, Billor and Fasina2016). In all these applications, the discrimination or detection technique was built on the specific spectral reflection of plants or plant organs: there were one or more wavelengths at which the reflectance of electromagnetic energy differed between the healthy crop and the weed, diseased crop, or malnourished crop. In this regard, hyperspectral data analysis can provide tools that are fast and generalizable and can be integrated into semi-automated analysis procedures (Hennessy et al. Reference Hennessy, Clarke and Lewis2020). Another advantage of spectral datasets is the potential for detailed analysis of spectral reflectance, which reflects the biochemical and biophysical attributes of plants. However, a disadvantage of hyperspectral analysis is that processing the data can be difficult due to their high dimensionality. The demand for obtaining sufficient samples and the high cost of spectral measurements are further limitations of hyperspectral technologies (Adelabu et al. Reference Adelabu, Mutanga, Adam and Sebego2013). While spectral data have been quite critical for species discrimination, a disadvantage is the redundant information within high-resolution spectral data (Nagasubramanian et al. Reference Nagasubramanian, Jones, Singh, Sarkar, Singh and Ganapathysubramanian2019).
Techniques commonly used for the analysis and classification of spectral data include the k-nearest neighbors (KNN) classifier, linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), principal component analysis, the Normalized Difference Vegetation Index (NDVI), the Fourier transform, the Jeffries–Matusita distance measure, support vector machines (SVMs), and artificial neural networks (ANNs) (Bell and Baranoski Reference Bell and Baranoski2004; Durgante et al. Reference Durgante, Higuchi, Almeida and Vicentini2013; Longchamps et al. Reference Longchamps, Panneton, Samson, Leroux and Thériault2010; Louargant et al. Reference Louargant, Jones, Faroux, Paoli, Maillot, Gée and Villette2018; Noble and Brown Reference Noble and Brown2009; Strothmann et al. Reference Strothmann, Ruckelshausen, Hertzberg, Scholz and Langsenkamp2017; Talaviya et al. Reference Talaviya, Shah, Patel, Yagnik and Shah2020; Zarco-Tejada et al. Reference Zarco-Tejada, Camino, Beck, Calderon, Hornero, Hernández-Clemente, Kattenborn, Montes-Borrego, Susca, Morelli and Gonzalez-Dugo2018). Symonds et al. (Reference Symonds, Paap, Alameh, Rowe and Miller2015) developed a real-time plant discrimination system based on discrete reflectance spectroscopy using three different laser diodes (635, 685, and 785 nm). The system was reported to achieve practical discrimination at a vehicle speed of 3 km h−1. In a recent work, Nidamanuri (Reference Nidamanuri2020) used machine learning to discriminate tea (Camellia sinensis (L.) Kuntze) plant varieties. Canopy-level hyperspectral reflectance measurements were acquired for tea and natural plant species in the range of 350 to 2,500 nm. The classifier could successfully discriminate six out of nine tea plant varieties, with accuracies between 75% and 80%.
Recently, attention has been paid to the implementation and improvement of convolutional neural networks (CNNs) for classification purposes. A key advantage of CNNs is that they learn features on their own through the network training process, which permits them to discriminate unseen samples at a high performance rate (Garibaldi-Márquez et al. Reference Garibaldi-Márquez, Flores, Mercado-Ravell, Ramírez-Pedraza and Valentín-Coronado2022). Andrea et al. (Reference Andrea, Daniel and Misael2017) discriminated between maize (Zea mays L.) and weeds using CNNs. They evaluated the LeNET, AlexNet, cNET, and sNET architectures, and cNET showed the best performance in terms of accuracy (95.05%) and processing time (2.34 ms). Xi et al. (Reference Xi, Li, Su, Tian, Zhang, Sun, Long, Wan and Qian2020) proposed a network called MmNet that combines the local response normalization of AlexNet with GoogLeNet and VGG inception models. MmNet achieved an accuracy of 94.50% at a time cost of 10.369 s. Nguyen et al. (Reference Nguyen, Sagan, Maimaitiyiming, Maimaitijiang, Bhadra and Kwasniewski2021) used SVM and random forest (RF) techniques for disease detection in grapevine (Vitis vinifera L.) plants based on hyperspectral data in the range of 400 to 1,000 nm. The SVM classifier performed better for vegetation index–wise classification, while the RF classifier showed better results for pixel-wise and image-wise classification. Garibaldi-Márquez et al. (Reference Garibaldi-Márquez, Flores, Mercado-Ravell, Ramírez-Pedraza and Valentín-Coronado2022) studied the use of shallow and deep learning techniques for the discrimination of crops and weeds. RGB images were captured under field conditions at different locations, in cornfields with three different weeds present. VGG16, VGG19, and Xception models were trained and tested, leading to accuracies of 97.93%, 97.44%, and 97.24%, respectively. In a recent work, Wang et al.
(Reference Wang, Chen, Ju, Lin, Wang and Wang2023) took advantage of CNNs for the classification of weed species based on hyperspectral (HS) images. The study was based on a database of HS images of 40 weed species. Preprocessing was applied to the data, and the best accuracy of 98.15% was achieved.
The use of deep learning techniques together with spectral data can facilitate the detection of weeds in agricultural fields, enabling precise weed detection with a noncontact and noninvasive method. This study evaluates a method based on the wavelet transform and deep networks for the separation of crops and weeds and compares it with traditional classifiers.
Materials and Methods
Instrumentation and Measurements
Three crops, namely, cucumber (Cucumis sativus L.), tomato (Solanum lycopersicum L.), and bell pepper (Capsicum annuum L.), and five weed species, including bindweed (Convolvulus spp.), purple nutsedge (Cyperus rotundus L.), narrowleaf plantain (Plantago lanceolata L.), common cinquefoil (Potentilla simplex Michx.), and garden sorrel (Rumex acetosa L.), were used for this study. Leaves were taken from different parts of young plants of different sizes. Samples were taken from plants in both the vegetative and flowering stages of growth, with almost the same number of samples for each stage. For each plant, more than 70 samples were obtained, for a total of 626 samples. The plants (with soil and roots) were removed from the farm and quickly transferred to the laboratory. All measurements were made under the same conditions. For illumination, one type A lamp and one halogen lamp were used (Figure 1). Spectral reflectances in the range of 380 to 1,000 nm were obtained using a Specbos 1211 spectroradiometer (JETI Technische Instrumente GmbH, Jena, Germany). This noncontact spectroradiometer connects to a PC via a USB port, has an optical bandwidth of 4.5 nm, and measures illuminance in the range of 1 to 1,500,000 lx. As shown in Figure 1, the spectroradiometer was set at an angle of 90° relative to the leaves, and the standard observer of 2° was used for the measurements.
Preprocessing
Statistical Pretreatment
Preprocessing high-dimensional data normally leads to better discovery of relationships and trends in the data. In this regard, first, the noisy beginning of the spectra was removed. Then, the data were denoised using a smoothing filter (i.e., a Savitzky-Golay filter). Next, standard normal variate (SNV) normalization was applied. Finally, the first derivative and mean centering were applied to the data.
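The pretreatment chain above can be sketched in Python (the study itself used MATLAB). This is a minimal illustration, not the authors' code: the Savitzky-Golay window length and polynomial order are assumptions, as the text does not state them.

```python
import numpy as np
from scipy.signal import savgol_filter

def pretreat(spectra, window=11, polyorder=2):
    """Apply the pretreatment chain to a (samples x bands) reflectance array.
    Window length and polynomial order are assumed values."""
    # 1) Savitzky-Golay smoothing to suppress random noise
    smoothed = savgol_filter(spectra, window_length=window,
                             polyorder=polyorder, axis=1)
    # 2) Standard normal variate: center and scale each spectrum individually
    snv = (smoothed - smoothed.mean(axis=1, keepdims=True)) \
          / smoothed.std(axis=1, keepdims=True)
    # 3) First derivative along the wavelength axis
    deriv = np.gradient(snv, axis=1)
    # 4) Mean centering across samples (subtract the mean spectrum)
    return deriv - deriv.mean(axis=0, keepdims=True)
```

The order of the steps follows the text; in practice the best combination of pretreatments is data dependent.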
Continuous Wavelet Transform
The statistical pretreatment was not effective in preparing the data for the convolutional neural networks, as explained in the next section. Therefore, the continuous wavelet transform (CWT) was used for preprocessing. The CWT decomposes a signal into wavelets and is well suited for mapping the changing properties of nonstationary signals. The basis functions of the CWT are scaled and shifted versions of the mother wavelet. The formula used for this transformation is as follows:

W(a, τ) = (1/√a) ∫ x(t) ψ*((t − τ)/a) dt (Equation 1)
Based on Equation 1, the wavelet ψ(t) is shifted by τ and scaled by factor a. In this study, a Morse wavelet having the following formula was used:

Ψ(ω) = U(ω) a_{P,γ} ω^(P²/γ) e^(−ω^γ) (Equation 2)
where U(ω) represents the unit step and a_{P,γ} is a normalizing constant. The parameter γ, which controls the symmetry of the wavelet, was set to 3; and P, the square root of the time–bandwidth product, which is proportional to the wavelet duration, was set to √60. The CWT was applied to all spectral reflectances, and a database of scalograms was constructed. These scalograms, in the form of 2D images, were used for training the network and for classification. Figure 2 provides an example of a scalogram randomly chosen from the pepper plant samples.
Classification Techniques
Common Classifiers
For comparison purposes, six common classifiers were employed for the task of discrimination of crops/weeds. These techniques include LDA, QDA, linear support vector machine (LSVM), quadratic support vector machine (QSVM), ANNs, and fine k-nearest neighbors (FKNN). Table 1 presents the technical details of these methods.
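The six classifiers were configured in MATLAB (Table 1); a rough scikit-learn equivalent can be sketched as below. The mapping is our assumption, not the authors' settings: "fine" KNN is approximated by a 1-nearest-neighbor classifier and the quadratic SVM by a degree-2 polynomial kernel.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# scikit-learn stand-ins for the six MATLAB classifiers (our mapping):
classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "LSVM": SVC(kernel="linear"),
    "QSVM": SVC(kernel="poly", degree=2),         # quadratic SVM
    "ANN": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                         random_state=0),
    "FKNN": KNeighborsClassifier(n_neighbors=1),  # "fine" KNN ~ 1 neighbor
}

def evaluate(X, y, cv=5):
    """5-fold cross-validated accuracy for each classifier (cf. Table 2)."""
    return {name: cross_val_score(clf, X, y, cv=cv).mean()
            for name, clf in classifiers.items()}
```

The same 5-fold validation protocol reported in Table 2 can then be reproduced by calling `evaluate` on the pretreated spectra.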
Convolutional Neural Network Classifiers
GoogLeNet was utilized in this study to verify its ability to classify crops and weeds based on the spectral data. This pretrained network was used for two reasons. First, it is quite a strong network, trained on a large database covering 1,000 different categories. Second, using this network saved time, as it eliminated the trial and error of building new networks. In addition, a pretrained network can be reused by other researchers working in the same field.
GoogLeNet is a convolutional network that is 22 layers deep, with 7 pooling layers included and nine inception modules stacked linearly. Training uses asynchronous stochastic gradient descent with a momentum of 0.9. An initial learning rate of 1 × 10−4, the l2-norm gradient threshold method, and a maximum of 20 epochs were used for building the network. The inputs to GoogLeNet, which are the outputs of the CWT, need to be RGB image arrays of size 224 × 224 × 3. To avoid overfitting, a dropout layer was employed that randomly sets input elements to zero with a given probability. The flowchart of the proposed method is shown in Figure 3. The Morse wavelet was applied to the signals, and scalograms were extracted; the scalograms are RGB representations of the spectral reflectances. These RGB images were then used to retrain the CNN. Finally, the classifier was built to carry out the classification task on new samples.
SqueezeNet is a CNN that is 18 layers deep. Like GoogLeNet, it is pretrained on 1,000 categories. The size of its input images is 227 × 227 × 3. The last learnable layer was replaced with a convolutional layer with two filters, and the weight and bias learning rate factors of this layer were both set to 10. For training, the mini-batch size, maximum number of epochs, initial learning rate, and learning optimizer were set to 10, 15, 3 × 10−4, and stochastic gradient descent with momentum, respectively.
Programming and Analysis
In this study, the data were randomly divided into three groups: 70% for training, 15% for validation, and 15% for testing; the test set was not presented to the algorithms during training (i.e., unseen data). All programming was done using MATLAB (R2019b, MathWorks, Natick, MA, USA) and Microsoft Excel 2016 (Microsoft, Redmond, WA, USA). The processing and analysis were performed on a PC with an Intel® Core™ i7 processor and 16 GB of RAM.
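The 70/15/15 split can be sketched as follows (a minimal Python illustration; the study used MATLAB, and the random seed here is arbitrary):

```python
import numpy as np

def split_indices(n, train=0.70, val=0.15, seed=0):
    """Random 70/15/15 split into training, validation, and test indices.
    The test portion is held out entirely during training (unseen data)."""
    idx = np.random.default_rng(seed).permutation(n)
    n_train = int(round(train * n))
    n_val = int(round(val * n))
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

For the 626 samples of this study, such a split yields roughly 438 training, 94 validation, and 94 test spectra.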
Results and Discussion
The spectral reflectances of the leaves of crops and weeds are remarkably similar, which makes discriminating crops from weeds difficult. Figure 4 presents the spectral reflectance of the bell pepper plant and the five weeds. It is evident that techniques for reducing the data volume or efficient classification techniques are necessary. As seen in Figure 4, most of the relevant information lies between 500 and 750 nm. In the blue region of the spectrum, there is little variation in the spectral reflectances, and absorbance is close to 1. In a study on the discrimination of weeds (i.e., spurge [Euphorbia spp.] and purple loosestrife [Lythrum salicaria L.]) from the surrounding vegetation, Hom et al. (Reference Hom, Bajwa, Lym and Nowatzki2020) found the significant spectral bands in the same regions. Sayed Yones et al. (Reference Sayed Yones, Amin Aboelghar, Ali Khdery, Massoud Ali, Hussien Salem, Farag and Ahmed Mahmoud Mamon2019) also observed that good discrimination of healthy/infested plants could be obtained in the green and red parts of the spectrum when monitoring sugar beet (Beta vulgaris L.) infestation. In addition, over a large part of the near-infrared (NIR) region, there is little fluctuation, and most of the energy is reflected. This is expected, as plants use the visible part of the spectrum for photosynthesis and other metabolic processes (Hua et al. Reference Hua, Lin, Guo, Fan, Zhang, Yang, Hu and Zhu2019; Mahlein et al. Reference Mahlein, Rumpf, Welke, Dehne, Plümer, Steiner and Oerke2013; Su Reference Su2020).
Pretreatment
To remove random noise, the spectra were smoothed; this pretreatment has been reported to be efficient in other works (Huang et al. Reference Huang, Li, Yang, Wang, Li, Zhang, Wan, Qiao and Qian2021; Jiang et al. Reference Jiang, Steven, He, Chen, Du and Guo2015; Yang et al. Reference Yang, Yang, Hao, Xie and Li2019). The spectra were then normalized, followed by the first derivative and mean centering. These techniques help to remove irrelevant information and to better represent data trends (Türker-Kaya and Huck Reference Türker-Kaya and Huck2017). Recently, Amirvaresi et al. (Reference Amirvaresi, Nikounezhad, Amirahmadi, Daraei and Parastar2021) reported that mean centering and the second derivative yielded the best performance for saffron (Crocus sativus L.) authentication and adulteration detection based on NIR and mid-infrared (MIR) spectroscopy. The choice and combination of preprocessing techniques is therefore a critical step. This preprocessing produced a diagram that clearly represents the differences between the spectra of crops and weeds. As Figure 5 shows, the average spectrum of the crops has significant zones that differ from those of the weeds: the crop spectrum peaks at 735 nm, where the weed spectrum has a trough, while the weed spectrum peaks at 695 nm. The spectra preprocessed by smoothing were used as the input for the six traditional classifiers.
Traditional and Deep Classifiers
The CWT was used as preprocessing for the deep networks. Table 2 presents the validation and test accuracies achieved by each classifier. As can be observed, the proposed method using SqueezeNet led to complete separation of crops and weeds for both validation and test samples, whereas GoogLeNet achieved an accuracy of 97.8%. Among the traditional classifiers, FKNN also led to complete separation. LDA and QSVM performed next best; both had a 5-fold validation accuracy of 99.6% and a test accuracy of 100%. The QDA technique ranked last, with validation and test accuracies of 82.5% and 86.6%, respectively. Comparison of the training times shows that GoogLeNet (106.63 min) and SqueezeNet (26.85 min) required the most time for training (Table 2). The main difference between the deep networks and the traditional classifiers is that the networks involve converting the spectra to images and then using the images for training, which takes significant time.
a LDA, linear discriminant analysis; QDA, quadratic discriminant analysis; LSVM, linear support vector machine; QSVM, quadratic support vector machine; ANN, artificial neural network; FKNN, fine k-nearest neighbors.
Compared with previous research, the performance of SqueezeNet and FKNN has been remarkable. Nidamanuri (Reference Nidamanuri2020) utilized ANNs for the discrimination of tea plant varieties based on spectral data, comparing the ANN with other methods, including KNN, LDA, SVMs, and a normalized spectral similarity score. SVM, as a machine learning technique, led to higher classification accuracies, followed by LDA. It was reported that six out of nine varieties could be discriminated with accuracies ranging between 75% and 80%; the inclusion of natural tea plants increased the variability of the spectral data and reduced the classification accuracy. Shirzadifar et al. (Reference Shirzadifar, Bajwa, Mireei, Howatt and Nowatzki2018) used the soft independent modeling of class analogy method to discriminate three weeds based on spectral data. Preprocessing proved necessary for achieving proper results: five preprocessing methods were evaluated, and the second derivative was effective. The authors reported the NIR region as the best for discrimination; the proposed method could discriminate the three weed species with 100% accuracy for 63 samples. Jiang et al. (Reference Jiang, Zhang, Qiao, Zhang, Zhang and Song2020) proposed a graph convolutional network for crop and weed recognition. Their network achieved accuracies of 97.80%, 99.37%, 98.93%, and 96.51% on four different datasets and outperformed AlexNet, VGG16, and ResNet-101. De Souza et al. (Reference De Souza, do Amaral, de Medeiros Oliveira, Coutinho and Netto2020) studied the differentiation of sugarcane (Saccharum officinarum L.) from weeds based on spectral data using soft independent modeling. They observed that selecting only four significant bands in the VIS-NIR could yield the same results as the whole spectrum; their method obtained an accuracy of 97.4%.
In a recent work, Su et al. (Reference Su, Yi, Coombes, Liu, Zhai, McDonald-Maier and Chen2022) mapped blackgrass (Alopecurus myosuroides Huds.) in wheat (Triticum aestivum L.) fields using multispectral images and deep learning. For the classification task, RF with Bayesian hyperparameter optimization was used. This work led to an accuracy of 93%, and the most discriminant spectral index was composed of green-NIR.
The training process with SqueezeNet in the present study shows that training performed very well (Figure 6). In this figure, the most important element is the validation curve, which improves while following the training data. From iteration 258 onward, the network could remarkably discriminate the crops and weeds (i.e., 100% accuracy). Table 3 provides the details of the training of the network. As the table indicates, in the 6th epoch, when validation accuracy reaches 100%, the validation loss is quite small, and by the 13th epoch, it reaches 0.0003. The mini-batch accuracy, which represents the training accuracy for mini-batches or subbatches (if the whole dataset is considered one batch), is also provided; it shows that training stabilizes after the fourth epoch. Figure 7 presents the value of the loss function at each iteration. Minimizing the loss function is based on the gradient descent algorithm: in every iteration, the gradient of the loss function is evaluated, and the weights are then updated. The figure shows that training improved steadily, with the loss value for the validation data gradually decreasing while following the training data, indicating that the learning process was performed correctly.
The confusion matrix describing the performance of SqueezeNet is provided in Figure 8. In this matrix, the output class is the predicted classification, and the target class refers to the actual classes. The algorithm randomly chose 34 crop samples and 61 weed samples as test spectra, all of which were classified correctly. Akbarzadeh et al. (Reference Akbarzadeh, Paap, Ahderom, Apopei and Alameh2018) utilized SVM for the discrimination of crops and weeds based on spectral data. They reported that their Gaussian SVM algorithm, which combined spectral data obtained at three wavelengths with the Normalized Difference Vegetation Index (NDVI), could classify the plants with a success rate of 97%. Rock et al. (Reference Rock, Gerhards, Schlerf, Hecker and Udelhoven2016) performed the discrimination of eight plant species using emissive thermal infrared spectroscopy. The hyperspectral images were acquired in the range of 7.8 to 11.6 μm at 40-nm resolution, and the overall accuracy of discrimination was 92.26%. In a recent work, Jin et al. (Reference Jin, Bagavathiannan, Maity, Chen and Yu2022) compared GoogLeNet, MobileNet-v3, ShuffleNet-v2, and VGGNet for the discrimination of weeds. ShuffleNet-v2 and VGGNet showed higher overall accuracies (≥0.999); however, ShuffleNet-v2 and MobileNet-v3 were remarkably faster than GoogLeNet and VGGNet.
An advantage of spectral data for the discrimination of plants is that the spectral reflectance of each object is specific and acts as a fingerprint, largely independent of lighting conditions. Therefore, the spectral responses of plants can be measured on-farm and used for discrimination purposes in agricultural applications. Other techniques that have recently been employed for plant discrimination and weed detection are multispectral/hyperspectral imaging, 3D modeling of plants, and LiDAR (Sandoval et al. Reference Sandoval, Gor, Ramallo, Sfer, Colombo, Vilaseca, Pujol, Caivano and Buera2012; Andújar et al. Reference Andújar, Calle, Fernández-Quintanilla, Ribeiro and Dorado2018; Jarocińska et al. Reference Jarocińska, Kopeć, Tokarska-Guzik and Raczko2021; Jin et al. Reference Jin, Bagavathiannan, Maity, Chen and Yu2022; Reiser et al. Reference Reiser, Vázquez-Arellano, Paraforos, Garrido-Izard and Griepentrog2018; Su et al. Reference Su, Fennimore and Slaughter2019). Barrero and Perdomo (Reference Barrero and Perdomo2018) fused multispectral and RGB images for weed detection and observed that the Normalized Green–Red Difference Index provided better features than the NDVI. Their preprocessing included transforming the RGB images to hue, intensity, and saturation and applying the Haar transformation. The best weed detection performance was obtained using a neural network, with a detected weed area percentage of between 80% and 108%. In research conducted by Özlüoymak (Reference Özlüoymak2020) on the use of stereo-imaging for the detection of crops and weeds, artificial plants, including one crop and six weeds, were utilized. The proposed technique led to R2 values of 0.962 and 0.978 for the detection of crops and weeds, respectively. In a recent study, Shahbazi et al.
(Reference Shahbazi, Ashworth, Callow, Mian, Beckie, Speidel, Nicholls and Flower2021) studied the ability of light detection and ranging (LiDAR) sensors to detect weeds. The ability to detect weeds at different scanning distances from the sensor depended significantly on the size of the target and its orientation toward the LiDAR. The study showed that LiDAR could detect 100% of the weeds based on their height differences from the plant canopy. Tao and Wei (Reference Tao and Wei2022) used a hybrid CNN-SVM classifier for weed recognition. For the deep CNN, the VGG network, trained on true-color images, was employed. The VGG-SVM classifier achieved an accuracy of 92.1% for the separation of winter rape (Brassica napus L.) seedlings from four weeds.
This study showed that spectral data are a proper tool for the discrimination of crops and weeds. The spectral reflectances of leaves of three crops (cucumber, tomato, and bell pepper) and five weeds (Convolvulus spp., C. rotundus, P. lanceolata, P. simplex, and R. acetosa) were obtained over the wavelength range of 380 to 1,000 nm. The classification performance of two deep CNNs and six common classifiers was investigated and compared, with two types of preprocessing (i.e., statistical pretreatment and the wavelet transform) used to achieve the best performance of the techniques. The use of the continuous wavelet transform for dimensionality reduction of the spectral data proved quite successful: the proposed method using SqueezeNet discriminated crops and weeds with 100% accuracy. This study demonstrates the successful use of spectral data for accurate discrimination of various crops and weeds based on their spectral signatures. Future studies may consider the generalization of the technique; a bigger dataset with many different types of crops and weeds would support the development of a robust classifier for crop/weed separation. It is also suggested that the classifier be integrated into real-time weed detection systems for evaluation of the technique in the field.
Acknowledgments
The authors cordially appreciate the support of the ImViA laboratory, University of Burgundy, France, for the instrumentation and laboratory facilities provided for this research. The work is part of a joint Ph.D. study between Tarbiat Modares University, Tehran, Iran, and the University of Burgundy, Dijon, France.
Funding
This research received no specific grant from any funding agency or the commercial or not-for-profit sectors.
Competing interests
The authors declare no competing interests.