Hybrid feature optimized CNN for rice crop disease prediction

The main goal of this work is to classify rice leaf diseases using a classifier assisted by optimal feature selection. Any disease affecting the rice crop adversely impacts both the yield and the quality of the harvest. To take measures that minimize yield loss, improve rice quality, and increase farmer income, rice leaf diseases must be diagnosed accurately. Currently, most farmers diagnose and categorize diseases by hand, which is time-consuming. Automated techniques can overcome this by detecting plant leaf disease directly from images. Because CNNs excel at classifying and identifying images, data availability becomes a crucial component of their training. The detailed workflow of our proposed model is exhibited in Algorithm 1 and Fig. 2.

Algorithm 1. Hybrid WOA-APSO-based feature selection and CNN classification for rice crop disease detection.

The simulated and implemented stages of the proposed computer-aided automatic diagnosis methodology are as follows. In step 1, images are collected from the Kaggle archive or from a real-time field dataset. Step 2 is image preprocessing, which involves removing noise and enhancing image quality. Step 3 segments each input image, followed in step 4 by mathematical morphological operations; steps 5 and 6 are feature extraction and classification.

Fig. 2. Architecture of the proposed model.

Image collection

The dataset contains 2096 rice crop images across four classes, including a healthy class. The dataset was collected from the Kaggle archive for model training, and the crop images stored in the imaging archive undergo normalization. Figure 3 shows sample images from the dataset. The input images are expressed mathematically in Eq. (1).

$$R_{ia}=\left\{I_{r1},\ I_{r2},\ I_{r3},\cdots,\ I_{ri}\right\}$$

(1)

where

R_ia The input rice leaf archive.

I_r A leaf image present in the archive.

i The total number of images in the archive.

Fig. 3. Sample images of each class from the archive.

Preprocessing

Preprocessing is an optional step, needed when images are acquired from different environments; it is regarded as a crucial step in identifying crop diseases and enhancing image quality. A Wiener filter, a statistical method that lessens the image's blurring and smoothing effects, was applied to enhance the image with minimum mean square error. Figure 4 shows the result of applying the Wiener filter to the input image. The Wiener filter's operation is denoted by x[n], as shown in Eq. (2).

$$x\left[n\right]=\sum_{i=0}^{n}a_{i}\,w[n-i]$$

(2)

where

x[n] The main variable or signal of interest.

ai Coefficients or weights associated with each term.

w[n − i] The shifted input or weighting function at step (n − i).
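As a concrete illustration of this step, the sketch below applies SciPy's adaptive Wiener filter to a grayscale leaf image. This is a minimal sketch, assuming the image is already loaded as an 8-bit NumPy array; the 5 × 5 window size is an illustrative choice, not a value reported in this work.

```python
import numpy as np
from scipy.signal import wiener

def denoise_leaf(image: np.ndarray, window: int = 5) -> np.ndarray:
    """Suppress noise and blur with an adaptive Wiener filter (Eq. 2)."""
    # wiener() estimates the local mean and variance in a window x window
    # neighbourhood and attenuates pixels with a low local signal-to-noise ratio.
    filtered = wiener(image.astype(float), mysize=window)
    return np.clip(filtered, 0, 255).astype(np.uint8)
```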

Fig. 4. Sample images from each class after applying the Wiener filter.

Segmentation

The technique of dividing an image into several regions, each with a distinct set of pixels, is known as image segmentation. The image is divided using the global thresholding technique based on gray-level pixel intensity and a threshold (T). Using Eq. (3), the segmented image obtained from global thresholding can be expressed as s(x, y), where s(x, y) represents the image's pixel value.

$$s\left(x,y\right)=\left\{\begin{array}{ll}1 & if\ s\left(x,y\right)>threshold\left(T\right)\\ 0 & if\ s\left(x,y\right)\le threshold\left(T\right)\end{array}\right.$$

(3)

where

s(x, y) The value at coordinates (x, y) in the given space or matrix.

threshold(T) The threshold value, represented by T, used for comparison.

1 Output when s(x, y) exceeds the threshold T.

0 Output when s(x, y) is less than or equal to the threshold T.

The segmented image is obtained by comparing each pixel intensity value with the threshold value: where the pixel intensity is greater than the threshold, the segmented image s(x, y) is obtained from the actual image. Segmentation is followed by mathematical morphological operations, estimated by applying a specific structuring element at every feasible location to smooth the region of interest.

$$Erosion:\ B\ominus S=\left\{A\ |\ {\left(S\right)}_{A}\subseteq B\right\}$$

(4)

$$Dilation:\ B\oplus S=\left\{A\ |\ {\left(S\right)}_{A}\cap B\ne \varnothing \right\}$$

(5)

$$Opening:\ B\circ S=\left(B\ominus S\right)\oplus S$$

(6)

$$Closing:\ B\bullet S=\left(B\oplus S\right)\ominus S$$

(7)

where

B The input binary image or set being processed.

S The structuring element used for morphological operations.

⊖ The erosion operator, which reduces the set B based on the structuring element S.

⊕ The dilation operator, which expands the set B using the structuring element S.

(S)_A The translated version of the structuring element S centered at A.

⊆ Denotes that (S)_A is a subset of B.

∩ Intersection operation.

∅ Represents the empty set.

The mathematical expressions for these morphological operations are given in (4), (5), (6), and (7), where B represents the binary image and S the structuring element. Erosion removes incomplete parts and thins the image to accomplish smoothing, while dilation fills in incomplete regions of the image boundaries and thickens them to enhance the image. Figure 5 exhibits sample images after applying thresholding, morphological operations, and ROI extraction; the ROI is highlighted with a red box.
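The sketch below is one possible OpenCV rendering of Eqs. (3)-(7): global thresholding followed by erosion, dilation, opening, and closing. It assumes an 8-bit grayscale input; the threshold T = 128 and the 3 × 3 elliptical structuring element are illustrative assumptions, not values from this work.

```python
import cv2
import numpy as np

def segment_leaf(gray: np.ndarray, T: int = 128) -> np.ndarray:
    # Eq. (3): s(x, y) = 1 if intensity > T, else 0
    _, binary = cv2.threshold(gray, T, 255, cv2.THRESH_BINARY)

    S = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    eroded = cv2.erode(binary, S)      # Eq. (4): thins the foreground
    dilated = cv2.dilate(binary, S)    # Eq. (5): thickens the boundaries
    # Eqs. (6)-(7): opening removes small specks, closing fills small holes
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, S)
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, S)
    return closed
```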

Fig. 5. Sample images after applying thresholding, morphological operations and ROI extraction.

Feature extraction

Feature extraction is a critical process for obtaining pattern information from segmented crop images. In the proposed approach, several techniques are employed to extract distinct geometrical, statistical, textural, and structural features from each segmented image. These include local binary pattern (LBP), histogram of oriented gradients (HOG), gray-level dependence matrix (GLDM), and gray-level co-occurrence matrix (GLCM). GLCM, a second-order statistical method, considers the spatial relationships between pairs of pixels. Higher-order statistical features, derived from clusters of continuous pixels with similar gray levels, are captured using the gray-level run length matrix (GLRLM). GLDM extracts features by calculating the absolute difference in gray levels between two pixels separated by a specified distance. HOG focuses on the structural aspects of the image by analyzing gradient orientations within localized regions using a feature descriptor. For rice crop images, LBP employs a shape-based operator to threshold neighbouring pixels based on the intensity of the central pixel. Table 5 outlines the features extracted from segmented images of damaged rice crops used for the analysis.

Table 5 Feature extracted by the feature extraction techniques.
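As a rough illustration of this stage, the sketch below computes three of the listed descriptors (LBP, HOG, and GLCM statistics) with scikit-image and concatenates them into one feature vector. GLDM and GLRLM have no scikit-image equivalent and would need custom code or a package such as pyradiomics; all parameter values here are assumptions, not the settings used in this work.

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog, graycomatrix, graycoprops

def extract_features(gray: np.ndarray) -> np.ndarray:
    """Build a combined LBP + HOG + GLCM feature vector from a uint8 image."""
    # LBP: threshold the 8 neighbours at radius 1 against the centre pixel
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, density=True)

    # HOG: histogram of gradient orientations over localized cells
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))

    # GLCM: second-order statistics of pixel pairs at distance 1, angle 0
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p)[0, 0]
                  for p in ("contrast", "homogeneity", "energy", "correlation")]

    return np.concatenate([lbp_hist, hog_vec, glcm_feats])
```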

Feature optimization – Hybrid bio-inspired algorithm

Hybridizing WOA with APSO can effectively enhance feature optimization by harnessing the advantages of both methods.

Whale optimization algorithm

WOA models the social behaviour of humpback whales, especially their bubble-net feeding technique. Its core behaviours are encircling prey and bubble-net attacking, through which the feature subset selection is found. The algorithm has two important phases, exploration and exploitation: WOA searches for prey during exploration and encircles the prey using a bubble-net spiral method during exploitation. Search agents switch randomly between spiral updating and prey encircling. WOA thus has three important phases: search for prey, encircling the prey, and hunting23.

Search for prey

Whales search for prey in a random manner based on their relative positions. The vector \(\overrightarrow{A}\) guides this search process. Figure 6 illustrates the mathematical representation of how whales search for prey.

Fig. 6. Search for prey.

The \(\overrightarrow{A}\) vector is initialized with random values. If these values are greater than 1 or less than −1, the search agent is directed to move farther away from a reference whale. During the exploration phase, unlike the exploitation phase, the position of the search agent is updated based on a randomly selected search agent rather than the best one identified so far. When \(\left|\overrightarrow{A}\right|>1\), the algorithm emphasizes exploration, allowing WOA to perform a global search. As iterations continue, \(\overrightarrow{A}\) decreases linearly until \(\left|\overrightarrow{A}\right|\le 1\), at which point the algorithm reaches the stage of encircling and hunting the prey. Figure 7 shows the whale optimization behavior.

Fig. 7. Whale optimization algorithm.

$$\overrightarrow{D}=|\overrightarrow{C}\cdot \overrightarrow{X_{rand}}-\overrightarrow{X}|$$

(8)

$$\overrightarrow{X}\left(t+1\right)=\overrightarrow{X_{rand}}-\overrightarrow{A}\cdot \overrightarrow{D}$$

(9)

where

\(\overrightarrow{X_{rand}}\) A random position vector (a random whale) chosen from the current population.

\(\overrightarrow{D}\) The difference between the current and random positions.

\(\overrightarrow{C}\) A coefficient vector that scales the difference between vectors.

\(\overrightarrow{X}\) The current position vector.

\(\overrightarrow{A}\) A scaling factor or step size.

t The current iteration index.

WOA begins with a set of randomly generated solutions. During each iteration, search agents update their positions based either on a randomly chosen agent or on the best solution identified so far. To facilitate both exploration and exploitation, the parameter a decreases linearly from 2 to 0. When \(\left|\overrightarrow{A}\right|>1\), the algorithm opts for exploration by selecting a random search agent, whereas when \(\left|\overrightarrow{A}\right|<1\), the best solution is used for position updates. Depending on the probability value, WOA alternates between spiral and circular movements. The algorithm concludes once a predefined termination criterion is met.

WOA is considered a global optimizer due to its balanced exploration and exploitation capabilities. The proposed hypercube mechanism defines a search space around the best solution, enabling other agents to refine the search within this region. By adaptively varying the search vector \(\overrightarrow{A}\), WOA transitions smoothly between exploration and exploitation, with some iterations dedicated specifically to exploration \(\left(\left|\overrightarrow{A}\right|>1\right)\). Notably, WOA relies on only two key internal parameters, A and C, simplifying its implementation. While additional mechanisms such as mutation could have been incorporated to better mimic the behavior of humpback whales, the algorithm was intentionally kept straightforward to minimize heuristics and internal parameters, resulting in a basic yet effective version of WOA.

Encircling prey

Humpback whales exhibit a distinctive behavior known as bubble-net feeding, where they dive approximately 12 m below the water's surface and create spiraling bubbles around their prey. They then swim upward toward the surface, herding the prey within the bubble net. This remarkable feeding strategy is unique to humpback whales. Each whale (agent) updates its position Xi(t) relative to the best-known position X*(t) (assumed to be the prey). The distance between the whale and the prey is computed by (10) and (11).

$$\:\overrightarrow{D}=|\:\overrightarrow{C}.{\overrightarrow{X}}^{*}\left(t\right)-\overrightarrow{X}\left(t\right)|\:$$

(10)

$$\:\overrightarrow{{X}_{i}}\left(t+1\:\right)={\overrightarrow{X}}^{*}\left(t\right)-\overrightarrow{A}\:.\overrightarrow{D}$$

(11)

where

t The current iteration.

A, C Coefficient vectors.

X* The position vector of the best solution obtained so far.

X The position vector.

\(\overrightarrow{D}\) The distance between whale i and the best solution.

The vectors A and C are calculated as follows

$$\:\overrightarrow{C}=2.\overrightarrow{r}$$

(12)

$$\:\overrightarrow{A}=2.\overrightarrow{a}.\overrightarrow{r}-\overrightarrow{a}$$

(13)

\(\overrightarrow{a}\) is the algorithm's convergence factor, linearly decreased from 2 to 0 over the course of iterations in both exploration and exploitation, \(a=2-\frac{2\times t}{t_{max}}\); \(\overrightarrow{r}\) is a random vector in [0, 1].

The position of a search agent is updated based on the location of the current best solution. By modifying the values of the \(\:\overrightarrow{A}\) and \(\:\overrightarrow{C}\) vectors, different positions around the best agent can be explored relative to the agent’s current location. The random vector r enables the search agent to reach and position itself within the search space between key points. Equation (10) allows any search agent to adjust its position within the vicinity of the best solution, effectively simulating the behavior of encircling prey. This concept can be extended to an n-dimensional search space, where agents navigate to positions around the best solution identified during the search process.

Spiral updating – bubble-net attack

The bubble-net attacking method has two subtasks.

1. Shrinking encircling mechanism.

The shrinking encircling mechanism is implemented by gradually reducing the value of \(\overrightarrow{a}\) in the equation \(\overrightarrow{A}=2\cdot \overrightarrow{a}\cdot \overrightarrow{r}-\overrightarrow{a}\). This results in \(\overrightarrow{A}\) taking on random values within the range [−a, a], where a decreases linearly from 2 to 0 over the iterations. When \(\overrightarrow{A}\) is assigned random values within the range [−1, 1], the updated position of a search agent can lie anywhere between its original position and the position of the current best agent. Figure 8 illustrates the potential positions achievable along the path from (x, y) to X* when 0 ≤ A ≤ 1.

2. Spiral updating position.

The spiral updating mechanism calculates the distance between the position of a whale at X and the prey at X*. To simulate the helical movement of humpback whales, a spiral equation is formulated between the positions of the whale and the prey. Whales adjust their positions along a spiral trajectory toward the optimal solution, as represented by Eq. (14).

$${X}_{i}\left(t+1\right)=\overrightarrow{{D}^{\prime}}\cdot {e}^{bl}\cdot \cos\left(2\pi l\right)+{X}^{*}\left(t\right)$$

(14)

\(\overrightarrow{{D}^{\prime}}=|\overrightarrow{{X}^{*}}\left(t\right)-X\left(t\right)|\) indicates the distance from the ith whale to the prey (the best solution obtained so far).

The parameter b is a constant that defines the shape of the logarithmic spiral, while l is a random value in the range [− 1, 1].

Humpback whales simultaneously move in a shrinking circular pattern and along a spiral path around the prey. To replicate this dual behavior, it is assumed that there is a 50% probability of selecting either the shrinking encircling mechanism or the spiral model for position updates during optimization. The mathematical model capturing this combined behavior is:

$$\overrightarrow{X}\left(t+1\right)=\left\{\begin{array}{ll}\overrightarrow{{X}^{*}}\left(t\right)-\overrightarrow{A}\cdot \overrightarrow{D} & if\ p<0.5\\ \overrightarrow{{D}^{\prime}}\cdot {e}^{bl}\cdot \cos\left(2\pi l\right)+\overrightarrow{{X}^{*}}\left(t\right) & if\ p\ge 0.5\end{array}\right.$$

(15)

where

p is a random number in [0, 1].
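A minimal NumPy sketch of one full WOA position update is given below, combining the random search (Eqs. 8-9), encircling (Eqs. 10-13), and spiral (Eqs. 14-15) rules. Treating A and C as scalars per agent and fixing the spiral constant b = 1 are simplifying assumptions, and woa_update is a hypothetical helper name.

```python
import numpy as np

def woa_update(X, X_best, t, t_max, b=1.0):
    """Move every whale X[i] one iteration toward/around the best solution."""
    n, d = X.shape
    a = 2 - 2 * t / t_max                 # convergence factor: 2 -> 0
    X_new = np.empty_like(X)
    for i in range(n):
        r1, r2 = np.random.rand(), np.random.rand()
        A = 2 * a * r1 - a                # Eq. (13)
        C = 2 * r2                        # Eq. (12)
        p = np.random.rand()
        if p < 0.5:
            # |A| >= 1: explore around a random whale (Eqs. 8-9);
            # |A| < 1: encircle the best whale (Eqs. 10-11)
            X_ref = X[np.random.randint(n)] if abs(A) >= 1 else X_best
            D = np.abs(C * X_ref - X[i])
            X_new[i] = X_ref - A * D
        else:
            # spiral bubble-net update, Eq. (14)
            D_prime = np.abs(X_best - X[i])
            l = np.random.uniform(-1, 1)
            X_new[i] = D_prime * np.exp(b * l) * np.cos(2 * np.pi * l) + X_best
    return X_new
```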

Switching mechanism

APSO imitates the social behavior of bird flocks. It enhances standard Particle Swarm Optimization (PSO) by dynamically adjusting parameters, such as the acceleration coefficients and inertia weight, based on the optimization state. Just as birds adjust their velocity and position relative to their current location, their best-known position, and the flock's best position, APSO particles adapt their movements accordingly. The particles in the swarm navigate the search space to identify the optimal solution; each particle moves based on its own experience and the collective experience of the swarm. Every particle has three key attributes: position, velocity, and its previous best solution. The particle with the highest fitness value is referred to as the global best (gbest). The swarm consists of particles exploring and exchanging potential solutions, refining their search in pursuit of the global optimum. During the search, particles dynamically adjust their velocities based on their individual and collective flying experiences. Each particle maintains a record of its personal best solution (pbest) and considers the global best solution (gbest) achieved by the swarm. The movement of a particle is influenced by its current position, velocity, and the distances to its pbest and the gbest. Figure 8 illustrates the behavior of APSO along with its mathematical representation.

Fig. 8. Adaptive particle swarm optimization algorithm.

APSO updates

Each particle adjusts its velocity and position in the search space as represented in (16)50, based on three main components: inertia, cognitive, and social. The APSO adaptation introduces dynamic adjustment of these parameters based on the current optimization state.

Velocity update

$${v}_{i}\left(t+1\right)=w\cdot {v}_{i}\left(t\right)+{c}_{1}\cdot {r}_{1}\cdot \left({p}_{i}\left(t\right)-{x}_{i}\left(t\right)\right)+{c}_{2}\cdot {r}_{2}\cdot \left(g\left(t\right)-{x}_{i}\left(t\right)\right)$$

(16)

where

vi(t + 1) Velocity of particle i at time t + 1.

w Inertia weight.

vi(t) Velocity of particle i at time t.

c1 and c2 Acceleration coefficients.

r1 and r2 Random numbers between 0 and 1.

pi(t) Personal best position of particle i at time t.

xi(t) Current position of particle i at time t.

g(t) Global best known position.

Position updates

The position updates of particles are represented by (17).

$$\:{x}_{i}\left(t+1\right)=\:{x}_{i}\left(t\right)+{v}_{i}(t+1)$$

(17)
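The sketch below consolidates Eqs. (16)-(17) with the linearly decaying inertia weight used in step 6 of the process that follows, plus the velocity clamping of step 7. The parameter values (wmax, wmin, c1, c2, Vmax) are illustrative assumptions, and apso_step is a hypothetical helper name.

```python
import numpy as np

def apso_step(x, v, pbest, gbest, t, t_max,
              w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, v_max=1.0):
    """One APSO iteration over an (n, d) swarm of positions x and velocities v."""
    n, d = x.shape
    w = w_max - (t / t_max) * (w_max - w_min)      # inertia decay (step 6)
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    # Eq. (16): inertia + cognitive + social components
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = np.clip(v, -v_max, v_max)                  # velocity limit (step 7)
    x = x + v                                      # Eq. (17)
    return x, v
```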

Adaptive mechanisms and APSO process

In APSO, w, c1, and c2 adapt based on the swarm's state, increasing convergence speed and precision.

1. Get the input parameters: the range (min, max) of each variable, c1, c2, iteration counter = 0, Vmax, wmax, wmin.

2. Initialize a population of n particles of dimension d with random positions and velocities.

3. Increment the iteration counter by one.

4. Evaluate the fitness function of all particles in the population; find the best position pbest of each particle and update its objective value. Similarly, find the global best position gbest among all particles and update its objective value.

5. If the stopping criterion is met, go to step 11; otherwise, continue.

6. Evaluate the inertia factor according to the equation

$$\omega = {\omega}_{max}-\left(\frac{current\_iteration}{max\_iteration}\right)\left({\omega}_{max}-{\omega}_{min}\right)$$

so that each particle's movement is directly controlled by its fitness value.

7. Update the velocity using the equation \(v_{i}\left(t+1\right)=w\cdot v_{i}\left(t\right)+c_{1}\cdot r_{1}\cdot \left(p_{i}\left(t\right)-x_{i}\left(t\right)\right)+c_{2}\cdot r_{2}\cdot \left(g\left(t\right)-x_{i}\left(t\right)\right)\) and correct it using:

$$v_{ij}\left(t+1\right)=sign\left(v_{ij}\left(t+1\right)\right)\times \min\left(\left|v_{ij}\left(t+1\right)\right|,\ {V}_{j\,max}\right)$$

8. Update the position of each particle according to the equation \(x_{i}\left(t+1\right)=x_{i}\left(t\right)+v_{i}\left(t+1\right)\); if the new position goes out of range, set it to the boundary value using:

$$x_{ij}\left(t+1\right)=\min\left(x_{ij}\left(t+1\right),\ {range}_{j\,max}\right)$$

$$x_{ij}\left(t+1\right)=\max\left(x_{ij}\left(t+1\right),\ {range}_{j\,min}\right)$$

9. The elites are inserted in the first position of the new population in order to retain the best particle found so far.

10. Every 5 generations, the new Fbest value is compared with the old Fbest; if there is no noticeable change, re-initialize K% of the population. Go to step 3.

11. Output the gbest particle and its objective value.

WOA-APSO

The hybrid WOA_APSO has two primary components: WOA for exploration and APSO for exploitation; the mathematics of each component is briefly explained above. The hybrid approach covers a broader solution space with WOA's exploration and achieves precise optima with APSO's fine-tuning. After WOA converges on promising regions, APSO refines the feature subset selection within those regions. The hybrid WOA_APSO algorithm begins with a random solution. The hybrid optimization process is summarized as follows.

Step 1: Randomly define the initial population of features, including the positions of both the APSO particles (birds) and the WOA search agents.

Step 2: Exploration phase. WOA's encircling behavior focuses on different regions of the search space; to optimize the feature set, we calculate the fitness of each whale and update the whale positions according to the best solutions found, leading them closer to an optimal feature subset.

Step 3: Exploitation phase. Fine-tuning is done by APSO: the current best solutions from WOA serve as the starting positions for the APSO particles, whose velocities and positions are then updated by APSO's adaptive rules to further refine the feature subset, ensuring convergence to a more optimal solution.

Step 4: Convergence check. Set the convergence constraint or condition; if convergence is reached, stop the process, else alternate between WOA and APSO until convergence is reached.

Pseudo code – WOA_APSO

1. Randomly initialize the population of search agents.

Iteration counter t = 0, MaxIteration = 100.

Convergence factor a = 2 − t × (2 / MaxIteration).

Coefficient A = 2 × a × r1 − a.

Coefficient C = 2 × r2.

r1, r2 = random values in [0, 1].

Number of search agents (population size) n = 5.

2. Calculate the fitness value of each search agent.

3. Choose the best search agent X*.

4. while (t < MaxIteration):

5. Update w, a, A, C, l and p for each search agent.

6. for each search agent:

7. if (p < 0.5):

8. if (|A| ≥ 1): select a random agent and update the position: \(X\left(t+1\right)={X}_{random}\left(t\right)-A\cdot D\)

9. else (|A| < 1): update the position toward the best agent: \(X\left(t+1\right)=w\cdot {X}^{*}\left(t\right)-A\cdot D\)

10. else (p ≥ 0.5): spiral update: \(X\left(t+1\right)={D}^{\prime}\cdot {e}^{bl}\cdot \cos\left(2\pi l\right)+w\cdot {X}^{*}\left(t\right)\)

11. Calculate the fitness of each agent.

12. Update the optimal solution.

13. Increment the counter: t = t + 1.

14. Return the best search agent and its fitness value.
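For illustration, here is a hedged end-to-end sketch of the hybrid loop, reusing the woa_update and apso_step sketches from the earlier sections. fitness is a hypothetical objective (e.g., the validation accuracy of a classifier trained on the features selected by thresholding the position vector), and seeding the APSO particles around WOA's best solution is one plausible reading of Step 3, not the authors' exact procedure.

```python
import numpy as np

def hybrid_woa_apso(fitness, dim, n=5, t_max=100):
    """WOA explores the feature space, then APSO refines the best region."""
    # Phase 1 - WOA exploration: whales are candidate feature-weight vectors
    X = np.random.rand(n, dim)
    best = X[np.argmax([fitness(x) for x in X])].copy()
    for t in range(t_max):
        X = woa_update(X, best, t, t_max)
        scores = np.array([fitness(x) for x in X])
        if scores.max() > fitness(best):
            best = X[scores.argmax()].copy()

    # Phase 2 - APSO exploitation: particles start near WOA's best region
    P = best + 0.1 * np.random.randn(n, dim)
    V = np.zeros_like(P)
    pbest = P.copy()
    for t in range(t_max):
        P, V = apso_step(P, V, pbest, best, t, t_max)
        for i in range(n):
            if fitness(P[i]) > fitness(pbest[i]):
                pbest[i] = P[i].copy()
                if fitness(pbest[i]) > fitness(best):
                    best = pbest[i].copy()
    # Step 4 in practice alternates the phases until convergence;
    # a feature is selected if its final weight exceeds 0.5.
    return best
```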

Table 6 Hybrid WOA-APSO parameters and values.

In the proposed work, we apply this concept to search for optimal features among the extracted features. The objective of the hybrid WOA_APSO is to select features optimally, which improves the system's overall classification efficiency. WOA_APSO accepts the extracted feature vector, containing attributes such as edges, texture patterns, contrast, and spatial attributes, as input. The algorithm starts by initializing the maximum number of iterations and the population size, that is, the number of search agents, where each swarm particle/whale represents a potential image feature subset. After initialization, the fitness value of each feature subset is determined by the defined objective function, which maximizes classification performance while reducing data dimensionality, training time, and overfitting; the fitness of each individual feature subset is then estimated based on its effectiveness on classification performance (Table 6).

Classification

The convolutional neural network is a deep learning classification method used for training and testing the learning network. The network is composed of three densely connected layers with activation functions linking one neuron to another. The weights and deltas are updated using the backpropagation technique with a learning rate of 0.001. To find the ideal combination for determining how robust the experiment is, testing is done on a variety of criteria. The parameters explored are: layer neurons of 5, 10, 15, and 20; ReLU and softmax activation functions; batch sizes of 1, 2, and 3; validation splits of 0.1, 0.2, and 0.3; learning rates of 0.1, 0.01, and 0.001; and epochs of 10, 20, 40, 60, 80, 100, and 200. The CNN's parameters and values are shown in Table 7.

Table 7 CNN parameters and values.

CNN structure

The CNN will be made up of several layers that are intended to learn from the features that are extracted:

The CNN input layer accepts the 224 × 224 pixel scaled images. Convolutional layers apply several filters to extract hierarchical characteristics; for example, the initial layer may learn only edges, whereas deeper layers learn complex structures. The rectified linear unit (ReLU) activation function introduces non-linearity. Pooling layers use max pooling to minimize dimensionality while preserving significant characteristics. The fully connected layer combines features to provide the final categorization into groups such as Leaf Blast, Brown Spot, Hispa, and Healthy.

To guarantee robustness, the model is assessed using a range of performance metrics. Accuracy is the proportion of correctly predicted cases to all instances. Sensitivity (recall) evaluates how well the model detects positive cases. Specificity indicates how well the model detects negative cases. Computational cost examines how much time and what resources the model required for training and testing.
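A minimal Keras sketch of this structure is shown below. The filter counts and dense-layer sizes are illustrative assumptions drawn from the tuning ranges above (e.g., 20 and 10 neurons, ReLU/softmax activations, learning rate 0.001), not the final configuration reported in this work.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),        # 224 x 224 scaled images
    layers.Conv2D(16, 3, activation="relu"),  # early layers learn edges
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),  # deeper layers learn structure
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(20, activation="relu"),      # densely connected layers
    layers.Dense(10, activation="relu"),
    layers.Dense(4, activation="softmax"),    # Leaf Blast, Brown Spot, Hispa, Healthy
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])
# Illustrative training call using values from the tuning grid:
# model.fit(x_train, y_train, batch_size=2, epochs=100, validation_split=0.2)
```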

Results and discussion

To assess the effectiveness of the experiments, we use the 2096-image Kaggle rice crop dataset with four classes. WOA and APSO are contrasted with the optimal threshold value obtained from the WOA_APSO algorithm: WOA_APSO, WOA, and APSO reached threshold values of 1.16, 2.09, and 1.9, respectively. Thus, the suggested bio-inspired hybrid WOA_APSO algorithm provides accurately optimized feature subsets. Table 8 displays the comparative performance analysis of several classification techniques.

Table 8 Performance comparison of classification technique.

Figure 9 compares the performance measures for rice crop disease classification across the different algorithms.

Fig. 9. Performance comparison of the classification algorithms.

The evaluation analysis parameters used for determining the effectiveness of the model are accuracy, sensitivity and specificity shown in (18), (19) and (20) respectively.

$$Accuracy=\left[\frac{TP+TN}{TOTAL}\right]\times 100$$

(18)

$$Sensitivity=\left[\frac{TP}{TP+FN}\right]\times 100$$

(19)

$$Specificity=\left[1-FPR\right]\times 100$$

(20)

where

TP The number of images correctly classified as positive.

FN The number of positive images incorrectly classified as negative.

FPR The false positive rate: the fraction of negative images incorrectly classified as positive.
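A short sketch computing Eqs. (18)-(20) from a confusion matrix is given below. Evaluating one class against the rest (one-vs-rest) is an assumption made here for the four-class setting.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def evaluate(y_true, y_pred, positive_class):
    """Accuracy, sensitivity and specificity for one class vs. the rest."""
    y_t = np.asarray(y_true) == positive_class
    y_p = np.asarray(y_pred) == positive_class
    tn, fp, fn, tp = confusion_matrix(y_t, y_p, labels=[False, True]).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn) * 100      # Eq. (18)
    sensitivity = tp / (tp + fn) * 100                    # Eq. (19)
    fpr = fp / (fp + tn)
    specificity = (1 - fpr) * 100                         # Eq. (20)
    return accuracy, sensitivity, specificity
```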

Table 9 shows the computation times of APSO, WOA, and WOA_APSO; APSO and WOA take more time compared to WOA_APSO. By contrasting the various classification methods, the performance metrics of the hybrid WOA_APSO with CNN model are determined. The model's efficacy was assessed using the evaluation parameters of accuracy, sensitivity, and specificity, as presented in (18), (19), and (20), in that order. Figure 10 shows the computation time and accuracy of APSO, WOA, and WOA_APSO. Figure 11 presents the model accuracy and loss curves.

Table 9 Comparison of computation time and accuracy of feature optimizers.
Fig. 10. Impact of feature optimization.

Fig. 11. Model accuracy and loss curve.



