
A high-accuracy ionospheric foF2 critical frequency forecast using long short-term memory (LSTM)

Published online by Cambridge University Press:  22 November 2024

Alexandra Denisenko-Floyd*
Affiliation:
Eurasia Institute of Earth Sciences, Istanbul Technical University, Istanbul, Turkey
Meric Yucel
Affiliation:
National Software Certification Research Center, Istanbul Technical University, Istanbul, Turkey
Burak Berk Ustundag
Affiliation:
Department of Computer Engineering, Istanbul Technical University, Istanbul, Turkey
*
Corresponding author: Alexandra Denisenko-Floyd; Email: [email protected]

Abstract

Because the F2 ionospheric layer reflects radio waves, its foF2 critical frequency is an essential parameter: sudden irregularities can disrupt communication and navigation systems and degrade space weather forecast accuracy. This paper aims to develop accurate foF2 critical frequency predictions up to 24 hours ahead, focusing on mid and high latitudes, using a long short-term memory (LSTM) model covering the 24th solar cycle from 2008 to 2019. To evaluate the effectiveness of the proposed model, a comparative analysis is conducted against commonly referenced machine learning techniques, including linear regression, decision tree algorithms, and the multilayer perceptron (MLP), using Taylor diagrams and error plots. The study covers five monitoring stations, years of minimum and maximum solar activity, and several prediction horizons. Through extensive experimentation, a comprehensive set of outcomes is evaluated across diverse metrics. The findings conclusively establish that the LSTM model outperforms the other models across all stations and years. On average, LSTM is 1.2 times better than the second-best model (the decision tree), 1.6 times as effective as the MLP, and three times more accurate than linear regression. These results hold promise for increasing the precision of foF2 prediction, with potential implications for enhancing communication systems and space weather forecasting capabilities.

Type
Application Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

Impact Statement

Accurate prediction of ionospheric variations is critical for various applications, including communication, navigation, remote sensing, and climate change monitoring. The traditional models are based on physical and empirical models, which have limitations in capturing the ionosphere’s dynamic nature. The development of machine learning methods has created a new opportunity to improve ionospheric variation prediction accuracy significantly. Development of the long short-term memory (LSTM) model leads to accurate short-term $ foF2 $ predictions during maximum and minimum solar activity. The results provide an efficient and reliable prediction of ionospheric variations, which can have a significant impact on a variety of applications. Applying machine learning methods can provide new insights into the complex ionosphere dynamics, improving the quality of life for people worldwide.

1. Introduction

The ionosphere experiences daily fluctuations that can arise from various factors, such as solar and geomagnetic activity, as well as large-scale lower atmospheric waves. Understanding the ionosphere is essential for predicting ionospheric parameters and mitigating the effects of climate change. The ionosphere consists of ions and charged particles produced by solar radiation absorption and the solar wind’s impact on the upper atmosphere. Man-made emissions also affect the entire atmospheric system: increasing greenhouse gas concentrations cool the upper atmosphere, leading to significant changes in the mesosphere, thermosphere, and ionosphere (Laštovička et al., Reference Laštovička, Akmaev, Beig, Bremer, Emmert, Jacobi and Ulich2008; Laštovička, Reference Laštovička2013). Accurate ionospheric prediction during severe disturbances remains challenging because the ionosphere is a complex and nonlinear system (Cander, Reference Cander2015). Various approaches have been developed to overcome the effects of ionospheric delay on communication systems, but further research is necessary for precise predictions (Paziewski & Sieradzki, Reference Paziewski and Sieradzki2020).

The characteristics of the ionosphere are determined by the $ F2 $ layer, which has a high electron density. The $ foF2 $ critical frequency of the $ F2 $ layer is an essential parameter for high-frequency ( $ HF $ ) radio propagation, navigation, and remote sensing systems, as sudden irregularities can cause disruptions. The Earth’s magnetic field can also influence the $ F2 $ layer by deflecting charged particles in the ionosphere and affecting their distribution. Therefore, forecasting the $ foF2 $ critical frequency has become a significant concern in space weather studies, with applications ranging from high-frequency radars to communication systems (Ismail Fawaz et al., Reference Ismail Fawaz, Forestier, Weber, Idoumghar and Muller2022).

Accurate ionospheric prediction remains challenging under extreme weather conditions, particularly during geomagnetic storms. Forecasting the $ foF2 $ critical frequency encounters challenges due to the complex and nonlinear relationship between ionospheric parameters, the volatile nature of the ionosphere, model bias, and limitations of input parameters. Scientists have made efforts to determine which parameters are most affected by changes in $ foF2 $ . Geomagnetic indices, such as $ ap $ and $ Kp $ , are considered significant indicators of geomagnetic activity that impact the ionosphere (Gruet et al., Reference Gruet, Chandorkar, Sicard and Camporeale2018; Jakowski et al., Reference Jakowski, Stankov, Schlueter and Klaehn2006). The sunspot number ( $ Rz $ ), solar flux ( $ F10.7 $ ), and zenith angle are also closely linked to ionospheric variations (Chen et al., Reference Chen, Liu and Chen2000; Blagoveshchensky et al., Reference Blagoveshchensky, Sergeeva and Kozlovsky2017; Zhang et al., Reference Zhang, Zhao, Feng, Liu, Xiang, Li and Lu2022). Identifying the most relevant and effective parameters is still an important area of research.

At present, the complex behavior of the ionosphere cannot be fully encapsulated by any single model across all regions and geomagnetic conditions. Several models, such as the International Reference Ionosphere (IRI) and the Global Ionosphere-Thermosphere Model (GITM), are designed to focus on specific regions or concentrate on particular aspects of ionospheric behavior (Bilitza et al., Reference Bilitza, McKinnell, Reinisch and Fuller-Rowell2011; Ridley et al., Reference Ridley, Deng and Toth2006). In comparison, these models offer predictions but have limitations in their global applicability. Admittedly, data collection and sharing across organizations and countries need to be more consistent to compile detailed and comprehensive data sets. Despite these efforts, accurate prediction under extreme weather conditions, especially during geomagnetic storms, remains a challenge.

1.1. Previous studies

In recent years, machine learning methods have been applied to analyze large data sets and identify patterns, yielding notable improvements in prediction sensitivity and accuracy. Linear regression is one of the most established and best-understood algorithms in statistics and machine learning. It is easy to implement and interpret, as well as computationally efficient. However, linear regression assumes a linear relationship between the independent and dependent variables, which may not hold in the ionosphere (Liu et al., Reference Liu, Wan and Ning2004). Moreover, linear regression can be sensitive to outliers and may not perform well with noisy data (Jakowski et al., Reference Jakowski, Stankov, Schlueter and Klaehn2003). Due to these limitations, more complex models, such as decision trees, random forests, and neural networks, are often used instead. Decision trees have been used in ionospheric prediction to classify ionospheric disturbances, estimate ionospheric parameters, and predict the occurrence of geomagnetic storms. Decision trees can handle categorical and numerical data, missing values, and outliers (Liemohn et al., Reference Liemohn, McCollough, Jordanova, Ngwira, Morley, Cid and Vasile2018; Twala, Reference Twala2009). They are fast and efficient in both training and prediction (Patel et al., Reference Patel, Prajapati and Lakhtaria2012), and they can handle high-dimensional data with numerous features (Lin et al., Reference Lin, Shen, Shi, Hengel and Suter2014). However, decision trees have limitations, such as overfitting, lack of interpretability, instability, and bias toward categorical variables, making them less suitable for certain predictions (Loh, Reference Loh2014).

A multilayer perceptron (MLP) is an artificial neural network (ANN) capable of approximating any continuous function to arbitrary accuracy, handling non-linear relationships, and modeling complex systems (Gardner & Dorling, Reference Gardner and Dorling1998). Nevertheless, MLPs have several drawbacks for prediction tasks. The first is overfitting, which occurs when a model becomes too complex. Another is that an MLP is computationally more expensive than statistical models (Fukumizu, Reference Fukumizu2001). In addition, MLPs are not well suited to problems with a temporal or spatial structure, as they have no memory (Ramchoun et al., Reference Ramchoun, Ghanou, Ettaouil and Janati2016). Francis et al. used non-linear prediction of the hourly $ foF2 $ time series in conjunction with missing data point interpolation (Francis et al., Reference Francis, Brown, Cannon and Broomhead2010). Many researchers have extensively explored short-term forecasting of ionospheric $ foF2 $ variability using long short-term memory (LSTM). LSTMs are particularly well-suited to sequential data and long-term dependencies, allowing them to make predictions while considering the context and history of the input data (Zhao et al., Reference Zhao, Yang, Yang, Zhu, Meng, Han and Bu2021). LSTMs have been used to predict ionospheric parameters, classify ionospheric disturbances, and predict geomagnetic storm occurrence. For example, $ foF2 $ prediction has been developed 1–5 hours ahead (McKinnell & Oyeyemi, Reference McKinnell and Oyeyemi2009; Oyeyemi et al., Reference Oyeyemi, McKinnell and Poole2006; Oyeyemi et al., Reference Oyeyemi, Poole and McKinnell2005) and 1–24 hours ahead (Nakamura et al., Reference Nakamura, Maruyama and Shidama2007; Twala, Reference Twala2000). Li et al.
(2021) applied the LSTM model for $ foF2 $ forecasting by using previous values of $ foF2 $ among other input parameters with ionosonde stations covering a part of China and Australia (Li et al., Reference Li, Zhou, Tang, Zhao, Zhang, Xia and Liu2021). Zhao et al. (2019) used a genetic algorithm-based neural network (GA-NN) to forecast $ foF2 $ disturbances (Zhao et al., Reference Zhao, Li, Liu, Wang and Zhou2019). The results showed that predictions 1 hour ahead performed better than 3, 6, 12, and 24 hours ahead.

Bi et al. (2022) created a hybrid neural network combining a convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) for forecasting foF2 variations in the low-latitude region during low and high solar activity years, with a $ MAPE $ of 22.7% in 2014 (Bi et al., Reference Bi, Wu, Li, Chang and Yong2022). Two different time-series modeling approaches were used to predict the $ foF2 $ critical frequency over Athens, Greece, between 2004 and 2018, with a $ MAPE $ of 5.56% (Atıcı & Pala, Reference Atıcı and Pala2022). Zhao et al. used the AdaBoost-BP algorithm to predict the foF2 critical frequency one hour ahead, achieving an absolute error of 0.78 MHz at the Taipei station (Zhao et al., Reference Zhao, Ning, Liu and Song2014).

This paper focuses on developing accurate $ foF2 $ critical frequency predictions using the LSTM model at mid and high latitudes. The research emphasizes the importance of developing reliable forecasting methods for different geographical locations up to 24 hours in advance. Since existing long-term $ foF2 $ prediction models cannot provide reliable accuracy, the article focuses mainly on short-term prediction, using planetary and geomagnetic indices as input parameters. The paper presents prediction results for 1, 2, 6, 12, 18, and 24 hours ahead at the various stations. The results are compared at high and middle latitudes during maximum and minimum solar activity.

In this article, we aim to develop a model that can accurately predict changes in the ionosphere using minimal computational resources. The model’s potential implications extend to enhancing our understanding of the Earth’s upper atmosphere and mitigating space weather effects. Section 1 describes the main problem and previous studies. Section 2 illustrates the developed ionospheric forecasting model, the machine learning process, data description, station selection, and the comparison of the other developed models. Section 3 discusses the prediction results and their dependencies. Finally, Section 4 offers concluding remarks.

2. Methodology

The study’s methodology consists of three main steps: data collection, model development and implementation, and model performance evaluation.

2.1. Dataset description and input parameters

The importance of geomagnetic and ionospheric indices is emphasized in the introduction. They provide valuable information about the complex interactions between the Earth’s magnetic field and the ionosphere and help improve the accuracy of ionospheric predictions. The geomagnetic and ionospheric data were downloaded from NASA (https://omniweb.gsfc.nasa.gov/ow.html; https://ccmc.gsfc.nasa.gov/, accessed on 15 January 2023) and the Data Analysis Center for Geomagnetism and Space Magnetism (https://wdc.kugi.kyoto-u.ac.jp/dstdir/, accessed on 15 January 2023).

The input parameters of the ionospheric foF2 forecasting model, related to the ionospheric variability in time, space, solar, and geomagnetic activity, are described as follows:

  1. To account for the influence of the day of year on ionospheric $ foF2 $ variations (Chen et al., Reference Chen, Liu and Chen2000; Myles et al., Reference Myles, Feudale, Liu, Woody and Brown2004), the day number ( $ DN $ ) is converted into sine and cosine components as follows:

(2.1) $$ DNS=\sin \left(\frac{2\pi \times DAY}{365}\right) $$
(2.2) $$ DNC=\cos \left(\frac{2\pi \times DAY}{365}\right) $$
  2. Universal time affects ionospheric storm occurrence, and adjustments are made accordingly for different times (Wintoft & Cander, Reference Wintoft and Cander2000).

(2.3) $$ UTS=\sin \left(\frac{2\pi \times UT}{24}\right) $$
(2.4) $$ UTC=\cos \left(\frac{2\pi \times UT}{24}\right) $$
  3. The solar zenith angle ( $ CHI $ ) is transformed using the provided formulas to calculate its sine ( $ CHIS $ ) and cosine ( $ CHIC $ ) components:

(2.5) $$ CHIS=\sin \left(\frac{2\pi \times CHI}{360}\right) $$
(2.6) $$ CHIC=\cos \left(\frac{2\pi \times CHI}{360}\right) $$
  4. The $ ap $ input uses a time-weighted accumulation series calculated from the geomagnetic planetary index $ ap $ (Wrenn, Reference Wrenn1987). This study uses a constant attenuation factor, called $ \tau $ , ranging from 0 to 1; in this article, $ \tau $ is set to 0.8. The current value of the magnetic index is $ {ap}_0 $ , while $ {ap}_{-1} $ , $ {ap}_{-2} $ , and $ {ap}_{-11} $ represent the values 3 hours, 6 hours, and 33 hours before, respectively.

(2.7) $$ ap\left(\tau \right)=\left(1-\tau \right)\sum \limits_{i=0}^{11}{\left(\tau \right)}^i\times {ap}_{-i} $$
  5. Based on previous research, there is a significant correlation between the geomagnetic indices $ Dst $ and $ Kp $ and $ foF2 $ (Kutiev & Muhtarov, Reference Kutiev and Muhtarov2001). As a result, we also consider $ Dst $ and $ Kp $ as input parameters, defining $ Dst $ analogously to $ ap $ :

(2.8) $$ Dst\left(\tau \right)=\left(1-\tau \right)\sum \limits_{i=0}^{11}{\left(\tau \right)}^i\times {Dst}_{-i} $$
  6. The first differences of the $ Kp $ values are taken to achieve stationarity of the $ Kp $ index. This involves computing the difference between each $ Kp $ value $ \left( Kp(t)\right) $ and its previous value $ \left( Kp\left(t-1\right)\right) $ , denoted as $ \Delta Kp(t)= Kp(t)- Kp\left(t-1\right) $ .

(2.9) $$ \Delta Kp(t)={\beta}_0+{\beta}_1\Delta Kp\left(t-1\right)+{\beta}_2\Delta Kp\left(t-2\right)+\dots +{\beta}_{18}\Delta Kp\left(t-18\right)+\varepsilon (t) $$

Here, $ \Delta Kp(t) $ refers to the first difference of the $ Kp $ index at time $ t $ , and $ \Delta Kp\left(t-1\right) $ to $ \Delta Kp\left(t-18\right) $ represent the lagged first differences of the $ Kp $ index up to 18 time steps. The coefficients $ {\beta}_0 $ to $ {\beta}_{18} $ capture the impact of these lagged differences on the current value $ \Delta Kp(t) $ , and $ \varepsilon (t) $ represents the error term. To estimate the coefficients $ \beta $ , we apply the least squares method using a matrix ( $ X $ ) containing the lagged first differences and a vector ( $ Y $ ) with the first differences of the $ Kp $ index. The estimator is $ \beta ={\left({X}^{\prime }X\right)}^{-1}{X}^{\prime }Y $ . In this work, $ T $ is set to 18 hours (Wang et al., Reference Wang, Shi, Wang, Zherebtsov and Pirog2008).

  7. The characteristics of the ionospheric $ F $ layer are affected by thermospheric winds, as described by the vertical ion drift equation (Oyeyemi et al., Reference Oyeyemi, Poole and McKinnell2005):

(2.10) $$ W=U\times \sin \left(\theta -D\right)\times \cos I\times \sin I $$

$ W $ represents the vertical ion drift velocity, $ U $ represents the horizontal wind velocity, and $ \theta $ is the geographic azimuth angle. Additionally, we denote magnetic declination and inclination as $ D $ and $ I $ , respectively. To decompose $ D $ into sine and cosine components, we use $ DS $ and $ DC $ . Similarly, $ I $ is transformed into its sine component, $ IS $ .

(2.11) $$ DS=\sin \left(\frac{2\pi \times D}{360}\right) $$
(2.12) $$ DC=\cos \left(\frac{2\pi \times D}{360}\right) $$
(2.13) $$ IS=\sin \left(\frac{2\pi \times I}{360}\right) $$
  8. According to studies on the $ F2 $ layer, $ foF2 $ values are significantly influenced by previous hour values (Oyeyemi et al., Reference Oyeyemi, Poole and McKinnell2005). Our model uses the relative deviation formula to account for this dependence.

(2.14) $$ {\Delta}_1(h)={f}_0{F}_2(h)-{f}_0{F}_2\left(h-1\right) $$
(2.15) $$ \Delta R(h)=\frac{\Delta_1(h)}{f_0{F}_2(h)} $$
  9. The sunspot number, also known as $ Rz $ , is widely used as an input in various ionosphere models, such as the International Radio Consultative Committee $ foF2 $ model (Wrenn, Reference Wrenn1987). This is because $ Rz $ can effectively map the ionospheric $ foF2 $ response to changes in solar output. Therefore, we have included $ Rz $ as an input factor in our models.
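Several of the transformations above reduce to simple feature engineering: items 1–3 are cyclic sine/cosine encodings (Eqs. 2.1–2.6), and items 4–5 are time-weighted accumulations (Eqs. 2.7–2.8). A minimal Python sketch; the function names and sample values are illustrative, not taken from the authors’ code:

```python
import math

def cyclic(value, period):
    """Encode a periodic quantity as sine and cosine components (Eqs. 2.1-2.6)."""
    angle = 2 * math.pi * value / period
    return math.sin(angle), math.cos(angle)

def accumulated_index(history, tau=0.8):
    """Time-weighted accumulation of a geomagnetic index (Eqs. 2.7-2.8).

    history[0] is the current value; history[i] is the value i steps (3 h) back.
    Only the 12 most recent values (up to 33 h back) contribute.
    """
    return (1 - tau) * sum(tau ** i * h for i, h in enumerate(history[:12]))

# Illustrative inputs: day number 80, 14:00 UT, and a made-up ap history.
dns, dnc = cyclic(80, 365)    # (DNS, DNC)
uts, utc = cyclic(14, 24)     # (UTS, UTC)
ap_tau = accumulated_index([4, 7, 15, 9, 3, 3, 3, 3, 3, 3, 3, 3])
```

The same `accumulated_index` helper applies unchanged to the $ Dst $ series of Eq. (2.8).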

This research focuses on ionospheric $ foF2 $ critical frequency forecasting at mid and high latitudes, using hourly time-series data from 5 stations worldwide (Table 1 and Figure 1). Data are collected for 2012 and 2015, years of maximum solar activity, and for 2009 and 2019, years of minimum solar activity (Barta et al., Reference Barta, Sátori, Berényi, Kis and Williams2019). The 11-year solar activity variation is essential for decadal variations in the solar-terrestrial environment (Gnevyshev, Reference Gnevyshev1977; Ohmura, Reference Ohmura2009). This article investigates the critical frequency variations over the 24th solar cycle, from 2008 to 2019.

Table 1. The geographical locations and coordinates of the selected stations

Figure 1. A map of the stations’ spatial distribution, marked by red dots.

2.2. The developed LSTM model

Working with time-series data requires understanding a system’s dynamics, such as its periodic cycles, how the data changes over time, its regular variations, and its sudden changes. LSTM networks are recurrent neural networks designed to avoid the long-term dependency problem by learning order dependence in time-series sequences (Hochreiter & Schmidhuber, Reference Hochreiter and Schmidhuber1997).

The LSTM network consists of the following stages: the forget gate, cell state, input gate, and output gate (Figure 2). One of the essential properties of the LSTM is its ability to memorize and recognize information that enters the network and to discard information that is not required by the network to learn and make predictions (Yu et al., Reference Yu, Si, Hu and Zhang2019). The forget gate determines whether or not information can pass through the network’s layers. It expects two types of input: the information from the previous layers and information from the current layer.

Figure 2. The LSTM block diagram, the repeating module in LSTM, with four interacting layers.

Furthermore, LSTMs have a chain structure with four neural network layers interacting uniquely (Gonzalez & Yu, Reference Gonzalez and Yu2018). The LSTM algorithm relies on the cell state, represented by the horizontal line running across the top of the diagram, and is crucial in information transmission (Gers et al., Reference Gers, Schmidhuber and Cummins2000). The LSTM can remove or add information to the cell state, which is carefully controlled by gate structures. Gates allow information to pass through if desired (Yu et al., Reference Yu, Si, Hu and Zhang2019). The “forget gate layer” is a sigmoid layer that removes information that is no longer relevant or useful for further predictions. It examines $ {h}_{t-1} $ (a hidden state at the timestamp $ t-1 $ ) and $ {x}_t $ (the input vector at the timestamp $ t $ ) and returns a number between 0 and 1 for each number in the cell state $ {C}_{\left(t-1\right)} $ . Number 1 indicates “completely keep,” while number 0 indicates “completely remove” where $ {h}_t $ represents a hidden state at the current timestamp $ t $ . The updated cell from the cell state is passed to the $ \tanh $ , an activation function, which is then multiplied by the output state’s sigmoid function. After calculating the hidden state at the timestamp $ t $ , the value is returned to the recurrent unit and combined with the input at the timestamp $ t+1 $ . The same procedure is repeated for $ t+2,t+3,\dots, t+n $ timestamps until the desired number $ n $ of timestamps is reached.

(2.16) $$ {f}_t=\sigma \left({x}_t\times {u}_f+{h}_{t-1}\times {w}_f\right) $$
(2.17) $$ {\hat{c}}_t=\tanh \left({x}_t\times {u}_c+{h}_{t-1}\times {w}_c\right) $$
(2.18) $$ {i}_t=\sigma \left({x}_t\times {u}_i+{h}_{t-1}\times {w}_i\right) $$
(2.19) $$ {o}_t=\sigma \left({x}_t\times {u}_o+{h}_{t-1}\times {w}_o\right) $$
(2.20) $$ {c}_t={f}_t\times {c}_{t-1}+{i}_t\times {\hat{c}}_t $$
(2.21) $$ {h}_t={o}_t\times \tanh \left({c}_t\right) $$

where $ {x}_t $ is the input vector, $ {h}_{t-1} $ is the previous cell output, $ {c}_{t-1} $ is the previous cell memory, $ {h}_t $ is the current cell output, $ {c}_t $ is the current cell memory, and $ u $ , $ w $ are the weight vectors for the forget gate ( $ f $ ), candidate ( $ c $ ), input gate ( $ i $ ), and output gate ( $ o $ ).

2.3. Network configuration

Figure 3 depicts the input parameters of the LSTM model. The developed model has 14 input parameters fed into the LSTM neural network, as described in Section 2.1. The proposed model connects the 14 input variables to 36 LSTM units, a 12-unit dense layer, and one output layer. The objective of the LSTM model is to extract significant and representative features from the historical data. Various experiments were conducted with different hyperparameters and numbers of LSTM units to obtain an optimal architecture, taking inspiration from previous research (Reimers & Gurevych, Reference Reimers and Gurevych2017). The final LSTM configuration was derived after a meticulous evaluation process. The activation function is set to ReLU to handle non-linear relationships effectively (Yadav et al., Reference Yadav, Jha and Sharan2020). A batch size of 15 is found to balance efficiency and the capture of meaningful patterns. Through iterative fine-tuning of parameters and learning, the model is trained for 200 epochs. This LSTM configuration has demonstrated superior performance and is anticipated to produce accurate predictions for the given task.
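The configuration described above can be written as a Keras sketch. This assumes a TensorFlow/Keras implementation (the paper does not name its framework), and the input window length `timesteps` is an assumption, since only the layer sizes, activation, batch size, and epoch count are stated:

```python
# Sketch of the described architecture: 14 inputs -> 36 LSTM units ->
# 12-unit dense layer (ReLU) -> 1 output; batch size 15, 200 epochs.
# The sequence length (timesteps=24) and the optimizer/loss are assumptions.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def build_model(timesteps=24, n_features=14):
    model = Sequential([
        LSTM(36, input_shape=(timesteps, n_features)),
        Dense(12, activation="relu"),
        Dense(1),  # predicted foF2 (MHz)
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# model = build_model()
# model.fit(X_train, y_train, batch_size=15, epochs=200,
#           validation_data=(X_val, y_val))
```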

Figure 3. A block diagram of the proposed LSTM model with input parameters.

2.4. Output parameters

The current study focuses primarily on the storm-time $ foF2 $ forecast. As output parameters, 1, 2, 6, 12, 18, and 24 hours ahead prediction results are demonstrated for years with maximum and minimum solar activity.

2.5. The machine learning process

Figure 4 presents a comprehensive overview of the methodology utilized in constructing a machine-learning model. The process entails sequential steps, including pre-processing, normalization, model training, model testing, and model evaluation (El Naqa & Murphy, Reference El Naqa and Murphy2015). The article uses the temporal segmentation technique to address seasonal changes in time-series data, which involves dividing the data into segments corresponding to different months or recurring temporal patterns. Segmenting data by several months and time ensures that each season is equally represented, reducing the risk of skewed model performance. Additionally, splitting the data optimizes model performance by aligning with the network’s strengths, mitigating biases, enhancing generalization, and facilitating comprehensive evaluation (Lovrić et al., Reference Lovrić, Milanović and Stamenković2014).

Figure 4. A generalized diagram depicting the process of developing a machine learning model.

The dataset is divided into three subsets: 80% for training, 10% for validation, and 10% for testing. A non-shuffled approach is adopted to preserve temporal information and mitigate potential seasonal bias. To further refine the methodology, the data is first divided into four separate segments, each representing a three-month period, and each segment is then assigned to the training, validation, and test subsets according to the predetermined percentages. Repeating this process every three months ensures that the training, validation, and test datasets are seasonally balanced. This approach builds on the robust capabilities of LSTM networks in dealing with sequential data: sequential partitioning capitalizes on the LSTM’s ability to capture temporal patterns while effectively eliminating potential seasonal distortions, improving the overall quality of model development and outcomes.
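The segmented, non-shuffled split described above can be sketched as follows. The 80/10/10 proportions and three-month segments come from the text; the helper name and the hourly toy data are illustrative:

```python
def seasonal_split(series, segment_len, train=0.8, val=0.1):
    """Split each consecutive segment 80/10/10 without shuffling,
    so every season contributes to train, validation, and test."""
    tr, va, te = [], [], []
    for start in range(0, len(series), segment_len):
        seg = series[start:start + segment_len]
        n_tr = int(len(seg) * train)
        n_va = int(len(seg) * val)
        tr += seg[:n_tr]                # earliest part of the segment
        va += seg[n_tr:n_tr + n_va]     # next slice, still in time order
        te += seg[n_tr + n_va:]         # latest part of the segment
    return tr, va, te

# One year of hourly samples split into four three-month segments.
hours = list(range(8760))
tr, va, te = seasonal_split(hours, segment_len=2190)
```

Because the slices are taken in time order inside each segment, temporal continuity is preserved while all four seasons appear in every subset.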

Data pre-processing is an important step that involves cleaning, transforming, normalizing, and organizing raw data into a format suitable for analysis (Károly et al., Reference Károly, Galambos, Kuti and Rudas2020; Rinnan et al., Reference Rinnan, Berg, Thygesen, Bro and Engelsen2009; Rinnan et al., Reference Rinnan, Berg, Thygesen, Bro and Engelsen2020). Pre-processing’s primary function is handling missing data and inconsistencies in raw data. After the data is cleaned, selected parameters are used as model inputs; parameter preparation is described in Section 2.1.

The time-series data, which is associated with ionospheric deviations, is inputted into the network. The input sequence is gradually shifted to predict the ionospheric behavior at various future time points. This involves incrementally shifting the data from 1 hour up to 24 hours for each testing instance. The result of this shifting procedure is a set of input-output pairs, where the input consists of past observations up to a particular time, and the output is the prediction for a specific time point in the future. The generalized process can be described as follows:

(2.22) $$ foF{2}_{\left(t+i\right)}=f\left({UTS}_t,{UTC}_t,{CHIS}_t,{CHIC}_t,{DNS}_t,{DNC}_t,{Dst}_t,{ap}_t,{Kp}_t,{DS}_t,{DC}_t,{IS}_t,{Rz}_t,\Delta {R}_t\right) $$

where $ t $ indicates time and $ i $ depicts prediction hour (up to 24 hours).
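The shifting procedure behind Eq. (2.22) amounts to building input-output pairs, matching each feature vector at time $ t $ with the target $ i $ hours later. A minimal sketch with illustrative names and toy values:

```python
def make_pairs(features, target, horizon):
    """Pair the feature vector at time t with the target at t + horizon."""
    pairs = []
    for t in range(len(features) - horizon):
        pairs.append((features[t], target[t + horizon]))
    return pairs

# Toy example: predict foF2 two hours ahead from a single feature.
feats = [[0.1], [0.2], [0.3], [0.4], [0.5]]
fof2 = [5.0, 5.1, 5.3, 5.2, 5.0]
pairs = make_pairs(feats, fof2, horizon=2)
```

Sweeping `horizon` from 1 to 24 reproduces the full set of prediction timeframes used in the study.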

The mean absolute percentage error ( $ MAPE $ ), root mean square error ( $ RMSE $ ), and mean absolute error ( $ MAE $ ) were chosen to measure the predictive accuracy of the developed model (Jakowski et al., Reference Jakowski, Stankov, Schlueter and Klaehn2017). The $ foF2 $ forecasting performance was compared to that of the other three algorithms. The $ MAPE $ , $ RMSE $ , and $ MAE $ are defined as follows:

(2.23) $$ MAPE=\frac{1}{n}\sum \limits_{i=1}^n\left|\frac{foF{2}_{act}- foF{2}_{pred}}{foF{2}_{act}}\right|\times 100 $$
(2.24) $$ RMSE=\sqrt{\frac{1}{n}\sum \limits_{i=1}^n{\left( foF{2}_{act}- foF{2}_{pred}\right)}^2} $$
(2.25) $$ MAE=\frac{1}{n}\sum \limits_{i=1}^n\left| foF{2}_{act}- foF{2}_{pred}\right| $$

where $ n $ is the number of samples, $ foF{2}_{act} $ is the actual value, and $ foF{2}_{pred} $ is the predicted value. A low error value indicates that the model’s predictions are close to the actual values, whereas a high error value indicates that they are far off.
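The three metrics of Eqs. (2.23)–(2.25) can be sketched directly; the function names and sample values are illustrative:

```python
import math

def mape(actual, pred):
    """Mean absolute percentage error (Eq. 2.23), in percent."""
    return 100.0 / len(actual) * sum(
        abs((a - p) / a) for a, p in zip(actual, pred))

def rmse(actual, pred):
    """Root mean square error (Eq. 2.24), in the units of the data (MHz here)."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred))
                     / len(actual))

def mae(actual, pred):
    """Mean absolute error (Eq. 2.25), in the units of the data."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

# Toy foF2 values in MHz.
act = [5.0, 6.0, 4.0]
prd = [5.5, 5.5, 4.0]
```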

2.6. Compared models

To evaluate the developed model’s performance, the LSTM was compared to linear regression, decision tree, and MLP algorithms. Linear regression is a widely used method for data analysis (Montgomery et al., Reference Montgomery, Peck and Vining2021). It finds the linear relationship in a data set by establishing the relationship between the independent and dependent variables (the best-fitting line). In a linear regression model, the dependent variable is modeled as a linear combination of the independent variables plus an error term (Weisberg, Reference Weisberg2005). The equation is written as follows:

(2.26) $$ y={\beta}_0+{\beta}_1\times {x}_1+{\beta}_2\times {x}_2+\cdots +{\beta}_n\times {x}_n+\varepsilon $$

The dependent variable is y, and the independent variables are $ {x}_1,{x}_2,\cdots, {x}_n $ . The coefficients ( $ {\beta}_0,{\beta}_1,{\beta}_2,\cdots, {\beta}_n $ ) describe the strength of each independent variable’s relationship with the dependent variable. $ \varepsilon $ is the error, representing the difference between the predicted and actual value of the dependent variable.
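For the single-variable case of Eq. (2.26), the least squares coefficients have the familiar closed form; a minimal sketch with illustrative toy data:

```python
def fit_linear(x, y):
    """Ordinary least squares for y = b0 + b1*x
    (single-variable case of Eq. 2.26)."""
    n = len(x)
    xm = sum(x) / n
    ym = sum(y) / n
    # Slope: covariance of x and y divided by the variance of x.
    b1 = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / \
         sum((xi - xm) ** 2 for xi in x)
    b0 = ym - b1 * xm  # intercept from the means
    return b0, b1

# Toy data roughly following y = 1 + 2x.
x = [0, 1, 2, 3, 4]
y = [1.0, 3.1, 4.9, 7.0, 9.1]
b0, b1 = fit_linear(x, y)
```

With several predictors, the same fit is obtained from the normal equations $ \beta ={\left({X}^{\prime }X\right)}^{-1}{X}^{\prime }Y $ , as already used for the $ Kp $ model in Section 2.1.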

Decision Tree (DT), an effective tool that employs a tree-like model of decisions and their potential outcomes, was implemented for $ foF2 $ critical frequency forecasting up to 24 hours in advance (Iban & Şentürk, Reference Iban and Şentürk2022). A series of branches represent possible decisions and outcomes (Myles et al., Reference Myles, Feudale, Liu, Woody and Brown2004). A decision tree model’s prediction is based on the probabilities or scores associated with the leaf nodes where the data ends up (Lan et al., Reference Lan, Zhang, Jiang, Yang and Zha2018).

An MLP was developed for forecasting $ foF2 $ ionospheric disturbances; it is composed of multiple layers of interconnected “neurons.” Stacking hidden layers between the input and output increases the model’s capacity and complexity. MLP models have previously been used to forecast air quality, daily solar radiation, and solar activity (Elizondo & McClendon, Reference Elizondo and McClendon1994). A comprehensive description of the developed Decision Tree and MLP models, along with their corresponding block diagrams and hyperparameters, is provided in the Supplementary Material.
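To make the comparison concrete, the two nonlinear baselines can be set up with scikit-learn as sketched below. The hyperparameters and the synthetic data here are illustrative placeholders, not the configurations used in the study (those are given in the Supplementary Material):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

# Toy stand-in for the real feature matrix (solar/geomagnetic indices,
# time encodings); the target is synthetic, not actual foF2 data.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# Decision tree baseline: recursive partitioning of the feature space.
tree = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X, y)

# MLP baseline: two illustrative hidden layers between input and output.
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, y)

tree_pred = tree.predict(X)
mlp_pred = mlp.predict(X)
```

Both estimators expose the same `fit`/`predict` interface, which is what makes a like-for-like comparison against the LSTM forecasts straightforward.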

3. Results and discussion

The developed algorithms generate comprehensive evaluations encompassing $ RMSE $ , $ MAE $ and $ MAPE $ ; the definitions of these criteria are given in the formulas above. The LSTM model’s effectiveness is evaluated by comparing the obtained results to those of the other three machine learning algorithms. Table 2 shows the average accuracy of the $ foF2 $ predicted values across all selected stations and years, according to maximum and minimum solar activity, for the 1st, 12th, and 24th prediction hours. The complete $ RMSE $ results for all models over the specified period at each station are available in Appendix 8. Additionally, all research findings have been made publicly accessible via the project’s GitHub page, ensuring open access to the results for interested parties.
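The three criteria follow the standard definitions and can be written compactly; a plain-numpy sketch with made-up values (the arrays below are illustrative, not results from the study):

```python
import numpy as np

def rmse(actual, pred):
    """Root mean square error."""
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((actual - pred) ** 2)))

def mae(actual, pred):
    """Mean absolute error."""
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    return float(np.mean(np.abs(actual - pred)))

def mape(actual, pred):
    """Mean absolute percentage error, in percent."""
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    return float(np.mean(np.abs((actual - pred) / actual)) * 100.0)

# Illustrative foF2-like values in MHz
actual = np.array([4.0, 5.0, 8.0])
pred = np.array([4.2, 4.8, 8.4])
scores = {"rmse": rmse(actual, pred),
          "mae": mae(actual, pred),
          "mape": mape(actual, pred)}
```

Note that MAPE is undefined where the actual value is zero; foF2 values are strictly positive, so this is not an issue here.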

Table 2. The average errors of $ foF2 $ predicted values across all the stations for the 1st prediction hour

The complex relationship between the input variables and the $ foF2 $ values is effectively captured by the LSTM model, as evidenced by its performance metrics. Specifically, the LSTM model yields average MAPE errors of 3.27% and 3.07%, average RMSE errors of 0.91 and 0.72 (in MHz), and average MAE errors of 0.20 and 0.29 (in MHz) for the first prediction hour across all stations during solar minimum and maximum periods, respectively (refer to Table 2). Notably, for each station, the LSTM model consistently demonstrates lower MAPE values, particularly evident for the 12th prediction hour, where average errors are recorded as 4.70% and 4.68%, average RMSE errors as 0.33 and 0.55 (in MHz), and average MAE errors as 0.16 and 0.24 (in MHz) during solar minimum and maximum, respectively (Table 3). Furthermore, for the 24th prediction hour, the LSTM model yields average MAPE errors of 5.88% and 5.87% for solar minimum and maximum, respectively, with corresponding RMSE values of 0.91 and 0.72 (in MHz) and average MAE values of 0.20 and 0.29 (in MHz) (Table 4). The decision tree algorithm performed worse than the LSTM model, reflecting the complexity of the relationship between the input data and $ foF2 $ . This trend was consistently observed across nearly all stations and years, with only a negligible 3% of the results deviating from this pattern.

Table 3. The average errors of $ foF2 $ predicted values across all the stations for the 12th prediction hour

Table 4. The average errors of $ foF2 $ predicted values across all the stations for the 24th prediction hour

Figures 5, 6, 7, and 8 display the error plots of the four developed models (LSTM, DT, MLP, and LR) for the years 2009, 2012, 2015, and 2019. For instance, at the Istanbul station, the LSTM model demonstrates the most favorable predictive accuracy, as indicated by the minimal error observed in the error box plots, across all prediction hours (1, 2, 6, 12, 18, and 24). The LSTM error plots show narrower error bands and smaller deviations from the observed values, indicating its superior predictive capability.

Figure 5. Error plots of the compared machine learning models for the Istanbul station in 2009.

Figure 6. Error plots of the compared machine learning models for the Istanbul station in 2012.

Figure 7. Error plots of the compared machine learning models for the Istanbul station in 2015.

Figure 8. Error plots of the compared machine learning models for the Istanbul station in 2019.

The error plot for the decision tree model reveals fluctuations and irregular patterns. The MLP model’s error plot exhibits moderate improvements over the decision tree model but still shows notable errors, suggesting challenges in capturing the intricate temporal dependencies present in the $ foF2 $ data. Conversely, the linear regression model’s error plot displays systematic biases and larger errors, particularly in capturing the non-linear relationships and temporal dynamics inherent in the $ foF2 $ data. All obtained results and associated plots, encompassing data from various stations and years, are readily accessible through the project’s GitHub page.

Overall, the error plots consistently indicate that the LSTM model outperforms the DT, MLP, and LR models in forecasting $ foF2 $ values for all the stations and across the years 2009, 2012, 2015, and 2019. The narrower error bands, smaller errors, and closer alignment with observed values in the LSTM error plot underscore its superiority in capturing the complex temporal patterns and dependencies present in the foF2 data, highlighting its efficacy for accurate forecasting in ionospheric parameter predictions.

In Figures 9, 10, 11, and 12, the box and whisker plots provide a comprehensive visualization of the distribution of $ foF2 $ prediction values across the different models and years, alongside the real values. Across the years 2009, 2012, 2015, and 2019, these plots show the central tendency, spread, and variability of the forecast errors, enabling a comparative assessment of model performance. Furthermore, the box and whisker plots allow for the identification of potential outliers and extreme forecast errors, which may be more prevalent in certain models or years. These outliers can provide valuable insights into the limitations and weaknesses of specific models, as well as potential areas for improvement in forecasting methodologies.

Figure 9. Box and whisker plots illustrating $ foF2 $ prediction results using the LSTM, DT, MLP, and LR models for the Istanbul station in 2009.

Figure 10. Box and whisker plots illustrating foF2 prediction results using the LSTM, DT, MLP, and LR models for the Istanbul station in 2012.

Figure 11. Box and whisker plots illustrating foF2 prediction results using the LSTM, DT, MLP, and LR models for the Istanbul station in 2015.

Figure 12. Box and whisker plots illustrating foF2 prediction results using the LSTM, DT, MLP, and LR models for the Istanbul station in 2019.

Notably, the LSTM model consistently demonstrates smaller median errors and narrower interquartile ranges compared to the other models. This suggests that the LSTM model consistently outperforms the DT, MLP, and LR models in terms of forecast accuracy across different years.

Conversely, the box and whisker plots for the DT, MLP, and LR models may exhibit larger median errors and wider interquartile ranges, indicating greater variability and less accuracy in their forecasts. These models may struggle to capture the underlying patterns and dynamics in the $ foF2 $ data, resulting in less consistent performance across different years.
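The summary statistics behind such box and whisker plots, the median error and the interquartile range, can be computed directly from each model's residuals. A small sketch with made-up residuals (the arrays below are hypothetical, not the study's data):

```python
import numpy as np

def box_stats(errors):
    """Median and interquartile range (IQR) of a model's prediction errors."""
    errors = np.asarray(errors, dtype=float)
    q1, med, q3 = np.percentile(errors, [25, 50, 75])
    return {"median": float(med), "iqr": float(q3 - q1)}

# Hypothetical residuals (MHz) for two models: a tight one and a dispersed one
lstm_err = np.array([-0.1, 0.0, 0.1, 0.2, -0.2])
lr_err = np.array([-0.8, 0.3, 1.1, -0.6, 0.9])
lstm_stats = box_stats(lstm_err)
lr_stats = box_stats(lr_err)
```

A smaller IQR corresponds to a narrower box, i.e. more consistent forecasts; points beyond the whiskers are the outliers discussed above.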

Furthermore, in this section, an overview of the results is presented based on the Taylor diagram analysis (Rochford, Reference Rochford2016). The performance of the four models in predicting foF2 values over various prediction horizons (1, 2, 6, 12, 18, and 24 hours) is comprehensively evaluated (Figures 13, 14, 15, and 16). The Taylor diagram provides a visual representation of the agreement between the observed and predicted data, considering key metrics such as correlation, centered root mean square error (CRMSE), and standard deviation, offering a holistic view of model performance (Jolliff et al., Reference Jolliff, Kindle, Shulman, Penta, Friedrichs, Helber and Arnone2009). In this study, the Taylor diagram is employed to determine the degree of alignment between the predicted values of each model and the observed values (Taylor, Reference Taylor2001). The results indicate that the LSTM model outperforms the other models, as evidenced by its placement closest to the baseline observed point on the Taylor diagram. This proximity signifies a high level of similarity between the predicted and observed data in terms of standard deviation, with a correlation approaching 90% and a CRMSE close to 0.25. Following the LSTM model, the decision tree model demonstrates the second-best performance among the tested models. While not as close to the observed point as the LSTM, the decision tree model still exhibits favorable agreement with the observed data, indicating its effectiveness in predicting foF2 values across different time intervals. The MLP model follows, with slightly lower accuracy than the decision tree model: it falls further from the observed point on the Taylor diagram but still demonstrates a reasonable level of agreement with the observed data, albeit with a slightly lower correlation and a higher CRMSE.
Lastly, the linear regression model shows the least favorable performance among the tested models, as indicated by its placement on the Taylor diagram: it exhibits lower correlation and higher CRMSE than the other models, suggesting limitations in capturing the underlying patterns and complexities of the foF2 data.
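The quantities plotted on a Taylor diagram follow directly from the observed series $ o $ and predicted series $ p $ : their standard deviations $ \sigma_o,\sigma_p $ , the correlation $ r $ , and the CRMSE, which satisfy $ CRMSE^2=\sigma_o^2+\sigma_p^2-2\sigma_o\sigma_p r $ . A numpy sketch of these diagnostics (function and variable names are illustrative):

```python
import numpy as np

def taylor_stats(obs, pred):
    """Statistics underlying a Taylor diagram point for one model."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    sd_obs, sd_pred = obs.std(), pred.std()
    r = np.corrcoef(obs, pred)[0, 1]
    # Centered RMSE: RMSE after removing each series' mean (i.e. bias-free)
    crmse = np.sqrt(np.mean(((pred - pred.mean()) - (obs - obs.mean())) ** 2))
    return {"sd_obs": float(sd_obs), "sd_pred": float(sd_pred),
            "corr": float(r), "crmse": float(crmse)}
```

Because of the identity above (the law of cosines on the diagram), a point closer to the observed reference simultaneously implies a matching standard deviation, a higher correlation, and a lower CRMSE.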

Figure 13. Taylor plots with the predicted and observed foF2 values for the Istanbul station in 2009 utilizing LSTM, DT, MLP, and LR models over different time intervals (1, 2, 6, 12, 18, and 24 hours).

Figure 14. Taylor plots with the predicted and observed foF2 values for the Istanbul station in 2012 utilizing LSTM, DT, MLP, and LR models over different time intervals (1, 2, 6, 12, 18, and 24 hours).

Figure 15. Taylor plots with the predicted and observed foF2 values for the Istanbul station in 2015 utilizing LSTM, DT, MLP, and LR models over different time intervals (1, 2, 6, 12, 18, and 24 hours).

Figure 16. Taylor plots with the predicted and observed foF2 values for the Istanbul station in 2019 utilizing LSTM, DT, MLP, and LR models over different time intervals (1, 2, 6, 12, 18, and 24 hours).

4. Conclusion

In conclusion, the accurate prediction of ionospheric parameters is of utmost importance due to their significant impact on radio wave propagation and weather forecasting. The present study investigates the performance of several popular machine learning models in predicting the critical frequency of the ionospheric $ F2 $ layer, known as $ foF2 $ , up to 24 hours in advance using the single-station approach, with a particular focus on mid and high latitudes. The reliability of the LSTM model in forecasting daily variations in $ foF2 $ during the 24th solar cycle is analyzed and compared to other machine learning algorithms. The findings of this study hold promise for improving the accuracy of $ foF2 $ predictions, with potential implications for enhancing communication systems and weather forecasting capabilities. The LSTM model outperforms the other machine learning algorithms in forecasting daily variations of foF2, as evidenced by its placement closest to the baseline observed point on the Taylor diagram. This proximity signifies a high level of similarity between the predicted and observed data in terms of standard deviation, with a correlation approaching 90% and a CRMSE close to 0.25. Furthermore, on average, LSTM is 1.2 times better than the second-best model (DT), 1.6 times as effective as the MLP, and three times more accurate than linear regression. An in-depth analysis of changes in the ionosphere is fundamental to ascertaining their impact on terrestrial weather patterns and anticipating space weather, which can have dire consequences for a society reliant on technology. In future work, ensemble learning, which integrates multiple deep learning models, will be employed to comprehensively understand ionospheric behavior during turbulent and quiet periods. Future research endeavors will also focus on the low, equatorial, mid, and high latitude regions, concentrating on short-term and real-time forecasting.

Abbreviations

LSTM

a long short-term memory method

foF2

critical frequency of the F2 layer of the ionosphere.

F2

the highest layer of the ionosphere.

MAPE

the mean absolute percentage error.

MAE

the mean absolute error.

RMSE

the root mean square error.

TEC

the total electron content.

F

a layer of the ionosphere.

MLP

a multilayer perceptron.

DT

a decision tree.

LR

a linear regression.

ANN

an artificial neural network.

CNN

a convolutional neural network.

BiLSTM

a bidirectional long short-term memory.

AdaBoost-BP

a parallel Adaboost-Backpropagation neural network

ap

a daily average level for geomagnetic activity.

Kp

a planetary index characterizing the global disturbance of the Earth’s magnetic field.

Rz

a sunspot number.

DHS, DHC

a day number (sine and cosine components).

UTS, UTC

universal time (sine and cosine components).

DNS, DNC

a solar zenith angle (sine and cosine components).

DS, DC

magnetic declination (sine and cosine components).

IS

magnetic inclination (a sine component).

$ \Delta $ R(h)

relative deviation of foF2.

HF

high-frequency.

F10.7

solar flux.

IRI

the International Reference Ionosphere.

GITM

the Global Ionosphere-Thermosphere Model.

NASA

National Aeronautics and Space Administration.

GANN

Genetic Algorithm Neural Network.

CRMSE

the Centered Root-Mean-Squared Error.

ReLU

the Rectified Linear Unit.

MSE

the mean squared error criterion.

Acknowledgments

The authors thank Serkan Macit for support in data gathering and technical assistance. The authors also thank the Community Coordinated Modeling Center for providing data.

Data availability statement

The data and materials are readily accessible on the corresponding author’s GitHub page. Replication data and code can be found on the GitHub website: (https://github.com/AlexaDenisenko/Ionosphere/).

Author contribution

Conceptualization: Alexandra Denisenko-Floyd and Meric Yucel. Methodology: Alexandra Denisenko-Floyd and Meric Yucel. Validation: Alexandra Denisenko-Floyd, Meric Yucel and Burak Berk Ustundag. Data curation: Alexandra Denisenko-Floyd and Meric Yucel. Data visualization: Alexandra Denisenko-Floyd. Writing original draft: Alexandra Denisenko-Floyd. Supervision, Burak Berk Ustundag. All authors approved the final submitted draft.

Funding statement

This study was funded by the Scientific and Technological Research Council of Turkiye as part of Research Project 121E88.

Competing interest

The authors declared no conflict of interest.

Ethical standards

The research meets all ethical guidelines, including adherence to the legal requirements of the study country.

References

Atıcı, R and Pala, Z (2022) Prediction of the ionospheric foF2 parameter using R language forecast hybrid model library convenient time series functions. Wireless Personal Communications 122, 3293–2022.
Barta, V, Sátori, G, Berényi, K, Kis, Á and Williams, E (2019) Effects of solar flares on the ionosphere as shown by the dynamics of ionograms recorded in Europe and South Africa. Annales Geophysicae 37, 747–761.
Benoit, A and Petry, A (2021) Evaluation of F10.7, sunspot number and photon flux data for ionosphere TEC modeling and prediction using machine learning techniques. Atmosphere 12, 1202–1209.
Bi, Z, Wu, X, Li, Z, Chang, D and Yong, X (2022) DeepISMNet: three-dimensional implicit structural modeling with convolutional neural network. Geoscientific Model Development 15, 6841–6861.
Bilitza, D (1986) International reference ionosphere: recent developments. Radio Science 21, 343–346.
Bilitza, D, Altadill, D, Truhlik, V, Shubin, V, Galkin, I, Reinisch, B and Huang, X (2017) International Reference Ionosphere 2016: from ionospheric climate to real-time weather predictions. Space Weather 15, 418–429.
Bilitza, D, McKinnell, LA, Reinisch, B and Fuller-Rowell, T (2011) The international reference ionosphere today and in the future. Journal of Geodesy 85, 909–920.
Blagoveshchensky, DV, Sergeeva, MA and Kozlovsky, A (2017) Ionospheric parameters as the precursors of disturbed geomagnetic conditions. Advances in Space Research 60 (11), 2437–2451.
Cander, L (2015) Forecasting foF2 and MUF(3000)F2 ionospheric characteristics – a challenging space weather frontier. Advances in Space Research 56, 1973–1981.
Chen, YI, Liu, JY and Chen, SC (2000) Statistical investigation of the saturation effect of sunspot on the ionospheric foF2. Physics and Chemistry of the Earth, Part C: Solar, Terrestrial and Planetary Science 25 (4), 359–362.
El Naqa, I and Murphy, M (2015) What is machine learning? Machine Learning in Radiation Oncology: Theory and Applications 1, 3–11.
Elizondo, D, Hoogenboom, G and McClendon, R (1994) Development of a neural network model to predict daily solar radiation. Agricultural and Forest Meteorology 71 (1-2), 115–132.
Francis, N, Brown, A, Cannon, P and Broomhead, D (2010) Prediction of the hourly ionospheric parameter foF2 using a novel nonlinear interpolation technique to cope with missing data points. Journal of Geophysical Research: Space Physics 106, 30077–30083.
Fukumizu, K (2001) Statistical active learning in multilayer perceptrons. IEEE Transactions on Neural Networks 11 (1), 17–26.
Gardner, M and Dorling, S (1998) Artificial neural networks (the multilayer perceptron) – a review of applications in the atmospheric sciences. Atmospheric Environment 32 (14-15), 2627–2636.
Gers, F, Schmidhuber, J and Cummins, F (2000) Learning to forget: continual prediction with LSTM. Neural Computation 12 (10), 2451–2471.
Gnevyshev, M (1977) Essential features of the 11-year solar cycle. Solar Physics 51 (1), 175–183.
Gonzalez, J and Yu, W (2018) Non-linear system modeling using LSTM neural networks. IFAC-PapersOnLine 51 (1), 485–489.
Gruet, M, Chandorkar, M, Sicard, A and Camporeale, E (2018) Multiple-hour-ahead forecast of the Dst index using a combination of long short-term memory neural network and Gaussian process. Space Weather 16, 1882–1896.
Gulyaeva, T (2011) Storm time behavior of topside scale height inferred from the ionosphere–plasmasphere model driven by the F2 layer peak and GPS-TEC observations. Advances in Space Research 47 (6), 913–920.
Gulyaeva, T (2016) Modification of solar activity indices in the international reference ionosphere IRI and IRI-PLAS models due to recent revision of sunspot number time series. Solar-Terrestrial Physics 2 (3), 87–98.
Hao, Y, Shi, H, Xiao, Z and Zhang, D (2014) Weak ionization of the global ionosphere in solar cycle 24. Annales Geophysicae 32 (7), 809–816.
Hochreiter, S and Schmidhuber, J (1997) Long short-term memory. Neural Computation 9 (8), 1735–1780.
Iban, M and Şentürk, E (2022) Machine learning regression models for prediction of multiple ionospheric parameters. Advances in Space Research 69 (3), 1319–1334.
Ismail Fawaz, H, Forestier, G, Weber, J, Idoumghar, L and Muller, P (2022) Deep learning for time series classification: a review. Data Mining and Knowledge Discovery 33 (4), 917–963.
Jakowski, N, Stankov, S, Schlueter, S and Klaehn, D (2003) Machine learning algorithms: a study on noise sensitivity. Proceedings of the 1st Balcan Conference in Informatics 1, 356–365.
Jakowski, N, Stankov, S, Schlueter, S and Klaehn, D (2006) On developing a new ionospheric perturbation index for space weather operations. Advances in Space Research 38 (11), 2596–2600.
Jakowski, N, Stankov, S, Schlueter, S and Klaehn, D (2017) Forecasting error calculation with mean absolute deviation and mean absolute percentage error. Journal of Physics: Conference Series 930 (1), 012002.
Jolliff, JK, Kindle, JC, Shulman, I, Penta, B, Friedrichs, MA, Helber, R and Arnone, RA (2009) Summary diagrams for coupled hydrodynamic-ecosystem model skill assessment. Journal of Marine Systems 76 (1-2), 64–82.
Károly, A, Galambos, P, Kuti, J and Rudas, I (2020) Deep learning in robotics: survey on model structures and training strategies. IEEE Transactions on Systems, Man, and Cybernetics: Systems 51 (1), 266–279.
Kutiev, I and Muhtarov, P (2001) Modeling of midlatitude F region response to geomagnetic activity. Journal of Geophysical Research: Space Physics 106 (A8), 15501–15509.
Lan, T, Zhang, Y, Jiang, C, Yang, G and Zha, Z (2018) Automatic identification of spread F using decision trees. Journal of Atmospheric and Solar-Terrestrial Physics 179, 389–395.
Laštovička, J (2013) Trends in the upper atmosphere and ionosphere: recent progress. Journal of Geophysical Research: Space Physics 118 (6), 3924–3935.
Laštovička, J, Akmaev, R, Beig, G, Bremer, J, Emmert, J, Jacobi, C, Jarvis, M, Nedoluha, G, Portnyagin, Y and Ulich, T (2008) Emerging pattern of global change in the upper atmosphere and ionosphere. Annales Geophysicae 26 (5), 1255–1268.
Li, X, Zhou, C, Tang, Q, Zhao, J, Zhang, F, Xia, G and Liu, Y (2021) Forecasting ionospheric foF2 based on deep learning method. Remote Sensing 13 (19), 3849.
Liemohn, MW, McCollough, JP, Jordanova, VK, Ngwira, CM, Morley, SK, Cid, C and Vasile, R (2018) Model evaluation guidelines for geomagnetic index predictions. Space Weather 16 (12), 2079–2102.
Lin, G, Shen, C, Shi, Q, Hengel, A and Suter, D (2014) Fast supervised hashing with decision trees for high-dimensional data. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1963–1970.
Liu, L, Wan, W and Ning, B (2004) Statistical modeling of ionospheric foF2 over Wuhan. Radio Science 39 (1), 1–10.
Loh, W (2014) Fifty years of classification and regression trees. International Statistical Review 82 (3), 329–348.
Lovrić, M, Milanović, M and Stamenković, M (2014) Algorithmic methods for segmentation of time series: an overview. Journal of Contemporary Economic and Business Issues 1 (1), 32–35.
McKinnell, L and Oyeyemi, E (2009) Progress towards a new global foF2 model for the International Reference Ionosphere (IRI). Advances in Space Research 43 (11), 1770–1775.
Montgomery, DC, Peck, EA and Vining, GG (2021) Introduction to Linear Regression Analysis. John Wiley and Sons, 16, 69–109.
Myles, A, Feudale, R, Liu, Y, Woody, N and Brown, S (2004) An introduction to decision tree modeling. Journal of Chemometrics: A Journal of the Chemometrics Society 18 (6), 275–285.
Nakamura, M, Maruyama, T and Shidama, Y (2007) Using a neural network to make operational forecasts of ionospheric variations and storms at Kokubunji, Japan. Earth, Planets and Space 59 (12), 1231–1239.
Ohmura, A (2009) Observed decadal variations in surface solar radiation and their causes. Journal of Geophysical Research: Atmospheres 114 (D10), D00D05.
Oyeyemi, E, McKinnell, L and Poole, A (2006) Near-real time foF2 predictions using neural networks. Journal of Atmospheric and Solar-Terrestrial Physics 68 (16), 1807–1818.
Oyeyemi, E, Poole, A and McKinnell, L (2005) On the global short-term forecasting of the ionospheric critical frequency foF2 up to 5 hours in advance using neural networks. Radio Science 40 (6), 1–12.
Oyeyemi, EO, Poole, AWV and McKinnell, LA (2005) On the global model for foF2 using neural networks. Radio Science 40 (6), RS6011.
Patel, B, Prajapati, S and Lakhtaria, K (2012) Efficient classification of data using decision tree. Bonfring International Journal of Data Mining 2 (1), 1–12.
Paziewski, J and Sieradzki, R (2020) Enhanced wide-area multi-GNSS RTK and rapid static positioning in the presence of ionospheric disturbances. Earth, Planets and Space 72, 1–16.
Perrone, L, Pietrella, M and Zolesi, B (2007) A prediction model of foF2 over periods of severe geomagnetic activity. Advances in Space Research 39 (5), 674–680.
Ramchoun, H, Ghanou, Y, Ettaouil, M and Janati Idrissi, M (2016) Multilayer perceptron: architecture optimization and training. International Journal of Interactive Multimedia.
Reimers, N and Gurevych, I (2017) Optimal hyperparameters for deep LSTM-networks for sequence labeling tasks. arXiv preprint arXiv:1707.06799, 1–36.
Ridley, AJ, Deng, Y and Toth, G (2006) The global ionosphere–thermosphere model. Journal of Atmospheric and Solar-Terrestrial Physics 68 (8), 839–864.
Rinnan, Å, Nørgaard, L, Berg, F, Thygesen, J, Bro, R and Engelsen, S (2009) Data pre-processing. Infrared Spectroscopy for Food Quality Analysis and Control, 5, 29–50.
Rinnan, Å, Nørgaard, L, Berg, F, Thygesen, J, Bro, R and Engelsen, S (2020) Investigating the impact of data normalization on classification performance. Applied Soft Computing 97, 105524.
Rochford, PA (2016) SkillMetrics: a Python package for calculating the skill of model predictions against observations. https://github.com/PeterRochford/SkillMetrics.
Secan, JA and Wilkinson, PJ (1997) Statistical studies of an effective sunspot number. Radio Science 32 (4), 1717–1724.
Taylor, KE (2001) Summarizing multiple aspects of model performance in a single diagram. Journal of Geophysical Research: Atmospheres 106 (D7), 7183–7192.
Twala, B (2000) Ionospheric foF2 storm forecasting using neural networks. Physics and Chemistry of the Earth, Part C: Solar, Terrestrial and Planetary Science 23 (10), 267–273.
Twala, B (2009) An empirical comparison of techniques for handling incomplete data using decision trees. Applied Artificial Intelligence 23, 373–405.
Wang, X, Shi, JK, Wang, GJ, Zherebtsov, GA and Pirog, OM (2008) Responses of ionospheric foF2 to geomagnetic activities in Hainan. Advances in Space Research 41 (4), 556–561.
Weisberg, S (2005) Applied Linear Regression. John Wiley & Sons, 3, 47–65.
Wintoft, P and Cander, LR (2000) Ionospheric foF2 storm forecasting using neural networks. Physics and Chemistry of the Earth, Part C: Solar, Terrestrial and Planetary Science 25 (4), 267–273.
Wrenn, GL (1987) Time-weighted accumulations ap( $ \tau $ ) and Kp( $ \tau $ ). Journal of Geophysical Research: Space Physics 92 (A9), 10125.
Yadav, A, Jha, C and Sharan, A (2020) Optimizing LSTM for time series prediction in Indian stock market. Procedia Computer Science 67, 2091–2100.
Yu, Y, Si, X, Hu, C and Zhang, J (2019) A review of recurrent neural networks: LSTM cells and network architectures. Neural Computation 31 (7), 1235–1270.
Zhang, X and Tang, L (2015) Detection of ionospheric disturbances driven by the 2014 Chile tsunami using GPS total electron content in New Zealand. Journal of Geophysical Research: Space Physics 120 (9), 7918–7925.
Zhang, W, Zhao, X, Feng, X, Liu, C, Xiang, N, Li, Z and Lu, W (2022) Predicting the daily 10.7-cm solar radio flux using the long short-term memory method. Universe 8 (1), 30.
Zhao, J, Li, X, Liu, Y, Wang, X and Zhou, C (2019) Ionospheric foF2 disturbance forecast using neural network improved by a genetic algorithm. Advances in Space Research 63 (12), 4003–4014.
Zhao, X, Ning, B, Liu, L and Song, G (2014) A prediction model of short-term ionospheric foF2 based on AdaBoost. Advances in Space Research 53 (3), 387–394.
Zhao, F, Yang, G, Yang, H, Zhu, Y, Meng, Y, Han, S and Bu, X (2021) Short and medium-term prediction of winter wheat NDVI based on the DTW–LSTM combination method and MODIS time series data. Remote Sensing 13 (22), 4660.
Table 1. The geographical locations and coordinates of the selected stations

Figure 1. A map of the stations’ spatial distribution, marked by red dots.

Figure 2. The LSTM block diagram, the repeating module in LSTM, with four interacting layers.

Figure 3. A block diagram of the proposed LSTM model with input parameters.

Figure 4. A generalized diagram depicting the process of developing a machine learning model.
