Lightning Detection from Weather Radar
Da Fan (dxf424@psu.edu) and Drew Polasky (adp29@psu.edu)
Department of Meteorology and Atmospheric Science, Pennsylvania State University, State College, PA

Sree Sai Teja Lanka (szl577@psu.edu) and Sumedha Prathipati (sjp6046@psu.edu)
Department of Computer Science and Engineering, Pennsylvania State University, State College, PA

ACM Reference Format:
Da Fan, Drew Polasky, Sree Sai Teja Lanka, and Sumedha Prathipati. 2019. Lightning Detection from Weather Radar. In IST 597 Fall'19: Deep Learning, December 16, 2019, State College, PA. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/1122445.1122456

1   ABSTRACT
In this study, we use radar images in a deep learning algorithm to detect lightning. Radar reflectivity represents the quantity and size of water and ice particles in the atmosphere. This value does not directly relate to the presence of lightning, but similar processes that produce high reflectivity values also lead to a greater probability of lightning. We use radar data along with lightning labels from the Geostationary Lightning Mapper to train deep learning models for lightning detection. The radar images were captured once every 5 minutes. The lightning strikes were captured once every 20 seconds and combined into one lightning label every 5 minutes. These data are available from March 2018 to October 2019, giving around 150,000 images in total. We use data augmentation and downsampling to overcome the unbalanced nature of the data and reduce the memory demands of the model. We initially test UNet, Google Inception V3, and ResNet architectures; in this initial testing, UNet performed best. Training a new UNet model from scratch, we find that it can reasonably predict lightning locations from radar data, with an F1 score of 0.29.

Keywords: Geostationary Lightning Mapper, Next Generation Weather Radar, UNet, ResNet, Inception V3, Data Augmentation, Data Downsampling

2   INTRODUCTION
Lightning is a significant and difficult-to-predict weather hazard, causing an average of about 50 deaths and 9,000 wildland fires annually in the United States (https://www.weather.gov/safety/lightning-victims). Lightning is often accompanied by heavy rainfall, hail, and strong winds. It is important to predict lightning and provide timely alerts about possible lightning strikes. However, it is still hard to give precise information about their timing and location. Current methods for predicting lightning in operational settings rely on simple thresholds from radar images [13].

Artificial intelligence can be used to predict various weather phenomena [5]. One possible way to predict lightning is to use machine learning algorithms, which have already been applied to weather prediction. Logistic regression and random forest models were employed by Ruiz and Villa (2008) [8] to distinguish convective and non-convective systems based on features extracted from satellite images. Similarly, Veillette et al. (2013) [12] applied decision trees and neural networks to predict convection initiation from various features including satellite imagery and radar data.

Deep learning methods [4] offer the ability to encode spatial features at multiple scales and levels of abstraction with the explicit goal of encoding the features that maximize predictive skill. Schön et al. (2019) [10] trained tree classifiers and neural networks with optical flow error based on satellite images to predict lightning, achieving high accuracy but also a high false alarm rate. Shrestha et al. (2019) [14] applied artificial neural networks with storm parameters from polarimetric radar to predict and nowcast lightning.

   In this work, we used three different Convolutional Neural
Network (CNN) models, including UNet, ResNet, and Incep-
tion V3, to predict lightning based on NEXRAD radar images
and lightning labels from Geostationary Lightning Mapper
(GLM) data.

3   DATA
Dataset
In this analysis, the following datasets were used: 1) the lightning data from the Geostationary Lightning Mapper (GLM), and 2) the composite radar reflectivity fields from the National Severe Storms Laboratory (NSSL) 3D mosaic Next Generation Weather Radar (NEXRAD). Our analysis covers the period from 1 March 2018 until 31 October 2019, during which GLM data are available.
The radar mosaic data have a 1-km horizontal resolution and a 5-min temporal resolution, with 2600x6000 pixels per image, covering the Continental US (CONUS). A sample radar image is shown in figure 1.
The lightning labels utilized in this work were detected by the GLM instrument aboard the GOES-16 satellite. GLM camera pixels detect lightning flashes day and night with a horizontal resolution ranging between 8 and 12 km and an average detection efficiency nearing 90 percent (Goodman et al. 2013 [2]). The lightning data centroids, initially stored at 20-s intervals, were binned into 5-minute intervals and then projected onto a uniform 10-km grid with 260x600 data points within the Contiguous United States. A sample lightning label is shown in figure 2.

Figure 1: Sample radar image. The shading in the radar image indicates radar reflectivity.

Figure 2: Sample lightning label. The green dot indicates a lightning strike at the grid box.
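A minimal sketch of this binning and gridding step is given below; the flash-centroid array, grid bounds, and variable names are illustrative assumptions rather than the exact values or projection used in this work.

```python
import numpy as np

# Grid dimensions of the 10-km CONUS label grid described above.
NY, NX = 260, 600
# Illustrative CONUS bounding box (assumed, not the exact projection used).
LAT0, LAT1, LON0, LON1 = 20.0, 52.0, -130.0, -60.0

def grid_lightning(flashes, window_start_s, window_s=300):
    """Bin GLM flash centroids (time_s, lat, lon) from one 5-minute
    window into a binary NY x NX lightning label."""
    t, lat, lon = flashes[:, 0], flashes[:, 1], flashes[:, 2]
    in_window = (t >= window_start_s) & (t < window_start_s + window_s)
    counts, _, _ = np.histogram2d(
        lat[in_window], lon[in_window],
        bins=[NY, NX], range=[[LAT0, LAT1], [LON0, LON1]])
    return (counts > 0).astype(np.uint8)  # 1 = lightning in the grid box
```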
Unbalanced Data
Lightning is relatively rare, present in about 0.000072 of the pixels in our dataset. This presents a challenge for training models: a model that never predicts lightning will still be highly accurate. To overcome this issue, we tune the data and select training cases with more lightning present. For the same reason, we do not use accuracy as the primary metric for evaluating these models.
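A minimal sketch of this case selection, assuming `images` and `labels` are NumPy arrays of radar images and binary lightning labels; the threshold of 10 lit grid cells follows the criterion described later for the full-resolution experiments.

```python
import numpy as np

def select_lightning_cases(images, labels, min_lit_cells=10):
    """Keep only the training cases whose label contains at least
    `min_lit_cells` grid cells with lightning."""
    lit_cells = labels.reshape(len(labels), -1).sum(axis=1)
    keep = lit_cells >= min_lit_cells
    return images[keep], labels[keep]
```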

Data Downsampling
Downsampling is used to reduce the total size of the images in order to more easily train the models. Although shrinking an image does not require filling in new space, as upsampling does, care must still be taken to ensure that minimal useful information is lost. For example, consider an image made up of alternating black and white pixels: shrinking it to half its size by directly sampling every other pixel would yield a completely white or completely black image. The resolution of the lightning labels is one tenth that of the radar data, so we downsample the radar images to match the resolution of the lightning labels, reducing the demands on the model while losing as little useful information as possible.
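A minimal sketch of such a downsampling step, assuming a block-mean reduction by a factor of 10 that takes the 2600x6000 radar mosaic to the 260x600 label grid. Averaging each 10x10 block, rather than sampling every tenth pixel, avoids the aliasing problem described above.

```python
import numpy as np

def downsample(radar, factor=10):
    """Reduce a 2-D radar image by averaging factor x factor blocks."""
    ny, nx = radar.shape
    trimmed = radar[: ny - ny % factor, : nx - nx % factor]
    blocks = trimmed.reshape(ny // factor, factor, nx // factor, factor)
    return blocks.mean(axis=(1, 3))

radar = np.random.rand(2600, 6000).astype(np.float32)  # stand-in image
small = downsample(radar)  # shape (260, 600), matching the label grid
```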
Data Augmentation
Because lightning pixels make up such a small fraction of the labels relative to non-lightning pixels, we use augmentation techniques such as resizing, rotating, and zooming to create different views of the lightning where needed. We also add label noise to raise the number of 1s (lightning) relative to 0s (non-lightning), by resizing, zooming, and scaling to crop the images toward portions with more lightning, a procedure we refer to as adding adversarial label noise.

Adding Adversarial Label Noise. We resize and zoom the images to increase the fraction of lightning to non-lightning pixels, targeting a ratio of roughly 3:4. Figure 3 shows an original image and figure 4 the same image after cropping; the dark portion of the original can be compared with the dark portion of the cropped image.
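One simple realization of this cropping is sketched below, assuming the crop is centered on the centroid of the lightning pixels; the window size and centering rule are illustrative, not the exact procedure used here.

```python
import numpy as np

def crop_around_lightning(image, label, crop_ny=64, crop_nx=64):
    """Crop an image/label pair toward the region with the most lightning,
    raising the fraction of lightning pixels in the resulting labels."""
    ys, xs = np.nonzero(label)
    if len(ys) == 0:  # no lightning: fall back to a corner crop
        return image[:crop_ny, :crop_nx], label[:crop_ny, :crop_nx]
    cy, cx = int(ys.mean()), int(xs.mean())  # lightning centroid
    y0 = int(np.clip(cy - crop_ny // 2, 0, label.shape[0] - crop_ny))
    x0 = int(np.clip(cx - crop_nx // 2, 0, label.shape[1] - crop_nx))
    return (image[y0:y0 + crop_ny, x0:x0 + crop_nx],
            label[y0:y0 + crop_ny, x0:x0 + crop_nx])
```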

Figure 3: Original radar image

Figure 4: Image obtained after adding adversarial label noise

4   TRANSFER LEARNING AND MODELS
Three image deep learning models (UNet, ResNet, and Google Inception V3) are evaluated for use on radar data.

UNet
This architecture [7] looks like a 'U', which justifies its name. It consists of three sections: the contraction section, the bottleneck, and the expansion section. Each contraction block takes an input and applies two 3x3 convolution layers followed by a 2x2 max pooling. The number of kernels or feature maps doubles after each block so that the architecture can learn complex structures effectively. The bottommost layer mediates between the contraction and expansion sections; it uses two 3x3 CNN layers followed by a 2x2 up-convolution layer. The heart of this architecture, however, lies in the expansion section. Like the contraction section, it consists of several expansion blocks, each of which passes the input through two 3x3 CNN layers followed by a 2x2 upsampling layer. After each block, the number of feature maps used by the convolutional layers is halved to maintain symmetry, and the input is concatenated with the feature maps of the corresponding contraction layer, ensuring that the features learned while contracting the image are used to reconstruct it. The number of expansion blocks equals the number of contraction blocks. Finally, the resulting mapping passes through another 3x3 CNN layer with the number of feature maps equal to the number of segments desired. [9]
To detect lightning in the radar images, the UNet model is trained from scratch on the GLM data to produce segmented images as the output. The yellow segmented regions in the output denote the presence of lightning in the image.

Figure 5: Architecture of UNet Model

ResNet
Training deep neural networks with gradient-based optimizers and learning methods can cause vanishing and exploding gradients during backpropagation. With the help of residual blocks, we can increase the number of hidden layers without worrying about this problem. Residual blocks enable the network to preserve what it has learned previously through an identity mapping, in which the output equals the input, so that no diminishing transformations are applied to what the network has already learned. Residual blocks can be used along with convolutional layers to maximize accuracy. [6] The ResNet 50 architecture [3] consists of 5 stages, each including a series of convolutional operations. The input is initially zero-padded and then passed to the first stage, which includes a convolutional layer, batch normalization, and ReLU functions. The subsequent stages contain series of convolutional layers which are connected to the fully connected dense layers at the output end.

For the problem of predicting lightning in the radar images, the ResNet model is trained using transfer learning and then fine-tuned. The last fully connected layer is replaced with two fully connected layers and a final softmax layer for the output. In this architecture, only the last three layers are trained, while the weights of the previous layers in the model are carried forward from the pretrained ResNet model.
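A sketch of this fine-tuning setup in Keras, assuming a pretrained ResNet50 backbone from `tensorflow.keras.applications`; the sizes of the two added dense layers are illustrative assumptions.

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50

def build_finetuned_resnet(input_shape=(128, 128, 3), n_classes=2):
    backbone = ResNet50(include_top=False, weights="imagenet",
                        input_shape=input_shape, pooling="avg")
    backbone.trainable = False  # carry the pretrained weights forward
    # Replace the original top with two dense layers and a softmax.
    x = layers.Dense(256, activation="relu")(backbone.output)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return Model(backbone.input, outputs)
```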
Figure 6: Architecture of fine-tuned ResNet Model

Google Inception V3
The Inception V3 [11] model builds on the idea that most of the activations in a deep network are redundant because of correlations between them. An efficient architecture for a deep network will therefore have sparse connections between the activations, which implies that not every output channel is connected to every input channel. The image recognition model consists of two parts: the feature extraction part, with a convolutional neural network, and the classification part, with fully connected and softmax layers.
This model approximates a sparse CNN with a normal dense construction and uses convolutions of different sizes to capture details at varied scales. Another salient point of this architecture is the bottleneck layer, which reduces the number of features, and thus operations, at each layer so that inference time can be kept low. Additionally, the number of features is reduced before data is passed to the expensive convolution modules, which in turn leads to large savings in computational cost. [1]

For the problem of predicting lightning in the radar images, the Inception V3 model is trained using transfer learning and then fine-tuned. The feature extraction part of the model is reused, while the classification part is retrained using the radar image data. The last output layer of the pre-trained Inception V3 model is replaced by two fully connected layers and a final softmax layer for the output. In this architecture, only the last three layers are trained, while the weights of the previous layers in the model are carried forward from the pre-trained Inception V3 model.
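The analogous sketch for Inception V3, under the same assumptions as the ResNet sketch above:

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3

def build_finetuned_inception(input_shape=(128, 128, 3), n_classes=2):
    backbone = InceptionV3(include_top=False, weights="imagenet",
                           input_shape=input_shape, pooling="avg")
    backbone.trainable = False  # reuse the feature extraction part
    x = layers.Dense(256, activation="relu")(backbone.output)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return Model(backbone.input, outputs)
```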
Figure 7: Architecture of fine-tuned Inception V3 Model

5   APPROACH FOLLOWED
The three models mentioned above (UNet, ResNet, and Google Inception V3) are first trained on the downsampled and augmented images. This gives a fair idea of how the models perform on augmented lightning data that does not have much of an imbalance between the number of 0s and 1s in the labels: the 1/0 ratio in the labels obtained after data augmentation is approximately 0.25, as opposed to the earlier ratio of approximately 0.000072.
The next step is to identify the model that performs best compared to the others and train it on the original NEXRAD radar images and lightning labels from Geostationary Lightning Mapper (GLM) data.

6   RESULTS
UNet
The UNet model, which has a CNN architecture for image segmentation, was trained from scratch on the augmented images from the previous step. The output from this model is a prediction of the locations of lightning at a particular time from the radar image. The UNet model was trained on input images of size 260x600 and correspondingly sized labels. The hyperparameters used to train this model are:

Hyperparameters           Value
Number of epochs          15
Learning rate             0.001
Train, Validation split   0.4
Batch size                32
Optimizer                 RMS Prop
Loss                      Binary cross entropy
Table 1: Hyperparameter values of the UNet model trained on the augmented lightning data

The UNet model was trained for a total of 15 epochs with a learning rate of 0.001 and a batch size of 32. The train-validation data split was 0.4. The model was trained over the augmented data images using the RMSprop optimizer with binary cross entropy loss computed at each epoch.
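A sketch of this training configuration, reusing the illustrative `build_unet`, `images`, and `labels` names from the sketches above (the arrays are assumed to match the model's input shape).

```python
from tensorflow.keras.optimizers import RMSprop

model = build_unet()
model.compile(optimizer=RMSprop(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(images, labels, epochs=15, batch_size=32,
                    validation_split=0.4)  # 0.4 held out for validation
```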
Figure 8: Example prediction for the UNet lightning model. Yellow areas in the predicted and observed images represent areas with lightning.

The performance metrics obtained are:

Metric                Value
Training accuracy     90.78 %
Validation accuracy   92.64 %
Validation loss       0.391
Cohen's Kappa         0.47
AUROC                 0.834
F1 score              0.676
Table 2: Performance metrics of the UNet model trained on the augmented lightning data

The accuracy values, F1 score, and Cohen's Kappa score indicate that the model performed satisfactorily on the lightning images. From the actual and predicted images obtained, it can be inferred that the model captures the lightning in the original radar image to some extent.
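A sketch of how such metrics can be computed with scikit-learn, assuming `y_true` and `y_prob` hold the flattened validation labels and the model's predicted per-pixel probabilities.

```python
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             f1_score, roc_auc_score)

y_pred = (y_prob >= 0.5).astype(int)  # threshold the probabilities
print("Accuracy:     ", accuracy_score(y_true, y_pred))
print("Cohen's Kappa:", cohen_kappa_score(y_true, y_pred))
print("AUROC:        ", roc_auc_score(y_true, y_prob))
print("F1 score:     ", f1_score(y_true, y_pred))
```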
ResNet
This model is obtained by fine-tuning the ResNet model. It was trained on input images and labels of size 128x128. The hyperparameters used to train this model are:

Hyperparameters           Value
Number of epochs          20
Learning rate             0.001
Train, Validation split   0.4
Batch size                64
Optimizer                 RMS Prop
Loss                      Binary cross entropy
Table 3: Hyperparameter values of the ResNet model trained on the augmented lightning data

The ResNet model was trained for a total of 20 epochs with a learning rate of 0.001 and a batch size of 64. The train-validation data split was 0.4.

(a) Actual Image    (b) Predicted Image
Figure 9: Example prediction for the ResNet lightning model. Yellow areas in the predicted image represent areas with lightning.

The performance metrics obtained are:

Metric                Value
Training accuracy     91.78 %
Validation accuracy   93.64 %
Validation loss       0.343
Cohen's Kappa         0.32
AUROC                 0.77
F1 score              0.366
Table 4: Performance metrics of the ResNet model trained on the augmented lightning data

It is observed that even though the accuracies are in the same range, the Cohen's Kappa score, AUROC value, and F1 score are much lower than those obtained when training the UNet model on the lightning images. It is also evident from the predicted images that lightning was not detected at the correct positions. From these observations, the model performs well to a certain extent but does not match the UNet model.
Inception V3
This model is obtained by fine-tuning the Inception V3 model. It was trained on input images and labels of size 128x128. The hyperparameters used to train the model are:

Hyperparameters           Value
Number of epochs          20
Learning rate             0.001
Train, Validation split   0.4
Batch size                64
Optimizer                 RMS Prop
Loss                      Binary cross entropy
Table 5: Hyperparameter values of the Inception model trained on the augmented lightning data

The Inception V3 model was trained for a total of 20 epochs with a learning rate of 0.001 and a batch size of 64. The train-validation data split was 0.4.

(a) Actual Image    (b) Predicted Image
Figure 10: Example prediction for the Inception V3 lightning model. Yellow areas in the predicted image represent areas with lightning.

The performance metrics obtained are:

Metric                Value
Training accuracy     92.78 %
Validation accuracy   90.64 %
Validation loss       0.771
Cohen's Kappa         0.19
AUROC                 0.71
F1 score              0.258
Table 6: Performance metrics of the Inception model trained on the augmented lightning data

The Cohen's Kappa score of 0.19 indicates that the fine-tuned Inception V3 model performs poorly compared to the other two models, implying that this architecture is not suitable for the problem at hand. From the actual and predicted radar images, it is noticeable that lightning is not detected accurately and is in fact predicted at positions where it is not expected.

In conclusion, the UNet model displays the best performance of the three models trained above. Therefore, the next step is to train a UNet model from scratch on the original radar images to predict lightning in the original GLM data.

Lightning Detection Model
To train a model to predict locations of lightning in a radar image, we train a UNet model from scratch on the original radar and GLM data. To help balance the data, so that lightning is more frequent in our training set, only times with at least 10 grid cells containing lightning were used. Even with this balancing, there are far more cells without lightning than with, around 99.7% of all grid cells.
To balance the data further, rather than training on the entire radar image (covering the full CONUS), we split each image into smaller sub-images of 520x1200 and apply the same 10-lightning-grid-cell threshold. This alteration improved the imbalance of the data labels, but lightning was still present in only a small percentage of the training label grid cells. In both cases, no threshold was applied to the testing data, so that the performance statistics most accurately represent performance on real cases.
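A sketch of this tiling and thresholding step; for simplicity it assumes the image and label share one grid, whereas the 520x1200 size quoted above refers to radar pixels (the label tile would be ten times smaller on the coarser grid).

```python
def tile_and_filter(image, label, tile_ny=520, tile_nx=1200, min_lit=10):
    """Split one scene into sub-images and keep only the tiles whose
    label has at least `min_lit` lightning grid cells."""
    tiles = []
    for y0 in range(0, image.shape[0] - tile_ny + 1, tile_ny):
        for x0 in range(0, image.shape[1] - tile_nx + 1, tile_nx):
            lab = label[y0:y0 + tile_ny, x0:x0 + tile_nx]
            if lab.sum() >= min_lit:  # enough lightning to keep the tile
                tiles.append((image[y0:y0 + tile_ny, x0:x0 + tile_nx], lab))
    return tiles
```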
Training UNet models for both the sub-images and the full images, we find that the models are generally able to capture the areas where lightning will be present. The full-image model performed slightly better than the sub-image model (table 7), despite the increased balance in the training data.
The accuracy metrics for these two models are still lower than ideal. However, the maps of lightning locations produced by the models provide good evidence that the models predict lightning in most of the areas where lightning is present (figs. 11 and 12).
Figure 11: Example prediction for the sub-image lightning model. Yellow areas in the predicted and observed images represent areas with lightning.

Figure 12: As in figure 11, but for the full CONUS radar images.

Metric          Sub-Images   Full Images
Precision       32%          25%
Recall          17%          25%
Cohen's Kappa   0.22         0.29
F1 score        0.23         0.29
Table 7: Performance metrics of the sub-image and full-image UNet models trained on the original lightning data.

7   CONCLUSION
Lightning is a dangerous and difficult-to-predict weather hazard. Deep learning methods can provide an effective means of predicting locations where lightning is likely to be present from readily available radar data. The UNet model we trained here generally captures the areas in which lightning is likely to be present in the radar images. Extending this method to synthetic radar images produced by short-term, convection-allowing models could allow it to be used for near-term prediction of lightning threats, which would be valuable to forecasters and stakeholders for issuing warnings and advisories around a wide variety of outdoor activities.

8   ACKNOWLEDGMENTS
Foremost, we would like to express our sincere gratitude to our course head, Prof. Dr. C. Lee Giles, for his continuous support, patience, motivation, enthusiasm, and immense knowledge. We could not have imagined a better professor for the course 'Deep Learning'. Besides our professor, we would like to thank our TA Ankur for his continuous help, immediate responses, encouragement, insightful comments, and hard questions, which helped us develop and build the model. Our sincere thanks also go to our TA Kaixuan Zhang for guiding us throughout the course whenever we needed him. We also thank the Penn State Institute for Computational and Data Sciences for providing the computational resources used for this project.

REFERENCES
[1] Koustubh (CV-Tricks.com). [n. d.]. Learn Machine Learning, AI & Computer Vision. Retrieved Dec 15, 2018 from https://cv-tricks.com/cnn/understand-resnet-alexnet-vgg-inception/
[2] Steven J. Goodman, Richard J. Blakeslee, William J. Koshak, Douglas Mach, Jeffrey Bailey, Dennis Buechler, Larry Carey, Chris Schultz, Monte Bateman, Eugene McCaul, and Geoffrey Stano. 2013. The GOES-R Geostationary Lightning Mapper (GLM). Atmospheric Research 125 (May 2013), 34–49. https://doi.org/10.1016/j.atmosres.2013.01.006
[3] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep Residual Learning for Image Recognition. CoRR abs/1512.03385 (2015). arXiv:1512.03385 http://arxiv.org/abs/1512.03385
[4] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature 521, 7553 (2015), 436–444.
[5] Amy McGovern, Kimberly L. Elmore, David John Gagne, Sue Ellen Haupt, Christopher D. Karstens, Ryan Lagerquist, Travis Smith, and John K. Williams. 2017. Using artificial intelligence to improve real-time decision-making for high-impact weather. Bulletin of the American Meteorological Society 98, 10 (2017), 2073–2090.
[6] mc.ai. [n. d.]. ResNet architecture explained. Retrieved September 26, 2019 from https://mc.ai/resnet-architecture-explained/
[7] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation. CoRR abs/1505.04597 (2015). arXiv:1505.04597 http://arxiv.org/abs/1505.04597
[8] Anne Ruiz and Nathalie Villa. 2008. Storms prediction: Logistic regression vs random forest for unbalanced data. arXiv:stat.AP/0804.0650
[9] Heet Sankesara. [n. d.]. UNet - Introducing Symmetry in Segmentation. Retrieved Jan 23, 2019 from https://towardsdatascience.com/u-net-b229b32b4a71
[10] Christian Schön, Jens Dittrich, and Richard Müller. 2019. The Error is the Feature: How to Forecast Lightning Using a Model Prediction Error. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '19). ACM, New York, NY, USA, 2979–2988. https://doi.org/10.1145/3292500.3330682
[11] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2015. Rethinking the Inception Architecture for Computer Vision. CoRR abs/1512.00567 (2015). arXiv:1512.00567 http://arxiv.org/abs/1512.00567
[12] Mark S. Veillette. 2013. Convective Initiation Forecasts Through the Use of Machine Learning Methods.
[13] Y. Helen Yang and Patrick King. 2010. Investigating the potential of using radar echo reflectivity to nowcast cloud-to-ground lightning initiation over southern Ontario. Weather and Forecasting 25, 4 (2010), 1235–1248.
[14] Yunish Shrestha, Y. Zhang, R. J. Doviak, and P. W. Chan. 2019. Application of Artificial Intelligence in Lightning Detection and Nowcasting Using Polarimetric RADAR Data. American Meteorological Society 99th Annual Meeting. https://ams.confex.com/ams/2019Annual/meetingapp.cgi/Paper/353625