IJIREEICE                    ISSN (Online) 2321-2004 | ISSN (Print) 2321-5526

                                     International Journal of Innovative Research in
                         Electrical, Electronics, Instrumentation and Control Engineering
                                                    Vol. 8, Issue 03, March 2020

                          Melanoma Detection using
                         Convolution Neural Networks
                     Laxminarayanan G1, Mohan kumar T2, Bevin I3, Sathya Prakash T4
                             Assistant Professor, Instrumentation and Control Engineering,
                            Sri Manakula Vinayagar Engineering College, Puducherry, India1
Student, Instrumentation and Control Engineering, Sri Manakula Vinayagar Engineering College, Puducherry, India2,3,4

Abstract: In today's world, skin cancer is one of the most common forms of cancer in humans. Skin cancer is an
abnormal growth of skin cells; it most frequently develops on skin exposed to sunlight, but can occur anywhere on the
body. Most skin cancers are curable at an early stage, so early and fast detection of skin cancer can save a patient's
life. With new technology, early detection of carcinoma is feasible at the initial stage. The conventional method for
diagnosing skin cancer is the biopsy, performed by removing skin cells whose sample then undergoes a series of
laboratory tests; it is a painful and time-consuming process. A skin cancer detection system using a convolutional
neural network for early detection of the disease is proposed, which is more convenient for patients. The diagnostic
methodology uses image processing algorithms: the dermoscopy image of the skin lesion undergoes various
pre-processing techniques for noise removal and image enhancement, and is then segmented. The extracted features
are given as input to the classifier; a convolutional neural network is used for classification, labelling the given image
as cancerous or non-cancerous.

Keywords: Skin Cancer, Deep Convolutional Neural Network, Image Processing, MATLAB

                                               I.          INTRODUCTION

A biopsy is a method of removing a piece of tissue or a sample of cells from the patient's body so that it can be
analysed in a laboratory. It is an uncomfortable method, and it is time consuming for the patient as well as the doctor
because the testing takes a long time. A biopsy is performed by removing skin tissue (skin cells), and that sample
undergoes a series of laboratory tests. There is also a possibility of the disease spreading to other parts of the body,
which makes it riskier. Considering all the cases mentioned above, skin cancer detection using a CNN is proposed.
This methodology uses digital image processing techniques together with a CNN for classification. This technique
supports the early detection of skin cancers and requires no oil to be applied to the skin to obtain clear, sharp images
of moles; in this way, it is a quicker and cleaner approach. Most importantly, owing to its higher magnification, skin
cancer detection using a CNN can prevent the unnecessary excision of perfectly harmless moles and skin lesions. A
painless medical technique already used for early detection of melanoma is epiluminescence microscopy, or
dermoscopy: using a handheld device, a doctor can evaluate the patterns of size, shape, and pigmentation in
pigmented skin lesions. Among trained, experienced medical professionals, dermoscopy may reduce the number of
biopsies.

                                              II.          EXISTING WORK

Content-based image search or retrieval has been a core problem in multimedia for years. In the recent literature,
many approaches adopt invariant local features to represent images, building on the bag-of-visual-words model and
the classic inverted index structure for scalable image search. Generally, such an image search framework consists of
four key modules: feature extraction, feature quantization, image indexing, and ranking. For feature extraction, the
most popular and effective local descriptor is SIFT, which is extracted at key points or regions detected by a
Difference of Gaussians (DoG), MSER, or Hessian-affine detector. Later, several efforts were made to design local
descriptors with better efficiency and comparable discriminability, e.g., SURF and edge-SIFT. In feature quantization,
each local descriptor is mapped or hashed to one or more visual words, so that an image is represented by a group of
visual words. Inverted index structures are then readily adopted to index large-scale image databases for image
search. At the online retrieval stage, the visual words shared between a query image and database images are easily
identified by looking up the inverted index lists. The similarity between the query and a database image is measured
by a weighted formulation over those shared visual words. Finally, the relevant database images are ranked by their
similarity scores and presented to users. The initial retrieval results may be re-ranked by post-processing techniques
such as query expansion, feature augmentation, or geometric verification.
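The lookup-and-rank stage described above can be sketched with a toy inverted index. The image IDs, visual-word sets and the optional IDF weighting below are illustrative assumptions, not part of any system in this paper.

```python
from collections import defaultdict

# Toy database: each image is represented as a bag of visual-word IDs.
database = {
    "img_a": {3, 7, 12, 40},
    "img_b": {7, 12, 99},
    "img_c": {1, 2, 3},
}

# Build the inverted index: visual word -> set of images containing it.
inverted_index = defaultdict(set)
for image_id, words in database.items():
    for w in words:
        inverted_index[w].add(image_id)

def search(query_words, idf=None):
    """Score database images by their (optionally weighted) shared visual words."""
    scores = defaultdict(float)
    for w in query_words:
        weight = idf.get(w, 1.0) if idf else 1.0  # word weighting, e.g. IDF
        for image_id in inverted_index.get(w, ()):
            scores[image_id] += weight
    # Rank by similarity score, highest first.
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(search({7, 12, 3}))  # img_a shares 3 words, img_b 2, img_c 1
```

Only the inverted-index lists for the query's words are touched, which is what makes the scheme scale to large databases.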
Copyright to IJIREEICE. DOI 10.17148/IJIREEICE.2020.8313
                                            III.        PROPOSED SYSTEM

The system is the design of a complete mobile imaging system to detect melanoma, using a standard smartphone as
the platform; the proposed system is therefore remarkably accessible. It employs state-of-the-art image analysis
algorithms to enable automatic assessment of the malignancy of a skin lesion. The goal is that the public can use the
proposed mobile health (mHealth) system to perform preliminary assessments frequently and detect any anomalous
skin lesion at an early stage. As will be discussed further, the proposed system has four major components. The first
component is a fast and lightweight segmentation algorithm for skin lesion localization. The system uses novel colour
and border features to quantify the colour variation and the border irregularity of skin lesions. It evaluates 116
computational features quantifying the colour, border, asymmetry and texture of a skin lesion, including the proposed
features suited to light images captured under loosely-controlled lighting conditions. The project investigates feature
selection to identify a small set of the most discriminative features to be used on the smartphone. Using a small set of
discriminative features not only reduces the storage and computation overhead but also improves the classification
performance, as a low-dimensional feature vector is more robust to over-fitting. The selection builds on a framework
using normalized mutual information and proposes an improvement that takes the feature coordinates into account.
Several methods are proposed to fuse the classification results of the individual category classifiers. The system is
evaluated on a dataset from the National Skin Centre (NSC) of Singapore, and a case study examines the
human-computer interface (HCI) aspect of the proposed system.
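The feature-selection idea above can be illustrated with a minimal mutual-information score between a discretized feature and the class labels. This is a simplified sketch: the paper's normalized-mutual-information framework with feature coordinates is more involved, and the synthetic features, bin count and histogram estimator here are assumptions for illustration.

```python
import numpy as np

def mutual_information(feature, labels, bins=4):
    """Histogram estimate of I(X;Y) in nats between a discretized
    scalar feature X and integer class labels Y."""
    edges = np.histogram_bin_edges(feature, bins=bins)[1:-1]  # inner edges
    x = np.digitize(feature, edges)                            # bin index per sample
    joint = np.zeros((x.max() + 1, labels.max() + 1))
    for xi, yi in zip(x, labels):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

# A feature correlated with the class scores far higher than pure noise,
# so ranking features by this score keeps the discriminative ones.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)
informative = labels + 0.1 * rng.standard_normal(200)  # tracks the class
noise = rng.standard_normal(200)                       # unrelated to the class
print(mutual_information(informative, labels), mutual_information(noise, labels))
```

Ranking all candidate features by such a score and keeping the top few is the simplest form of the compact-feature-set selection the section describes.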

                            IV. DETAILED DESCRIPTION ON SIMULATION TOOL

MATLAB (matrix laboratory) is a multi-paradigm numerical computing environment and proprietary programming
language developed by MathWorks. MATLAB allows matrix manipulations, plotting of functions and data,
implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages,
including C, C++, C#, Java, Fortran and Python. Although MATLAB is intended primarily for numerical computing,
an optional toolbox uses the MuPAD symbolic engine, giving access to symbolic computing capabilities. An
additional package, Simulink, adds graphical multi-domain simulation and model-based design for dynamic and
embedded systems. Cleve Moler, the chairman of the computer science department at the University of New Mexico,
started developing MATLAB in the late 1970s. He designed it to give his students access to LINPACK and EISPACK
without them having to learn Fortran. It soon spread to other universities and found a strong audience in the applied
mathematics community. Jack Little, an engineer, was exposed to it during a visit Moler made to Stanford University
in 1983. Recognizing its commercial potential, he joined with Moler and Steve Bangert; they rewrote MATLAB in C
and founded MathWorks in 1984 to continue its development. These rewritten libraries were referred to as
JACKPAC. In 2000, MATLAB was rewritten to use a more modern set of libraries for matrix manipulation,
LAPACK. MATLAB was first adopted by researchers and practitioners in control engineering, Little's specialty, but
quickly spread to many other domains. It is now also used in education, especially for the teaching of linear algebra
and numerical analysis, and is popular among scientists involved in image processing. Variables are defined using the
assignment operator. MATLAB is a weakly typed programming language because types are implicitly converted; it is
an inferred-typed language because variables can be assigned without declaring their type, except when they are to be
treated as symbolic objects, and their type can change. Data can come from constants, from computation involving
the values of other variables, or from the output of a function.

                                                     V. MOTIVATION

In today's world our industry faces the junction of two rapidly developing markets: healthcare and emerging mobile
computing. The increasing availability of mobile devices equipped with multi-core CPUs and high-resolution image
sensors has the potential to empower people to become more proactive and engaged in their own healthcare. This
creates the opportunity to design a wide variety of mobile imaging applications, e.g., mobile image search, landmark
recognition, mobile video classification and 3-D scene video. Among the many imaging applications, healthcare
applications have drawn a great deal of attention recently, and several methods have been proposed to support
efficient and timely image-related diagnosis. Compared with conventional imaging healthcare applications, mobile
imaging healthcare applications have the benefits of being practical, low-cost and easily accessible. This work focuses
on accessible detection of melanoma (MM) using mobile image analysis. MM is a type of skin cancer arising from the
pigment cells of the epidermis. There are three main types of skin cancer: MM, basal cell carcinoma and squamous
cell carcinoma. Dermoscopic images of MM are taken with the help of a liquid medium or non-polarized light and
magnifiers, and they include features under the surface of the skin (e.g., pigment network, aggregated globules).

        Figure: 2D visualization of the SVM output of the LCF after dimension reduction, for the test set.
In contrast, light images (e.g., those taken with smartphones) do not include these features. This work focuses on the
analysis of light images (source: www.dermoscopy.org, NSC). MM is considered the most hazardous form of skin
cancer. According to an annual report [15], the American Cancer Society projected 87,110 new cases of melanoma in
the United States by the end of 2017, with almost 9,730 estimated deaths. MM may be treated successfully, yet its
curability depends on early detection and removal while the tumour is still relatively small and thin.

                                                VI.        SYSTEM MODEL

1. Gray Scale Conversion
A grayscale image contains only brightness information: each pixel value corresponds to an amount of light, and
gradations of brightness can be distinguished. A grayscale image therefore measures only light intensity. An 8-bit
image has brightness values from 0 to 255, where ‘0’ represents black and ‘255’ represents white. In grayscale
conversion, a colour image is converted into a grayscale image. Grayscale images are easier and faster to process than
colour images, and all subsequent image processing techniques are applied to the grayscale image.

Grayscale intensity = 0.299 R + 0.587 G + 0.114 B        (1)
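Equation (1) can be applied per pixel as follows; this is a generic NumPy sketch with a made-up test image, not the paper's MATLAB code.

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image to grayscale with the weights of
    Eq. (1): 0.299 R + 0.587 G + 0.114 B."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights  # weighted sum over the colour channels

# One pure-red, one pure-green and one pure-blue pixel:
pixels = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255]]], dtype=float)
print(to_grayscale(pixels))  # approximately [[76.245 149.685 29.07]]
```

Note that green contributes most to the perceived brightness and blue least, which is what the three coefficients encode.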

2. Noise Removal
The objective of noise removal is to detect and remove unwanted noise from the digital image. The difficulty lies in
deciding which features of an image are real and which are caused by noise, since noise appears as random variations
in pixel values. The proposed system uses a median filter to remove unwanted noise. The median filter is a nonlinear
filter that leaves edges invariant. It is implemented by sliding a window of odd length over the signal: the samples in
each window are sorted by magnitude, and the centre-most value, the median of the samples within the window, is the
filter output.
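The sliding-window median described above can be sketched in one dimension. The paper does not give an implementation; the window length and the edge-replication padding here are assumptions for illustration.

```python
import numpy as np

def median_filter_1d(signal, window=3):
    """Slide an odd-length window over the signal and output the median of
    each window. Edges are padded by replicating the border samples."""
    assert window % 2 == 1, "window length must be odd"
    half = window // 2
    padded = np.pad(signal, half, mode="edge")
    return np.array([np.median(padded[i:i + window])
                     for i in range(len(signal))])

noisy = np.array([2, 2, 80, 2, 2, 2])  # one impulse-noise spike
print(median_filter_1d(noisy))          # the spike at index 2 is removed
```

The impulse is eliminated while constant regions pass through unchanged, which is why the median filter removes salt-and-pepper noise without blurring edges the way an averaging filter would. For images, the same idea is applied with a 2-D window.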

3. Segmentation
3.1. Edge-Based
• Assumption: different objects are separated by edges (grey-level discontinuities).
• The segmentation is performed by identifying the grey-level gradients.
• The same approach can be extended to colour channels.
3.2. Region-Based
• Assumption: different objects are separated by other kinds of perceptual boundaries, characterized by neighbourhood features.
• Most often texture-based: textures are considered instantiations of underlying stochastic processes and analysed
under the assumption that stationarity and ergodicity hold.
• Method: region-based features are extracted and used to define “classes”.
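The edge-based assumption above can be illustrated with a minimal gradient-threshold sketch. The first-difference gradient, the threshold value and the toy "lesion on bright skin" image are illustrative choices, not the paper's segmentation algorithm.

```python
import numpy as np

def edge_map(gray, threshold):
    """Mark pixels whose grey-level gradient magnitude exceeds a threshold,
    approximating the gradient by first differences in x and y."""
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    gy = np.abs(np.diff(gray, axis=0, prepend=gray[:1, :]))
    return (gx + gy) > threshold

# A dark square (the "lesion") on a bright background:
img = np.full((6, 6), 200.0)
img[2:4, 2:4] = 40.0
edges = edge_map(img, threshold=100)
print(edges.astype(int))  # ones only along the square's boundary
```

The grey-level discontinuity at the lesion border produces the large gradients, so thresholding the gradient magnitude recovers the object boundary, which is exactly the edge-based assumption.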

  Figure: Segmentation evaluation for the Otsu (a), (b), the MST (c), (d), and the proposed (e), (f) methods. The green
                rectangle represents the ground truth; the red rectangle denotes the segmentation result.


4. MSER (Maximally Stable Extremal Regions)
The MSER extraction implements the following steps:
•         Sweep an intensity threshold from black to white, performing a simple luminance thresholding of the image.
•         Extract the connected components (“extremal regions”).
•         Find the thresholds at which an extremal region is “maximally stable”, i.e. at a local minimum of the relative
growth of its area. Due to the discrete nature of the image, the region below / above may be coincident with the actual
region, in which case the region is still deemed maximal.
•         Approximate each region with an ellipse (this step is optional).
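The "maximally stable" criterion in the third step can be illustrated on a toy series of region area versus threshold. The `delta` margin and the toy area numbers are assumptions for illustration; a real MSER detector tracks many regions at once through the threshold sweep.

```python
def maximally_stable_thresholds(areas, delta=1):
    """Given a region's area as a function of the sweeping intensity
    threshold, return the thresholds where the relative growth rate
        q(t) = (area[t + delta] - area[t - delta]) / area[t]
    is at a local minimum, i.e. the 'maximally stable' levels."""
    q = {}
    for t in range(delta, len(areas) - delta):
        q[t] = (areas[t + delta] - areas[t - delta]) / areas[t]
    # Local minima of q over the threshold index:
    return [t for t in q
            if all(q[t] <= q.get(t + d, float("inf")) for d in (-1, 1))]

# Toy sweep: the region grows fast, plateaus (stable), then grows fast again.
areas = [5, 20, 100, 102, 103, 104, 300, 900]
print(maximally_stable_thresholds(areas))  # the plateau around t = 4
```

The plateau, where the region barely changes as the threshold sweeps, is exactly where the detector declares the region maximally stable.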

5. Classifier
Convolutional neural networks: a CNN is a multilayer stack of learning modules well suited to two-dimensional data
(i.e. images). CNNs are a subclass of neural networks that combine the nonlinear processing of hidden-layer neurons
with the essential properties of weight sharing (over customizable sub-images, the so-called convolutional filters),
pooling and down-sampling. As a consequence, such networks are expected to learn representations of the data with
increasing levels of abstraction, regrouped by semantic similarity. The canonical structure of a CNN contains:
• 1) a number of convolutional layers, each divided into four sub-tasks: convolutional filtering, nonlinearity, pooling
and sub-sampling;
• 2) a set of fully connected layers with properties identical to those of classical neural networks;
• 3) a softmax layer performing the softmax function, which outputs posterior probabilities for each class.
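The canonical structure above (convolutional filtering, nonlinearity, pooling, a fully connected layer, softmax) can be sketched as a single forward pass in NumPy. The layer sizes and random weights are illustrative assumptions, not the network trained in this work.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)  # the nonlinearity sub-task

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
image = rng.standard_normal((6, 6))
kernel = rng.standard_normal((3, 3))                 # one shared filter
feat = relu(conv2d(image, kernel))                   # 4 x 4 feature map
pooled = feat.reshape(2, 2, 2, 2).max(axis=(1, 3))   # 2 x 2 after max pooling
fc_weights = rng.standard_normal((2, pooled.size))   # fully connected layer
probs = softmax(fc_weights @ pooled.ravel())         # posteriors for 2 classes
print(probs)
```

The output is a probability vector over the two classes (here, cancerous vs non-cancerous), which is what the softmax layer in item 3) produces.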

                                             Figure: The inverted index approach.

                                  VII. CONVOLUTION NEURAL ALGORITHM

Convolutional networks may include local or global pooling layers, which combine the outputs of neuron clusters at
one layer into a single neuron in the next layer. For instance, max pooling uses the maximum value from each cluster
of neurons at the prior layer, while average pooling uses the average value from each cluster. Fully connected layers
connect every neuron in one layer to every neuron in another layer; in theory this is equivalent to the traditional
multi-layer perceptron (MLP). The flattened matrix goes through a fully connected layer to classify the images. Each
neuron in a neural network computes an output value by applying some function to the input values coming from its
receptive field in the previous layer. The function applied to the input values is specified by a vector of weights and a
bias (typically real numbers). Learning in a neural network progresses by making incremental adjustments to the
biases and weights. The vector of weights and the bias are together called a filter and represent some feature of the
input (e.g., a particular shape). A distinguishing feature of CNNs is that many neurons share the same filter. This
reduces the memory footprint, because one bias and one vector of weights are used across all receptive fields sharing
that filter, instead of each receptive field having its own bias and weight vector.
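The two pooling variants described above can be sketched directly; non-overlapping 2 x 2 clusters and the toy input are assumptions for illustration.

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    """Non-overlapping pooling: combine each size x size cluster of neurons
    into one output neuron by its maximum or its average."""
    h, w = x.shape
    blocks = x.reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.mean(axis=(1, 3))

x = np.array([[1., 2., 5., 6.],
              [3., 4., 7., 8.],
              [0., 0., 1., 1.],
              [0., 4., 1., 1.]])
print(pool2d(x, mode="max"))   # [[4. 8.] [4. 1.]]
print(pool2d(x, mode="avg"))   # [[2.5 6.5] [1.  1. ]]
```

Either way, each 2 x 2 cluster collapses to one value, halving the feature map in both dimensions; max pooling keeps the strongest response in the cluster, average pooling a smoothed one.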

                               VIII. DIAGNOSIS OF MELANOMA SKIN CANCER

Diagnosis is the process of finding the cause of a health problem. Diagnosing melanoma skin cancer usually begins
with a visit to your general practitioner, who will ask about any signs or symptoms you have and perform a skin
exam. Based on this information, your doctor may refer you to a specialist, such as a dermatologist or surgeon. The
process of diagnosis may seem long and frustrating. It is normal to worry, but try to remember that other health
conditions can cause signs and symptoms similar to melanoma skin cancer; it is important for the healthcare team to
rule out other causes of a health problem before making a diagnosis of melanoma skin cancer. The following tests are
usually used to rule out or diagnose melanoma skin cancer, and many of the same tests used to diagnose the cancer
are also used to determine its stage (how far the cancer has spread). Convolutional neural networks (CNNs) are used
for extracting features from images. CNNs are biologically-inspired variants of MLPs. A CNN consists of an input
and an output layer, as well as multiple hidden layers; the hidden layers typically consist of convolutional layers,
pooling layers, fully connected layers and normalization layers. The output of each layer acts as the input to the next
layer.

                                                   Figure: The ROC curve.

In general, an automatic melanoma analysis system can be constructed in four main phases. The first phase is image
acquisition, which can be performed with different devices such as a dermatoscope, spectroscope, standard digital
camera or camera phone. The images acquired by these devices exhibit peculiar features and different qualities,
which can significantly change the outcome of the analysis process. The second phase involves skin detection,
removing artefacts (e.g., ruler marks, hair), and lesion border localization. The third phase computes a compact set of
discriminative features. Finally, the fourth phase classifies the lesions based on the extracted features. There is a
plethora of computer-aided systems for segmentation and classification of skin lesions; most of these works address
lesion segmentation of dermoscopic images using classic image segmentation techniques.

            Figures: Simulated images. Image training; global region-based segmentation.


            Figures: Grey-scale image; discrete view; infected area; message box which displays the output.

                                                 IX. CONCLUSION

An accessible mobile healthcare solution for melanoma detection using mobile image analysis is proposed. The main
characteristics of the proposed system are: an efficient hierarchical segmentation scheme suitable for the
resource-constrained platform, a new set of features which efficiently capture the colour variation and border
irregularity in smartphone-captured images, and a new mechanism for choosing a compact set of the most
discriminative features. The experimental results, based on 184 camera images, demonstrate the efficiency of the
prototype in accurately segmenting and classifying skin lesions in private images. Several possible usage scenarios
for the current solution are foreseen: it could be used by the general public for preliminary self-screening, or it could
assist general physicians during the diagnosis process. In addition to the technical development, attention was also
given to understanding the usability and acceptance challenges. For this purpose, we investigated the HCI design
issues through an exploratory case study and semi-structured interviews, and discovered several important HCI issues
that should be addressed in future work.


                                                             REFERENCES

[1]. W. Zhou, M. Yang, H. Li, X. Wang, Y. Lin, and Q. Tian, “Towards codebook-free: Scalable cascaded hashing for mobile image search,” IEEE
      Transactions on Multimedia, pp. 601–611, April 2014.
[2]. H. Li, Y. Wang, T. Mei, J. Wang, and S. Li, “Interactive multimodal visual search on mobile device,” IEEE Transactions on Multimedia, vol.
      15, no. 3, pp. 594–607, April 2013.
[3]. B. Girod, V. Chandrasekhar, D. M. Chen, N.-M. Cheung, R. Grzeszczuk, Y. Reznik, G. Takacs, S. S. Tsai, and R. Vedantham, “Mobile visual
      search,” IEEE Signal Processing Magazine, vol. 28, no. 4, pp. 61–76, 2011.
[4]. T. Chen, K. H. Yap, and D. Zhang, “Discriminative soft bag-of-visual phrase for mobile landmark recognition,” IEEE Transactions on
      Multimedia, vol. 16, no. 3, pp. 612–622, April 2014.
[5]. W. Min, C. Xu, M. Xu, X. Xiao, and B. K. Bao, “Mobile landmark search with 3d models,” IEEE Transactions on Multimedia, vol. 16, no. 3,
      pp. 623–636, April 2014.
[6]. H. Ganster, A. Pinz, R. Röhrer, E. Wildling, M. Binder, and H. Kittler, “Automated melanoma recognition,” IEEE Trans. Med. Imaging, vol.
      20, no. 3, pp. 233–239, 2001.
[7]. M. E. Celebi, H. A. Kingravi, B. Uddin, H. Iyatomi, Y. A. Aslandogan, W. V. Stoecker, and R. H. Moss, “A methodological approach to the
      classification of dermoscopy images,” Comp. Med. Imag. and Graph., vol. 31, no. 6, pp. 362–373, 2007.
[8]. T. Wadhawan, N. Situ, H. Rui, K. Lancaster, X. Yuan, and G. Zouridakis, “Implementation of the 7-point checklist for melanoma detection on
      smart handheld devices,” Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2011.
[9]. K. Ramlakhan and Y. Shang, “A mobile automated skin lesion classification system,” in ICTAI, 2011.
[10]. T.-T. Do, Y. Zhou, V. Pomponiu, N.-M. Cheung, and D. C. I. Koh, “Method and device for analysing an image,” Patent US 20 170 231 550
      A1, August 17, 2017.
