
Case studies of CNNs, for example on diabetic retinopathy, building a smart speaker, self-driving cars, etc.

Case Study of CNN for Diabetic Retinopathy:

  • Diabetic retinopathy, also known as diabetic eye disease, is a medical condition in which damage occurs to the retina due to diabetes mellitus. It is a major cause of blindness in developed countries.
  • Diabetic retinopathy affects up to 80 percent of those who have had diabetes for 20 years or more.
  • The longer a person has diabetes, the higher his or her chances of developing diabetic retinopathy.
  • It is also the leading cause of blindness in people aged 20 to 64.
  • Diabetic retinopathy is the result of damage to the small blood vessels and neurons of the retina.
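The points above describe the disease itself; as a purely illustrative sketch of how a CNN might be applied to this case study, the PyTorch snippet below outlines a small classifier that grades a retinal fundus image into five severity levels (0 = no DR through 4 = proliferative DR). The architecture, layer sizes, and input resolution are assumptions made for the example, not a model from any particular study.

```python
import torch
import torch.nn as nn

class RetinopathyCNN(nn.Module):
    """Hypothetical CNN that grades fundus images into 5 severity levels."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        # Three conv blocks: each extracts features and halves the resolution.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Global average pooling followed by a linear classifier.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, num_classes)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = RetinopathyCNN()
dummy = torch.randn(1, 3, 224, 224)   # one RGB fundus image (batch of 1)
print(model(dummy).shape)             # torch.Size([1, 5]) -> one score per grade
```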
Case Study of CNN for Self-Driving Cars:

Convolutional neural networks (CNNs) are used to model spatial information, such as images. CNNs are very good at extracting features from images, and they are often regarded as universal non-linear function approximators.

CNNs can capture different patterns as the depth of the network increases. For example, the layers at the beginning of the network capture edges, while the deeper layers capture more complex features such as the shapes of objects (leaves on trees, or tires on a vehicle). This is the reason why CNNs are the main algorithm in self-driving cars.
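To make this feature hierarchy concrete, here is a minimal PyTorch sketch that stacks three convolutional blocks and prints the shapes of the intermediate feature maps; the channel counts and the 128×128 input are arbitrary choices for the illustration.

```python
import torch
import torch.nn as nn

# Three stacked convolutional blocks: early blocks respond to simple
# patterns (edges), deeper blocks to more complex shapes and object parts.
blocks = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
])

x = torch.randn(1, 3, 128, 128)   # a dummy RGB road-scene image
for i, block in enumerate(blocks, start=1):
    x = block(x)
    print(f"after block {i}: {tuple(x.shape)}")
# after block 1: (1, 16, 64, 64)  <- low-level features (edges)
# after block 2: (1, 32, 32, 32)
# after block 3: (1, 64, 16, 16)  <- higher-level features (object shapes)
```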

The key component of the CNN is the convolutional layer itself. It has a convolutional kernel which is often called the filter matrix. The filter matrix is convolved with a local region of the input image which can be defined as:

y = w * x + b

where:

  • the operator * represents the convolution operation,
  • w is the filter matrix and b is the bias,
  • x is the input,
  • y is the output.
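For concreteness, the NumPy sketch below applies this definition to a single-channel input (implemented as cross-correlation, which is how deep-learning frameworks usually realise the "convolution" layer); the toy input, filter, and bias values are made up for the example.

```python
import numpy as np

def conv2d(x, w, b):
    """Valid 2-D convolution of input x with filter w plus bias b: y = w * x + b."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    y = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # each output value comes from one local region of the input
            y[i, j] = np.sum(w * x[i:i + kh, j:j + kw]) + b
    return y

x = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 input patch
w = np.array([[1., 0., -1.]] * 3)              # toy 3x3 edge-like filter
print(conv2d(x, w, b=0.5))                     # 3x3 output feature map
```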

In practice, the dimension of the filter matrix is usually 3×3 or 5×5. During training, the filter matrix is constantly updated so that it converges to a reasonable set of weights. One important property of CNNs is that these weights are shared: the same filter parameters are applied at every spatial position of the input. Shared parameters save a great deal of memory, and by stacking many different filters the network can still learn diverse feature representations.
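The saving from weight sharing can be seen with simple arithmetic: the snippet below compares one shared 3×3 filter (plus bias) against a hypothetical fully connected layer mapping the same 64×64 input to the same output size; the input size is an arbitrary assumption.

```python
# Weight sharing: one 3x3 filter (and its bias) is reused at every position.
in_h, in_w, k = 64, 64, 3

conv_params = k * k + 1                         # 9 shared weights + 1 bias = 10
out_h, out_w = in_h - k + 1, in_w - k + 1       # output size of a "valid" convolution
dense_params = (in_h * in_w) * (out_h * out_w)  # one weight per input-output pair

print(conv_params)    # 10
print(dense_params)   # 15745024 -> the unshared equivalent needs millions of weights
```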

The output of the convolutional layer is usually fed to a non-linear activation function. The activation function enables the network to solve linearly inseparable problems and to represent complex, non-linear relationships in the data. Commonly used activation functions are Sigmoid, Tanh, and ReLU, which are listed as follows:

  • Sigmoid: σ(x) = 1 / (1 + e^(−x))
  • Tanh: tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x))
  • ReLU: f(x) = max(0, x)

It’s worth mentioning that ReLU is the preferred activation function because it converges faster than the other activation functions. In addition, the output of the convolutional layer is usually modified by a max-pooling layer, which reduces the spatial size of the feature map while retaining the most salient information about the input image, such as texture.
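The NumPy sketch below simply writes out the three activation functions and a basic non-overlapping max-pooling routine so the operations are concrete; the random 4×4 array stands in for a real feature map.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0.0, x)   # cheap, and gradients do not saturate for x > 0

def max_pool2d(x, size=2):
    """Non-overlapping max pooling: keep the strongest response in each window."""
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]   # crop so both dims divide evenly
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

feature_map = np.random.randn(4, 4)
print(relu(feature_map))          # negative responses clipped to zero
print(max_pool2d(feature_map))    # 2x2 summary of the 4x4 map
```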

The three important CNN properties that make them versatile and a primary component of self-driving cars are:

  • local receptive fields,
  • shared weights,
  • spatial subsampling.

These properties reduce overfitting and store representations and features that are vital for image classification, segmentation, localization, and more, as the sketch below illustrates.
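As a closing sketch for this case study, the PyTorch snippet below maps a single camera frame to one steering value, loosely in the spirit of end-to-end steering networks such as NVIDIA's PilotNet; every layer size and the 66×200 input resolution are illustrative assumptions, not a reference implementation.

```python
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    """Tiny end-to-end model: camera frame in, predicted steering angle out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(            # local receptive fields + shared weights
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # spatial subsampling to a feature vector
            nn.Linear(64, 50), nn.ReLU(),
            nn.Linear(50, 1),                       # single regressed steering value
        )

    def forward(self, x):
        return self.head(self.features(x))

model = SteeringCNN()
frame = torch.randn(1, 3, 66, 200)    # one dashboard-camera frame
print(model(frame).shape)             # torch.Size([1, 1])
```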

[Figure: Convolutional neural networks]


Case Study of CNN for Building a Smart Speaker:
