Python and TensorFlow Tutorials

Convolutional Neural Networks

The following are the TensorFlow tutorials:

Tutorial 1: Write a very simple TensorFlow program that does the following:

  • A) Adds two matrices.
  • B) Inverts a matrix and computes its decomposition.
  • C) Calculates the eigenvalues and eigenvectors of a matrix.
  • D) Check this: GitHub Tutorial 1.
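The three operations above can be sketched as follows. This is a minimal example, assuming TensorFlow 2.x with eager execution; a symmetric positive-definite matrix is chosen so that both the Cholesky decomposition and `tf.linalg.eigh` apply.

```python
import tensorflow as tf

a = tf.constant([[2.0, 1.0], [1.0, 3.0]])
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])

# A) Add two matrices.
added = tf.add(a, b)

# B) Invert a matrix and decompose it (Cholesky works here
#    because `a` is symmetric positive definite).
inverse = tf.linalg.inv(a)
cholesky = tf.linalg.cholesky(a)

# C) Eigenvalues and eigenvectors (eigh assumes a symmetric matrix).
eigenvalues, eigenvectors = tf.linalg.eigh(a)

print(added.numpy())
print(inverse.numpy())
print(eigenvalues.numpy())
```

For general (non-symmetric) matrices, `tf.linalg.eig` and `tf.linalg.lu` are the corresponding choices.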


...

Tutorial 2: Write a program for basic matrix operations:

  • A) Write a TensorFlow program that takes two 1-D arrays of the same size, multiplies them element-wise, scales the result by a constant, and displays the output.
  • B) Write a TensorFlow program that takes two 1-D arrays of the same size, computes their vector (dot) product, and displays the output.
  • C) Write a TensorFlow program that takes two 2-D arrays of the same size, computes their matrix product, and displays the output.
  • D) Check this: GitHub Tutorial 2
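Parts A–C above can be sketched in a few lines; this is a minimal example assuming TensorFlow 2.x, with the scaling constant in part A chosen arbitrarily as 2.0:

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
y = tf.constant([4.0, 5.0, 6.0])

# A) Element-wise product of two 1-D arrays, scaled by a constant.
scaled = tf.multiply(x, y) * 2.0

# B) Vector (dot) product of the same two 1-D arrays.
dot = tf.tensordot(x, y, axes=1)

# C) Matrix product of two 2-D arrays of the same size.
m1 = tf.constant([[1.0, 2.0], [3.0, 4.0]])
m2 = tf.constant([[5.0, 6.0], [7.0, 8.0]])
product = tf.matmul(m1, m2)

print(scaled.numpy())
print(dot.numpy())
print(product.numpy())
```

Note the distinction: `tf.multiply` works element-wise, while `tf.tensordot`/`tf.matmul` contract over an axis.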

..........

Tutorial 3: Write a program that defines the following activation functions used in deep learning systems:

  • A) ReLU activation function
  • B) Sigmoid activation function
  • C) Tanh activation function
  • D) Softsign activation function
  • E) Softplus activation function
  • F) ELU activation function
  • G) Check this: GitHub Tutorial 3
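All six activations are built into TensorFlow; a minimal sketch, assuming TensorFlow 2.x, evaluates each one on the same sample points:

```python
import tensorflow as tf

x = tf.constant([-2.0, 0.0, 2.0])

relu = tf.nn.relu(x)          # max(0, x)
sigmoid = tf.math.sigmoid(x)  # 1 / (1 + exp(-x))
tanh = tf.math.tanh(x)
softsign = tf.nn.softsign(x)  # x / (1 + |x|)
softplus = tf.nn.softplus(x)  # log(1 + exp(x))
elu = tf.nn.elu(x)            # x if x > 0, else exp(x) - 1

for name, value in [("ReLU", relu), ("Sigmoid", sigmoid), ("Tanh", tanh),
                    ("Softsign", softsign), ("Softplus", softplus), ("ELU", elu)]:
    print(name, value.numpy())
```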


Tutorial 4: Write a TensorFlow program for different loss functions. Evaluate and display their performance results.

  • (A) Regression loss functions: (a) L2-norm loss (Euclidean or mean squared loss), (b) L1-norm loss, (c) Pseudo-Huber loss.
  • (B) Classification loss functions: (a) Hinge loss, (b) Cross-entropy loss, (c) Sigmoid cross-entropy loss, (d) Weighted cross-entropy loss, (e) Softmax cross-entropy loss, (f) Sparse cross-entropy loss, (g) Check this at GitHub.
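A few of the losses above can be sketched directly from their formulas. This is a minimal example assuming TensorFlow 2.x; the targets, predictions, and the Pseudo-Huber parameter delta = 1.0 are arbitrary illustrative choices:

```python
import tensorflow as tf

# (A) Regression losses on continuous targets.
y_true = tf.constant([1.0, 0.0, 1.0])
y_pred = tf.constant([0.9, 0.2, 0.6])

l2_loss = tf.reduce_mean(tf.square(y_true - y_pred))  # mean squared loss
l1_loss = tf.reduce_mean(tf.abs(y_true - y_pred))     # mean absolute loss
delta = 1.0                                           # Pseudo-Huber parameter
pseudo_huber = tf.reduce_mean(
    delta ** 2 * (tf.sqrt(1.0 + tf.square((y_true - y_pred) / delta)) - 1.0))

# (B) Classification losses on raw logits.
labels = tf.constant([1.0, 0.0, 1.0])
logits = tf.constant([2.0, -1.0, 0.5])

# Hinge loss uses targets in {-1, +1}, so remap the 0/1 labels first.
hinge = tf.reduce_mean(
    tf.maximum(0.0, 1.0 - (2.0 * labels - 1.0) * logits))
sigmoid_ce = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))
sparse_softmax_ce = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=tf.constant([1, 0]),
        logits=tf.constant([[0.5, 2.0], [1.5, 0.2]])))

print(l2_loss.numpy(), l1_loss.numpy(), pseudo_huber.numpy())
print(hinge.numpy(), sigmoid_ce.numpy(), sparse_softmax_ce.numpy())
```

The weighted variant follows the same pattern via `tf.nn.weighted_cross_entropy_with_logits`, which takes an extra `pos_weight` argument.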



Tutorial 6: Write a TensorFlow program for a linear regression example. Evaluate and display its performance.
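A minimal linear regression sketch, assuming TensorFlow 2.x and gradient descent with `tf.GradientTape`; the synthetic data is generated from the arbitrary ground truth y = 3x + 2, which the fitted weights should recover:

```python
import tensorflow as tf

tf.random.set_seed(0)
x = tf.random.uniform([100, 1])
y = 3.0 * x + 2.0  # noise-free synthetic targets

w = tf.Variable(0.0)
b = tf.Variable(0.0)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for step in range(500):
    with tf.GradientTape() as tape:
        # Mean squared error between targets and the line w*x + b.
        loss = tf.reduce_mean(tf.square(y - (w * x + b)))
    grads = tape.gradient(loss, [w, b])
    optimizer.apply_gradients(zip(grads, [w, b]))

print(f"w={w.numpy():.3f}, b={b.numpy():.3f}, loss={loss.numpy():.6f}")
```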



Tutorial 8: Write a simple CNN in TensorFlow for the MNIST dataset. Use two hidden layers, the ReLU activation function, and max pooling. The output should display the recognized digits with the resulting accuracy. Check this: GitHub CNN MNIST 1 and GitHub CNN MNIST 2
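The model itself can be sketched with the Keras API; a minimal example assuming TensorFlow 2.x, with two convolutional hidden layers, ReLU activations, and max pooling as the tutorial describes (the layer widths 32 and 64 are conventional choices, not prescribed by the tutorial):

```python
import tensorflow as tf

# Two conv + ReLU + max-pool blocks, then a softmax over the 10 digits.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training and evaluation would look like this (dataset download omitted):
# (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# x_train = x_train[..., None] / 255.0
# model.fit(x_train, y_train, epochs=3)
# model.evaluate(x_test[..., None] / 255.0, y_test)

model.summary()
```

`model.evaluate` prints the test accuracy; per-image predictions come from `model.predict` followed by an argmax over the 10 class probabilities.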
