TensorFlow Math

Example

import tensorflow as tf

# Addition
x = tf.add(5, 2)

# Subtraction
y = tf.subtract(10, 4)

# Multiplication
z = tf.multiply(2, 5)

sess = tf.Session()
print(sess.run(x))
print(sess.run(y))
print(sess.run(z))
sess.close()

# Converting data: cast the float 5.0 to int32, then subtract 1
with tf.Session() as sess:
    print(sess.run(tf.subtract(tf.cast(tf.constant(5.0), tf.int32), tf.constant(1))))

Output

7
6
10
4

Note:- tf.cast casts a tensor to a new type (converts data from one type to another; in this example the float value 5.0 is converted to int32, so the last line prints 5 - 1 = 4).
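A detail worth knowing, not shown in the output above: casting a float tensor to an integer type truncates toward zero rather than rounding. A minimal sketch (the values 5.7 and -5.7 are chosen purely for illustration):

# tf.cast truncates floats toward zero: 5.7 -> 5, -5.7 -> -5
with tf.Session() as sess:
    print(sess.run(tf.cast(tf.constant([5.7, -5.7]), tf.int32)))  # [ 5 -5]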

TensorFlow Divide

import tensorflow as tf

x = tf.constant(10)
y = tf.constant(2)

z = tf.subtract(tf.divide(x, y), tf.cast(tf.constant(1), tf.float64))

with tf.Session() as sess:
    output = sess.run(z)
    print(output)
 

Output

4.0
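Note:- tf.divide performs true (floating-point) division, so tf.divide(x, y) here produces a float64 tensor; the integer constant 1 is therefore cast to tf.float64 before the subtraction, because tf.subtract requires both operands to have the same dtype.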

Activation Functions 

Sigmoid function (1 / (1 + exp(-x)))

# 1 / (1 + exp(-x)): the sigmoid function expression
import numpy as np

def sigmoid_(a):
    return 1 / (1 + np.exp(-a))

# calling the function
a = [1, 0, 5, -6, 7]
for i in a:
    print(sigmoid_(i))

Output

0.7310585786300049
0.5
0.9933071490757153
0.0024726231566347743
0.9990889488055994

tanh (hyperbolic tangent function)

tanh(a) = (e^a - e^(-a)) / (e^a + e^(-a))

def tanh_(a):
    return np.tanh(a)

a = [-4, 0, 2]
for i in a:
    print(tanh_(i))

Output

-0.999329299739067
0.0
0.9640275800758169

ReLU (Rectified Linear Unit)

relu(a) = max(0, a)

def relu_(a):
    return np.maximum(0, a)

a = [-4, -2, 4, 5]
for i in a:
    print(relu_(i))

Output

0
0
4
5

Softmax Function  

# Softmax
import numpy as np

Scores = [12, 8, .3]

def softmax(x):
    return np.exp(x) / np.sum(np.exp(x), axis=0)

print(softmax(Scores))       # display the probabilities
print(sum(softmax(Scores)))  # the probabilities sum to 1

Output

[9.82005792e-01 1.79860635e-02 8.14457845e-06] 

1.0 
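A practical caveat: np.exp overflows for large scores (roughly above 709 for float64). A common, mathematically equivalent remedy is to subtract the maximum score before exponentiating, since softmax is unchanged by adding a constant to every score. A minimal sketch:

# Numerically stable softmax: shift the scores so the largest is 0
def softmax_stable(x):
    shifted = np.array(x) - np.max(x)
    return np.exp(shifted) / np.sum(np.exp(shifted), axis=0)

# same probabilities as softmax([12, 8, .3]); the naive version overflows here
print(softmax_stable([1012, 1008, 1000.3]))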

How to Increase the Efficiency of the Model

  • The way to check the efficiency of a model is to evaluate its loss function, i.e. how closely the model's results match the correct results on the existing data.
  • Update the model variables and re-calculate the loss for better efficiency.
  • Repeat this cycle until the loss is very low (a minimal sketch of one such manual update follows this list).
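To make the cycle concrete, here is a minimal sketch of a single manual update, using the same linear model that the next sections build up (TF 1.x; the "better" values W = -1, b = 1 are chosen by hand purely for illustration):

# Manually update the model variables and re-calculate the loss
import tensorflow as tf

W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
loss = tf.reduce_sum(tf.square(W * x + b - y))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))  # 23.66
    # assign better values to the variables, then re-check the loss
    sess.run([tf.assign(W, [-1.]), tf.assign(b, [1.])])
    print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))  # 0.0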

 

Loss Function

  • A loss function measures how far apart the current model's predictions are from the provided data.

Example

# Loss Function
import tensorflow as tf

W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)

linear_model = W * x + b

# sum((predicted - actual)^2)
squared_delta = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_delta)

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))
sess.close()

Output

23.66
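To verify by hand: with W = 0.3 and b = -0.3, the predictions for x = [1, 2, 3, 4] are [0, 0.3, 0.6, 0.9]; against y = [0, -1, -2, -3] the deltas are [0, 1.3, 2.6, 3.9], and 0^2 + 1.3^2 + 2.6^2 + 3.9^2 = 0 + 1.69 + 6.76 + 15.21 = 23.66.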

Model Optimizer

  • An optimizer modifies each variable according to the magnitude of the derivative of the loss with respect to that variable.
  • An optimization problem seeks to minimize the loss function.

Example

# Model Optimization
import tensorflow as tf

W = tf.Variable([.5], dtype=tf.float32)
b = tf.Variable([.1], dtype=tf.float32)
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)

linear_model = W * x + b

squared_delta = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_delta)

# 0.01 is the learning rate
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

for i in range(3):
    sess.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})

print(sess.run([W, b]))
sess.close()

Output

[array([-0.54620796], dtype=float32), array([-0.2057792], dtype=float32)] 
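After only 3 gradient-descent steps the variables are still far from their best values; running the same loop for on the order of 1000 iterations drives them close to W = -1 and b = 1, which fit this training data exactly. The next example runs a few more iterations of the same setup.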

Simplest TensorFlow Neural Network

Example

# import data science libraries
import numpy as np
# import TensorFlow
import tensorflow as tf

# Model parameters
W = tf.Variable([.9], dtype=tf.float32)
b = tf.Variable([.1], dtype=tf.float32)

# Model input and output
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)

# loss function
loss = tf.reduce_sum(tf.square(linear_model - y))  # sum of the squares

# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

# training data
x_train = [1, 2, 3, 4]
y_train = [0, -1, -2, -3]

# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)  # initialize the variables to their (deliberately wrong) starting values

for i in range(10):
    sess.run(train, {x: x_train, y: y_train})

# evaluate training accuracy
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
print("W: %s b: %s loss: %s" % (curr_W, curr_b, curr_loss))

Output

W: [-0.582103]    b: [-0.22859187]    loss: 1.0083919
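Note:- all of the examples on this page use the TensorFlow 1.x API; tf.Session and tf.placeholder were removed from the top-level API in TensorFlow 2.x. One way to run this code unchanged on a TF 2 installation (a sketch, not the only option) is the v1 compatibility module:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # restores graph mode, Session, placeholder, etc.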

