TensorFlow: Learning how to make a linear algebraic network
<html>
<p><img src="http://www.techcentral.ie/wp-content/uploads/2017/08/TensorFlow_logo_web.jpg" width="620" height="349"/></p>
<p>TensorFlow is an open-source software library for Machine Intelligence. It is so popular that some graphics cards even have so-called <a href="http://www.tomshardware.com/news/nvidia-tensor-core-tesla-v100,34384.html">TensorCores</a> in them.</p>
<p>But what can you really do with TensorFlow? The easiest example is solving linear algebraic equations. And that is what I am going to show today.</p>
<p>If you want to follow along, you must have TensorFlow installed as described <a href="https://www.tensorflow.org/install/">here</a>.</p>
<h2>Solving linear algebraic equations</h2>
<p>First, import TensorFlow into your environment:</p>
<blockquote>import tensorflow as tf</blockquote>
<p>If you are familiar with Python, you can now access all of TensorFlow's classes, methods, and symbols. The docs are <a href="https://www.tensorflow.org/api_docs/python/">here</a>.</p>
<p>Since we want to solve a simple y = ax + b, we need to make variables for y, a, x and b.</p>
<blockquote>a = tf.Variable([0.0], dtype=tf.float32)<br>
b = tf.Variable([0.0], dtype=tf.float32)<br>
x = tf.placeholder(tf.float32)<br>
y = a*x + b<br>
realAnswer = tf.placeholder(tf.float32)</blockquote>
<p>Notice we used tf.Variable and not constants, since we want the network to learn good values for these parameters: during training it will adjust a and b to fit the preset x and y. We also need placeholders to hold our predetermined x and y. Now we need to feed in the training data.</p>
<blockquote>x_train = [1,2,3,4,5]<br>
y_train = [6,11,16,21,26]</blockquote>
<p>Next we need some indication of whether we are close to good values or not. This is done with a loss function; here we use the sum of squared errors.</p>
<blockquote>loss = tf.reduce_sum(tf.square(y - realAnswer)) # sum of the squares</blockquote>
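<p>To see what this loss measures, here is a plain-Python sketch (no TensorFlow needed) that evaluates the same sum of squared errors by hand; the helper name <code>loss</code> is just for illustration:</p>

```python
# Sum-of-squared-errors loss, evaluated by hand.
x_train = [1, 2, 3, 4, 5]
y_train = [6, 11, 16, 21, 26]

def loss(a, b):
    # Same quantity as tf.reduce_sum(tf.square(y - realAnswer))
    return sum((a * x + b - y) ** 2 for x, y in zip(x_train, y_train))

print(loss(0.0, 0.0))  # 1530.0 -- large at the starting values
print(loss(5.0, 1.0))  # 0.0 -- zero for a perfect fit
```

<p>Training is just the search for the a and b that drive this number toward zero.</p>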
<p>Next we need to define the "learning rate", i.e. how much the variables a and b change on each training step. We do this with:</p>
<blockquote>optimizer = tf.train.GradientDescentOptimizer(0.001)<br>
train = optimizer.minimize(loss)</blockquote>
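<p>To demystify what GradientDescentOptimizer is doing under the hood, here is a sketch of the same update rule in plain Python. This mirrors, rather than calls, TensorFlow, and the variable names are my own:</p>

```python
# Gradient descent on loss = sum((a*x + b - y)^2), mirroring
# tf.train.GradientDescentOptimizer(0.001) by hand.
x_train = [1, 2, 3, 4, 5]
y_train = [6, 11, 16, 21, 26]

a, b = 0.0, 0.0
lr = 0.001  # the learning rate passed to the optimizer
for _ in range(1000):
    # Partial derivatives of the summed squared error w.r.t. a and b.
    grad_a = sum(2 * (a * x + b - y) * x for x, y in zip(x_train, y_train))
    grad_b = sum(2 * (a * x + b - y) for x, y in zip(x_train, y_train))
    # Step each parameter against its gradient, scaled by the learning rate.
    a -= lr * grad_a
    b -= lr * grad_b

print(a, b)  # converges toward a = 5, b = 1
```

<p>Each step nudges a and b downhill on the loss surface; the learning rate controls the step size.</p>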
<p>Pick a learning rate between 1 and 0.000001; you may need to adjust this value if training diverges or converges too slowly.<br>
Next we initialise the network and run the training step 1000 times.</p>
<blockquote>init = tf.global_variables_initializer()<br>
sess = tf.Session()<br>
sess.run(init)<br>
for i in range(1000):<br>
    sess.run(train, {x: x_train, realAnswer: y_train})<br>
curr_a, curr_b = sess.run([a,b], {x: x_train, realAnswer: y_train})<br>
print("a: %s, b: %s"%(curr_a, curr_b))</blockquote>
<p>Now your output will show something like 4.9xxx for a and 1.0xxx for b. That is close to correct, since I used y = 5x + 1 to generate the training values.</p>
<p>Congrats! You just made your own AI!</p>
</html>