These notes follow Professor Sung Kim (김성훈)'s lecture series "Deep Learning for Everyone" (모두를 위한 딥러닝).

ML lab 03 - Minimizing the Linear Regression cost in TensorFlow (new)

import tensorflow as tf

# Training data: y = x, so the learned weight should converge to 1
x_data = [1, 2, 3]
y_data = [1, 2, 3]

# Model parameter, initialized randomly
W = tf.Variable(tf.random_normal([1]), name='weight')

# Placeholders let us feed data in at run time
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)

# Simplified hypothesis H(x) = W * x (no bias term, as in the lecture)
hypothesis = W * X

# Cost: mean squared error between prediction and target
cost = tf.reduce_mean(tf.square(hypothesis - Y))

# Gradient descent step that minimizes the cost
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(1000):
        _, _cost = sess.run([optimizer, cost], feed_dict={X: x_data, Y: y_data})
        if epoch % 100 == 0:
            print(_cost)
    # Predict for unseen inputs
    output = sess.run(hypothesis, feed_dict={X: [4, 5, 6]})
    print(output)
Output:

3.2454426
1.0055484e-08
4.5119464e-13
4.5119464e-13
4.5119464e-13
4.5119464e-13
4.5119464e-13
4.5119464e-13
4.5119464e-13
4.5119464e-13
[3.9999988 4.9999986 5.999998 ]
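The same lab also derives the gradient of the cost by hand, dcost/dW = mean((W*x - y) * x), with the constant factor 2 folded into the learning rate. As a sanity check, the update rule can be applied directly without TensorFlow; the starting weight 5.0 and learning rate 0.1 below are arbitrary choices of mine, not values from the lecture.

```python
# Hand-rolled gradient descent for the same problem, no TensorFlow needed.
x_data = [1.0, 2.0, 3.0]
y_data = [1.0, 2.0, 3.0]

W = 5.0              # deliberately bad starting weight
learning_rate = 0.1

for step in range(100):
    # dcost/dW = mean((W*x - y) * x), as derived in the lecture
    gradient = sum((W * x - y) * x for x, y in zip(x_data, y_data)) / len(x_data)
    W -= learning_rate * gradient

print(W)  # converges toward 1.0, since y = 1 * x fits the data exactly
```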
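The lecture targets TensorFlow 1.x, where sessions and placeholders are required. In TensorFlow 2 that API is gone, so a roughly equivalent sketch (my own translation, not from the lecture) uses eager execution with tf.GradientTape and an explicit weight update:

```python
import tensorflow as tf

x_data = tf.constant([1.0, 2.0, 3.0])
y_data = tf.constant([1.0, 2.0, 3.0])

W = tf.Variable(tf.random.normal([1]))
learning_rate = 0.01

for step in range(1000):
    # Record operations so the gradient of cost w.r.t. W can be computed
    with tf.GradientTape() as tape:
        hypothesis = W * x_data
        cost = tf.reduce_mean(tf.square(hypothesis - y_data))
    grad = tape.gradient(cost, W)
    W.assign_sub(learning_rate * grad)  # W <- W - lr * dcost/dW

print(W.numpy())  # approaches [1.], matching the TF1 run above
```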