Prediction with TensorFlow Neural Networks


Hello, I’m having trouble implementing a neural network. My problem is that I can only implement it with a single input attribute x.

How do I add a second input attribute to the code below? It currently has only the attribute x, and I want another attribute that also influences the prediction, something like: linear_model = W1 * x1 + W2 * x2 + b

What would the code look like?

    import tensorflow as tf

    # Model parameters and inputs
    W = tf.Variable([.3], dtype=tf.float32)
    b = tf.Variable([-.3], dtype=tf.float32)
    x = tf.placeholder(tf.float32)
    y = tf.placeholder(tf.float32)
    linear_model = W * x + b

    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)

    # Sum-of-squares loss
    squared_deltas = tf.square(linear_model - y)
    loss = tf.reduce_sum(squared_deltas)

    # Training test

    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train = optimizer.minimize(loss)
    sess.run(init)  # reset values to (incorrect) initial defaults
    for i in range(1000):
        sess.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})

    print(sess.run([W, b]))
  • This may sound like too much to ask, but could you please post your code? Otherwise the question is too generic.

  • @Victorstafusa I added a better description with simple example code of my problem below! Thank you.

  • When you say "how do I put two input attributes," do you mean adding more features to improve the model’s accuracy, or training it with different data?

1 answer



All data input in TensorFlow is done through placeholders. Just add another placeholder for the second attribute, along with its own weight variable:

import tensorflow as tf

# First attribute and its weight
W1 = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x1 = tf.placeholder(tf.float32)

# Second attribute and its weight
W2 = tf.Variable([.3], dtype=tf.float32)
x2 = tf.placeholder(tf.float32)

y = tf.placeholder(tf.float32)
linear_model = W1 * x1 + W2 * x2 + b

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

squared_deltas = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_deltas)

# Training test

# Smaller learning rate than in the question: with the sum-of-squares loss
# and x2 values this large, 0.01 makes gradient descent diverge to NaN.
optimizer = tf.train.GradientDescentOptimizer(0.001)
train = optimizer.minimize(loss)
sess.run(init)  # reset values to (incorrect) initial defaults
feed_dict = {
    x1: [1, 2, 3, 4],
    x2: [5, 6, 7, 8],
    y: [0, -1, -2, -3]
}
for i in range(1000):
    sess.run(train, feed_dict)

print(sess.run([W1, W2, b]))
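
As a side note (this is my own sketch, not part of the original answer), a more scalable approach is to stack both attributes into a single placeholder of shape [None, 2] and use a weight vector with tf.matmul, so adding a third attribute only means changing the shapes. The names X, W, and feed below are my own:

import tensorflow as tf

# Each row of X is one sample [x1, x2]; W holds one weight per attribute.
X = tf.placeholder(tf.float32, shape=[None, 2])
y = tf.placeholder(tf.float32, shape=[None, 1])
W = tf.Variable(tf.zeros([2, 1]))
b = tf.Variable([-.3], dtype=tf.float32)

linear_model = tf.matmul(X, W) + b  # equivalent to W1*x1 + W2*x2 + b, vectorized

loss = tf.reduce_sum(tf.square(linear_model - y))
# 0.001 again, to keep the sum-of-squares loss from diverging with these inputs
train = tf.train.GradientDescentOptimizer(0.001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    feed = {
        X: [[1, 5], [2, 6], [3, 7], [4, 8]],
        y: [[0], [-1], [-2], [-3]]
    }
    for _ in range(1000):
        sess.run(train, feed)
    print(sess.run([W, b]))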
