TensorFlow learning
Part 1: Prepare data
This step is common to all deep learning work, and the details depend on the data. In Python, many people like to use pickle to store data.
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# download (if needed) and load the MNIST dataset with one-hot labels
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
In general, the data consist of inputs and labels; here mnist.train.images holds the inputs and mnist.train.labels the one-hot labels.
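If the data do not come from a ready-made loader like the one above, a minimal sketch of storing and reloading arrays with pickle (the file name and arrays here are hypothetical stand-ins):
import pickle

import numpy as np

# hypothetical arrays standing in for real inputs and labels
train_inputs = np.random.rand(100, 784).astype('float32')
train_labels = np.eye(10, dtype='float32')[np.random.randint(0, 10, 100)]

with open('train_data.pkl', 'wb') as f:   # store
    pickle.dump({'inputs': train_inputs, 'labels': train_labels}, f)

with open('train_data.pkl', 'rb') as f:   # reload
    data = pickle.load(f)
print(data['inputs'].shape)   # (100, 784)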
Part 2: Build model
Most of the time we create a Python file named model.py to define the deep learning model.
In general, the model includes the input layer, the hidden layers, and a loss function.
In TensorFlow we use placeholders for the input layer (taking MNIST as the example):
inputs = tf.placeholder(dtype='float32', shape=[None, 784], name='inputs')
labels = tf.placeholder(dtype='float32', shape=[None, 10], name='labels')
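The first dimension is None so the same graph accepts any batch size; a placeholder holds no data until it is fed at run time. A quick sanity check (the zero array is just a stand-in):
import numpy as np

fake_batch = np.zeros((5, 784), dtype='float32')   # any batch size works
with tf.Session() as sess:
    print(sess.run(tf.shape(inputs), feed_dict={inputs: fake_batch}))   # prints [  5 784]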
We then define each layer with code like the following:
in_size = 784
hid_size = 10
with tf.name_scope('layer1'):
    Weights = tf.Variable(tf.random_normal([in_size, hid_size]), name='Weights')
    biases = tf.Variable(tf.random_normal([1, hid_size]), name='biases')   # shape must be a list
    l1 = tf.add(tf.matmul(inputs, Weights), biases)
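The layer above is purely linear; if a nonlinearity is wanted, the output can be wrapped in an activation. The relu here is only an illustrative choice, not part of the original model:
l1_act = tf.nn.relu(l1)   # assumed activation; tf.nn.tanh or tf.nn.softmax also work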
Define the loss:
loss = tf.losses.mean_squared_error(labels, l1)
Define the summary (the signature is tf.summary.scalar(name, tensor)):
tf.summary.scalar('loss', loss)
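The training loop in Part 3 runs a train_step op, so the model also needs an optimizer. A minimal sketch with plain gradient descent (the 0.5 learning rate is an assumed value, not from the original):
# assumed optimizer and learning rate; any tf.train optimizer works here
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)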
Part 3: Train and save
Define the training sizes:
batch_size = 100
all_size = 50000   # number of training examples to use (the MNIST train split has 55000)
Define the session and run the training loop:
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    # initialize the parameters
    tf.global_variables_initializer().run()
    merged = tf.summary.merge_all()
    train_writer = tf.summary.FileWriter('logs/', sess.graph)
    for epoch in range(1000):
        # slide a batch window over the training set
        begin = epoch * batch_size % (all_size - batch_size)
        end = begin + batch_size
        batch = mnist.train.images[begin:end]
        batch_labels = mnist.train.labels[begin:end]
        print(batch.shape)   # debug check
        # feed the training data; run the merged summary together with the train step
        summary, _, loss_value = sess.run([merged, train_step, loss],
                                          feed_dict={inputs: batch, labels: batch_labels})
        if epoch % 10 == 0:
            train_writer.add_summary(summary, epoch)
            print(loss_value)
    train_writer.close()
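Part 3 also promises saving. A minimal sketch with tf.train.Saver (the checkpoint path is hypothetical); the save call must run while the session is still open, so these lines belong inside the with block above:
saver = tf.train.Saver()
save_path = saver.save(sess, 'logs/model.ckpt')   # hypothetical path
print('Model saved to', save_path)
The summaries written to logs/ can then be inspected with tensorboard --logdir logs/.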