A TensorFlow tutorial: text classification in detail

Preface


Caffe2 was released a few days ago with support for "mobile". I take that to mean microcontroller-style IoT devices rather than phones; just imagine an iPhone 7 running a CNN, quite a picture~

As someone who has only just gotten into this field (or not even that), let's settle down and study TensorFlow, even though it is not as easy to pick up as Caffe. I won't go into TensorFlow's characteristics in depth, just the highlights:

  • Python-based, fast to write and very readable.
  • Supports both CPU and GPU, and runs smoothly on multi-GPU systems.
  • Compiles code efficiently.
  • Has a fast-growing and very active community.
  • Can generate visualizations of the network topology and its performance.

TensorFlow (tf) workflow:

TensorFlow's workflow consists of two main steps: constructing the model and training it.

In the construction phase we build a graph (Graph) that describes the model. This is also where much of TensorFlow's strength lies, since it supports TensorBoard:

[Figure: a TensorBoard visualization of the computation graph]

It looks something like the graph above, a bit like a flowchart. I also recommend Google's TensorFlow Playground, which is great fun.

Then comes the training phase. Nothing is actually computed while the model is being constructed; computation only starts when tensorflow.Session.run() is called.
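To make that deferred execution concrete, here is a minimal sketch of the two-step flow, using the same TensorFlow 1.x API as the code below:

import tensorflow as tf

# Construction phase: these lines only add nodes to the graph,
# nothing is computed yet.
a = tf.constant(3.0)
b = tf.constant(4.0)
c = a * b   # a symbolic tensor, not the value 12.0

# Execution phase: computation happens only inside Session.run()
with tf.Session() as sess:
    print(sess.run(c))   # prints 12.0

Until sess.run(c) is called, c is just a node in the graph rather than a number.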

Text classification

The full code comes first; we will then go through it piece by piece.

# -*- coding: utf-8 -*-

import pandas as pd
import numpy as np
import tensorflow as tf
from collections import Counter
from sklearn.datasets import fetch_20newsgroups

def get_word_2_index(vocab):
    # map each word in the vocabulary to a unique integer index
    word2index = {}
    for i, word in enumerate(vocab):
        word2index[word] = i
    return word2index


def get_batch(df, i, batch_size):
    # build one batch of bag-of-words vectors and one-hot labels
    batches = []
    results = []
    texts = df.data[i*batch_size : i*batch_size+batch_size]
    categories = df.target[i*batch_size : i*batch_size+batch_size]
    for text in texts:
        # bag-of-words: count how often each vocabulary word occurs in the text
        layer = np.zeros(total_words, dtype=float)
        for word in text.split(' '):
            layer[word2index[word.lower()]] += 1
        batches.append(layer)

    for category in categories:
        # one-hot encode the label (3 classes)
        y = np.zeros((3), dtype=float)
        if category == 0:
            y[0] = 1.
        elif category == 1:
            y[1] = 1.
        else:
            y[2] = 1.
        results.append(y)
    return np.array(batches), np.array(results)

def multilayer_perceptron(input_tensor, weights, biases):
    # hidden layers use the ReLU activation
    layer_1_multiplication = tf.matmul(input_tensor, weights['h2'])
    layer_1_addition = tf.add(layer_1_multiplication, biases['b1'])
    layer_1 = tf.nn.relu(layer_1_addition)

    layer_2_multiplication = tf.matmul(layer_1, weights['h3'])
    layer_2_addition = tf.add(layer_2_multiplication, biases['b2'])
    layer_2 = tf.nn.relu(layer_2_addition)

    # Output layer (raw logits; softmax is applied in the loss)
    out_layer_multiplication = tf.matmul(layer_2, weights['out'])
    out_layer_addition = out_layer_multiplication + biases['out']
    return out_layer_addition

# main
# fetch the data from sklearn.datasets
cate = ["comp.graphics", "sci.space", "rec.sport.baseball"]
newsgroups_train = fetch_20newsgroups(subset='train', categories=cate)
newsgroups_test = fetch_20newsgroups(subset='test', categories=cate)

# build the vocabulary (word counts) over the training and test data
vocab = Counter()
for text in newsgroups_train.data:
    for word in text.split(' '):
        vocab[word.lower()] += 1

for text in newsgroups_test.data:
    for word in text.split(' '):
        vocab[word.lower()] += 1

total_words = len(vocab)
word2index = get_word_2_index(vocab)

n_hidden_1 = 100   # number of neurons in the first hidden layer
n_hidden_2 = 100   # number of neurons in the second hidden layer
n_input = total_words
n_classes = 3      # graphics, sci.space and baseball: three output classes

# placeholders for the inputs and labels
input_tensor = tf.placeholder(tf.float32, [None, n_input], name="input")
output_tensor = tf.placeholder(tf.float32, [None, n_classes], name="output")

# weights and biases initialized from a normal distribution
weights = {
    'h2': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h3': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# build the network (forward pass)
prediction = multilayer_perceptron(input_tensor, weights, biases)

# define loss and optimizer: softmax cross-entropy, averaged over the batch with reduce_mean
learning_rate = 0.001  # not defined in the original listing; a typical default
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=output_tensor))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)

# initialize all variables
init = tf.global_variables_initializer()

# launch the graph
with tf.Session() as sess:
    sess.run(init)
    training_epochs = 100
    display_step = 5
    batch_size = 1000
    # Training
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(len(newsgroups_train.data) / batch_size)
        for i in range(total_batch):
            batch_x, batch_y = get_batch(newsgroups_train, i, batch_size)
            c, _ = sess.run([loss, optimizer], feed_dict={input_tensor: batch_x, output_tensor: batch_y})
            # accumulate the average loss over the epoch
            avg_cost += c / total_batch
        # print the loss every 5 epochs
        if epoch % display_step == 0:
            print("Epoch:", '%d' % (epoch+1), "loss=", "{:.6f}".format(avg_cost))
    print("Finished!")

    # Test model
    correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(output_tensor, 1))
    # compute accuracy on the test set
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    total_test_data = len(newsgroups_test.target)
    batch_x_test, batch_y_test = get_batch(newsgroups_test, 0, total_test_data)
    print("Accuracy:", accuracy.eval({input_tensor: batch_x_test, output_tensor: batch_y_test}))
