Mid-Level APIs
Activation Functions
An activation function generally acts on the output of a single neuron. We use nonlinear activation functions to introduce nonlinearity into the model so that it can fit nonlinear data.
```python
from tensorflow.keras import activations
```
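For instance (a minimal sketch, assuming TensorFlow 2.x is installed), an activation function can be applied directly to a tensor:

```python
import tensorflow as tf
from tensorflow.keras import activations

x = tf.constant([-2.0, 0.0, 3.0])

# relu(x) = max(x, 0): zeroes out negative inputs
print(activations.relu(x).numpy())     # [0. 0. 3.]

# sigmoid squashes values into (0, 1), e.g. sigmoid(0) = 0.5
print(activations.sigmoid(x).numpy())
```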
Model Layers
The main building blocks of a model.
```python
from tensorflow.keras import layers
```
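As a small sketch (the layer type and sizes here are illustrative), a `Dense` layer maps a batch of inputs to a new feature dimension:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Dense implements y = x @ w + b, here with 4 output units
dense = layers.Dense(units=4, activation='relu')

# A batch of 2 samples with 3 features each
out = dense(tf.ones(shape=(2, 3)))
print(out.shape)   # (2, 4)
```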
Loss Functions
```python
from tensorflow.keras import losses
```
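For example (a small sketch with made-up values), `losses.mse` computes the mean squared error between labels and predictions:

```python
import tensorflow as tf
from tensorflow.keras import losses

y_true = tf.constant([1.0, 2.0, 3.0])
y_pred = tf.constant([1.0, 2.0, 4.0])

# mse = mean((y_true - y_pred)^2) = (0 + 0 + 1) / 3
error = losses.mse(y_true, y_pred)
print(error.numpy())   # ≈ 0.3333
```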
Metrics (Evaluation Functions)
```python
from tensorflow.keras import metrics
```
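As a sketch (the choice of metric here is illustrative), metric objects accumulate results across batches with `update_state` and report them with `result`:

```python
import tensorflow as tf
from tensorflow.keras import metrics

acc = metrics.Accuracy()

# 3 of the 4 predictions match the labels
acc.update_state([0, 1, 1, 1], [0, 1, 0, 1])
print(acc.result().numpy())   # 0.75
```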
Optimizers
```python
from tensorflow.keras import optimizers
```
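For instance (a minimal sketch with an illustrative loss), an optimizer updates variables from gradients computed by a `GradientTape`:

```python
import tensorflow as tf
from tensorflow.keras import optimizers

w = tf.Variable(4.0)
opt = optimizers.SGD(learning_rate=0.1)

# Minimize f(w) = w^2; the gradient is 2w = 8,
# so one step gives w <- 4 - 0.1 * 8 = 3.2
with tf.GradientTape() as tape:
    loss = w ** 2
grad = tape.gradient(loss, w)
opt.apply_gradients([(grad, w)])
print(w.numpy())   # 3.2
```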
Simplifying Univariate Linear Regression with Mid-Level APIs
```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import losses, optimizers

# Load the dataset, skipping the header row
data = np.loadtxt('line_fit_data.csv', skiprows=1, delimiter=',')
```
```python
x_data = tf.constant(data[:, 0], dtype=tf.float32)
y_data = tf.constant(data[:, 1], dtype=tf.float32)
```
```python
# Initialize w and b randomly: params[0] is w, params[1] is b
params = tf.Variable(tf.random.normal(shape=(2,)))
params
```
<tf.Variable 'Variable:0' shape=(2,) dtype=float32, numpy=array([-0.97801447, -0.74548423], dtype=float32)>
```python
yita = 0.5    # learning rate (eta)
epochs = 100  # maximum number of passes over the data
```
```python
def line_model(x, w, b):
    return x * w + b
```
```python
# Create the SGD optimizer once, outside the loop (vanilla SGD is
# stateless, so this is equivalent to rebuilding it per step)
sgd = optimizers.SGD(learning_rate=yita)

for epoch in range(epochs):
    for i in range(len(x_data)):
        # Take one sample at a time (stochastic gradient descent)
        x = x_data[i:i+1]
        y = y_data[i:i+1]
        with tf.GradientTape() as grad:
            y_pred = line_model(x, params[0], params[1])
            error = losses.mse(y, y_pred)
        d_params = grad.gradient(error, params)
        sgd.apply_gradients(grads_and_vars=[(d_params, params)])
    if error < 1e-20:
        print('Early Stop!')
        break
    print('epoch:{}, w:{}, b:{}, error:{}'.format(
        epoch, params.numpy()[0], params.numpy()[1], error))
```
epoch:0, w:2.5038626194000244, b:3.9996683597564697, error:2.991348537761951e-07
epoch:1, w:2.5000088214874268, b:3.9999992847442627, error:9.094947017729282e-13
Early Stop!