Multivariate Linear Regression Notes

admin

Nov 28, 2021
The key preprocessing steps (an excerpt from the full listing below):

# Read the data and convert it to NumPy format
import pandas as pd
df = pd.read_csv("C:/Users/WTSRUVF/Desktop/data/boston.csv")
df = df.values
x_data = df[:, :12]
# Standardize the data and cast it to float32 so it can be matmul'd with w
from sklearn.preprocessing import scale
x_data = tf.cast(scale(x_data), dtype=tf.float32)
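For context, scale standardizes each feature column to zero mean and unit variance. A minimal NumPy equivalent of its default behavior (a sketch, not sklearn's actual implementation):

import numpy as np

def standardize(x):
    # Column-wise: subtract each feature's mean and divide by its
    # standard deviation, matching sklearn.preprocessing.scale's defaults
    return (x - x.mean(axis=0)) / x.std(axis=0)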

%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn.preprocessing import scale

# Read the Boston housing data and convert it to a NumPy array
df = pd.read_csv("C:/Users/WTSRUVF/Desktop/data/boston.csv")
df = df.values
x_data = df[:, :12]   # the first 12 columns are the features
y_data = df[:, 12]    # column 12 is the target (median house value)
# Standardize the features; cast both tensors to float32 so they are
# compatible with the float32 parameters w and b
x_data = tf.cast(scale(x_data), dtype=tf.float32)
y_data = tf.cast(y_data, dtype=tf.float32)

# Model parameters: a 12x1 weight vector and a scalar bias
w = tf.Variable(tf.random.normal([12, 1], mean=0.0, stddev=1.0, dtype=tf.float32))
b = tf.Variable(1.0, dtype=tf.float32)

# First 400 samples for training, the remaining 106 for testing
x_train_data = x_data[:400, :]
x_test_data = x_data[400:, :]
y_train_data = y_data[:400]
y_test_data = y_data[400:]
def model(x, w, b):
    # Linear model: y_hat = x @ w + b
    return tf.matmul(x, w) + b

def loss_fun(x, y, w, b):
    # Mean squared error; squeeze the (N, 1) prediction to (N,) so it
    # aligns with y instead of broadcasting to an (N, N) matrix
    return tf.reduce_mean(tf.square(y - tf.squeeze(model(x, w, b))))

def grad(x, y, w, b):
    # Record the forward pass and return the gradients of the loss
    # with respect to w and b
    with tf.GradientTape() as tape:
        loss_ = loss_fun(x, y, w, b)
    return tape.gradient(loss_, [w, b])

# Plain stochastic gradient descent: param <- param - learn_rate * grad
learn_rate = 0.001
optimizer = tf.keras.optimizers.SGD(learn_rate)

# 50 epochs of mini-batch training: 8 batches of 50 samples per epoch
for i in range(50):
    for j in range(8):
        xs = x_train_data[j * 50 : (j + 1) * 50, :]
        ys = y_train_data[j * 50 : (j + 1) * 50]
        grads = grad(xs, ys, w, b)
        optimizer.apply_gradients(zip(grads, [w, b]))
    loss_ = loss_fun(x_train_data, y_train_data, w, b)
    print("Train: %d, Loss = %f" % (i + 1, loss_))

# Pick a random test sample and compare the prediction with the label
idx = np.random.randint(0, len(x_test_data))   # 106 samples in the test split

pred = model(x_test_data, w, b)[idx]
pred = tf.reshape(pred, ()).numpy()

print(pred, y_test_data[idx].numpy())
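Beyond checking a single random sample, it is worth evaluating on the whole held-out split. A minimal sketch, assuming the names from the listing above (model, loss_fun, w, b, x_test_data, y_test_data, plt) are still in scope:

# Mean squared error over all 106 test samples
test_loss = loss_fun(x_test_data, y_test_data, w, b)
print("Test Loss = %f" % test_loss)

# Scatter predictions against the true values; points near the
# diagonal indicate accurate predictions
preds = tf.squeeze(model(x_test_data, w, b)).numpy()
plt.scatter(y_test_data.numpy(), preds)
plt.xlabel("true value")
plt.ylabel("predicted value")
plt.show()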

The path you chose yourself, you must walk to the end even on your knees. Friends, even though the world grows more restless by the day, as long as we keep working hard for the pure dreams and feelings we started with, no matter what anyone else does, we can stay true to ourselves and keep going.
