2022.11.23 20:59

For verification


import torch
import torch.optim as optim
import torch.nn as nn
import numpy as np
import pandas as pd
from torch.utils.data import DataLoader, TensorDataset
from sklearn.preprocessing import MinMaxScaler
 
import matplotlib.pyplot as plt
 
torch.manual_seed(0)
 
device = torch.device("cuda:0" if torch.cuda.is_available()
                      else "cpu")
 
seq_length = 7        # window length fed to the LSTM
data_dim = 8          # number of columns (features + target)
hidden_dim = 10
output_dim = 1
learning_rate = 0.01
epochs = 500
batch_size = 100
 
def build_dataset(data, seq_len):
    # Slide a window of length seq_len over the rows: each X is a
    # (seq_len, data_dim) window and each y is the last column of the
    # row immediately after that window.
    dataX = []
    dataY = []
    for i in range(len(data) - seq_len):
        x = data[i:i+seq_len, :]
        y = data[i+seq_len, [-1]]
        dataX.append(x)
        dataY.append(y)
    return np.array(dataX), np.array(dataY)
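A quick shape check on made-up toy data (my addition, not in the original post):

toy = np.arange(80, dtype=np.float32).reshape(10, 8)   # hypothetical 10 rows x 8 columns
X, Y = build_dataset(toy, seq_length)
print(X.shape, Y.shape)   # (3, 7, 8) (3, 1) -> 10 - 7 = 3 windows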
 
df = pd.read_csv('------------')
 
df = df[::-1]                     # reverse the row order (e.g. if the CSV is newest-first)
df = df[['edit to your taste']]   # placeholder from the post: list your feature columns, target column last
 
train_size = int(len(df)*0.7)
train_set = df[0:train_size].copy()               # .copy() avoids SettingWithCopyWarning below
test_set = df[train_size-seq_length:].copy()      # overlap by seq_length so the first test window is complete
 
scaler_x = MinMaxScaler()
scaler_x.fit(train_set.iloc[:,:-1])

train_set.iloc[:,:-1] = scaler_x.transform(train_set.iloc[:,:-1])
test_set.iloc[:,:-1] = scaler_x.transform(test_set.iloc[:,:-1])

scaler_y = MinMaxScaler()
scaler_y.fit(train_set.iloc[:,[-1]])

# The target column must be scaled too; otherwise the inverse_transform
# calls further down would undo a transform that was never applied.
train_set.iloc[:,[-1]] = scaler_y.transform(train_set.iloc[:,[-1]])
test_set.iloc[:,[-1]] = scaler_y.transform(test_set.iloc[:,[-1]])
 
trainX, trainY = build_dataset(np.array(train_set), seq_length)
testX, testY = build_dataset(np.array(test_set), seq_length)
 
trainX_tensor = torch.FloatTensor(trainX).to(device)   # FloatTensor, not LongTensor:
trainY_tensor = torch.FloatTensor(trainY).to(device)   # MSE regression needs float inputs and targets

testX_tensor = torch.FloatTensor(testX).to(device)
testY_tensor = torch.FloatTensor(testY).to(device)
 
dataset = TensorDataset(trainX_tensor, trainY_tensor)
 
dataloader = DataLoader(dataset,
                        batch_size=batch_size,
                        shuffle=False,
                        drop_last=True)
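With drop_last=True the loader yields len(dataset) // batch_size full batches, and shuffle=False keeps the time order. A quick peek at one batch, assuming the data has at least batch_size windows (my addition):

xb, yb = next(iter(dataloader))
print(xb.shape, yb.shape)   # torch.Size([100, 7, 8]) torch.Size([100, 1])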
 
class LSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim, seq_len, output_dim, layers):
        super(LSTM, self).__init__()
        self.hidden_dim = hidden_dim
        self.seq_len = seq_len
        self.output_dim = output_dim
        self.layers = layers
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers=layers,
                            batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim, bias=True)

    def reset_hidden_state(self):
        # forward() below lets nn.LSTM create fresh zero (h_0, c_0) states on
        # every call, so there is no persistent hidden state to clear; this
        # method is kept only because the training loop calls it.
        self.hidden = None

    def forward(self, x):
        # x: (batch, seq_len, input_dim) because batch_first=True
        x, _status = self.lstm(x)
        # use only the output at the last time step for the prediction
        x = self.fc(x[:,-1])
        return x

net = LSTM(data_dim, hidden_dim, seq_length, output_dim, 1).to(device)  # 'net', so the class name is not shadowed
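One throwaway forward pass to confirm the output shape (my addition, not in the original):

dummy = torch.zeros(2, seq_length, data_dim).to(device)   # dummy batch of two all-zero windows
print(net(dummy).shape)   # torch.Size([2, 1])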
 
def train_model(model, train_df, epochs=None, lr=None, verbose=10,
                patience=10):
    criterion = nn.MSELoss().to(device)

    optimizer = optim.Adam(model.parameters(), lr=lr)

    train_hist = np.zeros(epochs)
    for epoch in range(epochs):
        avg_cost = 0
        total_batch = len(train_df)

        for batch_idx, samples in enumerate(train_df):
            x_train, y_train = samples
            model.reset_hidden_state()

            outputs = model(x_train)

            loss = criterion(outputs, y_train)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            avg_cost += loss.item() / total_batch
        train_hist[epoch] = avg_cost

        if epoch % verbose == 0:
            print('Epoch: ', '%04d' % (epoch),
                  'train loss : ', '{:.4f}'.format(avg_cost))

        # Simple early stopping: every `patience` epochs, stop if the loss
        # has not improved since the last checkpoint.
        if epoch % patience == 0 and epoch != 0:
            if train_hist[epoch-patience] < train_hist[epoch]:
                print('\n Early Stopping')
                break

    # return after the loop, not inside it, so all epochs actually run
    return model.eval(), train_hist
   
model, train_hist = train_model(net, dataloader, epochs=epochs,
                                lr=learning_rate, verbose=20, patience=10)
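matplotlib is imported at the top but never used; presumably a loss curve was intended. A minimal sketch:

plt.figure(figsize=(8, 4))
plt.plot(train_hist, label='train loss')   # entries after an early stop stay zero
plt.xlabel('epoch')
plt.ylabel('avg MSE')
plt.legend()
plt.show()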

with torch.no_grad():
    pred = []
    for pr in range(len(testX_tensor)):
        model.reset_hidden_state()
       
        predicted = model(torch.unsqueeze(testX_tensor[pr], 0))
        predicted = torch.flatten(predicted).item()
        pred.append(predicted)
       
    pred_inverse = scaler_y.inverse_transform(np.array(pred).reshape(-1,1))
    testY_inverse = scaler_y.inverse_transform(testY_tensor.cpu().numpy())   # tensors must come off the GPU first
def MAE(true, pred):
    return np.mean(np.abs(true - pred))

print('MAE SCORE: ', MAE(testY_inverse, pred_inverse))
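And the predictions against the actual test values, again just a sketch with the already-imported matplotlib:

plt.figure(figsize=(8, 4))
plt.plot(testY_inverse, label='actual')
plt.plot(pred_inverse, label='predicted')
plt.xlabel('test step')
plt.legend()
plt.show()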
 
length = len(test_set)
target = np.array(test_set)[length-seq_length:]      # the last seq_length rows

target = torch.FloatTensor(target).to(device)        # float, and on the same device as the model
target = target.reshape([1, seq_length, data_dim])   # a batch of one window

out = model(target)
pre = torch.flatten(out).item()                      # torch.flatten ('flattern' was a typo)

pre = round(pre, 8)
pre_inverse = scaler_y.inverse_transform(np.array(pre).reshape(-1,1))
print(pre_inverse.reshape(-1)[0])                    # a single value, so reshape(-1), not reshape([3])
