Nice to meet you. This is my first time studying deep learning.
Deep learning is a field I've never touched before, and I don't understand it at all yet.
Assuming a certain e-commerce site, I would like to train on a CSV of users' purchase histories in Chainer, and have the model decide, for an input user's CSV row, whether they will buy or not as 0 or 1.
I found samples on the web for loading a CSV and adapted them in my own way, but I got the following error and am stuck.
The actual CSVs are as follows:
Gender, Occupation (1=Student, 2=Working, 3=Housewife), Age, Annual Income, Available Money, Date of Purchase, Time of Purchase, Previously Viewed Item ID, Season (0=Spring, 1=Summer, 2=Autumn, 3=Winter), Region (JIS), Purchase Points, Outdoor Temperature, Weather (0=Cloudy, 2=Rain), Dollar (Foreign Exchange, Yen), Yen (Foreign Exchange), Shipping Fee, How the Item Was Found (1=Web Search, 2=Word of Mouth), ...
1,3,23,400,3000000,1456758000,79200,2,0,1,0.03,16,1,103,103,0,1,1,0
1,2,34,1000,5000000,1468508400,68400,3,1,13,0.01,16,0,112,114,100,1,2,0
1,2,60,1000,5000000,1470150000,61200,2,1,13,0.05,18,0,103,112,100,2,1,1
2,1,22,400,300000,1470236400,61200,3,1,13,0.05,24,0,100,109,100,2,2,2
2,2,22,400,300000,1481641200,57600,2,3,13,0.05,3,2,101,100,0,1,1,1
The input data is as follows:
1,3,23,400,3000000,1456758000,79200,2,0,1,0.03,16,1,103,103,0,1,1,0
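Incidentally, `np.split` with `[1]` on `axis=1` cuts the table right after its first column, which is why the code below ends up with a 1-column x and an 18-column y. A minimal NumPy sketch (using a made-up 5×19 array in place of the loaded CSV) of where that cut lands, and how to cut before the last column instead:

```python
import numpy as np

# made-up stand-in for the loaded CSV: 5 rows, 19 columns
data = np.arange(5 * 19).reshape(5, 19).astype(np.float32)

# np.split(data, [1], axis=1) cuts AFTER column 0:
first_col, rest = np.split(data, [1], axis=1)
# first_col.shape == (5, 1), rest.shape == (5, 18)

# to treat the last column as the label instead, cut before it:
features, label = np.split(data, [18], axis=1)
# features.shape == (5, 18), label.shape == (5, 1)
```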
Below is the code.
# -*- coding: utf-8 -*-
import numpy as np
import chainer
from chainer import cuda, Function, gradient_check, report, training, utils, Variable
from chainer import datasets, iterators, optimizers, serializers
from chainer import Link, Chain, ChainList
import chainer.functions as F
import chainer.links as L
from chainer.training import extensions
import pandas as pd

mnist = pd.read_csv('./data/bough_history_utf8.csv')
mnist_data, mnist_label = np.split(mnist, [1], axis=1)
x_train, x_test = np.split(mnist_data, [50000])
y_train, y_test = np.split(mnist_label, [50000])
x_train = np.array(x_train, dtype=np.float32)
y_train = np.array(y_train, dtype=np.int32)
x_test = np.array(x_test, dtype=np.float32)
y_test = np.array(y_test, dtype=np.int32)
print('x_train:' + str(x_train.shape))
print('y_train:' + str(y_train.shape))
print('x_test:' + str(x_test.shape))
print('y_test:' + str(y_test.shape))

# iterate over the training data in minibatches of 100
train_iter = iterators.SerialIterator(x_train, batch_size=100)
test_iter = iterators.SerialIterator(x_test, batch_size=100, repeat=False, shuffle=False)

class MLP(chainer.Chain):
    def __init__(self, n_units, n_out):
        super(MLP, self).__init__(
            # the size of the inputs to each layer will be inferred
            l1=L.Linear(None, n_units),  # n_in -> n_units
            l2=L.Linear(None, n_units),  # n_units -> n_units
            l3=L.Linear(None, n_out),    # n_units -> n_out
        )

    def __call__(self, x):
        h1 = F.relu(self.l1(x))
        h2 = F.relu(self.l2(h1))
        return self.l3(h2)

model = L.Classifier(MLP(1000, 1))

# set up an optimizer
optimizer = chainer.optimizers.Adam()
optimizer.setup(model)

updater = training.StandardUpdater(train_iter, optimizer)
trainer = training.Trainer(updater, (20, 'epoch'), out='result')
trainer.extend(extensions.Evaluator(test_iter, model))
trainer.run()
Error output:
x_train:(5, 1)
y_train:(5, 18)
x_test:(0, 1)
y_test:(0, 18)
Traceback (most recent call last):
  File "DynamicPricing.py", line 55, in <module>
    trainer.run()
  File "/usr/lib64/python2.7/site-packages/chainer/training/trainer.py", line 266, in run
    update()
  File "/usr/lib64/python2.7/site-packages/chainer/training/updater.py", line 170, in update
    self.update_core()
  File "/usr/lib64/python2.7/site-packages/chainer/training/updater.py", line 189, in update_core
    optimizer.update(loss_func, in_var)
  File "/usr/lib64/python2.7/site-packages/chainer/optimizer.py", line 392, in update
    loss = lossfun(*args, **kwds)
  File "/usr/lib64/python2.7/site-packages/chainer/links/model/classifier.py", line 61, in __call__
    assert len(args) >= 2
AssertionError
I'm sorry to trouble you, but I would appreciate it if you could advise me on how to resolve the error and how to output 0 and 1.
Thank you for your cooperation.
First of all, if this is a classification problem, you need labeled data indicating who bought and who didn't.
train_iter = iterators.SerialIterator(x_train, batch_size=100)
passes only x_train to the iterator. Use chainer.datasets.tuple_dataset.TupleDataset to pair each input with its label before passing it in, because L.Classifier must receive input-output pairs from the iterator.
© 2024 OneMinuteCode. All rights reserved.