I want to train my model on a GPU, so I placed all my input tensors on that device:
device = torch.device('cuda')
X_train = torch.tensor(data_train[features].to_numpy(), device=device)
X_test = torch.tensor(data_test[features].to_numpy(), device=device)
y_train = torch.tensor(data_train.target.to_numpy(), device=device)
y_test = torch.tensor(data_test.target.to_numpy(), device=device)
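As a sanity check (assuming CUDA is actually available here), printing the device attribute confirms that these tensors end up on the GPU:

print(X_train.device)  # cuda:0
print(y_train.device)  # cuda:0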
My model takes two inputs, X_num and X_cat, where X_num holds the numerical features and X_cat the categorical features. In my case I only have categorical features, so I would like to call
model(None, X_train)
I then get the exception:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
It seems that the None input is not placed on the GPU. Does anyone have a solution for this?
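For reference, here is a minimal self-contained sketch along the lines of my setup; the model class, layer sizes, and data are placeholders I made up for illustration, but the (x_num, x_cat) forward signature matches my real model:

import torch
import torch.nn as nn

class ToyCatModel(nn.Module):  # stand-in for my actual model
    def __init__(self, num_categories=10, emb_dim=8):
        super().__init__()
        self.emb = nn.Embedding(num_categories, emb_dim)
        self.head = nn.Linear(emb_dim, 1)

    def forward(self, x_num, x_cat):
        # x_num: numerical features (None in my case)
        # x_cat: categorical features, shape (batch, n_cat_features)
        z = self.emb(x_cat).mean(dim=1)
        return self.head(z)

device = torch.device('cuda')
model = ToyCatModel()
X_cat = torch.randint(0, 10, (32, 4), device=device)
model(None, X_cat)  # RuntimeError: Expected all tensors to be on the same device ...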