I wrote this code to load my model:
args = parser.parse_args()
use_cuda = torch.cuda.is_available()
state_dict = torch.load(args.model)
model = Net()
model.load_state_dict(state_dict)
model.eval()
if use_cuda:
    print('Using GPU')
    model.cuda()
else:
    print('Using CPU')
But my terminal returns the following error:

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
So then, without really understanding what I was doing, I tried:
args = parser.parse_args()
map_location=torch.device('cpu')
state_dict = torch.load(args.model)
model = Net()
model.load_state_dict(state_dict)
model.eval()
But I still get the same error. Do you see how I can correct it? (I actually want to load my model on the CPU.)
question from:https://stackoverflow.com/questions/65842425/pytorch-cuda-cpu-error-and-map-location