Welcome to ShenZhenJia Knowledge Sharing Community for programmer and developer-Open, Learning and Share
I'm using some ready-made scripts for distributed training of my model, and I don't understand the mechanics behind them very well. Basically, they use torch.distributed: a master script spawns multiple processes, but all of them update the same model. However, validation on the validation dataset is done separately in each process, so I don't get an "overall" validation loss (see here). How can I combine the validation results from the different processes into one overall result?

question from:https://stackoverflow.com/questions/65878299/distributed-evaluation-of-validation-dataset-in-pytorch
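No answer was posted, but the usual approach with torch.distributed is: each rank accumulates a local loss sum and sample count, packs them into a tensor, sums the tensors across all ranks with `dist.all_reduce(t, op=dist.ReduceOp.SUM)`, and then divides, so every rank ends up with the same overall validation loss. Weighting by sample count matters because the per-rank shards need not be the same size. Below is a minimal sketch of that aggregation arithmetic; the per-rank numbers are made up and the collective is simulated by summing over a plain Python list instead of a real process group:

```python
# Each rank accumulates (sum of per-sample losses, number of samples) locally.
# In a real torch.distributed script these two numbers would be put into a
# tensor and summed across ranks with dist.all_reduce(t, op=dist.ReduceOp.SUM);
# here the "all-reduce" is simulated by summing over hypothetical ranks.

# Hypothetical per-rank validation results: (loss_sum, sample_count)
per_rank = [(12.0, 10), (9.0, 6), (20.0, 16)]

# The sums every rank would see after the all-reduce:
total_loss = sum(s for s, _ in per_rank)    # 41.0
total_count = sum(n for _, n in per_rank)   # 32

# Overall validation loss, identical on every rank. Note this is the
# sample-weighted mean; averaging the per-rank mean losses directly is only
# correct when every rank evaluated the same number of samples.
overall_loss = total_loss / total_count
print(overall_loss)
```

With a real process group (e.g. the `gloo` or `nccl` backend), the two sums would be computed as `t = torch.tensor([loss_sum, count]); dist.all_reduce(t, op=dist.ReduceOp.SUM); overall = (t[0] / t[1]).item()`.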

1 Answer

Waiting for answers
