Hi everyone, I'm 编程小6, glad to meet you. I write up the problems and ideas I run into during development; today's topic is a torch.distributed.launch error, and I hope it helps you!
(pytorch1.3-cuda10.2) [huanghaiyang@dgx02 semantic-segmentation-main]$ python -m runx.runx scripts/eval_cityscapes.yml -i
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
None
None
NoneNone
None
None
Global Rank: 5 Local Rank: 5
Global Rank: 1 Local Rank: 1
Global Rank: 2 Local Rank: 2
Global Rank: 3 Local Rank: 3
Global Rank: 0 Local Rank: 0
Global Rank: 6 Local Rank: 6
None
Global Rank: 7 Local Rank: 7
None
Global Rank: 4 Local Rank: 4
The output above appeared when running the code, and it comes from PyTorch distributed training: the original author's code was set up to train on multiple GPUs in parallel, but you are running it with only one or two, so the process count has to be specified explicitly:

python -m torch.distributed.launch --nproc_per_node=1 --master_port 29500 train.py

--nproc_per_node=1 — the 1 here should be the number of GPUs you actually have.
--master_port 29500 — the rendezvous port. Usually you don't need to set it, or any free port number will do (it must be below 65536, so a value like 88888 is invalid). When you see

RuntimeError: Address already in use

just pass a different port, e.g. --master_port 12345.
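For reference, here is a minimal sketch of the kind of script torch.distributed.launch expects to spawn (a hypothetical minimal_ddp_check.py, not the repository's actual train.py): the launcher starts one process per GPU, passes each one a --local_rank argument, and exports MASTER_ADDR/MASTER_PORT environment variables that init_process_group reads via init_method="env://".

import argparse

import torch
import torch.distributed as dist

def main():
    parser = argparse.ArgumentParser()
    # torch.distributed.launch passes --local_rank to every process it spawns
    parser.add_argument("--local_rank", type=int, default=0)
    args = parser.parse_args()

    # If --nproc_per_node exceeds the GPUs actually present, fail early
    assert args.local_rank < torch.cuda.device_count(), \
        "--nproc_per_node is larger than the number of visible GPUs"

    # Bind this process to its own GPU before joining the process group
    torch.cuda.set_device(args.local_rank)

    # MASTER_ADDR / MASTER_PORT / RANK / WORLD_SIZE are set by the launcher,
    # so init_method="env://" picks them up automatically
    dist.init_process_group(backend="nccl", init_method="env://")

    # Mirrors the "Global Rank: x Local Rank: x" lines in the log above
    print(f"Global Rank: {dist.get_rank()} Local Rank: {args.local_rank}")

if __name__ == "__main__":
    main()

Launched with --nproc_per_node=1 on a single-GPU machine, this prints a single "Global Rank: 0 Local Rank: 0" line; with --nproc_per_node=8 it would print eight such lines, matching the per-process output in the log above.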